by Adam Bertram
An intelligent IT support agent that uses Azure AI Search for knowledge retrieval, Microsoft Entra ID integration for user management, and Jira for ticket creation. The agent can answer questions using internal documentation and perform administrative tasks like password resets.

## How It Works

The workflow operates in three main sections:

- **Agent Chat Interface:** A chat trigger receives user messages and routes them to an AI agent powered by Google Gemini. The agent maintains conversation context using buffer memory and has access to multiple tools for different tasks.
- **Knowledge Management:** Users can upload documentation files (.txt, .md) through a form trigger. These documents are processed, converted to embeddings using OpenAI's API, and stored in an Azure AI Search index with vector search capabilities.
- **Administrative Tools:** The agent can query Microsoft Entra ID to find users, reset passwords, and create Jira tickets when issues need escalation. It uses semantic search to find relevant internal documentation before responding to user queries.

The workflow includes a separate setup section that creates the Azure AI Search service and index with the proper vector search configuration, semantic search capabilities, and the required field schema.

## Prerequisites

To use this template, you'll need:

- n8n cloud or self-hosted instance
- Azure subscription with permissions to create AI Search services
- Microsoft Entra ID (Azure AD) access with user management permissions
- OpenAI API account for embeddings
- Google Gemini API access
- Jira Software Cloud instance
- Basic understanding of Azure resource management

## Setup Instructions

1. Import the template into n8n.
2. Configure credentials:
   - Add Google Gemini API credentials
   - Add OpenAI API credentials for embeddings
   - Add Microsoft Azure OAuth2 credentials with appropriate permissions
   - Add Microsoft Entra ID OAuth2 credentials
   - Add Jira Software Cloud API credentials
3. Update workflow parameters:
   - Open the "Set Common Fields" nodes
   - Replace `<azure subscription id>` with your Azure subscription ID
   - Replace `<azure resource group>` with your target resource group name
   - Replace `<azure region>` with your preferred Azure region
   - Replace `<azure ai search service name>` with your desired service name
   - Replace `<azure ai search index name>` with your desired index name
   - Update the Jira project ID in the "Create Jira Ticket" node
4. Set up Azure infrastructure:
   - Run the manual trigger "When clicking 'Test workflow'" to create the Azure AI Search service and index
   - This creates the vector search index with the semantic search configuration (see the sketch below)
5. Configure the vector store webhook:
   - Update the "Invoke Query Vector Store Webhook" node URL with your actual webhook endpoint
   - The webhook URL should point to the "Semantic Search" webhook in the same workflow
6. Upload the knowledge base:
   - Use the "On Knowledge Upload" form to upload your internal documentation
   - Supported formats: .txt and .md files
   - Documents will be automatically embedded and indexed
7. Test the setup:
   - Use the chat interface to verify the agent responds appropriately
   - Test knowledge retrieval with questions about uploaded documentation
   - Verify Entra ID integration and Jira ticket creation
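For orientation, the sketch below shows the kind of REST call the setup section performs to create the index. It is a hedged illustration, not the template's exact configuration: the field names (`id`, `title`, `content`, `contentVector`), the 1536-dimension embedding size (matching OpenAI's `text-embedding-ada-002`), and the `2023-11-01` API version are assumptions that may differ from what the workflow actually generates.

```javascript
// Minimal sketch of creating an Azure AI Search index with vector + semantic
// search enabled. Field names and dimensions are illustrative assumptions.
const service = "my-search-service"; // your Azure AI Search service name
const index = "it-support-docs";     // your index name
const apiKey = process.env.AZURE_SEARCH_ADMIN_KEY;

const indexDefinition = {
  name: index,
  fields: [
    { name: "id", type: "Edm.String", key: true },
    { name: "title", type: "Edm.String", searchable: true },
    { name: "content", type: "Edm.String", searchable: true },
    { name: "contentVector", type: "Collection(Edm.Single)", searchable: true,
      dimensions: 1536, vectorSearchProfile: "vector-profile" },
  ],
  vectorSearch: {
    algorithms: [{ name: "hnsw-1", kind: "hnsw" }],
    profiles: [{ name: "vector-profile", algorithm: "hnsw-1" }],
  },
  semantic: {
    configurations: [{
      name: "semantic-config",
      prioritizedFields: {
        titleField: { fieldName: "title" },
        prioritizedContentFields: [{ fieldName: "content" }],
      },
    }],
  },
};

const res = await fetch(
  `https://${service}.search.windows.net/indexes/${index}?api-version=2023-11-01`,
  { method: "PUT",
    headers: { "Content-Type": "application/json", "api-key": apiKey },
    body: JSON.stringify(indexDefinition) }
);
console.log(res.status); // 201 on create, 204 on update
```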
## Security Considerations

- Use least-privilege access for all API credentials
- Microsoft Entra ID credentials should have limited user management permissions
- Azure credentials need the Search Service Contributor and Search Index Data Contributor roles
- The OpenAI API key should have usage limits configured
- Jira credentials should be restricted to specific projects
- Consider implementing rate limiting on the chat interface
- Review password reset policies and ensure force password change is enabled
- Validate all user inputs before processing administrative requests

## Extending the Template

You could enhance this template by:

- Adding support for additional file formats (PDF, DOCX) in the knowledge upload
- Implementing role-based access control for different administrative functions
- Adding integration with other ITSM tools beyond Jira
- Creating automated escalation rules based on query complexity
- Adding analytics and reporting for support interactions
- Implementing multi-language support for international organizations
- Adding approval workflows for sensitive administrative actions
- Integrating with Microsoft Teams or Slack for notifications
by Jimleuk
This n8n workflow builds an appointment-scheduling AI agent which can:

- Take enquiries from prospective customers and help them book an appointment by checking appointment availability
- Send follow-up messages to re-engage leads where no appointment has been booked
- Reschedule or even cancel a booking for the user after an appointment is made, without human intervention

For small outfits, this workflow could provide the extra "man-power" needed to increase business sales.

The sample Airtable can be found here: https://airtable.com/appO2nHiT9XPuGrjN/shroSFT2yjf87XAox

2024-10-22: Updated to Cal.com API v2.

## How it works

- The customer sends an enquiry via SMS to trigger our workflow. For this trigger, we'll use a Twilio webhook.
- The prospective or existing customer's number is logged in an Airtable base, which we'll use to track all our enquiries.
- Next, the message is sent to our AI agent, which can reply to the user and decide whether an appointment booking can be made. The reply is sent via SMS using Twilio (see the sketch at the end of this section).
- A scheduled trigger, which runs every day, checks our chat logs for prospective customers who have yet to book an appointment but still show interest. This list is sent to our AI agent, which formulates a personalised follow-up message for each lead asking if they want to continue with the booking.
- Each follow-up interaction is logged so as not to send too many messages to the customer.

## Requirements

- A Twilio account to receive customer messages.
- An Airtable account and base to use as our datastore for enquiries.
- A Cal.com account to use as our scheduling service.
- An OpenAI account for our AI model.

## Customising this workflow

- Not using Airtable? Swap it out for your CRM of choice, such as HubSpot, or your own service.
- Not using Cal.com? Swap it out for an API-enabled service such as Acuity Scheduling, or your own service.
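For reference, the SMS reply step boils down to a single Twilio REST API call. The sketch below shows the equivalent request in plain Node.js (the workflow itself does this through n8n's Twilio node); the phone numbers and message text are placeholders.

```javascript
// Minimal sketch of sending the agent's reply via Twilio's Messages API.
const sid = process.env.TWILIO_ACCOUNT_SID;
const token = process.env.TWILIO_AUTH_TOKEN;

const form = new URLSearchParams({
  From: "+15550001111", // your Twilio number
  To: "+15552223333",   // the customer's number, taken from the inbound webhook
  Body: "Hi! We have openings Tuesday at 10:00 or 14:00. Would either work for you?",
});

const res = await fetch(
  `https://api.twilio.com/2010-04-01/Accounts/${sid}/Messages.json`,
  {
    method: "POST",
    headers: {
      Authorization: "Basic " + Buffer.from(`${sid}:${token}`).toString("base64"),
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: form,
  }
);
console.log((await res.json()).status); // e.g. "queued"
```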
by Simon
# Address Validation Workflow

## About

This workflow automates the process of validating and correcting client shipping addresses in Billbee, ensuring accurate delivery information. It's ideal for e-commerce businesses looking to save time and reduce errors in their order fulfillment process. The workflow uses Billbee, an order management platform for small to medium-sized online retailers, and the Endereco API for address validation.

## Who Is This For?

- **E-Commerce Businesses:** Streamline order fulfillment by automatically correcting common shipping address errors.
- **Warehouse Teams:** Reduce manual work and ensure packages are shipped to the correct address.
- **Small to Medium-Sized Retailers:** Businesses using Billbee to manage orders and requiring efficient, automated solutions for address validation.

## How it Works

1. **Trigger:** The workflow starts via a Billbee webhook when an order is imported.
2. **Fetch Data:** Retrieve the client's shipping address using the Order ID.
3. **Validate Address:** Send the address to the Endereco API for validation and correction (e.g., house number errors).
4. **Conditional Actions:**
   - Valid address: update the address in Billbee.
   - Invalid address: tag the order with "Validation Error."
5. **Track Status:** Add tags in Billbee for processed orders.

A sketch of the branching logic appears at the end of this section.

## Setup Steps

1. **API Keys:** Obtain Billbee Developer/User API keys and an Endereco API key.
2. **Billbee Rule:** Create an automation rule:
   - Trigger: Order imported.
   - Action: Call External URL with the OrderId to trigger the n8n workflow.
3. **Optional:** Use a secondary trigger (e.g., order state changes to "gepackt") for manual corrections.

## Customization Options

- **Filter Delivery Addresses:** Customize filters to exclude specific delivery types, such as pickup shops ("Postfiliale," "Paketshop," or "Packstation"). Filters can be adjusted within Billbee or in the workflow.
- **Error Handling:** Configure additional actions for orders that fail validation, such as notifying your team or flagging orders for manual review.
- **Order Tags:** Define custom tags in Billbee to better track order statuses (e.g., "Address Corrected," "Validation Error").
- **Trigger Types:** Use additional triggers, such as changes to order states (e.g., "gepackt" or "In Fulfillment"), for manual corrections or validations.
- **Address Fields:** Modify the workflow to focus on specific address components, such as postal codes, city names, or country codes.
- **Validation Rules:** Adjust Endereco API settings or add custom logic to refine validation criteria based on your business needs.

## API Documentation

- **Endereco:** Endereco API Docs
- **Billbee:** Billbee API Docs
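To make the conditional step concrete, here is a minimal sketch of the valid/invalid branch. The validation-result shape (`status`, `correctedAddress`) is hypothetical shorthand for this sketch, not Endereco's actual response format; consult the Endereco API docs for the real schema.

```javascript
// Sketch of routing an order after validation, under an assumed result shape.
function routeOrder(validation, order) {
  if (validation.status === "correct" || validation.status === "corrected") {
    // Valid (possibly auto-corrected) address: write it back to Billbee
    return {
      action: "updateAddress",
      orderId: order.id,
      address: validation.correctedAddress ?? order.shippingAddress,
      tag: "Address Corrected",
    };
  }
  // Could not be validated: flag the order for manual review
  return { action: "tagOrder", orderId: order.id, tag: "Validation Error" };
}

// Example: an undeliverable address gets flagged
console.log(routeOrder({ status: "not_found" }, { id: 4711 }));
```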
by Angel Menendez
# Enhance Security Operations with the Venafi Slack CertBot!

Venafi Presentation - Watch Video

Our Venafi Slack CertBot is designed to facilitate immediate security operations directly from Slack. This tool allows end users to request Certificate Signing Requests that are automatically approved, or passed to the SecOps team for manual approval, depending on the VirusTotal analysis of the requested domain. Not only does this help centralize requests, it also helps an organization maintain its security certifications by allowing automated processes to log and analyze requests in real time.

## Workflow Highlights

- **Interactive Modals:** Utilizes Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
- **Dynamic Workflow Execution:** Integrates seamlessly with Venafi to execute CSR generation; if any issues are found, AI can generate a custom report that is passed to a Slack team channel for manual approval at the press of a single button.

## Operational Flow

- **Parse Webhook Data:** Captures and parses incoming data from Slack to understand user commands accurately (see the sketch at the end of this section).
- **Execute Actions:** Depending on the user's selection, the workflow triggers other actions within the flow, like automatic VirusTotal scanning.
- **Respond to Slack:** Ensures that every interaction is acknowledged, maintaining a smooth user experience by managing modal popups and sending appropriate responses.

## Setup Instructions

1. Verify that the Slack, Venafi, and VirusTotal API integrations are correctly configured for seamless interaction.
2. Customize the modal interfaces to align with your organization's operational protocols and security policies.
3. Test the workflow to ensure that it responds accurately to Slack commands and that the VirusTotal integration is functioning as expected.

## Need Assistance?

Explore Venafi's Documentation or get help from the n8n Community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive management of CSRs.
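To illustrate the "Parse Webhook Data" step, here is a minimal sketch of unpacking a Slack interactivity payload. Slack's envelope (a URL-encoded form with a single `payload` field, and the `view_submission` / `block_actions` types) is standard; the block and action IDs (`domain_block`, `domain_input`) are hypothetical stand-ins for the template's actual modal fields.

```javascript
// Sketch of parsing Slack interactive payloads (modal submissions and button
// presses). Block/action IDs below are illustrative assumptions.
function parseSlackInteraction(formBody) {
  const payload = JSON.parse(new URLSearchParams(formBody).get("payload"));

  if (payload.type === "view_submission") {
    // Modal submitted: pull the user's inputs out of the view state
    const values = payload.view.state.values;
    return {
      user: payload.user.username,
      domain: values.domain_block?.domain_input?.value, // e.g. "example.com"
      action: "request_csr",
    };
  }
  if (payload.type === "block_actions") {
    // A button press, e.g. the manual-approval button in the SecOps channel
    return { user: payload.user.username, action: payload.actions[0].action_id };
  }
  return { action: "ignore" };
}
```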
by Krupal Patel
## 🔧 Workflow Summary

This system automates LinkedIn lead generation and enrichment in six clear stages:

1. **Lead Collection (via Apollo.io):** Automatically pulls leads based on keywords, roles, or industries using Apollo's API. Captures name, job title, company, and LinkedIn profile URL. You can kick off the workflow via form, webhook, WhatsApp, Telegram, or any other custom trigger that passes search parameters.
2. **LinkedIn Username Extraction:** Extracts usernames from LinkedIn profile URLs using a script step (see the sketch at the end of this section). These usernames are required for further enrichment using RapidAPI.
3. **Email Retrieval (via Apollo.io User ID):** Fetches the verified work email using the Apollo User ID. Email validity is double-checked using www.mails.so, which filters out undeliverable or inactive emails by checking MX records and deliverability.
4. **Profile Summary (via LinkedIn API on RapidAPI):** Enriches lead data by pulling bio/summary details to understand the lead's background and expertise.
5. **Activity Insights (Posts & Reposts):** Collects recent posts or reposts to help craft personalised messages based on what leads are currently engaging with.
6. **Leads Sheet Update:** All data is written into a Google Sheet. New columns are populated dynamically without erasing existing data.

## ✅ Smart Retry Logic

Each workflow is equipped with a fail-safe system:

- Tracks status per row: ✅ done, ❌ failed, ⏳ pending
- Failed rows are automatically retried after a custom delay (e.g., 2 weeks).
- Ensures minimal drop-offs and complete data coverage.

## 📊 Google Sheets Setup

Make a copy of the following:

- Template 1: Apollo Leads Scraper & Enrichment
- Template 2: Final Enriched Leads

The system appends data (like emails, bios, and activity) step by step.

## 🔐 API Credentials Needed

1. **Apollo API:** Sign up and generate an API key at the Apollo Developer Portal. Be sure to enable the "Master API Key" toggle so the same key works for all endpoints.
2. **LinkedIn Data API (via RapidAPI):** Subscribe at RapidAPI - LinkedIn Data. Use your key in the `x-rapidapi-key` header.
3. **Mails.so API:** Get your API key from the mails.so dashboard.

## 🛠️ Troubleshooting – LinkedIn Lead Machine

✅ Common Mistakes & Fixes

1. **API keys not working:** Make sure the API keys for Apollo, RapidAPI, and mails.so are correct. Apollo's "Master API Key" must be enabled. Keys should be saved as Generic Credentials in n8n.
2. **Leads not found:** Check whether the search query (keyword/job title) is too narrow. Apollo might return empty results if the filters are incorrect.
3. **LinkedIn URLs missing or invalid:** Ensure Apollo is returning valid LinkedIn URLs. Improper URLs will cause the username extraction and enrichment steps to fail.
4. **Emails not coming through:** Apollo may not have verified emails for all leads. mails.so might reject invalid or expired email addresses.
5. **Google Sheet not updating:** Make sure the Google Sheet is shared with the right Google account (the one linked to n8n). Check that the column names match and data isn't blocked due to formatting.
6. **Status columns not changing:** Each row must have done, failed, or pending in the status column. If the status doesn't update, the retry logic won't trigger.
7. **RapidAPI not returning data:** Double-check that the username is present and valid. Make sure the RapidAPI plan is active and within limits.
8. **Workflow not running:** Check that the trigger node (form, webhook, etc.) is connected and active. Make sure you're passing the required inputs (keyword, role, etc.).

Need help? Contact www.KrupalPatel.com for support and custom workflow development.
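As referenced in stage 2, username extraction is a small script step. A minimal sketch that handles trailing slashes and query strings on standard `/in/` profile URLs:

```javascript
// Extract the public username from a LinkedIn profile URL.
function extractLinkedInUsername(profileUrl) {
  try {
    const path = new URL(profileUrl).pathname; // e.g. "/in/jane-doe-123/"
    const match = path.match(/\/in\/([^/]+)/);
    return match ? decodeURIComponent(match[1]) : null;
  } catch {
    return null; // invalid URL: let the retry logic mark the row as failed
  }
}

console.log(extractLinkedInUsername("https://www.linkedin.com/in/jane-doe-123/?utm=x"));
// → "jane-doe-123"
```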
by n8n Team
This workflow sends a new Clockify invoice to a Notion database of your choosing when a new invoice is created in Clockify.

## Prerequisites

- Notion account and Notion credentials.
- Clockify account.

## How it works

- The "On new invoice in Clockify" webhook node triggers when a new invoice is created in Clockify. Some setup is involved (see below).
- The "Create database page" Notion node creates a database page with the information specified from the Clockify trigger (a sketch of the field mapping follows at the end of this section). You can add additional fields if required by following the setup.

## Setup

This workflow requires that you set up a webhook in Clockify. Follow the steps below:

1. Create a Clockify webhook by going to the webhooks section in Clockify.
2. Create the webhook specifying the "Invoice created" event and paste in the URL provided by the "On new invoice in Clockify" webhook step.

You will also have to set up a Notion database:

1. In Notion, create a new database.
2. Add the following columns to the database:
   - Invoice number (renamed from "Name")
   - Issue date (with type "Date")
   - Due date (with type "Date")
   - Amount (with type "Number")
3. Add any other fields you require to the database.
4. Share the database with n8n.

By default, the workflow will fill all of the fields listed above, but not any additional fields you add.
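For reference, the mapping performed by the "Create database page" node corresponds to Notion property values shaped like the sketch below. The Notion property shapes (`title`, `date`, `number`) are standard API structures; the Clockify invoice field names on the right (`number`, `issuedDate`, `dueDate`, `amount`) are assumptions, so check the actual webhook payload.

```javascript
// Sketch of mapping a Clockify invoice onto the Notion database columns above.
function toNotionProperties(invoice) {
  return {
    "Invoice number": {
      title: [{ text: { content: invoice.number } }],
    },
    "Issue date": { date: { start: invoice.issuedDate } }, // ISO 8601 string
    "Due date": { date: { start: invoice.dueDate } },
    "Amount": { number: invoice.amount },
  };
}

console.log(toNotionProperties({
  number: "INV-0042",
  issuedDate: "2024-01-15",
  dueDate: "2024-02-14",
  amount: 1250.0,
}));
```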
by Matt F.
## Overview

This automation template streamlines your payment processing by automatically triggering on a successful Stripe payment. The workflow retrieves the complete payment session and filters the information down to the customer name, customer email, and purchased product details. This template is perfect for quickly integrating Stripe transactions into your inventory management, CRM, or notification systems.

## Step-by-Step Setup Instructions

1. **Stripe Account Configuration:**
   - Ensure you have an active Stripe account.
   - Connect your Stripe credentials.
2. **Retrieve Product and Customer Data:**
   - Utilize Stripe's API within the automation to fetch the purchased product details (see the sketch at the end of this section).
   - Retrieve customer information such as email and full name.
3. **Integration and Response:**
   - Map the retrieved data to your desired format.
   - Trigger subsequent nodes or actions, such as sending a confirmation email, updating a CRM system, or logging the transaction.

## Pre-Conditions and Requirements

- **Stripe Account:** A valid Stripe account with access to API keys and webhook configurations.
- **API Keys:** Ensure you have your Stripe secret and publishable keys ready.

## Customization Guidance

- **Data Mapping:** Customize the filtering node to match your specific data schema or to include additional data fields if needed.
- **Additional Actions:** Integrate further nodes to handle post-payment actions like sending SMS notifications, updating order statuses, or generating invoices.

Enjoy seamless integration and enhanced order management with this automation template!
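As a reference for step 2, the sketch below shows the equivalent raw API call: retrieve the Checkout Session with `line_items` expanded, then filter it down to the fields this template surfaces. The session ID is a placeholder taken from the `checkout.session.completed` webhook event; the template performs the same calls through n8n's Stripe/HTTP nodes.

```javascript
// Sketch of fetching a completed Checkout Session and keeping only the
// customer name, email, and purchased product details.
const key = process.env.STRIPE_SECRET_KEY;
const sessionId = "cs_test_123"; // placeholder from the webhook event

const res = await fetch(
  `https://api.stripe.com/v1/checkout/sessions/${sessionId}?expand[]=line_items`,
  { headers: { Authorization: `Bearer ${key}` } }
);
const session = await res.json();

// Filter down to what downstream nodes need
const summary = {
  customerName: session.customer_details?.name,
  customerEmail: session.customer_details?.email,
  products: session.line_items.data.map((li) => ({
    description: li.description,
    quantity: li.quantity,
    amountTotal: li.amount_total, // smallest currency unit, e.g. cents
  })),
};
console.log(summary);
```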
by PUQcloud
# Setting up the n8n workflow

## Overview

The Docker MinIO WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

## Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

## Installation Steps

### Install the Required Workflow on n8n

You have two options:

- **Option 1: Use the Latest Version from the n8n Marketplace.** The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- **Option 2: Manual Installation.** Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

### n8n Workflow API Backend Setup for WHMCS/WISECP

Configure the API webhook and SSH access:

1. Create a Basic Auth credential for the Webhook API block in n8n.
2. Create an SSH credential for accessing a server with Docker installed.

### Modify Template Parameters

In the Parameters block of the template, update the following settings:

- `server_domain` – Must match the domain of the WHMCS/WISECP Docker server.
- `clients_dir` – Directory where user data related to Docker and disks will be stored.
- `mount_dir` – Default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:

- `screen_left`
- `screen_right`

### Deploy-docker-compose

In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which will be generated in the following scenarios:

- When the service is created
- When the service is unlocked
- When the service is updated

### nginx

In the nginx element, you can modify the configuration parameters of the web interface proxy server:

- The `main` section allows you to add custom parameters to the `server` block in the proxy server configuration file.
- The `main_location` section contains settings that will be added to the `location /` block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

### Bash Scripts

Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed. A sketch of how a step might consume a script's output follows below.
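As a loose illustration of the JSON-or-string contract mentioned above, a step after the SSH element could normalize script output like this. The `status` and `message` fields are assumptions made for the sketch, not the module's documented schema.

```javascript
// Sketch: interpret a Bash script's stdout, which may be JSON or plain text.
function parseScriptOutput(stdout) {
  try {
    const result = JSON.parse(stdout); // structured response
    if (result.status === "error") {
      throw new Error(result.message ?? "script reported an error");
    }
    return result;
  } catch (e) {
    if (e instanceof SyntaxError) {
      return { status: "ok", message: stdout.trim() }; // plain-string response
    }
    throw e;
  }
}

console.log(parseScriptOutput('{"status":"ok","container":"minio_client42"}'));
console.log(parseScriptOutput("Container restarted"));
```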
by Mario
## Purpose

This workflow synchronizes three entities from Notion to Clockify, allowing tracked time to be linked to client-related projects or tasks.

Demo & Explanation

## How it works

- On every run, active Clients, Projects, and Tasks are retrieved from both Notion and Clockify and compared by the Clockify ID, which is stored in Notion for reference.
- Any differences found are then applied to Clockify.
- If an item has been archived or closed in Notion, it is also marked as archived in Clockify.
- All entities are processed sequentially, since they are related hierarchically to each other.
- By default, this workflow runs once per day or when called via webhook (e.g., embedded into a Notion button).

A sketch of the comparison step follows at the end of this section.

## Prerequisites

- A set of Notion databases with a specific structure is required to use this workflow.
- You can either start with this Notion Template or adapt your system based on the requirements described in the big yellow sticky note of this workflow template.

## Setup

1. Clone the workflow and select the corresponding credentials.
2. Follow the instructions given in the yellow sticky notes.
3. Activate the workflow.

Related workflows:

- Backup Clockify to Github based on monthly reports
- Prevent simultaneous workflow executions with Redis
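As mentioned above, the core of each run is a comparison keyed on the Clockify ID stored in Notion. A simplified sketch of that diffing step for one entity type; the field names (`clockifyId`, `name`, `archived`) are illustrative, not the workflow's actual property names:

```javascript
// Sketch: decide what to create, update, or archive in Clockify.
function diffEntities(notionItems, clockifyItems) {
  const byId = new Map(clockifyItems.map((c) => [c.id, c]));
  const ops = { create: [], update: [], archive: [] };

  for (const item of notionItems) {
    const existing = item.clockifyId && byId.get(item.clockifyId);
    if (!existing) {
      ops.create.push(item);        // new in Notion → create in Clockify
    } else if (item.archived && !existing.archived) {
      ops.archive.push(existing);   // closed in Notion → archive in Clockify
    } else if (item.name !== existing.name) {
      ops.update.push({ id: existing.id, name: item.name });
    }
  }
  return ops;
}
```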
by Fabrizio Terzi
# AI-Driven Handbook Generator with Multi-Agent Orchestration (Pyragogy AI Village)

This n8n workflow is a modular, multi-agent AI orchestration system designed for the collaborative generation of Markdown-based handbooks. Inspired by peer learning and open publishing workflows, it simulates a content pipeline where specialized AI agents act in defined roles, enabling true AI–human co-creation and iterative refinement.

This project is a core component of Pyragogy, an open framework dedicated to ethical cognitive co-creation, peer AI–human learning, and human-in-the-loop automation for open knowledge systems. It implements the master orchestration architecture for the Pyragogy AI Village, managing a complex sequence of AI agents to process input, perform review, synthesis, and archiving, with a crucial human oversight step for final approval.

## How It Works: A Deep Dive into the Workflow's Architecture

The workflow orchestrates a sophisticated content generation and review process, ideal for creating AI-driven knowledge bases or handbooks with human oversight.

- **Webhook Trigger & Input:** The process begins when the workflow receives a JSON input via a **Webhook** (specifically at `/webhook/pyragogy/process`). This input typically includes details like the handbook's title, initial text, and relevant tags.
- **Database Verification:** It first verifies the connection to a **PostgreSQL database** to ensure data persistence.
- **Meta-Orchestrator:** A powerful **Meta-Orchestrator** (powered by gpt-4o from OpenAI) analyzes the initial request. Its role is to dynamically determine and activate the optimal sequence of specialized AI agents required to fulfill the input, ensuring tasks are dynamically routed and assigned based on each agent's responsibility.
- **Agent Execution & Iteration:** Each activated agent executes its step using OpenAI or custom endpoints. This involves:
  - **Content Generation:** Agents like the Summarizer and the Synthesizer generate new content or refine existing text.
  - **Peer Review Board:** A crucial aspect is the Peer Review Board, comprised of AI agents like the Peer Reviewer, the Sensemaking Agent, and the Prompt Engineer. This board evaluates the output for quality, coherence, and accuracy.
  - **Reprocessing & Redrafting:** If the review agents flag a `major_issue`, they trigger redrafting loops by generating specific feedback for the Synthesizer (see the sketch at the end of this section). This mechanism ensures iterative refinement until the content meets the required standards.
- **Human-in-the-Loop (HITL) Review:** For final approval, particularly of the Archivist agent's output, a **human review process** is initiated. An email is sent to a human reviewer, prompting them to approve, reject, or comment via a "Wait for Webhook" node. This ensures **human oversight** and quality control.
- **Content Persistence & Versioning:** If the content is approved by the human reviewer:
  - It's saved to a PostgreSQL database (specifically to the `handbook_entries` and `agent_contributions` tables).
  - Optionally, the content can be committed to a GitHub repository for version control, provided the necessary environment variables are configured.
- **Notifications:** The final output and the sequence of executed agents can be sent as a notification to **Slack**, if configured.

Observe the dynamic loop: orchestrate → assign → generate → review (AI/human) → store

## Included AI Agents

This workflow leverages a suite of specialized AI agents, each with a distinct role in the content pipeline:

- **Meta-Orchestrator:** Determines the optimal sequence of agents to execute based on the input.
- **Summarizer Agent:** Summarizes text into key points (e.g., 3 key points).
- **Synthesizer Agent:** Synthesizes new text and effectively incorporates reprocessing feedback from review agents.
- **Peer Reviewer Agent:** Reviews generated text, highlighting strengths, weaknesses, and suggestions, and raises `major_issue` flags.
- **Sensemaking Agent:** Analyzes input within the existing context, identifying patterns, gaps, and areas for improvement.
- **Prompt Engineer Agent:** Refines or generates prompts for subsequent agents, optimizing their output.
- **Onboarding/Explainer Agent:** Provides explanations of the process or offers guidance to users.
- **Archivist Agent:** Prepares content for the handbook, manages the human review process, and handles archiving to the database and GitHub.

## Setup Steps & Prerequisites

To get this powerful workflow up and running, follow these steps:

1. **Import the Workflow:** Import `pyragogy_master_workflow.json` (or `generate-collaborative-handbooks-with-gpt4o-multi-agent-orchestration-human-review.json`) into your n8n instance.
2. **Connect Credentials:**
   - **Postgres:** Set up a Postgres Pyragogy DB credential (ID: `pyragogy-postgres`).
   - **OpenAI:** Configure an OpenAI Pyragogy credential (ID: `pyragogy-openai`) for all OpenAI agents. GPT-4o is highly suggested for optimal performance.
   - **Email Send:** Set up a configured email credential (e.g., for sending human review requests).
3. **Define Environment Variables:** Define the essential environment variables (an `.env.template` is included in the repository). These include:
   - API base for OpenAI.
   - Database connection details.
   - (Optional) **GitHub:** For content persistence and versioning, configure `GITHUB_ACCESS_TOKEN`, `GITHUB_REPOSITORY_OWNER`, and `GITHUB_REPOSITORY_NAME`.
   - (Optional) **Slack:** For notifications, configure `SLACK_WEBHOOK_URL`.
4. Send a sample payload to your webhook URL (`/webhook/pyragogy/process`):

```json
{
  "title": "History of Peer Learning",
  "text": "Peer learning is an educational approach where students learn from and with each other...",
  "tags": ["education", "pedagogy"],
  "requireHitl": true
}
```

## Ideal For

This workflow is perfectly suited for:

- Educators and researchers exploring AI-assisted publishing and co-authoring with AI.
- Knowledge teams looking to automate content pipelines for internal or external documentation.
- Anyone building collaborative Markdown-driven tools or AI-powered knowledge bases.

## Documentation & Contributions: An Open Source and Collaborative Project

This workflow is an open-source, community-driven project. Its development is transparent and open to everyone. We warmly invite you to:

- **Review it:** Contribute your analysis, identify potential improvements, or report issues.
- **Remix it:** Adapt it to your specific needs, integrate new features, or modify it for a different use case.
- **Improve it:** Propose and implement changes that enhance its efficiency, robustness, or capabilities.
- **Share it back:** Return your contributions to the community, either through pull requests or by sharing your implementations.

Every contribution is welcome and valued! All relevant information for verification, improvement, and collaboration can be found in the official repository:

🔗 GitHub – pyragogy-handbook-n8n-workflow
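As referenced in the Reprocessing & Redrafting step, the loop between the Peer Review Board and the Synthesizer can be pictured as a small routing function. This is a conceptual sketch: the `major_issue` flag comes from the workflow's own description, while the review shape and the redraft cap are illustrative assumptions, not the workflow's documented values.

```javascript
// Sketch of the review gate: redraft on blocking feedback, otherwise archive.
function routeAfterReview(draft, reviews, attempt, maxRedrafts = 3) {
  const blocking = reviews.filter((r) => r.major_issue);

  if (blocking.length > 0 && attempt < maxRedrafts) {
    // Feed the reviewers' comments back to the Synthesizer for another pass
    return {
      next: "synthesizer",
      feedback: blocking.map((r) => `${r.agent}: ${r.feedback}`).join("\n"),
      attempt: attempt + 1,
    };
  }
  // Clean draft (or retry budget exhausted): hand off to the Archivist / HITL step
  return { next: "archivist", draft };
}
```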
by Amjid Ali
## Overview

This workflow template automates lead management and customer inquiry processing by integrating ERPNext, AI agents, and email notifications. It streamlines the process of capturing leads, analyzing inquiries, and generating actionable responses. The workflow uses ERPNext to capture inquiries, analyzes them with AI, and notifies the appropriate team or individual, all while maintaining a professional approach.

## What This Template Does

- **ERPNext Webhook Integration:** Captures leads and inquiries through ERPNext webhooks. Triggers the workflow when a new lead is created.
- **AI-Powered Inquiry Analysis:** Uses AI to extract key details from lead notes (e.g., customer name, organization, inquiry summary). Classifies inquiries as valid or invalid based on relevance to products, services, or solutions.
- **Contact Assignment:** Matches inquiries to the appropriate contact(s) using a Google Sheets database or ERPNext contact information. Handles multiple contacts if required.
- **Email Notifications:** Generates professional email notifications for valid inquiries. Sends emails to the appropriate contact(s) with inquiry details and action steps.
- **Invalid Lead Handling:** Identifies invalid inquiries (e.g., unrelated to products or services) and flags them for follow-up or dismissal.
- **Custom Email Formatting:** Converts plain text into professionally formatted HTML emails. Ensures that communication is clear, concise, and visually appealing.

## How It Works

### Step 1: Capture Lead Data

- **Webhook in ERPNext:** Create a webhook in ERPNext for the "Lead" DocType. Set the trigger to `on_insert` to capture new leads in real time.
- **Lead Details:** The workflow fetches lead details, including notes, contact information, and the source of the lead.

### Step 2: Validate and Analyze Inquiry

- **AI Agent for Analysis:** An AI agent analyzes the lead notes to extract key details and classify the inquiry as valid or invalid (see the sketch after the Prerequisites list below). The analysis includes checking the relevance of the inquiry to the products, services, or solutions offered by the company.
- **Invalid Leads:** If the inquiry is invalid, the workflow flags it and stops further processing.

### Step 3: Assign Contact(s)

- **Google Sheets Integration:** Uses a Google Sheets database to map products, services, or solutions to responsible contacts, ensuring that inquiries are directed to the right person or team.
- **Multiple Contacts:** Handles cases where multiple contacts are responsible for a particular product or service.

### Step 4: Generate and Send Email Notifications

- **AI-Generated Emails:** The workflow generates a professional email summarizing the inquiry. Emails include details like customer name, organization, inquiry summary, and action steps.
- **Custom HTML Formatting:** Emails are converted to HTML for a polished and professional appearance.
- **Send Notifications:** Sends email notifications through Microsoft Outlook or another configured email client. Optionally, notifies via WhatsApp or SMS for urgent inquiries.

### Step 5: Post-Inquiry Actions

- **ERPNext Record Updates:** Updates the lead record in ERPNext with relevant details, including inquiry status and contact information.

## Setup Instructions

### Prerequisites

- **ERPNext:** A configured ERPNext instance with lead data and a webhook for the "Lead" DocType.
- **Google Sheets:** A sheet mapping products, services, or solutions to responsible contacts.
- **AI Integration:** Credentials for OpenAI or other supported AI platforms.
- **Email Client:** Credentials for Microsoft Outlook or another email client.
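To make Step 2 concrete, the sketch below shows a structured output the AI analysis agent could be prompted to return, plus the validity gate that stops invalid inquiries. The exact fields are assumptions and depend entirely on how you customize the agent's prompt.

```javascript
// Illustrative classification output (assumed shape) and the validity gate.
const exampleAnalysis = {
  valid: true,
  customerName: "Jane Smith",
  organization: "Acme GmbH",
  product: "ERP Implementation", // used for the Google Sheets contact lookup
  summary: "Requesting a quote for a 50-user ERPNext rollout.",
};

function shouldProcess(analysis) {
  // Invalid leads (e.g., spam or unrelated requests) end the workflow here
  return analysis.valid === true && Boolean(analysis.product);
}

console.log(shouldProcess(exampleAnalysis)); // → true
```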
### Step-by-Step Setup

1. **ERPNext Configuration:** Create a webhook for the "Lead" DocType in ERPNext. Test the webhook with sample data to ensure proper integration.
2. **Workflow Import:** Import the workflow template into n8n. Configure the nodes with your API credentials for ERPNext, Google Sheets, and the AI tools.
3. **Google Sheets Integration:** Prepare a Google Sheet with columns for the product, service, or solution and the responsible contact(s). Link the sheet to the workflow.
4. **AI Agent Configuration:** Customize the AI agent's prompts to align with your business's products and services. Adjust the criteria for valid and invalid inquiries as needed.
5. **Email Setup:** Configure the email client node with your email service credentials. Customize the email template for your organization.
6. **Testing:** Run the workflow with sample leads to validate the entire process. Check email notifications, contact assignments, and record updates in ERPNext.

## Dos and Don'ts

**Dos:**

- **Test Thoroughly:** Test the workflow with various scenarios before deploying it in production.
- **Secure Credentials:** Keep API and email credentials secure to avoid unauthorized access.
- **Customize Prompts:** Tailor the AI prompts to match your business needs and language style.
- **Use Professional Email Templates:** Ensure emails are clear and well formatted.

**Don'ts:**

- **Skip Validation:** Always validate inquiry data to avoid sending irrelevant notifications.
- **Overload the Workflow:** Avoid adding unnecessary nodes that can slow down processing.
- **Ignore Errors:** Monitor logs and address errors promptly for a smooth workflow.

## Resources

- GET n8n Now
- N8N COURSE
- n8n Book
- **YouTube Tutorial:** Watch the full step-by-step tutorial on setting up this workflow: SyncBricks YouTube Channel
- **Courses and Training:** Learn more about ERPNext and AI automation through my comprehensive courses: SyncBricks LMS
- **Support and Contact:**
  - Email: amjid@amjidali.com
  - Website: SyncBricks
  - LinkedIn: Amjid Ali
by Matthieu
# LinkedIn Profile Tracker Automation

## Who is this for?

This template is ideal for sales teams, recruiters, business development professionals, and relationship managers who need to monitor changes in their network's LinkedIn profiles. It is perfect for agencies tracking client personnel changes, HR teams monitoring talent movements, sales professionals staying updated on prospect job changes, and content teams tracking influencer activity.

## What problem does this workflow solve?

Manually checking LinkedIn profiles for updates like job changes, status modifications, profile edits, or latest posts is extremely time-consuming, and changes are easy to miss. This automation eliminates the need for constant manual monitoring while ensuring you never miss important changes that could signal new business opportunities, relationship updates, or content engagement opportunities.

## What this workflow does

This workflow automatically monitors a list of LinkedIn profiles on a weekly schedule and detects changes in:

- **Personal information** (name, headline, summary)
- **Job status** (hiring/open-to-work flags)
- **Latest work experience** (new positions, company changes)
- **Recent posts** (latest content activity)

When changes are detected, it immediately sends Slack notifications with before/after comparisons and updates your tracking database to maintain a historical record of each profile's evolution. A sketch of the change-detection step follows at the end of this section.

## Setup

1. Create a Ghost Genius API account and get your API key for LinkedIn profile scraping.
2. Configure the HTTP Request nodes with Header Auth credentials using your Ghost Genius API key.
3. Set up your Google Sheets database with the columns: Firstname, Lastname, LinkedIn URL, ID, Tagline, Summary, Latest experience, Open to work?, Hiring?, Latest post.
4. Configure the Slack webhook integration for real-time notifications.
5. Set up credentials for Google Sheets and Slack following the n8n documentation.
6. Add LinkedIn profile URLs to your Google Sheet to start monitoring.

## How to customize this workflow

- **Modify the schedule trigger** to check profiles daily, bi-weekly, or monthly based on your monitoring needs.
- **Customize the Slack notification messages** to include additional context, mentions, or custom formatting.
- **Add email notifications** alongside the Slack alerts for critical changes like job transitions.
- **Set up filtered notifications** to only alert on specific types of changes (e.g., job changes only, or posts from key influencers).
- **Add post content analysis** to detect mentions of your company or competitors.
- **Integrate with CRM systems** to automatically update lead records when profile changes occur.
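Referencing the change-detection step above, here is a simplified sketch of comparing a stored sheet row with a freshly scraped profile and building a before/after Slack message. The field names mirror the sheet columns from the setup; the scraped-profile shape and the message format are illustrative assumptions.

```javascript
// Sketch: diff the tracked columns and format a before/after notification.
const TRACKED = ["Tagline", "Summary", "Latest experience",
                 "Open to work?", "Hiring?", "Latest post"];

function detectChanges(sheetRow, scraped) {
  const changes = [];
  for (const field of TRACKED) {
    if ((sheetRow[field] ?? "") !== (scraped[field] ?? "")) {
      changes.push({ field, before: sheetRow[field], after: scraped[field] });
    }
  }
  return changes;
}

const changes = detectChanges(
  { Tagline: "Account Executive at Acme", "Open to work?": "No" },
  { Tagline: "Head of Sales at Globex", "Open to work?": "No" }
);
if (changes.length) {
  const text = changes
    .map((c) => `*${c.field}*: ${c.before} → ${c.after}`)
    .join("\n");
  console.log(`🔔 Profile update detected:\n${text}`); // post this to Slack
}
```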