by Dariusz Koryto
# FTP to Google Drive Transfer Template

## What This Template Does
This workflow automatically transfers files from an FTP server to Google Drive. It's perfect for:
- Backing up files from remote servers
- Migrating data from FTP to cloud storage
- Automating file synchronization tasks
- Creating scheduled backups of server content

## How It Works
The workflow follows these steps:
1. **Manual Trigger** - You start the process by clicking "Execute"
2. **Lists FTP Directory** - Scans the specified FTP folder for all items
3. **Filters Files Only** - Separates actual files from directories (folders)
4. **Downloads Files** - Retrieves each file as binary data from the FTP server
5. **Uploads to Google Drive** - Stores all downloaded files in your specified Google Drive folder

## Requirements
Before using this template, you'll need:
- **FTP Server Access**: Server address, username, and password
- **Google Drive Account**: With OAuth2 authentication set up in n8n
- **n8n Instance**: Self-hosted or cloud version

## Setup Instructions

### Step 1: Configure FTP Credentials
1. In n8n, go to Settings → Credentials
2. Create a new FTP credential
3. Enter your FTP server details:
   - Host: Your FTP server address
   - Port: Usually 21 for FTP
   - Username: Your FTP username
   - Password: Your FTP password
4. Test the connection and save

### Step 2: Set Up Google Drive Authentication
1. Create a new Google Drive OAuth2 credential
2. Follow n8n's Google Drive setup guide:
   - Create a Google Cloud project
   - Enable the Google Drive API
   - Create OAuth2 credentials
   - Add your n8n callback URL
3. Authorize the connection in n8n

### Step 3: Configure the Workflow
1. **Update FTP Path**: Open the "List FTP Directory" node and change the path parameter from `/_instalki` to your desired FTP folder
2. **Set Google Drive Folder**: Open the "Upload to Google Drive" node and replace the `folderId` with your target Google Drive folder ID. To find the folder ID, open the folder in Google Drive and copy the ID from the URL.
3. **Assign Credentials**: Ensure both FTP nodes use your FTP credential, and assign your Google Drive credential to the upload node

## How to Use
1. **Test First**: Run the workflow manually with a few test files
2. **Monitor Execution**: Check the execution log for any errors
3. **Verify Upload**: Confirm files appear in your Google Drive folder
4. **Schedule (Optional)**: Add a schedule trigger if you want automatic runs

## Customization Options

### Filter Specific File Types
Add a condition after "Filter Files Only" to process only certain file extensions:

```
{{ $json.name.endsWith('.pdf') || $json.name.endsWith('.jpg') }}
```

### Add Error Handling
Insert error-handling nodes to manage failed downloads or uploads gracefully.

### Organize by Date
Modify the Google Drive upload to create date-based folders automatically (see the sketch below).

### File Size Limits
Add checks for file size before attempting upload (Google Drive has limits).
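As a minimal sketch of the date-based organization idea, assuming the FTP listing exposes a `modifyTime` field on each item (n8n's FTP node typically does, but verify against your server's output), a Code node could derive a `YYYY-MM` folder name per file:

```javascript
// Hypothetical Code node (Run Once for Each Item) sketch.
// Derives a YYYY-MM folder name from the file's modification time,
// falling back to today's date when modifyTime is missing.
const modified = $json.modifyTime ? new Date($json.modifyTime) : new Date();
const month = String(modified.getMonth() + 1).padStart(2, '0');
return { json: { ...$json, targetFolder: `${modified.getFullYear()}-${month}` } };
```

The `targetFolder` value could then feed a "create folder if missing" Google Drive step ahead of the upload.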
## Troubleshooting
Common issues:
- **FTP Connection Failed**: Check server address, port, and credentials
- **Google Drive Upload Error**: Verify OAuth2 setup and folder permissions
- **Files Not Found**: Ensure the FTP path exists and contains files
- **Large Files**: Consider your Google Drive storage quota (free accounts share 15 GB of total storage)

Tips:
- Test with small files first
- Check n8n execution logs for detailed error messages
- Ensure your Google Drive has sufficient storage space
- Verify your FTP server allows multiple concurrent connections

## Security Notes
- Never hardcode credentials in the workflow
- Use n8n's credential system for all authentication
- Consider using SFTP instead of FTP for better security
- Regularly rotate your FTP passwords
- Review Google Drive sharing permissions

## Next Steps
Once you have this basic transfer working, you might want to:
- Add email notifications for successful/failed transfers
- Implement file deduplication checks
- Create logs of transferred files
- Set up automatic cleanup of old files
- Add file compression before upload
by Ahmed Saadawi
# 📝 Sync MySQL Rows to Google Sheet

## Description
This n8n template automates the process of syncing new records from a MySQL database table into a Google Sheet, ideal for reporting, backup, or lightweight dashboards.

It is designed for teams or individuals who need to periodically export new data rows from a custom database (e.g., CRM, registrations, surveys) into a structured Google Sheet for further analysis, sharing, or archiving, without duplicates.

## 🛠️ What This Workflow Does
- **Runs every 15 minutes** via a schedule trigger
- **Selects unsynced rows** (`sync = 0`) from a MySQL table (`fifa25_customers`); see the query sketch below
- **Checks if records exist** to prevent unnecessary writes
- **Appends records to a Google Sheet**, mapping fields like name, email, phone, gender, and more
- **Updates the MySQL table** to mark those rows as synced (`sync = 1`) to avoid reprocessing
- Fully annotated using sticky notes for easier understanding and onboarding

## 📋 Setup Instructions
1. Create or select a Google Sheet and make sure the columns match the following: `id, name, phone, birthdate, email, region, gender, datatime`
2. Ensure your MySQL table (`fifa25_customers`) has a `sync` column (default = 0 for new rows).
3. Connect your MySQL and Google Sheets credentials inside n8n.
4. (Optional) Add custom filtering or column transformations as needed.

## 👤 Who Is It For?
- Marketers syncing leads to a spreadsheet
- Ops teams pulling user data from internal tools
- Analysts logging form submissions or customer data
- Anyone needing lightweight scheduled ETL from MySQL to Sheets

## 🔐 Credentials Required
- **MySQL**
- **Google Sheets OAuth2**

## ✅ Best Practices Followed
- Uses an IF node to prevent unnecessary processing
- Updates the source database to avoid duplicates
- Includes sticky notes for clarity
- All columns are explicitly mapped
- Works out-of-the-box on any n8n instance with proper creds
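As a minimal sketch of the sync-flag pattern, here are the two statements the MySQL nodes revolve around, written as strings the way you might paste them into the nodes' query fields. The table and column names follow the template's example schema, and the `WHERE id IN (...)` list is a placeholder you would build from the rows you just appended:

```javascript
// Hypothetical sketch of the sync-flag pattern, not the template's exact node config.
// 1) The select that feeds the Google Sheets append node:
const selectUnsynced =
  "SELECT id, name, phone, birthdate, email, region, gender, datatime " +
  "FROM fifa25_customers WHERE sync = 0";
// 2) After a successful append, mark the same rows as processed so they
//    are never exported twice:
const markSynced =
  "UPDATE fifa25_customers SET sync = 1 WHERE id IN (/* ids of appended rows */)";
```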
by Rizky Febriyan
## How It Works
This workflow automates the analysis of security alerts from Sophos Central, turning raw events into actionable intelligence. It uses the official Sophos SIEM integration tool to fetch data, enriches it with VirusTotal, and leverages Google Gemini to provide a real-time threat summary and mitigation plan via Telegram.

**Prerequisite (important):** This workflow is triggered by a webhook that receives data from an external Python script. You must first set up the Sophos-Central-SIEM-Integration script from the official Sophos GitHub. This script will fetch data and forward it to your n8n webhook URL.

Tool source code: Sophos/Sophos-Central-SIEM-Integration

## The n8n Workflow Steps
1. **Webhook**: Receives enriched event and alert data from the external Python script.
2. **IF (Filter)**: Immediately filters the incoming data to ensure only events with a high or critical severity are processed, reducing noise from low-priority alerts.
3. **Code (Prepare Indicator)**: Inspects the Sophos event data to extract the primary threat indicator, prioritizing indicators in the following order: file hash (SHA256), URL/domain, then source IP (sketched at the end of this description).
4. **HTTP Request (VirusTotal)**: The extracted indicator is sent to the VirusTotal API to get a detailed reputation report, including how many security vendors flagged it as malicious.
5. **Code (Prompt for Gemini)**: The raw JSON output from VirusTotal is processed into a clean, human-readable summary and a detailed list of flagging vendors.
6. **AI Agent (Google Gemini)**: All collected data (the original Sophos log, the full alert details, and the formatted VirusTotal reputation) is compiled into a detailed prompt for Gemini. The AI acts as a virtual SOC analyst to:
   - Create a concise incident summary
   - Determine the risk level
   - Provide a list of concrete, actionable mitigation steps
7. **Telegram**: The complete analysis and mitigation plan from Gemini is formatted into a clean, easy-to-read message and sent to your specified Telegram chat.

## Setup Instructions
1. Configure the external Python script to forward events to this workflow's Production URL.
2. In n8n, create credentials for Google Gemini, VirusTotal, and Telegram.
3. Assign the newly created credentials to the corresponding nodes in the workflow.
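A minimal sketch of the Prepare Indicator logic, assuming the incoming payload exposes fields named `sha256`, `url`, and `source_ip` (the actual field names depend on what your SIEM script forwards, so adjust accordingly):

```javascript
// Hypothetical "Prepare Indicator" Code node sketch.
// Picks the highest-priority indicator available on the event:
// file hash first, then URL/domain, then source IP.
const event = $json.body ?? $json;
let indicator = null;
let indicatorType = null;
if (event.sha256) {
  indicator = event.sha256;
  indicatorType = 'file';
} else if (event.url) {
  indicator = event.url;
  indicatorType = 'url';
} else if (event.source_ip) {
  indicator = event.source_ip;
  indicatorType = 'ip';
}
return { json: { ...event, indicator, indicatorType } };
```

The `indicatorType` can then drive which VirusTotal endpoint the HTTP Request node calls (files, URLs, or IP addresses).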
by InfraNodus
## Using knowledge graphs instead of RAG vector stores
This workflow creates an AI chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided through InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:
- Easy and quick to set up (no complex data import workflows needed)
- A knowledge graph has a holistic view of your knowledge base
- Better retrieval of relations between document chunks = higher-quality responses

## How it works
This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. Here's a step-by-step description:
1. The user submits a question using the AI chatbot (the n8n chat interface in this case, which can be accessed via a URL or embedded into any website).
2. The AI Agent node checks the list of tools it has access to. Each tool has a description of its knowledge, auto-generated by InfraNodus.
3. The AI agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
4. The query is then sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert (a sketch of the request body appears at the end of this description).
5. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
6. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
7. The final answer is sent back to the user's chat (or a webhook endpoint).

## How to use
You need an InfraNodus GraphRAG API account and key to use this workflow:
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body `name` field. Keep the other settings intact, or learn more about them at the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

## Requirements
- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key

## Customizing this workflow
You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n

Also check out the video tutorial with a demo.
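A rough sketch of what an expert's HTTP Request node might post. Only the `name` field is confirmed by the setup steps above; the `prompt` field and overall shape are assumptions, so treat the InfraNodus access points page as the authoritative schema:

```json
{
  "name": "your_graph_name",
  "prompt": "{{ $json.chatInput }}"
}
```

Each HTTP node also carries the Bearer authorization key created in step 2 of "How to use".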
by damo
## Overview
This workflow leverages the KIE.AI Veo3 model to generate AI videos from simple text descriptions. Users interact via a form interface, inputting a prompt (e.g., a scene description), and the system automatically submits the request to the KIE.AI API, monitors the generation status in real time, and retrieves the final video output. It's ideal for content creators, marketers, or developers exploring text-to-video AI creation, supporting intelligent video generation with minimal setup.

## Prerequisites
- A KIE.AI account and API key: sign up at KIE.AI to obtain your free or paid API key.
- An active n8n instance (cloud or self-hosted) with HTTP Request and form submission capabilities.
- Basic knowledge of AI prompts for video generation to achieve optimal results.

## Setup Instructions
1. **Obtain an API key**: Register at KIE.AI and generate your API key. Store it securely and do not share it publicly.
2. **Configure the form**: In the "On Form Submission" node, ensure fields like "prompt" (for the video description) and "api_key" are set up. Example prompt: "A serene mountain landscape at sunset with birds flying."
3. **Test the workflow**: Click "Execute Workflow" in n8n, access the generated form URL, and submit your prompt and API key. The workflow will poll the API every 10 seconds until the video is ready, then display the results (see the status-check sketch below).
4. **Handle outputs**: The final node formats and displays the video file URL for download or embedding.

## Customization Tips
- **Enhance prompts**: Include specifics like duration, style (e.g., realistic, animated), actions, and visual elements to improve AI video quality.
- **Keywords for SEO**: This template focuses on AI video generation, text-to-video models, Veo3 API integration, and automated workflows.
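The 10-second polling loop boils down to an IF node condition along these lines. The response field checked here (`data.successFlag`) is an assumption about KIE.AI's status endpoint, so verify the real field name against their API docs:

```
{{ $json.data.successFlag === 1 }}
```

While the condition is false, a Wait node loops back for another status check; once it is true, the flow proceeds to fetch and display the video URL.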
by Ranjan Dailata
## Who this is for
Extract Amazon Best Seller Electronic Info is an automated workflow that extracts best-seller data from Amazon's Electronics section using Bright Data Web Unlocker, transforms it into structured JSON using Google Gemini's LLM, and forwards the fully structured JSON response to a specified webhook for downstream use.

This workflow is tailored for:
- **eCommerce analysts** who need to monitor Amazon best-seller trends in the Electronics category and track changes in real time or on a schedule.
- **Product intelligence teams** who want structured insights on competitor offerings, including rankings, prices, ratings, and promotions.
- **AI-powered chatbot developers** who are building assistants capable of answering product-related queries with fresh, structured data from Amazon.
- **Growth hackers and marketers** looking to automate competitive research and surface trending product data to inform pricing strategies.
- **Data aggregators and price trackers** who need reliable, smart scraping of Amazon data enriched with AI-driven parsing.

## What problem is this workflow solving?
Keeping up with Amazon's best sellers in Electronics is a time-consuming, error-prone task when done manually. This workflow automates the process, ensuring:
- Automated data extraction from Amazon Best Sellers using Bright Data, with reliable access to real-time, structured data.
- Enhanced raw data via Google Gemini, turning product lists into structured JSON using the Google Gemini LLM.
- Results sent to a webhook, enabling seamless integration into dashboards, databases, or chatbots.

## What this workflow does
The workflow performs the following steps:
1. Extracts the Amazon Best Seller Electronics page info using Bright Data's Web Unlocker API (a request-body sketch follows below).
2. Processes the unstructured content using Google Gemini's Flash Exp model to extract structured product data.
3. Sends the structured information to a webhook endpoint.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Amazon URL and Bright Data zone in the "Amazon URL with the Bright Data Zone" node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
This workflow is built to be flexible, whether you're a market researcher, e-commerce entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:
- **Change the Amazon category**: Update the Amazon URL to the category of your interest, such as Computers & Accessories or Home Audio.
- **Customize the Gemini prompt**: Update the Gemini prompt to get different styles of output: comparison tables, summaries, feature highlights, etc.
- **Send output to other destinations**: Replace the webhook URL to forward output to Google Sheets, Airtable, Slack or Discord, or custom API endpoints.
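As a sketch of step 1, the HTTP Request node's body for a Bright Data Web Unlocker call generally takes this shape (the zone name is a placeholder, and you should confirm the exact fields against Bright Data's request API docs):

```json
{
  "zone": "your_web_unlocker_zone",
  "url": "https://www.amazon.com/gp/bestsellers/electronics",
  "format": "raw"
}
```

This is sent as a POST to Bright Data's request endpoint, with the `Authorization: Bearer <token>` header configured in setup step 3.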
by Anurag
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Description
This workflow automates document processing and structured table extraction using the Nanonets API. You can submit a PDF file via an n8n form trigger or webhook; the workflow then forwards the document to Nanonets, waits for asynchronous parsing to finish, retrieves the results (including header fields and line items/tables), and returns the output as an Excel file. Ideal for automating invoice, receipt, or order data extraction with downstream business use.

## How It Works
1. A document is uploaded (via n8n form or webhook).
2. The PDF is sent to the Nanonets Workflow API for parsing.
3. The workflow waits until processing is complete.
4. Parsed results are fetched.
5. Both top-level fields and any table rows/line items are extracted and restructured (sketched below).
6. Data is exported to Excel format and delivered to the requester.

## Setup Steps
1. **Nanonets account**: Register for a Nanonets account and set up a workflow for your specific document type (e.g., invoice, receipt).
2. **Credentials in n8n**: Add HTTP Basic Auth credentials in n8n for the Nanonets API (never store credentials directly in node parameters).
3. **Webhook/form configuration**:
   - Option 1: Configure and enable the included n8n Form Trigger node for document uploads.
   - Option 2: Use the included Webhook node to accept external POSTs with a PDF file.
4. **Adjust the workflow**: Update any HTTP nodes to use your credential profile, and insert your Nanonets Workflow ID in all relevant nodes.
5. **Test the workflow**: Enable the workflow and try it with a sample document.

## Features
- Accepts documents via n8n Form Trigger or direct webhook POST.
- Securely sends files to Nanonets for document parsing (credentials stored in the n8n credentials manager).
- Automatically waits for async processing, checking Nanonets until results are ready.
- Extracts both header data and all table/line items into a tabular format.
- Exports results as an Excel file download.
- Modular nodes allow easy customization or extension.

## Prerequisites
- **Nanonets account** with a workflow configured for your document type.
- **n8n instance** with HTTP Request, Webhook/Form, Code, and Excel/Spreadsheet nodes enabled.
- **Valid HTTP Basic Auth credentials** saved in n8n for API access.

## Example Use Cases
| Scenario | Benefit |
|-----------------------|--------------------------------------------------|
| Invoice Processing | Automated extraction of line items and totals |
| Receipt Digitization | Parse amounts and charges for expense reports |
| Purchase Orders | Convert scanned POs into structured Excel sheets |

## Notes
- You must set up credentials in the n8n credentials manager; do not store API keys directly in nodes.
- All configuration and endpoints are clearly explained with inline sticky notes in the workflow editor.
- Easily adaptable for other document types or similar APIs: just modify the endpoints and result mapping.
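Step 5's restructuring can be pictured as a Code node along these lines. The response shape assumed here (`result[].prediction[]` entries with `label`, `ocr_text`, and table `cells`) is modeled on Nanonets' OCR responses but is not guaranteed; adapt it to what your Nanonets workflow actually returns:

```javascript
// Hypothetical restructuring sketch: merge header fields into every
// table row so each line item becomes one spreadsheet row.
const result = $json.result?.[0] ?? {};
const header = {};
const rowsById = {};
for (const field of result.prediction ?? []) {
  if (field.type === 'table') {
    for (const cell of field.cells ?? []) {
      rowsById[cell.row] = rowsById[cell.row] ?? {};
      rowsById[cell.row][cell.label] = cell.text;
    }
  } else {
    header[field.label] = field.ocr_text;
  }
}
const rows = Object.values(rowsById).map((r) => ({ json: { ...header, ...r } }));
return rows.length ? rows : [{ json: header }];
```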
by Yang
## What this workflow does
This workflow extracts product details, like name, price, discount, and rating, from website screenshots using Dumpling AI. It starts when a new product page URL is added to a Google Sheet, captures a screenshot of that page, extracts the visible product info from the image, and writes the results back into the sheet.

## What problem is this workflow solving?
Many product pages block traditional scraping tools or use unstructured layouts. This workflow bypasses HTML limitations by using visual AI extraction, making it reliable even when content is embedded in images or hard to parse with code.

## Who is this for?
This is ideal for eCommerce researchers, pricing analysts, marketers, or anyone building a product database from websites without needing to code or maintain complex scrapers.

## Setup
1. Create a Google Sheet with a column named "Site" (or update the trigger).
2. Add your product page URLs in this column, one per row.
3. Connect your Google Sheets and Dumpling AI credentials in n8n.
4. Ensure your Dumpling AI account has API access for screenshots and extraction.

## How to customize the workflow
- **Prompt adjustment**: In the "Extract Text from Screenshot" node, you can modify the prompt to extract other information like brand name, delivery time, or availability.
- **Add more fields**: After the extraction, edit the "Format Extracted Data" node to map additional fields from the response to your Google Sheet columns (a mapping sketch follows below).
- **Change output destination**: You can easily replace the Google Sheets module with Airtable, Notion, or another app if preferred.

> ⚠️ This works best when the product data is clearly visible in the screenshot.
> It won't extract info that's hidden behind popups or loaded via user interaction.
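A minimal sketch of the "Format Extracted Data" mapping, assuming the extraction step returns a JSON string in an `answer` field and that the trigger node is named "Google Sheets Trigger" (both names are assumptions; match them to your actual workflow):

```javascript
// Hypothetical mapping sketch: parse the extraction output and shape
// it into the columns the sheet update expects.
const extracted = JSON.parse($json.answer ?? '{}');
return {
  json: {
    Site: $('Google Sheets Trigger').item.json.Site,
    Name: extracted.name ?? '',
    Price: extracted.price ?? '',
    Discount: extracted.discount ?? '',
    Rating: extracted.rating ?? '',
  },
};
```

Adding a field then means extending both the extraction prompt and this return object with one more key.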
by vinci-king-01
## How it works
This workflow automatically scrapes real estate listings from Zillow and sends them to a Telegram channel.

### Key Steps
1. **Scheduled Trigger** - Runs the workflow at specified intervals to find new listings.
2. **AI-Powered Scraping** - Uses ScrapeGraphAI to extract property information from Zillow.
3. **Data Formatting** - Processes and structures the scraped data for Telegram messages (sketched below).
4. **Telegram Integration** - Sends formatted listing details to your specified Telegram channel.

## Set up steps
Setup time: 5-10 minutes
1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key.
2. **Set up the Telegram connection** - Connect your Telegram account and specify the target channel.
3. **Customize the Zillow URL** - Update the URL to target specific locations or search criteria.
4. **Adjust the schedule** - Modify the trigger timing based on how frequently you want to check for new listings.
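The formatting step might look roughly like this Code node; the listing field names used here (`address`, `price`, `beds`, `baths`, `url`) are assumptions about what your ScrapeGraphAI prompt returns:

```javascript
// Illustrative "Data Formatting" sketch: build one Telegram-ready
// message per scraped listing.
const listing = $json;
const message = [
  `🏠 ${listing.address ?? 'Unknown address'}`,
  `💰 ${listing.price ?? 'Price not listed'}`,
  `🛏 ${listing.beds ?? '?'} bd · 🛁 ${listing.baths ?? '?'} ba`,
  listing.url ?? '',
].join('\n');
return { json: { message } };
```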
by explorium
# Google Sheets Company Enrichment with Explorium MCP Template

Download the following JSON file and import it into a new n8n workflow: google_sheets_enrichment.json

## Overview
This n8n workflow template enables automatic enrichment of company information in your Google Sheets. When you add a new company or update existing company details (name or website), the workflow automatically fetches additional business intelligence data using Explorium MCP and updates your sheet with:
- Business ID
- NAICS industry code
- Number of employees (range)
- Annual revenue (range)

## Key Features
- **Automatic triggering**: Monitors your Google Sheet for new rows or updates to company name/website fields
- **Smart processing**: Only processes new or modified rows, not the entire sheet
- **Data validation**: Ensures both company name and website are present before processing (see the sketch at the end of this guide)
- **Error handling**: Processes each row individually to prevent one failure from affecting others
- **Powered by AI**: Uses Claude Sonnet 4 with Explorium MCP for intelligent data enrichment

## Prerequisites
Before setting up this workflow, ensure you have:
- An n8n instance (self-hosted or cloud)
- A Google account with access to Google Sheets
- An Anthropic API key for Claude
- An Explorium MCP API key

## Installation & Setup

### Step 1: Import the Workflow
1. Create a new workflow.
2. Download the workflow JSON from above.
3. In your n8n instance, go to Workflows → Add Workflow → Import from File.
4. Select the JSON file and click Import.

### Step 2: Create the Google Sheet
Create a new Google Sheet (or make a copy of the template). Your Google Sheet must have the following columns (exact names):
- name - Company name
- website - Company website URL
- business_id - Will be populated by the workflow
- naics - Will be populated by the workflow
- number_of_employees_range - Will be populated by the workflow
- yearly_revenue_range - Will be populated by the workflow

### Step 3: Configure Google Sheets Credentials
You'll need to set up two Google credentials.

Google Sheets Trigger credentials:
1. Click on the Google Sheets Trigger node.
2. Under Credentials, click Create New.
3. On n8n Cloud, click the "Sign in with Google" button and grant permissions to read and monitor your Google Sheets.
4. On a self-hosted n8n instance, follow the OAuth2 authentication process and fill in the Client ID and Client Secret fields.

Google Sheets update credentials:
1. Click on the Update Company Row node.
2. Under Credentials, select the same credentials or create new ones (as above).
3. Ensure permissions include write access to your sheets.

### Step 4: Configure Anthropic Credentials
1. Click on the Anthropic Chat Model node.
2. Under Credentials, click Create New.
3. Enter your Anthropic API key and save the credentials.

### Step 5: Configure Explorium MCP Credentials
1. Click on the MCP Client node.
2. Under Credentials, click Create New (Header Auth).
3. Fill the Name field with api_key.
4. Fill the Value field with your Explorium API key and save the credentials.

### Step 6: Link Your Google Sheet
1. In the Google Sheets Trigger node, select your Google Sheet from the dropdown and select the worksheet (usually "Sheet1").
2. In the Update Company Row node, select the same Google Sheet and worksheet, and ensure the matching column is set to row_number.

### Step 7: Activate the Workflow
Click the Active toggle in the top right to activate the workflow. It will now monitor your sheet every minute for changes.

## How It Works

### Workflow Process Flow
1. **Google Sheets Trigger**: Polls your sheet every minute for new rows or changes to the name/website fields
2. **Filter Valid Rows**: Validates that both company name and website are present
3. **Loop Over Items**: Processes each company individually
4. **AI Agent**: Uses Explorium MCP to find the company's business ID and retrieve firmographic data (revenue, employees, NAICS code)
5. **Format Output**: Structures the data for Google Sheets
6. **Update Company Row**: Writes the enriched data back to the original row

### Trigger Behavior
- **First activation**: May process all existing rows to establish a baseline
- **Ongoing operation**: Only processes new rows or rows where the name/website fields change
- **Polling frequency**: Checks for changes every minute

## Usage

### Adding New Companies
1. Add a new row to your Google Sheet.
2. Fill in the name and website columns.
3. Within 1 minute, the workflow will automatically detect the new row, enrich the company data, and update the remaining columns.

### Updating Existing Companies
Modify the name or website field of an existing row. The workflow will re-process that row with the updated information, and all enrichment data will be refreshed.

### Monitoring Executions
In n8n, go to Executions to see workflow runs. Each execution shows which rows were processed, success/failure status, and detailed logs for troubleshooting.

## Troubleshooting

### Common Issues
All rows are processed instead of just new/updated ones:
- Ensure the workflow is activated, not just run manually; manual test runs will process all rows
- The first activation may process all rows once

No data is returned for a company:
- Verify the company name and website are correct
- Check whether the company exists in Explorium's database; some smaller or newer companies may not have data available

The workflow isn't triggering:
- Confirm the workflow is activated (the Active toggle is ON)
- Check that changes are made to the name or website columns
- Verify the Google Sheets credentials have proper permissions

Authentication errors:
- Re-authenticate the Google Sheets credentials
- Verify the Anthropic API key is valid and has credits
- Check that the Explorium Bearer token is correct and active

### Error Handling
The workflow processes each row individually, so if one company fails to enrich:
- Other rows will still be processed
- The failed row will retain its original data
- Check the execution logs for specific error details

## Best Practices
1. **Data quality**: Ensure company names and websites are accurate for best results
2. **Website format**: Include full URLs (https://example.com) rather than just domain names
3. **Batch processing**: The workflow handles multiple updates efficiently, so you can add several companies at once
4. **Regular monitoring**: Periodically check execution logs to ensure smooth operation

## API Limits & Considerations
- **Google Sheets API**: Subject to Google's API quotas
- **Anthropic API**: Each enrichment uses Claude Sonnet 4 tokens
- **Explorium MCP**: Rate limits may apply based on your subscription

## Support
For issues specific to:
- **n8n platform**: Consult the n8n documentation or community
- **Google Sheets integration**: Check n8n's Google Sheets node documentation
- **Explorium MCP**: Contact Explorium support for API-related issues
- **Anthropic/Claude**: Refer to Anthropic's documentation for API issues

## Example Use Cases
- **Sales prospecting**: Automatically enrich lead lists with company size and revenue data
- **Market research**: Build comprehensive databases of companies in specific industries
- **Competitive analysis**: Track and monitor competitor information
- **Investment research**: Gather firmographic data for potential investment targets
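A minimal sketch of the "Filter Valid Rows" condition described in the process flow above: in a Filter or IF node it can be a single boolean expression over the triggered row, using the column names from the sheet layout in Step 2:

```
{{ Boolean($json.name && $json.website) }}
```

Rows failing this check are simply dropped, so incomplete entries never reach the AI Agent or consume Explorium/Anthropic credits.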
by Jimleuk
This n8n template offers a simple yet capable chatbot assistant who can answer course enquiries over SMS. Given the right access to data, AI agents are capable of planning and performing relatively complex research tasks to get their answers. In this example, the agent must first understand the database schema and retrieve lists of values before generating its own query to search over the database.

Check out the example database here: https://airtable.com/appO5xvP1aUBYKyJ7/shr8jSFDaghubDOrw

## How it works
1. A Twilio trigger gives us the ability to receive SMS input into our workflow via webhook.
2. The message is then directed to our AI agent, who is instructed to assist the user and use the course database as reference.
3. The database is an Airtable base. The agent autonomously figures out which tool it needs to use and generates its own "filter_by_formula" query to search over the available courses (an illustrative query follows at the end of this description).
4. On successful search results, the agent can then use this information to answer the user's query.
5. The agent's output is logged in a second sheet of the Airtable base. We can use this later for analysis and lead generation.
6. Finally, the response is sent back to the user through SMS using Twilio.

## How to use
- Ensure your Twilio number is set to forward messages to this workflow's webhook URL.
- Configure and update the course database as required. If you're not interested in courses, you can swap this out for inventory, deliveries, or any other data relevant to your business.
- Ask questions like:
  - "Can you help me find suitable courses to fill my Wednesday mornings?"
  - "Which courses are being instructed by Professor Lee?"
  - "I'm interested in creative arts. What courses are available which could be relevant to me?"

## Requirements
- Twilio for SMS receiving and sending
- OpenAI for the LLM and agent
- Airtable for the course database

## Customising this workflow
- Add additional tools and expand the range of queries the agent is able to answer or assist with.
- Not using Airtable? This technique also works with SQL databases like PostgreSQL.
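For example, the Wednesday-mornings question above might lead the agent to generate a filter_by_formula value along these lines. This is purely illustrative: the field names depend on how your course table is structured, and the agent composes the string itself at run time:

```javascript
// Hypothetical example of an agent-generated Airtable filter formula,
// shown here as the string the Airtable search tool would receive.
const filterByFormula = "AND({Day} = 'Wednesday', {Time of Day} = 'Morning')";
```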
by n8n Team
This workflow creates a Jira issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in Jira.

## Prerequisites
- Zendesk account and Zendesk credentials
- Jira account and Jira credentials
- Jira project to create issues in

## How it works
1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new issue in Jira. The Jira issue key is then saved in one of the ticket's fields (in the setup below we call this "Jira Issue Key").
3. The next time a comment is added to the ticket, the workflow retrieves the Jira issue key from the ticket's field and adds the comment to the issue in Jira (a sketch of this lookup follows at the end of this description).

## Setup
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Jira issue key. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as "Jira Issue Key".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
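The comment-forwarding branch hinges on reading that custom field back off the ticket. A minimal sketch of the lookup, assuming the ticket data carries Zendesk's usual `custom_fields` array of `{ id, value }` pairs (in this template that would come from a Zendesk "get ticket" step, since the webhook body above only carries the id and comment), with `JIRA_KEY_FIELD_ID` as a placeholder for your field's numeric ID:

```javascript
// Hypothetical sketch: pull the Jira issue key saved on the ticket.
// An empty value means the ticket is new and a Jira issue must be
// created; otherwise the comment is added to the existing issue.
const JIRA_KEY_FIELD_ID = 123456; // placeholder: your "Jira Issue Key" field ID
const field = ($json.custom_fields ?? []).find((f) => f.id === JIRA_KEY_FIELD_ID);
const jiraIssueKey = field?.value ?? null;
return { json: { ...$json, jiraIssueKey } };
```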