by Oneclick AI Squad
This comprehensive n8n workflow automates the entire travel business call management process, from initial customer inquiries to trip bookings and marketing outreach. The system handles incoming calls, validates trip details, processes bookings, captures leads, and manages outbound marketing campaigns to promote trip organizer services. It streamlines the complete sales cycle while maintaining organized data records for business intelligence.

**Essential Information**
- The system operates across four distinct workflows to handle different aspects of travel call management.
- All call data is automatically captured and stored in organized spreadsheets for analysis and follow-up.
- The workflow validates trip details before processing to ensure data accuracy and prevent booking errors.
- Outbound marketing campaigns are automatically triggered based on lead detection and formatting.

**System Architecture**
- **Call Handling Pipeline**: The Detect Incoming Call node captures all incoming customer calls, followed by the Validate Trip Details node, which verifies and processes trip information, and the Deliver Organizer Info node, which provides relevant trip organizer details to callers.
- **Booking Management Flow**: The Capture Voice Input node records customer booking requests, the Update Booking Record node processes and stores booking information, and the Send Booking Confirmation node delivers confirmation details to customers.
- **Lead Generation Process**: The Detect New Lead node identifies potential customers from call data, the Format Lead Information node structures the lead data for marketing use, and the Initiate Marketing Outreach node launches targeted marketing campaigns.
- **Data Management System**: The Receive Call Response node collects call interaction data, the Log User Input node records customer information in spreadsheets, and the Relay Response to System node keeps data synchronized across all components.

**Implementation Guide**
1. Import the workflow into n8n and configure phone system integration for call detection and voice capture.
2. Set up spreadsheet connections for booking records, lead management, and call logging.
3. Configure marketing automation tools for outbound campaign management.
4. Test each workflow section independently before enabling the complete system.
5. Monitor call handling accuracy and adjust validation rules as needed.
**Technical Dependencies**
- Phone system API or telephony service for call detection and voice processing
- Spreadsheet service (Google Sheets, Excel Online) for data storage and management
- Marketing automation platform for outbound campaign execution
- Voice recognition service for capturing and processing customer input
- CRM integration for lead management and customer tracking

**Database & Sheet Structure**
- **Call Tracking Sheet**: Columns should include Call_ID, Customer_Phone, Call_Time, Call_Duration, Call_Status, Trip_Interest, Organizer_Assigned
- **Booking Records Sheet**: Required columns are Booking_ID, Customer_Name, Customer_Phone, Destination, Travel_Dates, Group_Size, Booking_Status, Confirmation_Sent
- **Lead Management Sheet**: Essential columns include Lead_ID, Customer_Name, Phone_Number, Email, Trip_Preference, Lead_Source, Lead_Status, Marketing_Campaign_Sent
- **Trip Organizer Database**: Contains Organizer_ID, Organizer_Name, Specialization, Contact_Info, Availability_Status, Performance_Rating
- **Marketing Outreach Log**: Tracks Campaign_ID, Lead_ID, Campaign_Type, Send_Date, Response_Status, Follow_up_Required

**Customization Possibilities**
- Adjust the Validate Trip Details node to include specific travel validation rules or partner requirements.
- Modify the Format Lead Information node to match your CRM system's data structure and marketing campaign formats (a rough sketch of such a lead record follows below).
- Configure the Initiate Marketing Outreach node to integrate with your preferred marketing platforms and campaign templates.
- Customize the data logging structure in the Log User Input node to capture additional customer information or booking details.
- Add additional validation steps or approval workflows between booking capture and confirmation sending.
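The exact shape of a lead record depends on your CRM and marketing tools, but as a minimal sketch, the Format Lead Information node could emit objects whose keys mirror the Lead Management Sheet columns listed above. Only the column names come from this template; the types, the normalization rules, and the `formatLead` helper are illustrative assumptions.

```typescript
// Hypothetical sketch: shape a raw call record into a row for the Lead Management Sheet.
// Column names mirror the sheet layout above; everything else is an assumption.
interface LeadRow {
  Lead_ID: string;
  Customer_Name: string;
  Phone_Number: string;
  Email: string;
  Trip_Preference: string;
  Lead_Source: string;
  Lead_Status: "new" | "contacted" | "converted";
  Marketing_Campaign_Sent: boolean;
}

function formatLead(raw: { name?: string; phone?: string; email?: string; tripInterest?: string }): LeadRow {
  return {
    Lead_ID: crypto.randomUUID(),
    Customer_Name: (raw.name ?? "").trim(),
    Phone_Number: (raw.phone ?? "").replace(/[^\d+]/g, ""), // keep digits and a leading "+"
    Email: (raw.email ?? "").toLowerCase().trim(),
    Trip_Preference: raw.tripInterest ?? "unspecified",
    Lead_Source: "inbound_call",
    Lead_Status: "new",
    Marketing_Campaign_Sent: false,
  };
}
```

A mapping like this would sit between lead detection and the spreadsheet/marketing steps, so every downstream node sees the same column names.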
by Brian Burnett
**Basics**
Provides a mechanism to save all your workflows into a GitHub repository and path of your choosing. These can then be shared across your entire org and used to track changes (handy if you make any sad 'oopsies').

**Flow**
- Obtains and creates a listing of currently configured workflows.
- Iterates through each workflow, looking at:
  - the GitHub source (if present)
  - the actual workflow code (from n8n)
- The workflow code is sorted and compared for any changes (a sketch of this comparison follows after this description).
- If changed (or new), the workflow is saved / archived into GitHub.

**Configuration**
Most of the configuration is done in the Globals node, which houses the repo details for the GitHub nodes. The only other dependency is that, by default, it looks for a credential named GitHub; if you use anything other than that precise wording you will need to change the credential used on the respective nodes. We gave it 'Manage' rights, but only so it could override a requirement for checks to complete; most would probably only need 'Write' privileges.

**Background**
We initially started using n8n as a Kubernetes-based service with its DB running inside the pod. It worked great for getting to know n8n, and we just kept all our workflows and credentials listed in a readme. Fast forward about a year: we migrated this into our 'production' toolsets and maintain a bunch of team workflows inside it (not company-wide, but LOTS of team fun). While trying to spin up a copy of our production RDS database, the *actual* production database was deleted, and in doing so AWS was nice enough to wipe our snapshots too! Thankfully it only took us a few hours to get everything back up and running thanks to this workflow, so I'm sharing it for everyone to benefit. We have used it to restore old workflows and changes, and now to test our full DR procedures! (OK, I might have taken that a bit far.)
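As a rough illustration of the compare step, a Code node could normalize both JSON documents before diffing them so that key order alone never triggers a commit. This is a minimal sketch of that idea; the function names and the sorted-stringify approach are assumptions, not the template's actual code.

```typescript
// Minimal sketch: decide whether a workflow needs to be pushed to GitHub.
// Keys are sorted recursively so that differing key order doesn't look like a change.
function sortKeys(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sortKeys);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .sort(([a], [b]) => a.localeCompare(b))
        .map(([key, val]) => [key, sortKeys(val)])
    );
  }
  return value;
}

function hasChanged(githubCopy: unknown | null, currentWorkflow: unknown): boolean {
  if (githubCopy === null) return true; // new workflow, nothing archived yet
  return JSON.stringify(sortKeys(githubCopy)) !== JSON.stringify(sortKeys(currentWorkflow));
}
```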
by Miquel Colomer
Do you want to discover company-related information to enrich a signup process? This workflow enriches any company by name using the uProc Get Company by Name tool. This tool combines Google Maps and email research on the internet to return results. You get no results if the company has no presence on Google Maps.

You need to add your credentials (your real Email and API Key, located in the Integration section) to n8n.

You can replace the "Create Company Item" node with any other supported service that returns company names and countries, such as HubSpot, Google Sheets, MySQL, or Typeform.

You can set up the uProc node with several parameters:
- country: the country name you want to use.
- name: the name of the company you need to locate.

Every uProc node returns the following fields for each located company (summarized as a type in the sketch below):
- name: contains the company's given name.
- email: contains the company's given email.
- cif: contains the company's CIF number.
- address: contains the company's formatted address.
- city: contains the city location of the company.
- state: contains the province location of the company.
- county: contains the state location of the company.
- country: contains the country location of the company.
- zipcode: contains the zip code of the company.
- phone: contains the phone number of the company.
- website: contains the website of the company.
- latitude: contains the latitude of the company.
- longitude: contains the longitude of the company.

Next, you can save the results to a CRM or Google Sheets, and use the returned email or phone number to launch an email or telemarketing campaign.
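Put together, each item coming out of the uProc node would look roughly like the type below. The field names are the ones listed above; the concrete types (strings throughout, numeric coordinates) are an assumption for illustration only.

```typescript
// Sketch of a located-company item as described above.
// Field names come from the list; the concrete types are assumed.
interface UprocCompany {
  name: string;
  email: string;
  cif: string;
  address: string;
  city: string;
  state: string;
  county: string;
  country: string;
  zipcode: string;
  phone: string;
  website: string;
  latitude: number;
  longitude: number;
}
```

A downstream node (CRM, Google Sheets, or an email/telemarketing step) would then map these fields onto its own columns.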
by Harshil Agrawal
This workflow allows you to send position updates of the ISS every minute to a queue using the AWS SQS node.

- **Cron node**: triggers the workflow every minute.
- **HTTP Request node**: makes a GET request to the API https://api.wheretheiss.at/v1/satellites/25544/positions to fetch the position of the ISS. This information gets passed on to the next node in the workflow (a standalone sketch of this request follows below).
- **Set node**: ensures that only the data set in this node gets passed on to the next nodes in the workflow.
- **AWS SQS node**: sends the data from the previous node to the iss-position queue. If you have created a queue with a different name, you can use that queue instead.
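Outside n8n, the HTTP Request + Set steps amount to fetching the endpoint above and keeping only a few fields. The endpoint is the one named in this description; the assumption that it returns an array of position objects with name, latitude, longitude and timestamp fields (and that those are the fields kept in the Set node) is illustrative.

```typescript
// Rough, standalone equivalent of the HTTP Request + Set nodes.
// Assumes the endpoint returns an array of position objects that include
// name, latitude, longitude and timestamp.
interface IssPosition {
  name: string;
  latitude: number;
  longitude: number;
  timestamp: number;
}

async function fetchIssPosition(): Promise<IssPosition> {
  const res = await fetch("https://api.wheretheiss.at/v1/satellites/25544/positions");
  if (!res.ok) throw new Error(`ISS API returned ${res.status}`);
  const [pos] = (await res.json()) as IssPosition[];
  // Keep only the fields we care about, mirroring the Set node.
  return { name: pos.name, latitude: pos.latitude, longitude: pos.longitude, timestamp: pos.timestamp };
}
```

The AWS SQS node then receives an object like this as the message body for the iss-position queue.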
by Harshil Agrawal
This workflow allows you to send position updates of the ISS every minute to a table in Google BigQuery.

- **Cron node**: triggers the workflow every minute.
- **HTTP Request node**: makes a GET request to the API https://api.wheretheiss.at/v1/satellites/25544/positions to fetch the position of the ISS. This information gets passed on to the next node in the workflow.
- **Set node**: ensures that only the data set in this node gets passed on to the next nodes in the workflow.
- **Google BigQuery node**: sends the data from the previous node to the position table in Google BigQuery. If you have created a table with a different name, use that table instead (see the note on column names below).
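One practical detail worth calling out: the field names you configure in the Set node should line up with the columns of the BigQuery table. A tiny sketch, assuming the position table was created with latitude, longitude and timestamp columns; the real column names are whatever you chose when creating the table.

```typescript
// Sketch only: the keys here must match the BigQuery table's column names.
// "latitude", "longitude" and "timestamp" are assumed columns, not a requirement of the template.
interface PositionRow {
  latitude: number;
  longitude: number;
  timestamp: number;
}

function toRow(pos: { latitude: number; longitude: number; timestamp: number }): PositionRow {
  return { latitude: pos.latitude, longitude: pos.longitude, timestamp: pos.timestamp };
}
```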
by Solomon
Based on Jonathan's work. Check out his templates.

This workflow will backup your credentials to GitHub. It uses a CLI command to export all credentials. It then loops over the data and checks in GitHub to see if a file exists that uses the credential's ID (a sketch of this branching follows below). Once checked it will:
- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- ignore it if it's the same.

**Config Options**
- repo.owner - GitHub owner
- repo.name - GitHub repository name
- repo.path - Path within the GitHub repository

⚠️ The credentials are all decrypted. Make sure you save them safely or tweak the CLI command to store them encrypted.

Check out my other templates: https://n8n.io/creators/solomon/
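The update / create / ignore decision described above could look roughly like this. It is a minimal sketch under the assumption that each credential is stored as a pretty-printed JSON file named after its ID; the function and parameter names are illustrative, not the template's actual nodes.

```typescript
// Illustrative sketch of the per-credential decision described above.
type Action = "update" | "create" | "ignore";

function decideAction(existingFileContent: string | null, exportedCredential: object): Action {
  const newContent = JSON.stringify(exportedCredential, null, 2);
  if (existingFileContent === null) return "create";        // no file for this credential ID yet
  if (existingFileContent === newContent) return "ignore";  // unchanged
  return "update";                                          // content differs, push a new version
}
```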
by Harshil Agrawal
Note: This workflow uses the internal API, which is not official. This workflow might break in the future.

The workflow executes every night at 23:59. You can configure a different time in the Cron node. Configure the GitHub nodes with your username, repo name, and the file path. In the HTTP Request nodes (making a request to localhost:5678), create Basic Auth credentials with your n8n instance username and password.
by Lorena
This workflow synchronizes data both ways between Pipedrive and HubSpot.

- **Cron node**: schedules the workflow to run every minute.
- **Pipedrive** and **Hubspot** nodes: pull in both lists of persons from Pipedrive and contacts from HubSpot.
- **Merge1** and **Merge2** nodes: with the option Remove Key Matches, identify the items that uniquely exist in HubSpot and Pipedrive, respectively.
- **Update Pipedrive** and **Update HubSpot** nodes: take those unique items and add them to Pipedrive and HubSpot, respectively.
by Jonathan
This workflow uses a Hubspot Trigger to check for new companies. It then checks that the company's website exists using the HTTP Request node. If it doesn't, a message is sent to Slack. To configure this workflow you will need to set the credentials for the Hubspot and Slack nodes. You will also need to select the Slack channel to use for sending the message.
by Jonathan
This workflow takes Dialpad call information for an answered call and pushes it into Syncro as either a new ticket or an update to an existing ticket. At this time you will need a separate workflow for each technician. It also saves call/ticket information to a Google Sheet to be queried by the dialpad_to_syncro_timer.json workflow. This will match both inbound and outbound calls, so if that's not desired you need to add an IF node to only proceed on either inbound or outbound calls.

> This workflow is part of an MSP collection. The original can be found here: https://github.com/bionemesis/n8nsyncro
by Tom
This workflow shows a low-code approach to creating an HTML table based on Google Sheets data. It's similar to this workflow, but allows fully customizing the HTML output (see the sketch after the steps below).

To run the workflow:
1. Make sure you have a Google Sheet with a header row and some data in it.
2. Grab your sheet ID.
3. Add it to the Google Sheets node.
4. Activate the workflow or execute it manually.
5. Visit the URL provided by the Webhook node in your browser (production URL if the workflow is active, test URL if the workflow is executed manually).
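The customizable part typically lives in a Function/Code node that turns the rows coming out of the Google Sheets node (objects keyed by the header row) into markup for the webhook response. A minimal sketch of that step follows; the escaping helper and the bare table markup are assumptions, and you would style and extend them to taste.

```typescript
// Minimal sketch: turn spreadsheet rows (objects keyed by the header row)
// into an HTML table string. Markup and escaping choices are assumptions.
function escapeHtml(value: unknown): string {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function rowsToHtmlTable(rows: Record<string, unknown>[]): string {
  if (rows.length === 0) return "<table></table>";
  const headers = Object.keys(rows[0]);
  const head = `<tr>${headers.map((h) => `<th>${escapeHtml(h)}</th>`).join("")}</tr>`;
  const body = rows
    .map((row) => `<tr>${headers.map((h) => `<td>${escapeHtml(row[h])}</td>`).join("")}</tr>`)
    .join("");
  return `<table>${head}${body}</table>`;
}
```

Because the HTML is assembled in code, you can add classes, inline styles, or a full page template around the table without changing the rest of the workflow.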
by n8n Team
This n8n workflow automates the handling of security detections from CrowdStrike, streamlining incident response and notification processes.

The workflow is triggered daily at midnight by the Schedule Trigger node. It begins by fetching recent security detections from CrowdStrike using an HTTP Request node. The response is then split into individual detections for further processing. Each detection is enriched by querying the CrowdStrike API for detailed information using another HTTP Request node. The workflow then processes these detections sequentially using the Split In Batches node.

Next, it looks up behavioral information associated with each detection in VirusTotal using two HTTP Request nodes. One node queries VirusTotal based on SHA256 values, and the other based on IOC (Indicator of Compromise) values. The workflow includes a 1-second pause using the Wait node to prevent rate limiting when making requests to the VirusTotal API.

Subsequently, the workflow sets fields with relevant details from both CrowdStrike and VirusTotal, including detection links, confidence scores, filenames, usernames, and more. These details are concatenated using an Item Lists node for each detection.

The final step involves creating Jira issues for each detection, with summaries that include CrowdStrike alert severity and hostnames, and descriptions that incorporate information from CrowdStrike and VirusTotal. Information about each issue is then sent via a Slack message to a Slack user.

Potential issues during setup include configuring the Schedule Trigger node for the correct time zone and handling potential rate limiting from the VirusTotal API, which could lead to throttled requests. Additionally, the note about a possible typo in the URL of the VirusTotal nodes should be addressed to ensure correct API calls. The Jira node may need to be replaced with the latest version for compatibility. Properly configuring API credentials and handling errors that may occur during API requests are essential for smooth workflow operation. Careful testing with sample data is recommended to validate the workflow's functionality and ensure it aligns with your organization's security incident response processes.
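The enrichment-plus-pause pattern is the part most worth seeing in isolation. Below is a standalone sketch of the SHA256 lookup with a 1-second pause between requests (the workflow itself does this with HTTP Request and Wait nodes). It assumes VirusTotal's public v3 `files` endpoint with an `x-apikey` header; the surrounding error handling is illustrative only.

```typescript
// Illustrative sketch of the VirusTotal SHA256 lookups with a pause between
// requests to avoid rate limiting (handled by a Wait node in the workflow).
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function lookupHashes(hashes: string[], apiKey: string): Promise<unknown[]> {
  const reports: unknown[] = [];
  for (const sha256 of hashes) {
    const res = await fetch(`https://www.virustotal.com/api/v3/files/${sha256}`, {
      headers: { "x-apikey": apiKey },
    });
    reports.push(res.ok ? await res.json() : { sha256, error: res.status });
    await sleep(1000); // 1-second pause, mirroring the Wait node
  }
  return reports;
}
```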