by Ai Lin ⌘
🎯 **What It Does:** This project lets you talk to Siri (via Apple Shortcuts) to record or query your daily spending. The shortcut sends your message to an n8n Webhook, which uses AI to decide whether you want to write or read finance data, then replies with a human-friendly message, all powered by n8n + AI + Google Sheets.

⸻

🌐 PART 1: n8n Setup

🧩 1. Create a Webhook Trigger in n8n
• Add a node: Webhook
• Set HTTP Method: POST
• Set Path: siri-finance
• Enable “Respond to Webhook” = ✅

🧠 2. Add an AI Agent Node (e.g. OpenAI, Ollama, Gemini)
• Use a system prompt like:
  You are a finance assistant. Decide if the user wants to record or read transactions. If it's recording, return a JSON object with date, type, name, amount, and expense/income. If it's reading, return a date range and type (Expense/Income). Always reply with a human-friendly summary.
• Input: {{ $json.text }} (from the webhook)
• Output: structured json.output (see the sketch at the end of this template)

🧮 3. (Optional) Add Logic to Write to a DB / Supabase / Google Sheets
• Append tool: adds a new row
• Read tool: queries past data

Now your n8n flow is ready!

⸻

📱 PART 2: iOS Shortcut Setup

⚙️ 1. Create a new Shortcut
• Name it: 記帳助理 (or Finance Bot)
• Add Action: Ask for Input
  • Prompt: “Please say what you want to record”
  • Input Type: Text
• Add Action: Get Contents of URL
  • Method: POST
  • URL: https://your-n8n-domain/webhook/siri-finance
  • Headers: Content-Type: application/json
  • Request Body: { "text": "Provided Input" }
  • Replace "Provided Input" with the Magic Variable → Input Result

🔊 2. Show Result
• Add Action: Show Result
• Content: Get Contents of URL

🗣️ 3. Optional: Add “Speak Text”
• If you want Siri to speak the reply, add Speak Text after Show Result.

⸻

✅ Example Usage
• You: “Hey Siri, expense $50 breakfast”
• Siri: “Expense recorded: item Breakfast, amount $50, saved.”

Or

• You: “How much did I spend over the past 7 days?”
• Siri: “Your total spending over the past 7 days is $7,684.64, including: …”

⸻

📦 Files to Share
You can package the following:
• .shortcut file export
• Sample n8n workflow .json
• Optional Supabase schema / Google Sheet template

⸻

💡 Tips for Newcomers
• Keep your Webhook public, but protect it with a token if needed.
• Make sure you handle emoji and newlines safely for iOS compatibility.
• Add logging nodes in n8n to help debug Siri messages.

⸻

🗣️ Optional Project Name
“Siri 記帳助理” / “Finance VoiceBot”: a simple, voice-first way to manage your daily expenses.
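To make the AI Agent output concrete, here is a minimal sketch of an n8n Code node placed after the agent that turns the structured output into the reply text sent back to the Shortcut. The exact output shape (action, date, type, name, amount, totals) is an assumption based on the system prompt above; adjust the field names to whatever your agent actually emits.

```javascript
// n8n Code node ("Run Once for All Items" mode), placed after the AI Agent.
// Assumes the agent returns the JSON object described in the system prompt.
const raw = items[0].json.output;
const out = typeof raw === 'string' ? JSON.parse(raw) : raw;

let reply;
if (out.action === 'record') {
  // Hypothetical shape: { action, date, type, name, amount }
  reply = `Recorded ${out.type}: ${out.name}, $${out.amount} on ${out.date}`;
} else {
  // Hypothetical shape: { action, from, to, type, total }
  reply = `Total ${out.type} from ${out.from} to ${out.to}: $${out.total}`;
}

// The "Respond to Webhook" node sends this text back to the Shortcut.
return [{ json: { reply } }];
```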
by Oneclick AI Squad
📘 Student Absence Alerts & Attendance Tracking Automation

Automatically alerts parents about student absences and tracks 30-day attendance patterns to identify risks and trends.

🔧 Main Components

**Daily Attendance Check – 10:30 AM** – Triggers the workflow every day at 10:30 AM.
**Read Today’s Attendance** – Retrieves current-day attendance records from the source Excel file or database.
**Read Student Contacts** – Reads students' contact details (email, phone) for alert delivery.
**Process Absent Students** – Identifies students who are absent and unexcused for the day.
**Prepare Absence Email** – Generates customized email content for absent students.
**Send Absence Email** – Sends an absence alert email to the student’s parent/guardian.
**Prepare Absence SMS** – Formats a WhatsApp-friendly message for alerts.
**Send Absence WhatsApp** – Sends the WhatsApp message via API (e.g., Facebook Graph).
**Generate Attendance Report** – Prepares a daily attendance summary with absence-level classifications.
**Save Attendance Report** – Appends the generated report to a historical attendance sheet.

⚠️ Alert Logic

Based on the past 30-day absence pattern, the system classifies students into:

| Level | Absences in 30 Days | Status |
| --------- | ------------------- | -------------- |
| 🔴 High | 5+ | Critical Alert |
| 🟡 Medium | 3–4 | Warning |
| 🟢 Low | 1–2 | Low Risk |

📊 Tracking Features

🔢 Attendance Rate Calculation – Tracks each student's attendance percentage
🔍 Pattern Analysis – Detects recurring absenteeism trends
🚨 Risk Identification – Flags high-risk students for early intervention
📈 Historical Reporting – Maintains daily logs for future reference

✅ Essential Prerequisites

Excel sheet or database with daily attendance logs
Excel sheet or database with student contact details
SMTP credentials for sending emails
WhatsApp API integration (e.g., Facebook Graph or Twilio)
Storage access for saving attendance reports

📁 Required Excel File Structures

Attendance Sheet (daily_attendance.xlsx)

| Student ID | Date | Status |
| ---------- | ---------- | ------ |
| ST101 | 2025-08-06 | Absent |

Contacts Sheet (student_contacts.xlsx)

| Student ID | Name | Email | Phone |
| ---------- | ---------- | ----------------- | ------------- |
| ST101 | Aryan Shah | aryan@example.com | +919123456789 |

🧾 Expected Input Format Example

{
  "studentId": "ST101",
  "name": "Aryan Shah",
  "email": "aryan@example.com",
  "phone": "+919123456789",
  "status": "Absent",
  "date": "2025-08-06"
}

🚀 Key Features

⏰ Scheduled Daily Execution – Automated tracking at 10:30 AM
✉️ Multi-Channel Notifications – Email + WhatsApp alerts to parents
📊 Absence Pattern Monitoring – 30-day trend analysis
🧠 Risk-Based Alerts – Smart classification into alert levels
🗂️ Daily Reports – Easy-to-audit attendance summary logs

⚙️ Quick Setup Guide

1. Import the workflow JSON into n8n.
2. Configure the schedule trigger for 10:30 AM.
3. Set the Excel file paths in "Read Today’s Attendance" and "Read Student Contacts".
4. Customize the absence thresholds in the “Process Absent Students” node (see the sketch after the parameter table below).
5. Add SMTP details for the “Send Absence Email” node.
6. Integrate the WhatsApp API in the “Send Absence WhatsApp” node.
7. Test with mock data and review the reports.
8. Activate the workflow.
🔧 Parameters to Configure

| Parameter | Description |
| ---------------------- | -------------------------------------- |
| attendance_file_path | Path to today's attendance records |
| contacts_file_path | Path to student contacts sheet |
| smtp_user | Email username for SMTP server |
| smtp_password | Password for SMTP server |
| whatsapp_api_url | Endpoint for sending WhatsApp messages |
| alert_thresholds | Absence count thresholds for alerts |
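As a reference for the alert logic above, here is a minimal sketch of a Code node that classifies students by their 30-day absence count. The absences30d field and the thresholds object are illustrative assumptions; wire them up to your attendance data and the alert_thresholds parameter.

```javascript
// Minimal sketch of the absence-level classification from the Alert Logic
// table. The input field absences30d is an assumption for illustration.
const thresholds = { high: 5, medium: 3 }; // mirrors alert_thresholds

return items.map((item) => {
  const count = item.json.absences30d ?? 0;
  let level;
  if (count >= thresholds.high) level = '🔴 High (Critical Alert)';
  else if (count >= thresholds.medium) level = '🟡 Medium (Warning)';
  else level = '🟢 Low (Low Risk)';
  return { json: { ...item.json, absences30d: count, alertLevel: level } };
});
```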
by Hunyao
**What it does**
Captures token usage and cost from your AI Agent/LLM. Logs the model, tokens, cost, tool use, and conversation I/O to Google Sheets for simple observability and billing.

**Perfect for**
- Developers adding usage monitoring to AI agents.
- Teams needing cost transparency in prototypes.

**How it works**
- A Chat Trigger collects user input for the AI Agent.
- A Set node injects metadata such as workflow, execution, and client IDs.
- A LangChain Code node returns a configured chat model with a callback that reads usage metadata.
- The callback computes input, output, and total costs based on per-million-token prices you define (sketched below).
- It appends the token metrics to a Google Sheet via the Google Sheets Tool.
- The Agent records intermediate tool calls.
- An If node checks whether a tool was used.
- When tools are used, the workflow logs the input, output, tool name, and metadata to an Observability sheet.

**How to use**
SELF-HOSTED N8N ONLY: the LangChain Code node is only available in the self-hosted version of n8n. It is not available in n8n Cloud.

**Requirements**
- Self-hosted version of n8n

If you have any questions about running the workflow, see the attached video: https://youtu.be/JSulRS128MA
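For reference, here is a minimal sketch of the per-million-token cost calculation the callback performs. The price constants and the token field names (promptTokens, completionTokens) are placeholder assumptions; verify them against the usage metadata your model actually returns.

```javascript
// Rough sketch of the cost math, using per-million-token prices you define.
const PRICES_PER_MILLION = { input: 0.15, output: 0.6 }; // example values

function computeCost(promptTokens, completionTokens) {
  const inputCost = (promptTokens / 1_000_000) * PRICES_PER_MILLION.input;
  const outputCost = (completionTokens / 1_000_000) * PRICES_PER_MILLION.output;
  return {
    promptTokens,
    completionTokens,
    totalTokens: promptTokens + completionTokens,
    inputCost,
    outputCost,
    totalCost: inputCost + outputCost,
  };
}

// e.g. computeCost(1200, 350) -> metrics ready to append to the Google Sheet
```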
by Marth
How It Works: The 5-Node Monitoring Flow

This concise workflow captures, filters, and delivers crucial cybersecurity-related mentions.

1. Monitor: Cybersecurity Keywords (X/Twitter Trigger)
This is the entry point of your workflow. It actively searches X (formerly Twitter) for tweets containing the specific keywords you define.
**Function:** Continuously polls X for tweets that match your specified queries (e.g., your company name, "Log4j," "CVE-2024-XXXX," "ransomware").
**Process:** As soon as a matching tweet is found, it triggers the workflow to begin processing that information.

2. Format Notification (Code Node)
This node prepares the raw tweet data, transforming it into a clean, actionable message for your alerts.
**Function:** Extracts key details from the raw tweet and structures them into a clear, concise message.
**Process:** It pulls out the tweet's text, the user's handle (@screen_name), and the direct URL to the tweet. These pieces are combined into a user-friendly notificationMessage. You can also include basic filtering logic here if needed (see the sketch after the setup steps below).

3. Valid Mention? (If Node)
This node acts as a quick filter to reduce noise and prevent irrelevant alerts from reaching your team.
**Function:** Serves as a simple conditional check to validate the mention's relevance.
**Process:** It evaluates the notificationMessage against specific criteria (e.g., ensuring it doesn't contain common spam words like "bot"). If the mention passes this basic validation, the workflow continues. Otherwise, it quietly ends for that particular tweet.

4. Send Notification (Slack Node)
This is the delivery mechanism for your alerts, ensuring your team receives instant, visible notifications.
**Function:** Delivers the formatted alert message directly to your designated communication channel.
**Process:** The notificationMessage is sent straight to your specified Slack channel (e.g., #cyber-alerts or #security-ops).

5. End Workflow (No-Op Node)
This node simply marks the successful completion of the workflow's execution path.
**Function:** Indicates the end of the workflow's process for a given trigger.

How to Set Up

Implementing this simple cybersecurity monitor in your n8n instance is quick and straightforward.

1. Prepare Your Credentials
Before building the workflow, ensure all necessary accounts are set up and their respective credentials are ready for n8n.
**X (Twitter) API:** You'll need an X (Twitter) developer account to create an application and obtain your Consumer Key/Secret and Access Token/Secret. Use these to set up your Twitter credential in n8n.
**Slack API:** Set up your Slack credential in n8n. You'll also need the Channel ID of the Slack channel where you want your security alerts to be posted (e.g., #security-alerts or #it-ops).

2. Import the Workflow JSON
Get the workflow structure into your n8n instance.
**Import:** In your n8n instance, go to the "Workflows" section. Click the "New" or "+" icon, then select "Import from JSON." Paste the provided workflow JSON into the import dialog and import the workflow.

3. Configure the Nodes
Customize the imported workflow to fit your specific monitoring needs.
**Monitor: Cybersecurity Keywords (X/Twitter):** Click on this node. Select your newly created Twitter credential. CRITICAL: Modify the "Query" parameter to include your specific brand names, relevant CVEs, or general cybersecurity terms. For example: "YourCompany" OR "CVE-2024-1234" OR "phishing alert". Use OR to combine multiple terms.
**Send Notification (Slack):** Click on this node. Select your Slack credential. Replace "YOUR_SLACK_CHANNEL_ID" with the actual Channel ID you noted earlier for your security alerts. (Optional: you can adjust the "Valid Mention?" node's condition if you find specific patterns of false positives that you want to filter out.)

4. Test and Activate
Verify that your workflow is working correctly before setting it live.
**Manual Test:** Click the "Test Workflow" button (usually in the top right corner of the n8n editor). This executes the workflow once.
**Verify Output:** Check your specified Slack channel to confirm that any detected mentions are sent as notifications in the correct format. If no matching tweets are found, you won't see a notification, which is expected.
**Activate:** Once you're satisfied with the test results, toggle the "Active" switch to ON. Your workflow will now automatically monitor X (Twitter) at the specified polling interval.
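For orientation, here is a minimal sketch of what the "Format Notification" Code node does. The field names (text, user.screen_name, id_str) assume the classic Twitter/X API payload; check the actual output of your X trigger node and adjust accordingly.

```javascript
// Minimal sketch of the "Format Notification" Code node.
return items.map((item) => {
  const tweet = item.json;
  const handle = tweet.user?.screen_name ?? 'unknown';
  const url = `https://x.com/${handle}/status/${tweet.id_str}`;
  const notificationMessage =
    `🚨 Cybersecurity mention by @${handle}:\n${tweet.text}\n${url}`;
  return { json: { notificationMessage, handle, url } };
});
```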
by M Shehroz Sajjad
Monitor BeyondPresence video agent conversations in real time to automatically score leads (0–100+) based on buying signals and send instant Slack alerts when hot opportunities or competitors are mentioned. This template helps sales teams prioritize leads immediately, never miss competitor mentions, and respond to high-intent prospects while they're still engaged.

How it works
- **Real-time webhook** processes each user message as it happens during calls
- **Scoring engine** analyzes for buying signals (+points) and objections (−points); a rough sketch of the scoring idea appears at the end of this template
- **Competitor detection** instantly identifies when alternatives are mentioned
- **Smart routing** sends alerts to different Slack channels based on urgency
- **Hot leads** (70+ score) trigger immediate notifications with recommendations
- **Call summary (optional)** provides a final qualification score when the conversation ends

Set up steps
1. Connect Slack OAuth2 – use n8n's built-in Slack integration (no webhooks needed!)
2. Create Slack channels – set up #sales-hot-leads, #sales-competitors, #sales-qualified
3. Add webhook to BeyondPresence – copy the URL from n8n to BeyondPresence Settings → Webhooks
4. Customize competitors – edit the scoring node to add your specific competitor names
5. Adjust scoring weights (optional) – tune point values for your sales process

Setup time: 10–15 minutes
Requirements: BeyondPresence account, Slack workspace admin access
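Here is a rough sketch of the scoring idea as a Code node, not the template's exact node. The webhook field name (message), the signal lists, the point values, and the 70-point threshold are illustrative; the template lets you tune all of these.

```javascript
// Illustrative lead-scoring sketch; adjust phrases, weights, and channels.
const BUYING_SIGNALS = { pricing: 20, budget: 15, timeline: 15, demo: 25 };
const OBJECTIONS = { 'too expensive': -15, 'not interested': -25 };
const COMPETITORS = ['competitor a', 'competitor b']; // replace with real names

const message = String(items[0].json.message ?? '').toLowerCase();

let score = 0;
for (const [phrase, pts] of Object.entries({ ...BUYING_SIGNALS, ...OBJECTIONS })) {
  if (message.includes(phrase)) score += pts;
}
const competitorsMentioned = COMPETITORS.filter((c) => message.includes(c));

return [{
  json: {
    score,
    isHotLead: score >= 70,
    competitorsMentioned,
    route: competitorsMentioned.length
      ? '#sales-competitors'
      : score >= 70 ? '#sales-hot-leads' : '#sales-qualified',
  },
}];
```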
by Rui Borges
Workflow Purpose

This workflow periodically checks a service's availability and sends an SMS notification if the service is down.

High-Level Steps

1. Schedule Trigger: The workflow is triggered at a specified interval, such as every minute.
2. HTTP Request: An HTTP request is sent to the URL of the service being monitored.
3. If: The HTTP status code of the response is checked. If the status code is 200 (OK), the workflow ends. If the status code is not 200, indicating a potential issue, an SMS notification is sent using Twilio.

Setup

Setting up this workflow is relatively straightforward and should only take a few minutes:

1. Create a new n8n workflow.
2. Add the nodes: Schedule Trigger, HTTP Request, If, and Twilio.
3. Configure the nodes:
   - Schedule Trigger: Specify the desired interval.
   - HTTP Request: Enter the URL of the service to be monitored.
   - If: Set the condition to check for a status code other than 200 (see the sketch below).
   - Twilio: Enter the Twilio account credentials and the phone numbers for sending and receiving the SMS notification.
4. Connect the nodes: Connect the nodes as shown in the workflow diagram.
5. Activate the workflow: Save the workflow and activate it.

Additional Notes

The workflow can be customized by changing the interval, the URL, the Twilio credentials, and the SMS message. This workflow is a simple example; more complex workflows can be created to meet specific needs.
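For clarity, here is the status-code check written out as a tiny Code node equivalent of the If node. This sketch assumes the HTTP Request node is configured to return the full response (so the status code is present on the item) and to continue on non-2xx responses rather than throwing; verify those options in your HTTP Request node settings.

```javascript
// In the If node itself this is simply: {{ $json.statusCode }} "not equal" 200.
const statusCode = items[0].json.statusCode;
const serviceIsDown = statusCode !== 200;
return [{ json: { statusCode, serviceIsDown, checkedAt: new Date().toISOString() } }];
```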
by explorium
Salesforce Lead Enrichment with Explorium Template

Download the following JSON file and import it into a new n8n workflow: salesforce_Workflow.json

Overview

This n8n workflow monitors your Salesforce instance for new leads and automatically enriches them with missing contact information. When a lead is created, the workflow:

1. Detects the new lead via the Salesforce trigger
2. Matches the lead against Explorium's database using name and company
3. Enriches the lead with professional email addresses and phone numbers
4. Updates the Salesforce lead record with the discovered contact information

This automation ensures your sales team always has the most up-to-date contact information for new leads, improving reach rates and accelerating the sales process.

Key Features

- **Real-time Processing**: Triggers automatically when new leads are created in Salesforce
- **Intelligent Matching**: Uses lead name and company to find the correct person in Explorium's database
- **Contact Enrichment**: Adds professional emails, mobile phones, and office phone numbers
- **Batch Processing**: Efficiently handles multiple leads to optimize API usage
- **Error Handling**: Continues processing other leads even if some fail to match
- **Selective Updates**: Only updates leads that successfully match in Explorium

Prerequisites

Before setting up this workflow, ensure you have:

- An n8n instance (self-hosted or cloud)
- A Salesforce account with:
  - OAuth2 API access enabled
  - Lead object permissions (read/write)
  - API usage limits available
- Explorium API credentials (Bearer token) – get an Explorium API key
- A basic understanding of Salesforce lead management

Salesforce Requirements

Required Lead Fields

The workflow expects these standard Salesforce lead fields:

- FirstName – lead's first name
- LastName – lead's last name
- Company – company name
- Email – will be populated/updated by the workflow
- Phone – will be populated/updated by the workflow
- MobilePhone – will be populated/updated by the workflow

API Permissions

Your Salesforce integration user needs:

- Read access to the Lead object
- Write access to Lead object fields (Email, Phone, MobilePhone)
- API enabled on the user profile
- Sufficient API calls remaining in your org limits

Installation & Setup

Step 1: Import the Workflow

1. Copy the workflow JSON from the template
2. In n8n, navigate to Workflows → Add Workflow → Import from File
3. Paste the JSON and click Import

Step 2: Configure Salesforce OAuth2 Credentials

1. Click on the Salesforce Trigger node
2. Under Credentials, click Create New
3. Follow the OAuth2 flow:
   - Client ID: from your Salesforce Connected App
   - Client Secret: from your Salesforce Connected App
   - Callback URL: copy from n8n and add to your Connected App
4. Authorize the connection
5. Save the credentials as "Salesforce account connection"

Note: Use the same credentials for all Salesforce nodes in the workflow.
Step 3: Configure Explorium API Credentials

1. Click on the Match_prospect node
2. Under Credentials, click Create New (HTTP Header Auth)
3. Configure the header:
   - Name: Authorization
   - Value: Bearer YOUR_EXPLORIUM_API_TOKEN
4. Save as "Header Auth account"
5. Apply the same credentials to the Explorium Enrich Contacts Information node

Step 4: Verify Node Settings

- Salesforce Trigger:
  - Trigger On: Lead Created
  - Poll Time: every minute (adjust based on your needs)
- Salesforce Get Leads:
  - Operation: Get All
  - Condition: CreatedDate = TODAY (fetches today's leads)
  - Limit: 20 (adjust based on volume)
- Loop Over Items:
  - Batch Size: 6 (optimal for API rate limits)

Step 5: Activate the Workflow

1. Save the workflow
2. Toggle the Active switch to ON
3. The workflow will now monitor for new leads every minute

Detailed Node Descriptions

- Salesforce Trigger: polls Salesforce every minute for new leads
- Get Today's Leads: retrieves all leads created today to ensure none are missed
- Loop Over Items: processes leads in batches of 6 for efficiency
- Match Prospect: searches Explorium for a matching person using name + company
- Filter: checks whether a valid match was found
- Extract Prospect IDs: collects all matched prospect IDs
- Enrich Contacts: fetches detailed contact information from Explorium
- Merge: combines the original lead data with the enrichment results
- Split Out: separates individual enriched records
- Update Lead: updates Salesforce with the new contact information

Data Mapping

The workflow maps Explorium data to Salesforce fields as follows (a sketch of the fallback logic appears at the end of this template):

| Explorium Field | Salesforce Field | Fallback Logic |
| ------------------- | ---------------- | --------------------------------- |
| emails[0].address | Email | Falls back to professions_email |
| mobile_phone | MobilePhone | Falls back to phone_numbers[1] |
| phone_numbers[0] | Phone | Falls back to mobile_phone |

Usage & Monitoring

Automatic Operation

Once activated, the workflow runs automatically:

- Checks for new leads every minute
- Processes any leads created since the last check
- Updates leads with discovered contact information
- Continues running until deactivated

Manual Testing

To test the workflow manually:

1. Create a test lead in Salesforce
2. Click "Execute Workflow" in n8n
3. Monitor the execution to see each step
4. Verify the lead was updated in Salesforce

Monitoring Executions

Track workflow performance:

1. Go to Executions in n8n
2. Filter by this workflow
3. Review successful and failed executions
4. Check the logs for any errors or issues

Troubleshooting

Common Issues

No leads are being processed
- Verify the workflow is activated
- Check that Salesforce API limits haven't been exceeded
- Ensure new leads have FirstName, LastName, and Company populated
- Confirm the OAuth connection is still valid

Leads not matching in Explorium
- Verify company names are accurate (not abbreviations)
- Check that first and last names are properly formatted
- Some individuals may not be in Explorium's database
- Try testing with known companies/contacts

Contact information not updating
- Check Salesforce field-level security
- Verify the integration user has edit permissions
- Ensure the Email, Phone, and MobilePhone fields are writable
- Check for validation rules blocking updates

Authentication errors
- Salesforce: re-authorize the OAuth connection
- Explorium: verify the Bearer token is valid and not expired
- Check that API quotas haven't been exceeded

Error Handling

The workflow includes built-in error handling:

- Failed matches don't stop other leads from processing
- Each batch is processed independently
- Failed executions are logged for review
- Partial successes are possible (some leads updated, others skipped)

Best Practices

Data Quality

- Ensure complete lead data: FirstName, LastName, and Company should be populated
- Use full company names: "Microsoft Corporation" matches better than "MSFT"
- Standardize data entry: consistent formatting improves match rates

Performance Optimization

- Adjust the batch size: lower if hitting API limits, higher for efficiency
- Modify the polling frequency: every minute for high volume, less frequent for lower volume
- Set appropriate limits: balance processing speed against API usage

Compliance & Privacy

- Data permissions: ensure you have the rights to enrich lead data
- GDPR compliance: consider the privacy regulations in your region
- Data retention: follow your organization's data policies
- Audit trail: monitor who has access to enriched data

Customization Options

Extend the Enrichment

Add more Explorium enrichment by:

- Adding firmographic data (company size, revenue)
- Including technographic information
- Appending social media profiles
- Adding job title and department verification

Modify Trigger Conditions

Change when enrichment occurs:

- Trigger on lead updates (not just creation)
- Add specific lead-source filters
- Process only leads from certain campaigns
- Include lead score thresholds

Add Notifications

Enhance with alerts:

- Email sales reps when leads are enriched
- Send Slack notifications for high-value matches
- Create tasks for leads that couldn't be enriched
- Log enrichment metrics to dashboards

API Considerations

Salesforce Limits

- API calls: each execution uses ~4 Salesforce API calls
- Polling frequency: consider your daily API limit
- Batch processing: reduces API usage compared with individual processing

Explorium Limits

- Match API: one call per batch of leads
- Enrichment API: one call per batch of matched prospects
- Rate limits: respect your plan's requests per minute

Integration Architecture

This workflow can be part of a larger lead management system:

- Lead Capture → This Workflow → Lead Scoring → Assignment
- Can trigger additional workflows based on enrichment results
- Compatible with existing Salesforce automation (Process Builder, Flows)
- Works alongside other enrichment tools

Security Considerations

- **Credentials**: stored securely in n8n's credential system
- **Data transmission**: uses HTTPS for all API calls
- **Access control**: limit who can modify the workflow
- **Audit logging**: all executions are logged with details

Support Resources

For assistance with:

- **n8n issues**: consult the n8n documentation or community forum
- **Salesforce integration**: reference the Salesforce API documentation
- **Explorium API**: contact Explorium support for API questions
- **Workflow logic**: review execution logs for debugging
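As promised above, here is an illustrative sketch of the fallback mapping from the Data Mapping table, written as a helper function rather than the template's exact expressions. The Explorium field names follow the table; verify them against your actual enrichment response.

```javascript
// Fallback mapping from Explorium enrichment data to Salesforce lead fields.
function mapToSalesforce(enriched) {
  return {
    Email: enriched.emails?.[0]?.address ?? enriched.professions_email ?? null,
    MobilePhone: enriched.mobile_phone ?? enriched.phone_numbers?.[1] ?? null,
    Phone: enriched.phone_numbers?.[0] ?? enriched.mobile_phone ?? null,
  };
}

// Example: mapToSalesforce({ emails: [{ address: 'a@b.com' }], phone_numbers: ['+1555...'] })
```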
by Stathis Askaridis
Integrate Xero with FileMaker using Webhooks

Workflow Description

This n8n workflow automates the integration between Xero and FileMaker, allowing seamless data transfer between the two platforms. By listening for webhooks from Xero (e.g., new invoices, payments, or contacts), this workflow ensures that data is automatically sent to and recorded in a FileMaker database.

Who is This For?

This workflow template is ideal for:

- **Accountants** who need a streamlined process to sync financial data between Xero and FileMaker.
- **Business Owners** looking to automate data entry and improve accuracy across their systems.
- **Developers** building solutions for clients that require integration between accounting software and databases.
- **Operations Teams** focused on minimizing manual work and improving efficiency.

Key Steps

1. Xero Webhook Trigger: The workflow starts by capturing events from Xero via a webhook.
2. Data Processing: Transforms and maps the incoming data to match FileMaker's required format.
3. FileMaker Node: Uses the FileMaker node to create or update records directly in the FileMaker database.
4. Logging & Error Handling: Tracks successful entries and manages any errors with automated alerts.

Setup Instructions

1. Set Up the Xero Webhook:
   - Create a webhook in Xero and point it to your n8n webhook node URL.
   - Configure the types of events to trigger the workflow (e.g., new invoices or payments).
   - Xero will then send test calls to verify that you perform its signature hash check correctly (see the sketch below).
2. Connect the FileMaker Node:
   - Set up your FileMaker node with the appropriate credentials and database configuration.
   - Map the fields between the incoming Xero data and your FileMaker database structure.
3. Customize Data Processing:
   - Adjust data transformations as needed to ensure compatibility with your FileMaker schema.
4. Test and Deploy:
   - Run the workflow with sample data to ensure everything is functioning correctly.
   - Monitor the execution log to verify data transfer and make any adjustments as needed.
5. Error Handling Configuration:
   - Configure error-handling nodes or alerts to notify you of any issues during data processing.

Benefits

This setup facilitates real-time data synchronization between Xero and FileMaker, reducing the need for manual data entry and improving overall operational efficiency.
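Here is a hedged sketch of the hash check in a Code node. Xero signs the raw request body with your webhook signing key (HMAC-SHA256, base64) and sends it in the x-xero-signature header; you respond 200 when it matches and 401 when it does not. Getting the truly raw body out of the n8n Webhook node may require enabling its raw-body option, and using Node's built-in crypto module inside a Code node may need built-in modules to be allowed on self-hosted instances, so treat the field access below as assumptions to verify.

```javascript
// Hedged sketch of Xero webhook signature validation in a Code node.
const crypto = require('crypto');

const signingKey = 'YOUR_XERO_WEBHOOK_SIGNING_KEY';          // placeholder
const rawBody = items[0].json.body;                          // assumed raw payload string
const receivedSignature = items[0].json.headers['x-xero-signature'];

const computed = crypto
  .createHmac('sha256', signingKey)
  .update(rawBody)
  .digest('base64');

// Route on `valid` and use a "Respond to Webhook" node to return 200 or 401.
return [{ json: { valid: computed === receivedSignature } }];
```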
by Angel Menendez
Who is this for?

This workflow is ideal for IT operations teams or system administrators who use ServiceNow to track incidents and Slack for team communication. It provides real-time updates on new ServiceNow incidents directly in a designated Slack channel, ensuring timely response and collaboration.

What problem is this workflow solving? / Use case

Manually monitoring ServiceNow for new incidents can be time-consuming and prone to delays. This workflow automates the process, ensuring that team members are instantly notified of new incidents, complete with all relevant details, in a Slack channel. It enhances operational efficiency and incident response time.

What this workflow does

1. Schedule or Manual Trigger: The workflow can be triggered manually or set to run automatically every 5 minutes.
2. Retrieve New Incidents: Fetches incidents created in ServiceNow within the last 5 minutes.
3. Error Handling: Posts an error message in Slack if there are issues connecting to ServiceNow.
4. Incident Processing: If new incidents are found, they are sorted in ascending order by their number, and detailed incident information is formatted and sent to a specified Slack channel.
5. No Incidents: If no new incidents are found, the workflow does nothing.

Setup

1. ServiceNow API Credentials: Configure ServiceNow Basic Authentication in the workflow to connect to your ServiceNow instance.
2. Slack API Credentials: Add your Slack API credentials to enable message posting.
3. Slack Channel Configuration: Define the Slack channel where notifications should be sent. Ensure the channel ID is correctly set in the Slack node.
4. Adjust the Schedule: Modify the schedule in the Schedule Trigger node to suit your requirements.

How to customize this workflow to your needs

- Notification Format: Customize the Slack message format to include additional or fewer details. Update the Blocks section in the Slack node for personalized messages.
- Incident Query Parameters: Adjust the sysparm_query parameter in the ServiceNow node to filter incidents based on specific criteria (an example query is sketched below).
- Error Handling: Modify the error message in the Slack node for more detailed troubleshooting information.

Features

- **Real-Time Notifications**: Immediate updates on new ServiceNow incidents.
- **Error Handling**: Alerts in Slack if the workflow encounters issues connecting to ServiceNow.
- **Customizable Notifications**: Flexibility to modify the incident details sent to Slack.

This workflow streamlines incident management and fosters collaboration by delivering actionable updates directly to your team.
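As a reference for the sysparm_query customization, here are example ServiceNow "encoded query" strings you might set on the node. The relative-date helper (gs.minutesAgo) is a common GlideSystem pattern and the assignment_group value is a placeholder; verify the behavior against your own instance before relying on it.

```javascript
// Example sysparm_query values for the ServiceNow node.
const lastFiveMinutes = 'sys_created_on>=javascript:gs.minutesAgo(5)';

// Narrow further, e.g. only high-priority incidents for one assignment group:
const highPriorityOnly =
  'sys_created_on>=javascript:gs.minutesAgo(5)^priority=1^assignment_group=YOUR_GROUP_SYS_ID';

// Use either string as the value of the sysparm_query parameter.
```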
by Ron
Objective

In industry and production, machine data is sometimes available in databases. That might be sensor data like temperature or pressure, or just binary information. This sample flow reads machine data and sends an alert to your SIGNL4 team when the machine is down. When the machine is up again, the alert in SIGNL4 is closed automatically.

Setup

We simulate the machine data using a Notion table. When we un-check the Up box, we simulate a machine-down event. At regular intervals, n8n checks the database for down items. If such an item is found, an alert is sent using SIGNL4 and the item in Notion is updated (so it is not read again). Status updates from SIGNL4 (acknowledgement, close, annotation, escalation, etc.) are received via webhook, and we update the Notion item accordingly.

This is how the alert looks in the SIGNL4 app.

The flow can easily be adapted to other database monitoring scenarios.
by Francis Njenga
Workflow Documentation: Auto-Retry Engine – Error Recovery Workflow

Detailed Description

The Auto-Retry Engine: Error Recovery Workflow is designed to automate the process of identifying and retrying failed executions in n8n workflows. By leveraging scheduled triggers, API integrations, and conditional logic, this workflow ensures that any failed executions are automatically retried on an hourly basis. This reduces manual intervention, improves system reliability, and ensures smoother workflow operations.

Who is this for?

This workflow is ideal for:

- **Automation Engineers**: Managing and maintaining workflows with minimal manual intervention.
- **DevOps Teams**: Ensuring high availability and reliability of automated processes.
- **IT Administrators**: Reducing downtime and improving system performance by automating error recovery.

What problem does this workflow solve?

- **Manual Error Handling**: Eliminates the need for manual monitoring and retrying of failed executions.
- **Improved Reliability**: Automatically retries failed executions, reducing downtime and improving workflow success rates.
- **Time Efficiency**: Saves time by automating repetitive error recovery tasks, allowing teams to focus on higher-priority work.

What this workflow does

This workflow automates the following steps:

1. Scheduled Monitoring: Checks for failed executions hourly using a schedule trigger.
2. Error Filtering: Identifies executions that have failed and filters out those that have already been successfully retried.
3. Authentication: Logs into the n8n instance using API credentials to retrieve session details.
4. Automatic Retry: Retries the failed executions using the n8n API.
5. Batch Processing: Processes multiple failed executions in batches to avoid overloading the system.

Setup

Prerequisites

To use this workflow, you’ll need:

- **n8n Account**: To create and run the workflow.
- **n8n API Credentials**: For logging into the n8n instance and retrying executions.
- **HTTP Request Node**: Configured to interact with the n8n API.
- **Schedule Trigger**: Set to run the workflow hourly.

Setup Process

1. Configure the Schedule Trigger: Set the trigger to run hourly to check for failed executions.
2. Set Login Credentials: Add your n8n instance URL, username, and password in the Set node.
3. Integrate the n8n API: Use the HTTP Request node to log into the n8n instance and retrieve session details.
4. Retry Failed Executions: Configure the HTTP Request node to retry failed executions using the session details (a hedged sketch of these calls follows below).
5. Batch Processing: Use the Split in Batches node to process multiple failed executions in batches.

How to customize this workflow

Tailor the workflow to fit your specific needs:

- **Adjust Schedule Frequency**: Modify the schedule trigger to run at different intervals (e.g., every 30 minutes).
- **Add Notifications**: Integrate email or Slack notifications to alert teams about failed retries.
- **Refine Error Filtering**: Customize the filtering logic to exclude specific types of failed executions.
- **Scale Batch Size**: Adjust the batch size in the Split in Batches node to optimize performance.

Conclusion

The Auto-Retry Engine: Error Recovery Workflow is a powerful tool for automating error recovery in n8n workflows. By reducing manual intervention and ensuring failed executions are retried automatically, this workflow enhances system reliability and operational efficiency. Whether you're managing a few workflows or a complex automation ecosystem, this workflow ensures your processes run smoothly and consistently.
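For orientation, here is a hedged sketch of the two HTTP Request calls the description implies. n8n's session-based REST endpoints are internal and version-dependent, so every path, payload, and cookie name below is an assumption to verify against your own instance; the public API base URL and status filter are likewise shown only as a possible alternative for listing failed executions.

```javascript
// Hedged sketch of the login and retry calls (assumed internal endpoints).
const baseUrl = 'https://your-n8n-instance.example.com'; // placeholder

// 1) Log in to obtain a session cookie.
const loginRequest = {
  method: 'POST',
  url: `${baseUrl}/rest/login`,
  body: { email: 'admin@example.com', password: 'YOUR_PASSWORD' },
  // Keep the returned cookie and send it with the calls below.
};

// 2) Retry one failed execution by ID.
const retryRequest = (executionId) => ({
  method: 'POST',
  url: `${baseUrl}/rest/executions/${executionId}/retry`,
  headers: { cookie: '<session cookie from step 1>' },
});

// Listing failed executions may also be possible via the public API,
// e.g. GET `${baseUrl}/api/v1/executions?status=error` with an API key header.
```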