by tanaypant
This is Workflow 1 in the blog tutorial Database activity monitoring and alerting.

Prerequisites
A Postgres database and its credentials. Basic knowledge of JavaScript and SQL.

Nodes
Cron node: starts the workflow every minute.
Function node: generates sensor data (a preset sensor id, a randomly generated value, a timestamp, and a notification flag preset to false).
Postgres node: inserts the data into the Postgres database.

You can create the table for this workflow with the following SQL statement:
CREATE TABLE n8n (id SERIAL, sensor_id VARCHAR, value INT, time_stamp TIMESTAMP, notification BOOLEAN);
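A minimal sketch of what the Function node's code could look like, using the column names from the CREATE TABLE statement above. The preset sensor id and the value range are placeholders, not taken from the original workflow:

```javascript
// Hypothetical n8n Function node body: emits one sensor reading per run.
// Field names mirror the n8n table (sensor_id, value, time_stamp, notification).
function makeReading() {
  return {
    sensor_id: 'sensor1',                   // preset sensor id (placeholder)
    value: Math.floor(Math.random() * 100), // randomly generated value, 0-99
    time_stamp: new Date().toISOString(),   // current timestamp
    notification: false,                    // preset to false
  };
}

// In n8n, the node returns items wrapped as [{ json: ... }]
const item = { json: makeReading() };
```

The Postgres node can then map these JSON fields straight onto the table columns.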
by Raquel Giugliano
This minimal utility workflow connects to the SAP Business One Service Layer API to verify login credentials and return the session ID. It's ideal for testing access, or for use as a sub-workflow that retrieves the B1SESSION token for other operations.

++⚙️ HOW IT WORKS:++

🔹 1. Trigger Manually
The workflow is started by a Manual Trigger, which is ideal for testing or debugging credentials before automation.

🔹 2. Set SAP Login Data
The Set Login Data node defines four key input variables:
sap_url: Base URL of the SAP B1 Service Layer (e.g. https://sap-server:50000/b1s/v1/)
sap_username: SAP B1 username
sap_password: SAP B1 password
sap_companydb: SAP B1 company database name

🔹 3. Connect to SAP
An HTTP Request node performs a POST to the Login endpoint. The body is structured as:
{ "UserName": "your_sap_username", "Password": "your_sap_password", "CompanyDB": "your_sap_companydb" }
If successful, the response contains a SessionId, which is required for authenticated requests.

🔹 4. Return Session or Error
The response is branched:
On success, the SessionId is extracted and returned.
On failure, the error message and status code are stored separately.

++🛠 SETUP STEPS:++

1️⃣ Create SAP Service Layer Credentials
Although this workflow uses manual inputs (via the Set node), it's best to define your connection details as environment variables for reuse:
SAP_URL=https://your-sap-host:50000/b1s/v1/
SAP_USER=your_sapuser
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB
Alternatively, update the Set Login Data node directly with your values.

2️⃣ Run the Workflow
Click "Execute Workflow" in n8n and watch the response from SAP:
If successful: sessionID will be available in the Success node.
If failed: statusCode and errorMessage will be available in the Failed node.

++✅ USE CASES:++

🔄 Reusable Login Module: export this as a reusable sub-workflow for other SAP-integrated flows.
🔐 Credential Testing Tool: validate new environments and test credentials before deployment.
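A sketch of how the HTTP Request node's call could be assembled in code. The /Login path and the body field names come from the description above; the sample credential values are placeholders:

```javascript
// Build the request the HTTP Request node sends to the SAP B1 Service Layer.
function buildLoginRequest(sapUrl, username, password, companyDb) {
  return {
    method: 'POST',
    url: sapUrl.replace(/\/$/, '') + '/Login', // strip trailing slash to avoid //Login
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      UserName: username,
      Password: password,
      CompanyDB: companyDb,
    }),
  };
}

// Placeholder values for illustration only
const req = buildLoginRequest(
  'https://sap-server:50000/b1s/v1/',
  'manager',
  'secret',
  'SBODEMOUS'
);
// A successful response contains a SessionId used for authenticated requests.
```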
by Harshil Agrawal
This workflow stores form responses from Typeform in Airtable. The workflow also sends the responses to a channel on Slack. You will have to configure the Set node if your form uses different fields.
by Miquel Colomer
This workflow is useful if you have lots of tasks running daily. A MySQL node (or whichever database n8n uses to store its data, such as MongoDB or Postgres) removes old entries from the execution_entity table, which contains the history of executed workflows. If a task runs every minute, it creates 1,440 rows per day (60 minutes x 24 hours), so the table grows quickly. The SQL query deletes entries older than 30 days, using the stoppedAt column as the reference for date calculations. You only have to set up the MySQL connection properly and configure the Cron node to run once per day during a low-traffic hour.
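The cleanup query described above could look like the following, here built as a string the way the MySQL node would execute it. This is a sketch assuming MySQL's DATE_SUB function; other databases (e.g. Postgres) use different interval syntax:

```javascript
// Build the cleanup statement for n8n's execution history table.
// Table and column names (execution_entity, stoppedAt) follow n8n's schema.
function buildCleanupQuery(days) {
  return (
    `DELETE FROM execution_entity ` +
    `WHERE stoppedAt < DATE_SUB(NOW(), INTERVAL ${days} DAY);`
  );
}

const query = buildCleanupQuery(30);
```

Running this once per day keeps the table bounded without touching recent execution history.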
by tanaypant
This workflow gets triggered every Friday at 6 PM with the help of a Cron node. It pulls in data about a random cocktail via the HTTP Request Node and sends the data to a Bannerbear node to create an image based on a template. The image is then finally shared on a specified Rocket.Chat channel.
by Jorge Martínez
Lead Enrichment & Email Discovery from Google Sheets

What this workflow does
This template automates the enrichment of business leads from a Google Sheet by:
Triggering when a row is activated
Searching for company information with Serper.dev
Generating and validating potential contact pages
Scraping company pages with ScrapingBee
Extracting emails and updating the sheet
Marking rows as finished

Prerequisites
Google Sheet with columns: business type, city, state, activate (copy the ready-to-use Sheet Template)
Google Sheets API credentials (from Google Cloud)
Serper.dev API key (free tier available)
ScrapingBee API key (free tier available)

Inputs
Google Sheet row: must include business type, city, state, activate
Set Information node: country, country_code, language, result_count (can also be provided via columns in the sheet)

Outputs
Google Sheet update: company names, URLs, found email addresses (comma-separated if multiple), and status updates (Running, Missing information, Finished)

Configuration Required
Connect the Google Sheets node with your Google Cloud credentials
Add your Serper.dev API key to the HTTP Request node
Add your ScrapingBee API key to the scraping node
Adjust search and filtering options as needed

How to customize the workflow
Send country, country_code, and result_count from the sheet: add these as columns in your sheet and update the workflow to read their values dynamically, making your search fully configurable per row.
Add more blacklist terms: update the Code node with additional company names or keywords you want to exclude from the search results.
Extract more contact details: modify the email extraction code to find other contact info (such as phone numbers or social profiles) if needed.
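The email-extraction step could be sketched as follows. This is an assumption about the Code node's internals, not the template's exact code; it shows the comma-separated output format the sheet expects:

```javascript
// Pull email addresses out of scraped HTML and join them for the sheet.
function extractEmails(html) {
  const matches = html.match(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g) || [];
  return [...new Set(matches)].join(', '); // de-duplicate, comma-separate
}

const emails = extractEmails(
  '<a href="mailto:info@example.com">info@example.com</a> or sales@example.com'
);
```

Extending this regex-based approach (e.g. for phone numbers) is where the "Extract more contact details" customization above would go.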
by n8n Team
This n8n workflow is designed to analyze email headers received via a webhook. The workflow splits into two main paths based on the presence of the Received and Authentication-Results headers.

In the first path, if Received headers are present, the workflow extracts IP addresses from them and then queries the IP Quality Score API to gather information about each address, including fraud score, abuse history, organization, and more. Geolocation data is also obtained from the IP-API API. The workflow collects and aggregates this information for each IP address.

In the second path, if Authentication-Results headers are present, the workflow extracts the SPF, DKIM, and DMARC authentication results. It then evaluates these results and sets fields accordingly (e.g. SPF pass/fail/neutral).

The paths merge their results, and the workflow responds to the original webhook with the aggregated analysis, including IP information and authentication results.

Potential issues during setup include ensuring proper configuration of the webhook calls with header authentication, handling authentication and API keys for the IP Quality Score API, and addressing any discrepancies or errors in the logic nodes, such as handling SPF, DKIM, and DMARC results correctly. Thorough testing with various email header formats is also essential to ensure accurate analysis and response.
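The first path's IP-extraction step could look like the sketch below (IPv4 only, as an illustrative assumption; the actual workflow's parsing may differ):

```javascript
// Extract candidate IPv4 addresses from an array of Received header values,
// de-duplicated so each address is looked up only once.
function extractIps(receivedHeaders) {
  const re = /\b(?:\d{1,3}\.){3}\d{1,3}\b/g;
  const ips = receivedHeaders.flatMap((header) => header.match(re) || []);
  return [...new Set(ips)];
}

const ips = extractIps([
  'from mail.example.com (mail.example.com [203.0.113.5]) by mx.example.org',
  'from [203.0.113.5] by relay.example.net',
]);
```

Each unique address would then be passed to the IP Quality Score and IP-API lookups.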
by Miquel Colomer
Do you want to monitor the DNS entries of your customers' domains or servers? This workflow gets DNS information for any domain using the uProc Get Domain DNS records tool. You can use it to check existing DNS records in real time and ensure that any domain setup is correct. You need to add your credentials (your email and API key, located in the Integration section) to n8n. You can replace the "Create Domain Item" node with any integration containing a domain, such as Google Sheets, MySQL, or a Zabbix server. Every uProc node returns multiple items with the following fields per item:
type: the DNS record type (A, ALIAS, AAAA, CERT, CNAME, MX, NAPTR, NS, PTR, SOA, SRV, TXT, URL).
values: the DNS record values.
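Downstream nodes can filter the returned items by record type. A small sketch, with illustrative sample items shaped like the type/values fields described above:

```javascript
// Illustrative items shaped like the uProc node's output.
const items = [
  { json: { type: 'A', values: ['192.0.2.10'] } },
  { json: { type: 'MX', values: ['10 mail.example.com'] } },
  { json: { type: 'TXT', values: ['v=spf1 -all'] } },
];

// Keep only MX records and collect their values.
const mxValues = items
  .filter((item) => item.json.type === 'MX')
  .flatMap((item) => item.json.values);
```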
by Harshil Agrawal
This workflow generates sensor data, which is used in another workflow for managing factory incident reports. Read more about this use case and how to build both workflows with step-by-step instructions in the blog post How to automate your factory’s incident reporting.

Prerequisites
AMQP, an ActiveMQ connection, and credentials

Nodes
Interval node: triggers the workflow every second.
Set node: sets the necessary values for the items that are added to the queue.
AMQP Sender node: sends a raw message to add to the queue.
by Greg Lopez
Workflow Information

📌 Purpose 🎯
The intention of this workflow is to integrate new Shopify orders into MS Dynamics Business Central:
Point-of-Sale (POS): POS orders will be created in Business Central as Sales Invoices, given that no fulfillment is expected.
Web Orders: these orders will be created as Business Central Sales Orders.

How to use it 🚀
Edit the "D365 BC Environment Settings" node with your own account values (Company Id, Tenant Id, Tax & Discount Items).
Go to the "Shopify" node and edit the connection with your environment. More help here.
Go to the "Lookup Customers" node to edit the Business Central connection details with your environment settings.
Set the required filters on the "Shopify Order Filter" node.
Edit the "Schedule Trigger" node with the required frequency.

Useful Workflow Links 📚
Step-by-step Guide / Integro Cloud Solutions
Business Central REST API Documentation
Video Demo

Need Help? Contact me at:
✉️ greg.lopez@integrocloudsolutions.com
📥 https://www.linkedin.com/in/greg-lopez-08b5071b/
by Juan Carlos Cavero Gracia
This workflow turns any URL (news article, blog post, or even an n8n workflow page) into a vertical short video with your AI avatar explaining it, ready for TikTok, Instagram Reels, and YouTube Shorts. It fetches the page, generates a tight 30-45s script and platform-optimized descriptions, captures a dynamic background of the page (animated scroll or static image), composes and renders the video with HeyGen (free split-screen or paid clean cut-out), and sends it to Upload-Post with an optional human review step.

Note: You can generate full videos end-to-end using free trials (no credit card required) for all APIs used in this template: Google Gemini, ScreenshotOne, HeyGen, Upload-Post.

Who Is This For?
Creators & Marketers: explain articles, launches, and workflows without filming or editing.
Media & Newsletters: turn breaking stories into clear, shareable shorts.
Agencies: scale content creation with review gates and multi-account publishing.
Founders & Product Teams: maintain an on-brand presence in minutes.

What Problem Does It Solve?
Making platform-native explainers is slow and inconsistent. This workflow:
Writes the script with AI: a ~30s hook-led monologue with key facts.
Optimizes per platform: tailored captions for TikTok, Reels, and Shorts.
Generates the video automatically: uses the page itself as background plus the avatar voiceover.
Publishes everywhere: optional review, then one-click multi-platform posting.

How It Works
URL Input: paste any page to convert (article, blog, or workflow).
AI Agent (Gemini): reads the page and produces a single script (~30s) plus platform-specific descriptions.
Video Background: animated scroll capture (9:16) or featured image via ScreenshotOne.
HeyGen Composition & Render:
Free: split-screen vertical (avatar bottom, background top).
Paid: clean avatar cut-out over video/image (background removal).
Render & Poll: waits for HeyGen to finish and retrieves the final MP4.
Human Review (optional): approve or reject in a simple form.
Publish (Upload-Post): uploads to TikTok, Instagram (Reels), and YouTube Shorts with AI-generated titles/descriptions.

Setup
Credentials (all offer free trials, no credit card required):
HeyGen API (X-Api-Key) plus your avatar_id and voice_id.
ScreenshotOne API key.
Upload-Post (connect your social accounts).
Google Gemini (chat model).
Variables in "Set Input Vars":
workflow_url: page to convert.
background_removal: true (paid) or false (free).
background_type: video (animated scroll) or photo (static).
Publishing: choose platforms in Upload-Post; enable review if you want to approve before posting.

Requirements
Accounts: n8n, HeyGen, ScreenshotOne, Upload-Post, Google (Gemini).
API Keys: HeyGen, ScreenshotOne, Gemini; Upload-Post credentials.
Assets: an avatar and a voice available in HeyGen.

Features
URL → Short in minutes: 9:16 vertical (720×1280).
Pro script with hook: clear, natural, ~30s.
Two render modes: split-screen (free) or clean cut-out (paid).
Background from the page: animated scroll or main image.
Human-in-the-loop: approval before going live.
Multi-publish: TikTok, Instagram Reels, YouTube Shorts via Upload-Post.
Start free: generate videos with free trials across all APIs, no credit card required.
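The "Render & Poll" step amounts to polling a status endpoint until the render completes. A generic sketch (the status-check function is injected; the real workflow would call HeyGen's status API with your API key, and the exact field names are assumptions):

```javascript
// Poll a render-status function until the video is done or we give up.
async function waitForRender(checkStatus, { intervalMs = 5000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const status = await checkStatus();
    if (status.state === 'completed') return status.videoUrl; // final MP4 URL
    if (status.state === 'failed') throw new Error('Render failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for render');
}
```

In n8n this logic is typically expressed as a Wait node plus an IF node looping back, rather than code, but the control flow is the same.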
by InfraNodus
Build a Better AI Chatbot for Your Zendesk Knowledge Portal

Simple setup, no vector database needed. Uses GraphRAG to enhance users' prompts and provide high-quality, relevant, up-to-date responses from your Zendesk knowledge base. It can be embedded on your Zendesk portal and is also accessible via a URL, and it can be customized and branded in your style. See the example at support.noduslabs.com or the screenshot below.

Also, compare it to the original Zendesk AI chatbot available on our other website, https://infranodus.com: you will see that the quality of responses in this custom chatbot is much better than in the native Zendesk one. You also save on subscription costs, because you won't need to activate Zendesk's chat option ($25 per agent).

Workflow Overview
In this workflow, we use the n8n AI Agent node with a custom prompt that:
1) First consults an "expert" graph from the InfraNodus GraphRAG system, using the official InfraNodus GraphRAG node, which extracts a reasoning ontology and a general context about your product from the graph that you create manually or automatically, as described on our support portal.
2) Then the AI Agent node converts the augmented user prompt into a Zendesk search query that retrieves the most relevant content via Zendesk's search API, using an n8n HTTP node. Both the results from the graph and the search results are combined and shown to the user.

How it works
Receives a request from a user via a webhook that connects to the custom n8n chat widget.
The request goes to the n8n AI Agent node with a custom prompt (provided in the workflow) that orchestrates the following procedure:
Sends the request to the knowledge graph in your InfraNodus account, using the official InfraNodus GraphRAG node, which contains a reasoning ontology represented as a knowledge graph based on your Zendesk knowledge support portal. Read more on how to generate this ontology here.
Based on the results from InfraNodus, it reformulates the original prompt to include the reasoning logic and to provide a fuller context to the model.
Sends the request to the Zendesk search API, using the custom n8n HTTP node with an enhanced search query, to retrieve high-quality results.
Combines the Zendesk search results with the InfraNodus ontology to generate a final response to the user.
Sends the response back to the webhook, which is then picked up by the n8n chat widget wherever the widget is embedded (e.g. on your own support portal).

How to use
• Get an InfraNodus API key and add it to the InfraNodus GraphRAG node.
• Edit the InfraNodus Graph node to provide the name of the graph that you will be using as the ontology (you need to create it in InfraNodus first).
• Edit the AI Agent (Support Agent) prompt to adapt our custom instructions to your particular use case (do not change it too much, as it works quite well and tells the agent what it should do and in what sequence).
• Add the API key for your Zendesk account. To get it, go to your support portal's Admin > Apps & Integrations > API Tokens. It's usually located at https://noduslabs.zendesk.com/admin/apps-integrations/apis/api-tokens, where instead of noduslabs you need to put the name of your support portal.
Note: the official n8n Zendesk node does not have an endpoint to search and extract articles from the support portal, so we use a custom HTTP node instead; you can still connect to it via the Zendesk API key you have set up in n8n.

Support & Tutorials
If you want to create your own reasoning ontology graphs, please refer to this article on generating your own knowledge graph ontologies. Specifically for this use case: Building ontology for your n8n AI chat bot.
You may also be interested in this video, which explains the logic of this approach in detail. Our support article for this workflow, with a real-life example: Building an embeddable AI chatbot agent for your Zendesk knowledge portal.

To get support and help, contact us via support.noduslabs.com. Learn more about InfraNodus at www.infranodus.com.
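The Zendesk search call made by the custom HTTP node can be sketched as below. The Help Center articles/search endpoint and the email/token Basic-auth scheme are assumptions based on Zendesk's public API; the subdomain and credentials shown are placeholders:

```javascript
// Build the request the HTTP node sends to Zendesk's Help Center search API.
function buildZendeskSearch(subdomain, email, apiToken, query) {
  return {
    method: 'GET',
    url:
      `https://${subdomain}.zendesk.com/api/v2/help_center/articles/search.json` +
      `?query=${encodeURIComponent(query)}`,
    headers: {
      // Zendesk API tokens use "email/token:apiToken" as the Basic auth pair
      Authorization:
        'Basic ' + Buffer.from(`${email}/token:${apiToken}`).toString('base64'),
    },
  };
}

// Placeholder values for illustration only
const req = buildZendeskSearch('noduslabs', 'agent@example.com', 'API_TOKEN', 'reset password');
```

The AI Agent node supplies the enhanced query string; the results come back as a list of articles that are then combined with the InfraNodus ontology.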