by n8n Team
This is an example workflow that imports an XML file into an SQL database. The Read Binary Files node loads the XML file from the server. Then the Code node extracts the file content from the binary buffer. Afterwards, an XML node converts the XML string into a JSON structure. Finally, the MySQL node inserts the data records into the SQL table. In the upper part of the workflow there is another MySQL node that is disabled. This node creates a new table with all the required variables, based on the sample SQL database: https://www.mysqltutorial.org/mysql-sample-database.aspx
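A minimal sketch of what that Code node does, assuming the default binary property name `data` used by Read Binary Files and in-memory (base64) binary storage:

```javascript
// Code node, "Run Once for All Items" mode: decode the binary buffer
// into a plain string so the XML node can parse it.
const item = items[0];
const xmlString = Buffer.from(item.binary.data.data, 'base64').toString('utf8');
return [{ json: { data: xmlString } }];
```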
by Yaron Been
Fire Part Crafter Image Generator

**Description**

PartCrafter is a structured 3D mesh generation model that creates multiple parts and objects from a single RGB image.

**Overview**

This n8n workflow integrates with the Replicate API to use the fire/part-crafter model. This powerful AI model can generate high-quality 3D mesh content based on your inputs.

**Features**

- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**

Required:

- **image** (string): Input image for 3D mesh generation

Optional:

- **seed** (integer, default: 0): Random seed for reproducibility. Use 0 for a random seed
- **num_parts** (integer, default: 16): Number of parts to generate
- **num_tokens** (string, default: 2048): Number of tokens for generation
- **guidance_scale** (number, default: 7): Guidance scale for generation
- **remove_background** (boolean, default: false): Remove the background from the input image
- **use_flash_decoder** (boolean, default: false): Use the flash decoder for faster inference (may be temperamental)
- **num_inference_steps** (integer, default: 50): Number of inference steps

**How to Use**

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate the 3D mesh output
4. Access the generated output from the final node

**API Reference**

- Model: fire/part-crafter
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
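For reference, here is the same request expressed as a standalone JavaScript call. This is a minimal sketch assuming an API token in `REPLICATE_API_TOKEN`; the model/version selection and result polling depend on your Replicate setup:

```javascript
// Sketch of the prediction request the workflow's HTTP node performs.
const response = await fetch('https://api.replicate.com/v1/predictions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    // How the fire/part-crafter model is referenced depends on your setup
    // (e.g. a version hash); adjust accordingly.
    input: {
      image: 'https://example.com/input.png', // placeholder input image URL
      seed: 0,
      num_parts: 16,
      num_tokens: 2048, // listed as a string parameter in the table above
      guidance_scale: 7,
      remove_background: false,
      use_flash_decoder: false,
      num_inference_steps: 50,
    },
  }),
});
const prediction = await response.json();
// The workflow then polls the prediction's status URL until it reports
// "succeeded" and reads the generated output.
```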
by Zacharia Kimotho
How to scrape emails from websites

This workflow shows how to quickly build an email-scraping API using n8n. Email marketing is at the core of most marketing strategies, be it content marketing, sales, etc. As such, being able to find contacts for your business in bulk and at scale is key. There are tools available on the market that can do this, but most are premium; why not build a custom one with n8n?

**Usage**

The workflow fetches a website and extracts email addresses from the data found on the page.

1. Copy the webhook URL to your browser.
2. Add a query parameter, e.g. ?Website=https://mailsafi.com. This should give you a URL like $n8nhostingurl/webhook/ea568868-5770-4b2a-8893-700b344c995e?Website=https://mailsafi.com
3. Open the URL and wait for the extracted emails to be displayed.

This will return the email addresses found on the website, or, if there is no email, the response will be "workflow successfully executed."

Make sure to include the http:// (or https://) protocol in your domains; otherwise, you may get an error.
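The extraction step itself can be as simple as a regex over the fetched HTML. A minimal sketch of such a Code node, assuming the page body arrives in a field named `data` from the preceding HTTP Request node:

```javascript
// Pull every email-shaped string out of the page and de-duplicate.
const html = $input.first().json.data || '';
const matches = html.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g) || [];
const emails = [...new Set(matches)];
return [{ json: { emails } }];
```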
by Zacharia Kimotho
This workflow helps marketers verify and update data using the EffiBotics Email Verifier API. Copy this Google Sheet and create a list with emails, as in this example: https://docs.google.com/spreadsheets/d/1rzuojNGTaBvaUEON6cakQRDva3ueGg5kNu9v12aaSP4/edit#gid=0 The trigger checks for any change in the number of rows present in the sheet. Once you update a cell, the new data is read and the email is checked for validity; the results are then updated in real time on the sheet, keeping the verified emails in Google Sheets up to date. Happy emailing!
by Dataki
This workflow enriches new Pipedrive organizations' data by adding a note to the organization object in Pipedrive. It assumes there is a custom "website" field in your Pipedrive setup, as data will be scraped from this website to generate a note using OpenAI. Then, a notification is sent in Slack.

**⚠️ Disclaimer**

This workflow uses a scraping API. Before using it, ensure you comply with the regulations regarding web scraping in your country or state.

**Important Notes**

- The OpenAI model used is GPT-4o, chosen for its large input token capacity. However, it is not the cheapest model if cost is very important to you.
- The system prompt in the OpenAI node generates output with relevant information, but feel free to improve or modify it according to your needs.

**How It Works**

Node 1: Pipedrive Trigger - An Organization is Created
This is the trigger of the workflow. When an organization object is created in Pipedrive, this node is triggered and retrieves the data. Make sure you have a "website" custom field in Pipedrive (the name of the field in the n8n node will appear as a random ID and not with the Pipedrive custom field name).

Node 2: ScrapingBee - Get Organization's Website's Homepage Content
This node scrapes the content from the URL of the website associated with the Pipedrive organization created in Node 1. The workflow uses the ScrapingBee API, but you can use any preferred API or simply the HTTP Request node in n8n.

Node 3: OpenAI - Message GPT-4o with Scraped Data
This node sends the HTML-scraped data from the previous node to the OpenAI GPT-4o model. The system prompt instructs the model to extract company data, such as products or services offered and competitors (if known by the model), and format it as HTML for optimal use in a Pipedrive note.

Node 4: Pipedrive - Create a Note with OpenAI Output
This node adds a note to the organization created in Pipedrive using the OpenAI node output. The note will include the company description, target market, selling products, and competitors (if GPT-4o was able to determine them).

Nodes 5 & 6: HTML to Markdown & Code - Markdown to Slack Markdown
These two nodes format the HTML output to Slack Markdown. The note created in Pipedrive is in HTML format, as specified by the system prompt of the OpenAI node. To send it to Slack, it needs to be converted to Markdown and then to Slack Markdown, as sketched at the end of this entry.

Node 7: Slack - Notify
This node sends a message in Slack containing the Pipedrive organization note created with this workflow.
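As a rough sketch of what the Code node in step 6 does (assuming the Markdown text arrives in a field named `markdown`; the actual field name may differ), the Markdown-to-Slack-Markdown conversion boils down to a few replacements:

```javascript
// Convert standard Markdown to Slack's mrkdwn dialect.
const md = $input.first().json.markdown || '';
const slack = md
  .replace(/\*\*(.+?)\*\*/g, '*$1*')          // bold: **text** -> *text*
  .replace(/\[(.+?)\]\((.+?)\)/g, '<$2|$1>')  // links: [text](url) -> <url|text>
  .replace(/^#+\s*(.+)$/gm, '*$1*');          // headings -> bold lines
return [{ json: { slack } }];
```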
by Anthony
This workflow lets you run OCR on a folder of receipts or invoices (make sure your files are in .pdf, .png, or .jpg format). The workflow can be triggered via the "Test workflow" button, and it also monitors the folder for new files, automatically recognizing them.

Video demo: https://youtu.be/mGPt7fqGQD8

1. n8n import glitch
After import, the trigger node "When clicking 'Test workflow'" might be disconnected. You need to connect it via 2 arrows to the "Google Sheets1" and "Google Drive" nodes. So the workflow has 2 triggers - via the button, and via the Google Sheets "new file" event - and both of these triggers should be connected to the 2 nodes. Here is how it should look: https://ocr.oakpdf.com/n8n_fix.png

2. Set up the RapidAPI HTTP auth key
Create a new "HTTP Header" n8n credential and paste your RapidAPI key from https://rapidapi.com/restyler/api/receipt-and-invoice-ocr-api into it: https://ocr.oakpdf.com/n8n_api_key.png
Make sure the "HTTP Request" node uses this credential; see the sketch at the end of this entry for the approximate shape of the call.

3. Set up your Google auth
You need a Google connection to work with your Google Sheets and Google Drive accounts: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/#finish-your-n8n-credential

4. Set up Google Sheets
Copy this Google Sheets document: https://docs.google.com/spreadsheets/d/1G0w-OMdFRrtvzOLPpfFJpsBVNqJ9cfRLMKCVWfrTQBg/edit?usp=sharing

Custom document formats and advanced usage:
Email: contact@scrapeninja.net
LinkedIn: https://www.linkedin.com/in/anthony-sidashin/
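Purely illustrative: the exact endpoint path and payload are defined by the RapidAPI listing above, and the host and /ocr path below are hypothetical. Only the X-RapidAPI-Key / X-RapidAPI-Host header names are the standard RapidAPI auth headers that the HTTP Header credential supplies:

```javascript
// Hypothetical shape of the OCR request; check the RapidAPI listing for
// the real host, path, and payload.
const res = await fetch('https://receipt-and-invoice-ocr-api.p.rapidapi.com/ocr', {
  method: 'POST',
  headers: {
    'X-RapidAPI-Key': process.env.RAPIDAPI_KEY,
    'X-RapidAPI-Host': 'receipt-and-invoice-ocr-api.p.rapidapi.com',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ url: 'https://example.com/receipt.pdf' }), // hypothetical payload
});
const ocrResult = await res.json();
```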
by Marcel Claus-Ahrens
This workflow downloads all files from a specific folder in an S3 bucket and compresses them so you can download them via n8n or do further processing. Fill in your credentials and settings in the nodes marked with "*". It might serve well as a blueprint or as a manual download for S3 folders. Since I found it rather tricky to compress all binary files into one zip file, I figured it might be an interesting template.

Hint: the expression used in the "Compress" node to gather every binary key (so they can be compressed dynamically) is shown below.

Enjoy the workflow! ❤️

https://let-the-work-flow.com
Workflow Automation & Development
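The expression itself did not survive in this description. A common n8n pattern that does exactly this, listing every binary key on the item as a comma-separated string, is (an assumption, so verify against the actual template):

```javascript
// In the Compress node's binary property field:
{{ Object.keys($binary).join(",") }}
```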
by Dataki
This workflow demonstrates how to enrich data from a list of companies in a spreadsheet. While this workflow is production-ready if all steps are followed, adding error handling would enhance its robustness.

**Important notes**

- **Check legal regulations**: This workflow involves scraping, so make sure to check the legal regulations around scraping in your country before getting started. Better safe than sorry!
- **Mind those tokens**: OpenAI tokens can add up fast, so keep an eye on usage unless you want a surprising bill that could knock your socks off! 💸

**Main Workflow**

Node 1 - Webhook
This node triggers the workflow via a webhook call. You can replace it with any other trigger of your choice, such as a form submission, a new row added in Google Sheets, or a manual trigger.

Node 2 - Get Rows from Google Sheet
This node retrieves the list of companies from your spreadsheet. Here is the Google Sheet template you can use. The columns in this Google Sheet are:

- **Company**: The name of the company
- **Website**: The website URL of the company

These two fields are required at this step.

- **Business Area**: The business area deduced by OpenAI from the scraped data
- **Offer**: The offer deduced by OpenAI from the scraped data
- **Value Proposition**: The value proposition deduced by OpenAI from the scraped data
- **Business Model**: The business model deduced by OpenAI from the scraped data
- **ICP**: The Ideal Customer Profile deduced by OpenAI from the scraped data
- **Additional Information**: Information related to the scraped data, including:
  - Information Sufficiency: Indicates if the information was sufficient to provide a full analysis. Options: "Sufficient" or "Insufficient"
  - Insufficient Details: If labeled "Insufficient", specifies what information was missing or needed to complete the analysis.
  - Mismatched Content: Indicates whether the page content aligns with that of a typical company page.
  - Suggested Actions: Provides recommendations if the page content is insufficient or mismatched, such as verifying the URL or searching for alternative sources.

Node 3 - Loop Over Items
This node ensures that, in subsequent steps, the website in "extra workflow input" corresponds to the row being processed. You can delete this node, but you'll need to ensure that the "query" sent to the scraping workflow corresponds to the website of the specific company being scraped (rather than just the first row).

Node 4 - AI Agent
This AI agent is configured with a prompt to extract data from the content it receives. The node has three sub-nodes:

- OpenAI Chat Model: The model currently used is gpt-4o-mini.
- Call n8n Workflow: This sub-node calls the workflow that uses ScrapingBee and retrieves the scraped data.
- Structured Output Parser: This parser structures the output for clarity and ease of use, and then adds rows to the Google Sheet.

Node 5 - Update Company Row in Google Sheet
This node updates the specific company's row in Google Sheets with the enriched data.

**Scraper Agent Workflow**

Node 1 - Tool Called from Agent
This is the trigger for when the AI Agent calls the scraper. A query is sent with:

- Company name
- Website (the URL of the website)

Node 2 - Set Company URL
This node renames a field, which may seem trivial but is useful for performing transformations on data received from the AI Agent.

Node 3 - ScrapingBee: Scrape Company's Website
This node scrapes data from the URL provided using ScrapingBee.
You can use any scraper of your choice, but ScrapingBee is recommended, as it allows you to configure scraper behavior directly. Once configured, copy the provided "curl" command and import it into n8n; see the sketch at the end of this section for the equivalent call.

Node 4 - HTML to Markdown
This node converts the scraped HTML data to Markdown, which is then sent to OpenAI. The Markdown format generally uses fewer tokens than HTML.

**Improving the Workflow**

It's always a pleasure to share workflows, but creators sometimes want to keep some magic to themselves ✨. Here are some ways you can enhance this workflow:

- Handle potential errors.
- Configure the scraper tool to scrape other pages on the website. Although this will cost more tokens, it can be useful (e.g., scraping "Pricing" or "About Us" pages in addition to the homepage).
- Instead of Google Sheets, connect directly to your CRM to enrich company data.
- Trigger the workflow from form submissions on your website and send the scraped data about the lead to a Slack or Teams channel.
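As a reference for Node 3 of the scraper agent, the ScrapingBee call that the imported curl command produces looks roughly like this. This is a sketch assuming an API key in `SCRAPINGBEE_API_KEY`; `render_js` is one of ScrapingBee's documented parameters:

```javascript
// Fetch the company homepage HTML through ScrapingBee.
const params = new URLSearchParams({
  api_key: process.env.SCRAPINGBEE_API_KEY,
  url: 'https://example.com',  // the company website from the agent's query
  render_js: 'false',          // plain HTML is usually enough for a homepage
});
const html = await fetch(`https://app.scrapingbee.com/api/v1/?${params}`)
  .then((res) => res.text());
// The HTML is then converted to Markdown before being sent to OpenAI.
```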
by Sirhexalot
This n8n workflow allows you to update user roles in Zammad based on data from an Excel file. The workflow automates role assignments, ensuring efficient and consistent updates.

**Features**

- **Excel Integration**: Import user data from an Excel file containing emails and role assignments.
- **Dynamic Updates**: Match Zammad users by email and update their roles.
- **Error Handling**: Continue workflow execution even if some updates fail.
- **Customizable Variables**: Configure the Zammad API URL, API key, and Excel file URL.

**Usage**

1. Import the workflow: Upload the provided .json file into your n8n instance.
2. Set variables:
   - zammad_base_url: Your Zammad instance URL.
   - excel_source_url: URL of the Excel file containing user data.
3. Authentication for Zammad: In the "Find Zammad User by email" and "Update User Roles" nodes, create a Header Auth credential with:
   - **Name**: Authorization
   - **Value**: Bearer <your Zammad API token>
4. Run the workflow: Execute the workflow to update user roles based on the Excel data. (The sketch at the end of this entry shows the underlying API calls.)

**Issues and Suggestions**

For issues or suggestions, visit the GitHub repository.
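For reference, a minimal sketch of the two Zammad API calls the workflow performs, assuming the Bearer-token Header Auth credential described above (role_ids follows Zammad's user API schema; the endpoint paths are Zammad's standard REST routes):

```javascript
// Sketch, not the workflow itself: find a user by email, then set roles.
const base = 'https://your-instance.zammad.com'; // zammad_base_url
const headers = {
  Authorization: `Bearer ${process.env.ZAMMAD_API_TOKEN}`,
  'Content-Type': 'application/json',
};

// "Find Zammad User by email"
const [user] = await fetch(
  `${base}/api/v1/users/search?query=email:user@example.com`,
  { headers },
).then((res) => res.json());

// "Update User Roles" - role IDs taken from the matching Excel row
await fetch(`${base}/api/v1/users/${user.id}`, {
  method: 'PUT',
  headers,
  body: JSON.stringify({ role_ids: [2, 3] }),
});
```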
by Sirhexalot
This n8n workflow allows you to reset all user roles in Zammad to specified default roles. It ensures consistency in role management across your Zammad instance.

**Features**

- Retrieve all active users from Zammad.
- Update each user's roles to predefined default role IDs.
- Exclude specific users by their IDs from the update process.
- Simple configuration for default roles and excluded users.

**Usage**

1. Import the workflow: Upload the provided .json file into your n8n instance.
2. Configure variables:
   - zammad_base_url: Your Zammad instance URL.
   - zammad_api_key: Your Zammad API key.
   - default_roles: List of default role IDs to apply to all users.
   - exclude_zammad_users_by_id: List of user IDs to exclude from the update.
3. Run the workflow: Execute the workflow to update roles automatically.

**Issues and Suggestions**

For issues or suggestions, visit the GitHub repository.
by Fernanda Silva
**Workflow Description**

This workflow is an intelligent chatbot, using an OpenAI Assistant, integrated with a backend that supports WhatsApp Business, designed to handle various use cases such as sales and customer support. Below is a breakdown of its functionality and key components.

**Workflow Structure and Functionality**

- Chat Input (Chat Trigger): The flow starts by receiving messages from customers via WhatsApp Business. It collects basic information, such as session_id, to organize interactions.
- Condition Check (If Node): Checks if additional customer data (e.g., name, age, dependents) is sent along with the message. If additional data is present, a customized prompt is generated that includes this information. The prompt specifies that this data is for the assistant's awareness and doesn't require a response. (A sketch of this step appears at the end of this entry.)
- Data Preparation (Edit Fields Nodes): Formats the customer data and interaction details to be processed by the AI assistant, compiling the customer data and their query into a single text block.
- AI Responses (OpenAI Nodes): The assistant's prompt is carefully designed to guide the AI in providing accurate and relevant responses based on the customer's query and the data provided. Prompts describe the available functionalities, including which APIs to call and their specific purposes, helping to prevent "hallucinated" or irrelevant responses.
- Memory and Context (Postgres Chat Memory): Stores context and messages in continuous sessions using a database, ensuring the chatbot maintains conversation history.
- API Calls: The workflow allows the use of APIs with any endpoints you choose, depending on your specific use case. This flexibility enables integration with various services tailored to your needs. The OpenAI Assistant understands JSON structures, and you can define in the prompt how the responses should be formatted. This allows you to structure responses neatly for the client, ensuring clarity and professionalism. Make sure to describe the purpose of each endpoint in the assistant's prompt to help guide the AI and prevent misinterpretation.
- Customer Response Delivery: After processing and querying APIs, the generated response is sent to the backend and ultimately delivered to the customer through WhatsApp Business.

**Best Practices Implemented**

- **Preventing Hallucinations**: Every API has a clear description in its prompt, ensuring the AI understands its intended use case.
- **Versatile Functionality**: The chatbot is modular and flexible, capable of handling both sales and general customer inquiries.
- **Context Persistence**: By utilizing persistent memory, the flow maintains continuous interaction context, which is crucial for longer conversations or follow-up queries.

**Additional Recommendations**

- Include practical examples in the assistant's prompt, such as frequently asked questions or decision-making flows based on API calls.
- Ensure all responses align with the customer's objectives (e.g., making a purchase or resolving technical queries).
- Log interactions in detail for future analysis and workflow optimization.

This workflow provides a solid foundation for a robust and multifunctional virtual assistant 🚀
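A simplified sketch of the customized-prompt step, as it might look in a Code node (the field names here are assumptions, not the actual workflow's):

```javascript
// Fold optional customer data into one text block for the assistant,
// flagging it as context only, so the assistant does not reply to it.
const { name, age, dependents, message } = $input.first().json;
const prompt = [
  'Customer context (for your awareness only, no reply needed):',
  `Name: ${name}, Age: ${age}, Dependents: ${dependents}`,
  '',
  `Customer message: ${message}`,
].join('\n');
return [{ json: { prompt } }];
```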
by Yaron Been
Telegram AI Assistant: Summarize Links & Generate Images On Demand

This workflow turns any Telegram chat into a smart assistant. By typing simple commands like /summary or /img, users can trigger powerful AI actions directly from Telegram.

✨ What It Does

This automation listens for specific commands in Telegram messages (a sketch of the routing appears at the end of this entry):

- /help: Sends a help menu explaining the available commands.
- /summary <link>: Fetches a webpage, extracts its content, and summarizes it using OpenAI into 10–12 bullet points.
- /img <prompt>: Sends the image prompt to OpenAI and replies that the request has been received (designed for future integration with image APIs).

📦 Features

- ✅ Works instantly in Telegram
- 🧠 Uses OpenAI for text summarization and image prompt processing
- 🌐 Scrapes and cleans raw article text before summarizing
- 📤 Replies directly to the same Telegram thread
- 🔧 Easily expandable to support more commands

🔧 Use Cases

- **Research Summaries**: Quickly condense articles or reports shared in chat.
- **Content Review**: Get team-friendly TL;DRs of long blog posts or product pages.
- **Creative Brainstorming**: Share visual ideas via /img and get quick prompts logged.
- **Customer Support**: Offer instant answers in group chats (with further extension).
- **Daily Digest Bot**: Connect to news feeds and auto-summarize updates.

🚀 Getting Started

1. Clone this workflow and connect your Telegram bot.
2. Insert your OpenAI credentials.
3. Deploy and test by messaging /summary https://example.com in your Telegram group or DM.
4. Expand with new commands or connect Stability.ai or other services for real image generation.

🔗 Author & Resources

Built by Yaron Been. Follow more automations at nofluff.online
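For reference, the command routing could be expressed in a single Code node like this minimal sketch (the workflow itself may use Switch/If nodes instead; the message.text shape is the standard Telegram Trigger output):

```javascript
// Decide which branch handles the incoming Telegram message.
const text = $input.first().json.message?.text || '';
let route = 'unknown';
if (text.startsWith('/help')) route = 'help';
else if (text.startsWith('/summary ')) route = 'summary';
else if (text.startsWith('/img ')) route = 'img';
const argument = text.split(' ').slice(1).join(' '); // the link or image prompt
return [{ json: { route, argument } }];
```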