by Rajeet Nair
Overview

This workflow implements a privacy-preserving AI document processing pipeline that detects, masks, and securely manages Personally Identifiable Information (PII) before any AI processing occurs.

Organizations often need to analyze documents such as invoices, forms, contracts, or reports using AI. However, sending documents containing personal data directly to AI models can create serious privacy, compliance, and security risks.

This workflow solves that problem by automatically detecting sensitive information, replacing it with secure tokens, and storing the original values in a protected vault database. Only the masked version of the document is sent to the AI model for analysis. If required, a controlled PII re-injection mechanism can restore original values after processing.

The workflow also records all operations in an audit log, making it suitable for environments with strong compliance requirements, such as GDPR-regulated businesses, financial services, healthcare, or enterprise document processing systems.

How It Works

1. Document Upload
A webhook receives a document (typically a PDF) and triggers the workflow.

2. OCR Text Extraction
The OCR Extract node extracts the text content from the document so it can be analyzed for sensitive information.

3. PII Detection
Multiple detectors analyze the text to identify different types of sensitive data:
- Email addresses (regex detection)
- Phone numbers (multi-pattern detection)
- Identification numbers such as PAN, SSN, or bank accounts
- Physical addresses detected using an AI model

Each detection includes:
- the detected value
- its location in the text
- a confidence score

4. Detection Consolidation
All detected PII results are merged into a single dataset. The workflow resolves overlapping detections and removes duplicates to produce a clean list of sensitive values.

5. Tokenization and Secure Vault Storage
Each detected PII value is replaced with a secure token, for example:
- <<EMAIL_7F3A>>
- <<PHONE_A12B>>

The original values are securely stored in a Postgres vault table. This ensures sensitive data is never exposed to AI models. (A code sketch of this step appears after step 8 below.)

6. Masked AI Processing
The masked document is sent to an AI model for structured analysis. Possible AI tasks include:
- Document classification
- Data extraction
- Document summarization
- Entity extraction

Since all sensitive data has been tokenized, the AI processes the document without seeing any real personal data.

7. Controlled PII Re-Injection
After AI processing, the workflow can optionally restore original values from the vault. The Re-Injection Controller determines which fields are allowed to restore PII based on defined permissions.

8. Compliance Audit Logging
All events are recorded in an audit table, including:
- PII detection
- token generation
- AI processing
- PII restoration

This provides traceability and compliance reporting.
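As a minimal sketch of the detection-and-tokenization step (step 5), here is what an n8n Code node for it might look like. This is illustrative, not the template's actual code: the field names (text, documentId), the regexes, and the hashing scheme are assumptions, and require('crypto') only works if built-in modules are allowed in your instance (NODE_FUNCTION_ALLOW_BUILTIN).

```javascript
// Hypothetical Code node ("Run Once for Each Item" mode) that masks PII
// and collects vault rows for the Postgres insert that follows.
const crypto = require('crypto'); // needs NODE_FUNCTION_ALLOW_BUILTIN to include "crypto"

const detectors = [
  { type: 'EMAIL', regex: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g },
  { type: 'PHONE', regex: /\+?\d[\d\s().-]{7,}\d/g },
];

let masked = $json.text; // OCR-extracted text (assumed field name)
const vaultRows = [];

for (const { type, regex } of detectors) {
  masked = masked.replace(regex, (value) => {
    // A short hash suffix yields tokens like <<EMAIL_7F3A>>
    const suffix = crypto.createHash('sha256').update(value).digest('hex')
      .slice(0, 4).toUpperCase();
    const token = `<<${type}_${suffix}>>`;
    vaultRows.push({
      token,
      original_value: value,
      type,
      document_id: $json.documentId, // assumed field name
      created_at: new Date().toISOString(),
    });
    return token;
  });
}

return { json: { maskedText: masked, vaultRows } };
```

The masked text would then flow to the AI node, while the vault rows feed the Postgres insert.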
Setup Instructions

1. Configure Postgres Database
Create two tables in your database.

PII Vault Table (example structure):
- token
- original_value
- type
- document_id
- created_at

This table securely stores original PII values mapped to tokens.

Audit Log Table (example structure):
- document_id
- pii_types_detected
- token_count
- ai_access_confirmed
- re_injection_events
- timestamp
- actor

This table records workflow activity for compliance tracking.

2. Configure AI Model Credentials
This workflow supports multiple AI models:
- Anthropic Claude (used for AI document processing)
- Ollama local models (used for address detection)

Configure credentials in n8n before running the workflow.

3. Configure Webhook Trigger
The workflow starts when a document is sent to the webhook:

POST /webhook/gdpr-document-upload

Upload a PDF file to this endpoint to trigger processing.

4. Configure Alert Notifications (Optional)
Replace the placeholder alert webhook URL with your monitoring or alerting system. Example use cases:
- Slack alert
- monitoring system
- incident notification

Alerts are triggered if masking fails.

Use Cases

This workflow is useful for many privacy-sensitive automation scenarios.

- GDPR-Compliant Document Processing: safely process documents containing personal data without exposing PII to AI models.
- AI-Powered Document Analysis: use AI to summarize or extract data from documents while maintaining privacy.
- Enterprise Data Redaction Pipelines: automatically detect and tokenize sensitive data before sending documents to downstream systems.
- Financial Document Processing: process invoices, contracts, and financial reports securely.
- Healthcare Document Automation: analyze patient documents while ensuring sensitive data is protected.

Requirements

To run this workflow you need:
- n8n
- Postgres database
- Anthropic Claude API access
- Ollama (optional, for local AI address detection)
- Webhook endpoint for document uploads

Optional integrations:
- Monitoring or alert system
- Compliance audit database

Key Features

- Automated PII detection and tokenization
- AI-safe document processing
- Secure vault storage for sensitive data
- Controlled PII restoration
- Full audit logging
- Works with multiple AI models
- Designed for GDPR and enterprise compliance

Summary

This workflow creates a secure bridge between sensitive documents and AI systems. By automatically detecting, masking, and securely storing personal data, it enables organizations to safely apply AI to document processing tasks without exposing sensitive information. The combination of tokenization, secure vault storage, controlled re-injection, and audit logging makes this workflow suitable for privacy-sensitive industries and enterprise automation pipelines.
by Angel Menendez
This workflow will allow you, at the beginning of each day, to copy your Google Calendar events into Trello so you can take notes on, label, or automate your tasks. When deploying this, don't forget to change:
- The Label ID for the meeting type under "Create Trello Cards". You should be able to find instructions here on how to find the label ID.
- The description for the Trello cards under "Create Trello Cards". I currently pull in notes, but it should be simple to change this to pull the Google Calendar description instead.

You can also change the trigger time to fire at a different time.
by Lorena
This workflow is triggered when a new order is created in Shopify. Then:
- the order information is stored in Zoho CRM,
- an invoice is created in Harvest and stored in Trello,
- if the order value is above 50, an email with a discount coupon is sent to the customer and they are added to a MailChimp campaign for high-value customers; otherwise, only a "thank you" email is sent to the customer (illustrated in the sketch below).

Note that you need to replace the List ID in the Trello node with your own ID (see instructions in our docs). The same goes for the Account ID in the Harvest node (see instructions here).
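The branch is implemented with an IF node; purely as an illustration, the equivalent check in a Code node might look like the following. The total_price field follows Shopify's order payload, and the highValue flag name is made up here.

```javascript
// Illustrative only: the template itself uses an IF node for this branch.
// Shopify order totals arrive as strings, e.g. "59.90".
const order = $json;
const highValue = parseFloat(order.total_price) > 50;

return {
  json: {
    ...order,
    // Downstream nodes can route on this flag:
    // true  -> discount coupon email + MailChimp high-value campaign
    // false -> plain "thank you" email
    highValue,
  },
};
```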
by Lorena
This workflow is triggered when a Typeform is submitted, then it saves the sender's information into HubSpot as a new contact.
- Typeform Trigger: triggers the workflow when a Typeform is submitted.
- Set: sets the fields for the values from Typeform.
- HubSpot 1: creates a new contact with information from Typeform.
- IF: filters contacts who expressed their interest in business services.
- HubSpot 2: updates the contact's stage to opportunity.
- Gmail: sends an email to the opportunity contacts with informational material.
- NoOp: takes no action for contacts who are not interested.
by Eduard
Extract data from a webpage (the Ycombinator news page) and create a nice list using the itemList node. It seems that the current version of n8n (0.141.1) requires extracting each variable one by one. Hopefully in the future it will be possible to create the table using just one itemList node.

Another nice feature of the workflow is the automatically generated file name for the resulting table. Check out the "fileName" option of the Spreadsheet File node:

Ycombinator_news_{{new Date().toISOString().split('T', 1)[0]}}.{{$parameter["fileFormat"]}}

The resulting table is saved as an .xls file and delivered via email.
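For reference, the same file name logic in plain JavaScript (the 'xls' value stands in for the node's fileFormat parameter):

```javascript
// Date-stamped file name, e.g. "Ycombinator_news_2021-10-05.xls"
const fileFormat = 'xls'; // in n8n this comes from $parameter["fileFormat"]
const date = new Date().toISOString().split('T', 1)[0]; // "YYYY-MM-DD"
const fileName = `Ycombinator_news_${date}.${fileFormat}`;
console.log(fileName);
```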
by Daniel Ng
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Auto Backup n8n Credentials to Google Drive

This workflow automates the backup of all your n8n credentials. It can be triggered manually for on-demand backups or will run automatically on a schedule (defaulting to daily execution). It executes a command to export decrypted credentials, formats them into a JSON file, and then uploads this file to a specified Google Drive folder. This process is essential for creating secure backups of your sensitive credential data, facilitating instance recovery or migration.

We recommend you use this backup workflow in conjunction with a restore solution like our "Restore Credentials from Google Drive Backups" template.

For more powerful n8n templates, visit our website or contact us at AI Automation Pro. We help your business build custom AI workflow automation and apps.

Who is this for?

This workflow is designed for n8n administrators and users who require a reliable method to back up their n8n credentials. It is particularly beneficial for those managing self-hosted n8n instances, where direct server access allows for command-line operations.

What problem is this workflow solving? / use case

Managing and backing up n8n credentials manually can be a tedious task, susceptible to errors and often overlooked. This workflow solves the problem by providing an automated, secure, and consistent way to back up all credential data. The primary use case is to ensure that a recovery point for credentials exists, safeguarding against data loss, assisting in instance migrations, or for general disaster recovery preparedness, ideally on a regular, automated basis.

What this workflow does

The workflow proceeds through the following steps:

1. Triggers: The workflow includes two types of triggers:
   - Manual Trigger: An "On Click Trigger" allows for on-demand execution whenever needed.
   - Scheduled Trigger: A "Schedule Trigger" is included, designed for automated daily backups.
2. Export Credentials: An "Execute Command" node runs the shell command npx n8n export:credentials --all --decrypted. This command exports all credentials from the n8n instance in a decrypted JSON format.
3. Format JSON Data: The output from the command is processed by a "Code" node ("JSON Formatting Data"). This node extracts, parses, and formats the JSON to ensure it is well-structured (see the sketch after this list).
4. Aggregate Credentials: An "Aggregate" node ("Aggregate Cridentials") combines individual credential entries into a single JSON array.
5. Convert to File: The "Convert To File" node transforms the aggregated JSON array into a binary file, preparing it as n8n_backup_credentials.json.
6. Upload to Google Drive: The "Google Drive Upload File" node uploads the generated JSON file to a specified folder in Google Drive.
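A rough sketch of what the "JSON Formatting Data" Code node does, assuming the Execute Command node's stdout field holds the JSON array printed by the export command (the template's actual implementation may differ):

```javascript
// Parse the stdout of the Execute Command node into individual credential items.
const stdout = $json.stdout || '';

// `npx n8n export:credentials --all --decrypted` prints a JSON array of credential objects.
const credentials = JSON.parse(stdout);

// Emit one n8n item per credential so the Aggregate node can recombine them.
return credentials.map((cred) => ({ json: cred }));
```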
Step-by-step setup

To use this workflow, you'll need to configure a few things:

1. n8n Instance Environment: The n8n instance must have access to the npx command and the n8n-cli package. The "Execute Command" node must be able to run shell commands on the server where n8n is hosted.
2. Google Drive Credentials: In the "Google Drive Upload File" node, select or create your Google Drive OAuth2 API credentials.
3. Google Drive Folder ID: Update the folderId parameter in the "Google Drive Upload File" node with the ID of your desired Google Drive folder.
4. File Name (Optional): The backup file will be named n8n_backup_credentials.json. You can customize this in the "Google Drive Upload File" node.
5. Configure Schedule Trigger: The workflow includes a "Schedule Trigger". Review its configuration to ensure it runs daily at your preferred time.

How to customize this workflow to your needs

- **Adjust Schedule:** Fine-tune the "Schedule Trigger" for different intervals (e.g., weekly, hourly) or specific days/times as per your requirements.
- **Notifications:** Add notification nodes (e.g., Slack, Email, Discord) after the "Google Drive Upload File" node to receive alerts upon successful backup or in case of failures.
- **Enhanced Error Handling:** Incorporate error handling branches using "Error Trigger" nodes or conditional logic to manage potential failures.
- **Client-Side Encryption (Advanced):** If your security policy requires the backup file itself to be encrypted at rest in Google Drive, you can add a step *before* uploading. Insert a "Code" node or use an "Execute Command" node with an encryption utility (like GPG) to encrypt the n8n_backup_credentials.json file (a sketch appears after the security note below). Remember that you would then need a corresponding decryption process.
- **Dynamic File Naming:** Modify the "Google Drive Upload File" node to include a timestamp in the filename (e.g., n8n_backup_credentials_{{$now.toFormat('yyyyMMddHHmmss')}}.json) to keep multiple versions of backups.

Important Note on Credential Security

To simplify the setup and use of this backup workflow, the exported credentials are stored in the resulting JSON file in a decrypted state. This means the backup file itself is not further encrypted by this workflow. Consequently, it is critically important to:
- Ensure the Google Drive account used for backups is highly secure (e.g., strong password, two-factor authentication).
- Restrict access to the Google Drive folder where these backups are stored to only authorized personnel.
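Following up on the client-side encryption customization above, here is a minimal, hypothetical sketch using Node's built-in crypto in a Code node. The passphrase variable, input field, and output shape are all assumptions, and built-in modules must be allowed in your instance (NODE_FUNCTION_ALLOW_BUILTIN):

```javascript
// Encrypt the backup JSON before upload (AES-256-GCM via Node's built-in crypto).
const crypto = require('crypto');

// BACKUP_PASSPHRASE is a placeholder environment variable, not part of the template.
const key = crypto.scryptSync(process.env.BACKUP_PASSPHRASE, 'n8n-backup-salt', 32);
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);

const plaintext = JSON.stringify($json.credentials); // assumed input field
const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
const authTag = cipher.getAuthTag();

// Store iv + authTag + ciphertext together so the file can be decrypted later.
const payload = Buffer.concat([iv, authTag, encrypted]).toString('base64');

return { json: { fileName: 'n8n_backup_credentials.json.enc', payload } };
```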
by TheUnknownEntity
I'm currently trialing a 4-day work week for all staff at my company, and one of the major impacts on productivity is interruptions. As such, I opted to use n8n to create a workflow that monitors my Google Calendar and, when an event starts, updates my Slack status with an emote and the title of the calendar task. Additionally, I opted to change the colour of a Philips Hue lamp located in our living room, where my wife is currently working, so she knows whether she can interrupt me or not.

My calendar is built on the theory behind the Diary Detox system, and as such the Slack status reflects the colours involved. This was achieved using the emote aliases for the relevant colour circles. The Philips Hue lamp status is changed via the local API with Home Assistant. This is a very similar process to controlling it with something like the Stream Deck, but the workflow calls the webhook instead of the Stream Deck. This process can be found in lots of YouTube videos such as this. This gives my wife a very quick and easy way to know if she can interrupt me in my office (when the lights are green or blue) or when I'm busy (red).

Please note: the above images are not intended to be an incentive to create your own Squid Games.

Additionally, when integrating Slack with n8n, there are two APIs which can be used. Typically the Bot User OAuth Token is used; however, in order for your status to be updated, the User OAuth Token must be used, with the users.profile:read and users.profile:write permissions enabled.

For clarity, I have removed the webhooks from the workflow, as these would allow any person to control my lights. They can be inserted in the HTTP Request nodes. Each node responds to a different automation within the Home Assistant infrastructure.

Acknowledgement: I would also like to credit Jon (Discord) aka 8668 (Workflows) for writing the Function node which turns the colorId into a named variable.
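A rough sketch of what that Function node might look like. The colorId-to-colour assignments here are assumptions based on the colours mentioned above, not Jon's actual code:

```javascript
// Map the Google Calendar colorId to a named colour for the Slack status
// and the Hue webhook (hypothetical mapping).
const colorMap = {
  '2': 'green',  // available: interruptions welcome
  '7': 'blue',   // flexible: check first
  '11': 'red',   // busy: do not disturb
};

for (const item of items) {
  item.json.colorName = colorMap[item.json.colorId] || 'unknown';
}

return items;
```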
by Ron
This sample workflow allows you to forward alerts from TheHive 5 to SIGNL4 in order to send reliable alerts to your team. There are two nodes for testing the TheHive connection ("TheHive Read Alerts" and "TheHive Create Alert"). The node "TheHive Webhook Request" will receive requests for new alerts from TheHive. You need to configure the webhook and the notifications in TheHive accordingly. The node "SIGNL4 Send Alert" sends the alert to SIGNL4 and the node "SIGNL4 Resolve Alert" will close the alert in SIGNL4 in case it has been closed in TheHive.
by n8n Team
This workflow generates CSV files containing a list of 10 random users with specific characteristics using OpenAI's GPT-4 model. It then splits this data into batches, converts it to CSV format, and saves it to disk for further use. The workflow execution begins when it is triggered manually.

- "OpenAI" node: uses the OpenAI API to generate random user data. The input to the OpenAI API is a fixed string, which asks for a list of 10 random users with some specific attributes. The attributes include a name and surname starting with the same letter, a subscription status, and a subscription date (if they are subscribed). There is also a short example of the JSON object structure. This technique is called one-shot prompting (a hypothetical example prompt is shown below).
- "Split In Batches" node: handles the OpenAI responses one by one.
- "Parse JSON" node: converts the content of the message received from the OpenAI node (which is in string format) into a JSON object.
- "Make JSON Table" node: converts the JSON data into a tabular format, which is easier to handle for further data processing.
- "Convert to CSV" node: converts the table format data received from the "Make JSON Table" node into CSV format and assigns a file name.
- "Save to Disk" node: saves the CSV generated in the previous node to disk in the ".n8n" directory.

The workflow is designed in a circular manner: after saving the file to disk, it goes back to the "Split In Batches" node to process the OpenAI output until all batches are processed.
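The template's actual prompt is not reproduced in this description; purely as an illustration of the one-shot prompting technique mentioned above, such a prompt might look like this:

```javascript
// Hypothetical one-shot prompt for the "OpenAI" node (the template's real prompt may differ).
const prompt = `Generate a list of 10 random users as a JSON array.
Each user must have a name and surname starting with the same letter,
a boolean "subscribed" field, and a "subscriptionDate" (ISO date) if subscribed.
Example of one object:
{"name": "Anna", "surname": "Andersson", "subscribed": true, "subscriptionDate": "2023-01-15"}`;
```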
by n8n Team
This is an example workflow that imports an XML file into an SQL database. The ReadBinaryFiles node loads the XML file from the server. Then a Code node extracts the file content from the binary buffer. Afterwards, an XML node converts the XML string into a JSON structure. Finally, the MySQL node inserts the data records into the SQL table.

In the upper part of the workflow there is another MySQL node, which is disabled. This node creates a new table with all the required variables, based on the sample SQL database: https://www.mysqltutorial.org/mysql-sample-database.aspx
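A minimal sketch of the buffer-extraction step in a Code node, assuming the default binary property name data and in-memory binary mode (the template's actual code may differ):

```javascript
// Turn the binary file buffer into a plain XML string for the XML node.
const item = items[0];

// ReadBinaryFiles stores the file content base64-encoded under item.binary.data.data.
const xmlString = Buffer.from(item.binary.data.data, 'base64').toString('utf8');

return [{ json: { xml: xmlString } }];
```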
by Nskha
Overview

This workflow automates the process of notifying users about new emails via Telegram and temporarily hosting the email content on a secret HTML page. It is ideal for users who need immediate notifications and a secure, temporary web view of their email content.

Use Cases

- Immediate notification of new emails in Telegram, with the ability to preview the email content in a secure, temporary HTML format.
- Automation for users who need to keep track of their emails without constantly checking their email client.
- Useful for teams or individuals who require instant updates on critical emails and prefer a quick preview through a web interface.

My Personal Use Case: Secure Subscription Sharing

From time to time, I find myself wanting to share my paid subscriptions with friends, but giving out OTP codes manually or sharing my email isn't a good idea due to security concerns. I attempted to use the IMAP node to integrate with a secret Telegram channel for this purpose but encountered numerous problems, such as difficulties in scraping content from emails. Additionally, the Telegram API sometimes rejects certain special characters found within email contents. After facing these challenges, I discovered that rendering emails as HTML pages and sharing them directly is the most effective solution. This approach bypasses the issues with character limitations and content scraping, providing a seamless way to share subscription benefits securely.

Services/APIs Used

| Service/API | Node Type | Description |
|---|---|---|
| IMAP Email Server | Email Trigger (IMAP) | Triggers the workflow on receiving a new email. |
| Telegram API | Telegram | Sends notifications and manages messages in Telegram. |
| GitHub Gist API | HTTP Request (Github Gist) | Temporarily hosts email content on GitHub Gist. |
| GitHub Gist API (Deletion) | HTTP Request (Github Gist β) | Deletes the temporary GitHub Gist after a specified time. |
| Wait | Wait | Delays the workflow for a specified period. |

Configuration Steps

1. Email Trigger (IMAP): Configure your IMAP email credentials to enable the workflow to check for new emails.
2. Telegram Nodes: Insert your Telegram bot's API credentials and your chat ID in both Telegram nodes to send and delete messages.
3. Github Gist Nodes: Provide your GitHub API credentials to create and delete Gists. Ensure the GitHub token has the necessary permissions to create and delete Gists (a request sketch appears at the end of this section).
4. Wait Node: Adjust the wait time according to your preference for how long the email content should be accessible via the HTML page.

Additional Notes

- Ensure all credentials are securely stored and have the minimum necessary permissions for the workflow to function.
- Test the workflow with non-sensitive emails to ensure it operates as expected before using it with critical email accounts.
- Consider the privacy and security implications of temporarily hosting email content on GitHub Gist.
- For any questions or issues, refer to the respective API documentation for each service used or consult the n8n community for support.
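As a rough illustration of what the Gist-hosting HTTP Request node does, here is a Code-node style sketch against the real GitHub Gist endpoint. The token handling, the emailHtml field, and the file name are assumptions; in the workflow itself, authentication lives in the node's credentials:

```javascript
// Create a secret Gist holding the email body as an HTML page.
// GITHUB_TOKEN is a placeholder; n8n normally supplies this via credentials.
const response = await fetch('https://api.github.com/gists', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    Accept: 'application/vnd.github+json',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    description: 'Temporary email preview',
    public: false, // secret gist: reachable only via its unguessable URL
    files: { 'email.html': { content: $json.emailHtml } }, // assumed field name
  }),
});

const gist = await response.json();
// gist.html_url is the shareable page; gist.id is needed later for deletion.
return [{ json: { gistId: gist.id, url: gist.html_url } }];
```

The deletion node then calls DELETE on the stored gist ID after the Wait node's delay expires.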
by Yaron Been
Google Veo 3 Video Generator

Description
Sound on: Google's flagship Veo 3 text-to-video model, with audio.

Overview
This n8n workflow integrates with the Replicate API to use the google/veo-3 model. This powerful AI model can generate high-quality video content based on your inputs.

Features
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters

Required parameters:
- **prompt** (string): Text prompt for video generation

Optional parameters:
- **seed** (integer, default: none): Random seed. Omit for random generations.
- **resolution** (string, default: 720p): Resolution of the generated video.
- **negative_prompt** (string, default: none): Description of what to discourage in the generated video.

How to Use
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate video content
4. Access the generated output from the final node

API Reference
- Model: google/veo-3
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements
- Replicate API key
- n8n instance
- Basic understanding of video generation parameters
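For readers who want to see the underlying pattern, here is a hedged, Code-node style sketch of the create-then-poll cycle the workflow automates. The model-scoped endpoint shown is an assumption used to avoid needing a version hash; the token and prompt are placeholders:

```javascript
// Rough sketch of the create-then-poll pattern (not the workflow's exact nodes).
const headers = {
  Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`, // placeholder
  'Content-Type': 'application/json',
};

// The model-scoped endpoint accepts official models without a version hash.
const create = await fetch('https://api.replicate.com/v1/models/google/veo-3/predictions', {
  method: 'POST',
  headers,
  body: JSON.stringify({
    input: {
      prompt: 'A drone shot over a foggy coastline at sunrise, waves audible',
      resolution: '720p',
    },
  }),
});
let prediction = await create.json();

// Poll the prediction until it settles (succeeded / failed / canceled).
while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
  await new Promise((r) => setTimeout(r, 5000));
  prediction = await (await fetch(prediction.urls.get, { headers })).json();
}

return [{ json: { status: prediction.status, output: prediction.output } }];
```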