by Anurag
## Description
This workflow automates the extraction of structured data from invoices or similar documents using Docsumo's API. Users can upload a PDF via an n8n form trigger, which is then sent to Docsumo for processing and structured parsing. The workflow fetches key document metadata and all line items, reconstructs each invoice row with combined header and item details, and finally exports all results as an Excel file. Ideal for automating invoice data entry, reporting, or integrating with accounting systems.

## How It Works
1. A user uploads a PDF document using the integrated n8n form trigger.
2. The workflow securely sends the document to Docsumo via REST API.
3. After uploading, it checks and retrieves the parsed document results.
4. Header information and table line items are extracted and mapped into structured records.
5. The complete result is exported as an Excel (.xls) file.

## Setup Steps
1. **Docsumo Account:** Register and obtain your API key from Docsumo.
2. **n8n Credentials Manager:** Add your Docsumo API key as an HTTP header credential (never hardcode the key in the workflow).
3. **Workflow Configuration:**
   - In the HTTP Request nodes, set the authentication to your saved Docsumo credentials.
   - Update the file type or document type in the request (e.g., `"type": "invoice"`) as needed for your use case.
4. **Testing:** Enable the workflow and use the built-in form to upload a sample invoice for extraction.

## Features
- Supports PDF uploads via n8n's built-in form or via API/webhook extension.
- Sends files directly to Docsumo for document data extraction using secure credentials.
- Extracts invoice-level metadata (number, date, vendor, totals) and full line item tables.
- Consolidates all data in easy-to-use Excel format for download or integration.
- Modular node structure, easily extensible for further automation.

## Prerequisites
- Docsumo account with API access enabled.
- n8n instance with Form, HTTP Request, Code, and Excel/Convert to File nodes.
- Working Docsumo API key stored securely in n8n's credential manager.

## Example Use Cases
| Scenario            | Benefit                                  |
|---------------------|------------------------------------------|
| Invoice Automation  | Extract line items and metadata rapidly  |
| Receipts Processing | Parse and digitize business receipts     |
| Bulk Bill Imports   | Batch process bills for analytics        |

## Notes
- **Credentials Security:** Do not store your API key directly in HTTP Request nodes; always use n8n's credentials manager.
- **Sticky Notes:** The workflow includes sticky notes for setup, input, API call, extraction, and output steps to assist template users.
- **Custom Columns:** You can customize header or line item extraction by editing the Code node as needed.
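For orientation, here is a minimal standalone sketch of the upload-then-fetch pattern the HTTP Request nodes perform. The base URL, endpoint paths, and `apikey` header name are assumptions rather than confirmed values, so check Docsumo's API documentation; inside the workflow the key itself should stay in n8n's credential manager.

```typescript
// Minimal sketch of the upload-then-fetch pattern used by the HTTP Request nodes.
// Base URL, endpoint paths, and the auth header are placeholders -- consult Docsumo's API docs.
import { readFile } from "node:fs/promises";

const DOCSUMO_BASE = "https://app.docsumo.com/api/v1"; // assumed base URL
const API_KEY = process.env.DOCSUMO_API_KEY!;          // keep this in a credential store

async function uploadInvoice(path: string) {
  const form = new FormData();
  form.append("files", new Blob([await readFile(path)]), "invoice.pdf");
  form.append("type", "invoice"); // document type, as configured in the workflow

  const res = await fetch(`${DOCSUMO_BASE}/upload/`, {  // placeholder path
    method: "POST",
    headers: { apikey: API_KEY },                        // assumed auth header
    body: form,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json(); // contains the document id used to fetch parsed results
}

async function fetchParsedResult(docId: string) {
  const res = await fetch(`${DOCSUMO_BASE}/documents/${docId}/extracted/`, { // placeholder path
    headers: { apikey: API_KEY },
  });
  if (!res.ok) throw new Error(`Fetch failed: ${res.status}`);
  return res.json(); // header fields + line item table, later flattened into Excel rows
}
```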
by Lucas Peyrin
## How it works
This template is a complete, hands-on tutorial for building a RAG (Retrieval-Augmented Generation) pipeline. In simple terms, you'll teach an AI to become an expert on a specific topic (in this case, the official n8n documentation) and then build a chatbot to ask it questions.

Think of it like this: instead of a general-knowledge AI, you're building an expert librarian. The workflow is split into two main parts:

### Part 1: Indexing the Knowledge (Building the Library)
This is a one-time process you run manually. The workflow automatically scrapes all pages of the n8n documentation, breaks them down into small, digestible chunks, and uses an AI model to create a special numerical representation (an "embedding") for each chunk. These embeddings are then stored in n8n's built-in Simple Vector Store. This is like a librarian reading every book and creating a hyper-detailed index card for every paragraph.

**Important:** This in-memory knowledge base is temporary. It will be erased if you restart your n8n instance, and you will need to run the indexing process again.

### Part 2: The AI Agent (The Expert Librarian)
This is the chat interface. When you ask a question, the AI agent doesn't guess the answer. Instead, it uses your question to find the most relevant "index cards" (chunks) from the knowledge base it just built. It then feeds these specific, relevant chunks to a powerful language model (Gemini) with a strict instruction: "Answer the user's question using ONLY this information." This ensures the answers are accurate, factual, and grounded in your provided documents.

## Set up steps
**Setup time:** 2 minutes (plus 15-20 minutes for indexing)

This template uses n8n's built-in tools, removing the need for an external database. Follow these simple steps to get started.

1. **Configure Google AI Credentials:** You will need a Google AI API key for the Gemini models.
   - In your n8n workflow, go to any of the three Gemini nodes (e.g., Gemini 2.5 Flash).
   - Click the Credential dropdown and select **+ Create New Credential**.
   - Enter your Gemini API key and save.
2. **Apply Credentials to All Nodes:** Your new Google AI credential is now saved. Go to the other two Gemini nodes (Gemini Chunk Embedding and Gemini Query Embedding) and select your newly created credential from the dropdown list.
3. **Build the Knowledge Base:**
   - Find the **Start Indexing** manual trigger node at the top-left of the workflow.
   - Click its **Execute workflow** button to start the indexing process.
   - ⚠️ **Be Patient:** This will take 15-20 minutes as it scrapes and processes the entire n8n documentation. You only need to do this once per n8n session. If you restart n8n, you must run this step again.
4. **Chat with Your Expert Agent:**
   - Once the indexing is complete, **Activate** the entire workflow using the toggle at the top of the screen.
   - Open the **RAG Chatbot** chat trigger node (bottom-left) and copy its Public URL.
   - Open the URL in a new tab and start asking questions about n8n! For example: "How does the IF node work?" or "What is a sub-workflow?"
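To make the "index card" analogy concrete, here is a small conceptual sketch of what retrieval looks like at query time: every stored chunk embedding is scored against the question embedding and the closest chunks are handed to the model. This illustrates the idea only; it is not the code the Simple Vector Store runs internally.

```typescript
// Conceptual sketch of what the vector store does at query time: score every stored
// chunk embedding against the question embedding and return the closest matches.
type IndexedChunk = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function retrieveTopChunks(
  queryEmbedding: number[],
  index: IndexedChunk[],
  k = 4,
): IndexedChunk[] {
  return [...index]
    .sort((x, y) =>
      cosineSimilarity(queryEmbedding, y.embedding) -
      cosineSimilarity(queryEmbedding, x.embedding))
    .slice(0, k); // these chunks are what gets passed to Gemini as context
}
```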
by EoCi - Mr.Eo
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Introduction
Tired of spending time crafting the perfect AI prompt? This workflow takes your simple ideas like "write a blog post" and automatically transforms them into detailed, structured prompts that actually work.

## 🎯 What This Does
Automatically converts simple user prompts like "write a blog post" into structured, professional AI prompts with metadata, variables, and clear instructions. Ideal for anyone, in any industry or organization, who wants to eliminate manual prompt engineering work.

## 🔄 How It Works
1. **Google Sheets Trigger** monitors for new prompts.
2. **AI Enhancement Pipeline** uses Gemini + Groq to add structure and context.
3. **Field Completion** auto-generates missing metadata (topic, categories).
4. **Quality Assurance** validates and stores the complete results.

## 🚀 Setup Requirements
- **AI APIs:** Gemini, Telegram, and Groq API keys
- **Google Sheets:** 2 sheets (OriginalPrompts, ModifiedPrompts)
- **5 minutes setup time** - detailed instructions in the blue sticky notes

## Set up steps
**Setup time:** < 5 minutes

1. Create a Google Spreadsheet with two tabs (sheets): `OriginalPrompts` and `ModifiedPrompts`.
   - `OriginalPrompts` columns: Original Prompt ID | Model | Original Prompt | Created Time
   - `ModifiedPrompts` columns (example): Modified Prompt ID | Original Prompt ID | Topic | Topic Categories | Modified Prompt | Prompt Title | Prompt Type | Model Used | Improvement Notes | Updated Time | Created Time | isProcessed
2. Add and attach credentials in n8n:
   - Google Sheets OAuth2 (required for reading new prompts)
   - Gemini and Groq API credentials (required for the AI Agent)
   - Telegram credential (required for notifications)
3. Save and activate the workflow.
4. Add a test row to `OriginalPrompts`, for example: Original Prompt ID: 1, Original Prompt: "Write a short blog post about AI ethics".
5. Wait ~30-60 s and check `ModifiedPrompts` for the enhanced output.

That's it! Once it's configured, drop short ideas into your sheet and get professional prompts back automatically. Your prompts get better, your AI outputs improve, and you save hours of manual prompt crafting.
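As a purely illustrative example of what ends up in the ModifiedPrompts tab, the sketch below shows one possible row shape for the test prompt above. The field names mirror the columns listed in the setup steps, while the example values are hypothetical; the real content is produced by the Gemini/Groq agents.

```typescript
// Illustrative shape of one ModifiedPrompts row, mirroring the columns listed above.
// All values here are examples; the real values come from the AI enhancement pipeline.
interface ModifiedPromptRow {
  modifiedPromptId: string;
  originalPromptId: string;
  topic: string;
  topicCategories: string;
  modifiedPrompt: string;
  promptTitle: string;
  promptType: string;
  modelUsed: string;
  improvementNotes: string;
  updatedTime: string;
  createdTime: string;
  isProcessed: boolean;
}

const exampleRow: ModifiedPromptRow = {
  modifiedPromptId: "mp-001",
  originalPromptId: "1",
  topic: "AI ethics",
  topicCategories: "Technology, Ethics",
  modifiedPrompt:
    "You are an experienced technology writer. Write a 500-word blog post on AI ethics " +
    "covering bias, transparency, and accountability, with a clear heading structure.",
  promptTitle: "AI Ethics Blog Post",
  promptType: "Content Generation",
  modelUsed: "Gemini + Groq",
  improvementNotes: "Added role, length, structure, and topical scope.",
  updatedTime: new Date().toISOString(),
  createdTime: new Date().toISOString(),
  isProcessed: true,
};

console.log(exampleRow.modifiedPrompt);
```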
by Yaron Been
# Creativeathive Lemaar Door Mockedup AI Generator

## Description
None

## Overview
This n8n workflow integrates with the Replicate API to use the creativeathive/lemaar-door-mockedup model. This AI model can generate high-quality content based on your inputs.

## Features
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters
- **prompt** (string): Prompt for generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

### Optional Parameters
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast a 1.5x multiplier is applied to this value; good performance has generally been seen when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for generated image.

## How to Use
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate content.
4. Access the generated output from the final node.

## API Reference
- Model: creativeathive/lemaar-door-mockedup
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of the model's generation parameters
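For reference, the sketch below shows roughly what the workflow's HTTP Request node sends to the endpoint listed under API Reference. It assumes Node 18+; the model version id is a placeholder you would copy from the model's page on Replicate, and in the workflow itself the API key should be stored as an n8n credential.

```typescript
// Minimal sketch of the prediction request the workflow's HTTP Request node sends.
// The model version id below is a placeholder; look it up on the model's Replicate page.
const REPLICATE_TOKEN = process.env.REPLICATE_API_TOKEN!;

async function createPrediction(prompt: string) {
  const res = await fetch("https://api.replicate.com/v1/predictions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${REPLICATE_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      version: "MODEL_VERSION_ID", // placeholder for creativeathive/lemaar-door-mockedup
      input: { prompt, model: "dev", go_fast: false, megapixels: "1" },
    }),
  });
  if (!res.ok) throw new Error(`Replicate request failed: ${res.status}`);
  return res.json(); // contains a prediction id and status ("starting", "processing", ...)
}
```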
by Yaron Been
# Spuuntries Ilearnmate Icts AI Generator

## Description
None

## Overview
This n8n workflow integrates with the Replicate API to use the spuuntries/ilearnmate-icts model. This AI model can generate high-quality content based on your inputs.

## Features
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Optional Parameters
- **seed** (integer, default: None): Seed for reproducibility of example generation and vector training. Set to 0 for random behavior.
- **num_examples_per_side** (integer, default: 3): Number of descriptive examples to generate for each side of the contrast. More examples might lead to better vectors but will increase generation time.
- **attributes_to_generate** (string, default: girly,modestly,verbose,happy): Comma-separated list of attributes for which to generate control vectors (e.g., 'girly,modestly,verbose,happy').

## How to Use
1. Set up your Replicate API key in the workflow.
2. Configure the parameters for your use case.
3. Run the workflow to generate content.
4. Access the generated output from the final node.

## API Reference
- Model: spuuntries/ilearnmate-icts
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of the model's generation parameters
by Yaron Been
# Justingirard Draft Ui Designer Image Generator

## Description
An experiment: a fine-tuned FLUX model for UI design generation

## Overview
This n8n workflow integrates with the Replicate API to use the justingirard/draft-ui-designer model. This powerful AI model can generate high-quality image content based on your inputs.

## Features
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters
- **prompt** (string): Prompt for generated image. If you include the trigger_word used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.

### Optional Parameters
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set for reproducible generation.
- **image** (string, default: None): Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps but the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of generated image. Only works if aspect_ratio is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of generated image. Only works if aspect_ratio is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
- **megapixels** (string, default: 1): Approximate number of megapixels for generated image.

## How to Use
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

## API Reference
- Model: justingirard/draft-ui-designer
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
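The "automated status checking and result retrieval" feature amounts to polling the prediction until it finishes. A rough standalone sketch of that loop, using the prediction id returned when the job is created:

```typescript
// Sketch of the status-checking step: poll the prediction until it succeeds or fails.
async function waitForPrediction(id: string, token: string) {
  while (true) {
    const res = await fetch(`https://api.replicate.com/v1/predictions/${id}`, {
      headers: { Authorization: `Bearer ${token}` },
    });
    const prediction = await res.json();
    if (prediction.status === "succeeded") return prediction.output; // generated image URL(s)
    if (prediction.status === "failed" || prediction.status === "canceled") {
      throw new Error(`Prediction ${prediction.status}: ${prediction.error ?? "unknown error"}`);
    }
    await new Promise((resolve) => setTimeout(resolve, 2000)); // check again in 2 seconds
  }
}
```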
by Harshil Agrawal
This workflow demonstrates the use of the Split In Batches node and the Wait node to avoid API rate limits.

- **Customer Datastore node:** The workflow fetches data from the Customer Datastore node. Based on your use case, replace it with a relevant node.
- **Split In Batches node:** This node splits the incoming items into batches. Based on the API rate limit, you can configure the Batch Size.
- **HTTP Request node:** This node makes API calls to a placeholder URL. If the Split In Batches node returns 5 items, the HTTP Request node will make 5 different API calls.
- **Wait node:** This node pauses the workflow for the time you specify. On resume, the Split In Batches node is executed again, and the next batch is processed.
- **Replace Me (NoOp node):** This node is optional. If you want to continue your workflow and process the items, replace this node with the corresponding node(s).
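Outside n8n, the same batch-and-wait pattern looks roughly like the sketch below; the placeholder URL, batch size, and delay stand in for whatever the target API's rate limit requires.

```typescript
// Sketch of the batch-and-wait pattern the workflow implements with n8n nodes.
// URL, batch size, and delay are placeholders -- tune them to the API's rate limit.
const BATCH_SIZE = 5;
const DELAY_MS = 1000;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function processInBatches<T>(items: T[]) {
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    const batch = items.slice(i, i + BATCH_SIZE);            // Split In Batches
    await Promise.all(
      batch.map((item) =>
        fetch("https://example.com/api", {                    // HTTP Request (placeholder URL)
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(item),
        }),
      ),
    );
    if (i + BATCH_SIZE < items.length) await sleep(DELAY_MS); // Wait node
  }
}
```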
by Michael Gullo
# Automated Binary Data Extraction from Gmail to Google Drive Folder

This workflow is designed to automate the process of handling emails with binary attachments. It triggers when a new email arrives in a specified Gmail account (or can be configured with a similar email trigger) and is set to download any binary attachments. The workflow then filters the email to confirm it contains binary data (attachments). If attachments are present, it proceeds to retrieve the full email details, including all binary data.

A crucial step is the creation of a new Google Drive folder. This folder is dynamically named using the email's subject and the current timestamp, for example, "[Email Subject] - [Current Timestamp]". Following this, the workflow separates each individual attachment from the email. Finally, these attachments are uploaded into the newly created Google Drive folder, with their original filenames preserved.

The overall purpose of this workflow is to automatically organize and store email attachments into a structured Google Drive folder system. This workflow is compatible with any type of binary data found in an email, as the filter is designed to detect any binary data, not just PDFs.

## How It Works
1. **Trigger:** The workflow initiates when a new email arrives in a specified Gmail account. Alternatively, it can be configured with a similar email trigger.
2. **Download Attachments:** The workflow is set to automatically download any binary attachments from the incoming email.
3. **Filter Attachments:** The workflow then filters the email to confirm it contains binary data (attachments).
4. **Retrieve Full Email Details:** If attachments are present, the workflow proceeds to retrieve the complete details of the email, including all binary data.
5. **Create Google Drive Folder:** A new folder is created in Google Drive. This folder is dynamically named using the email's subject and the current timestamp (e.g., "[Email Subject] - [Current Timestamp]").
6. **Split Out Attachments:** Each individual binary attachment from the email is separated into its own item within the workflow.
7. **Upload to Google Drive:** Finally, these separated attachments are uploaded into the newly created Google Drive folder, retaining their original filenames.

## Need Help? Have Questions?
For consulting and support, or if you have questions, please feel free to connect with me on LinkedIn or email michael.gullo@outlook.com.
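The sketch below illustrates two of the data-shaping steps in plain code: the binary-data check and the "[Email Subject] - [Current Timestamp]" folder name. It is an illustration of the logic, with a hypothetical item shape, not the exact expressions used inside the workflow's nodes.

```typescript
// Illustrative sketch of two steps the workflow performs with n8n expressions:
// checking for binary attachments and building the Drive folder name.
interface EmailItem {
  subject: string;
  binary: Record<string, { fileName: string; mimeType: string; data: string }>;
}

function hasAttachments(email: EmailItem): boolean {
  // The filter passes any binary data, not just PDFs.
  return Object.keys(email.binary ?? {}).length > 0;
}

function driveFolderName(email: EmailItem, now = new Date()): string {
  // "[Email Subject] - [Current Timestamp]"
  return `${email.subject} - ${now.toISOString()}`;
}

// Example: an email with two attachments produces one folder and two uploads.
const email: EmailItem = {
  subject: "March Invoices",
  binary: {
    attachment_0: { fileName: "invoice-1.pdf", mimeType: "application/pdf", data: "..." },
    attachment_1: { fileName: "photo.png", mimeType: "image/png", data: "..." },
  },
};
if (hasAttachments(email)) {
  console.log(driveFolderName(email)); // e.g. "March Invoices - 2024-03-01T13:00:00.000Z"
}
```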
by Yaron Been
# Vcollos Trefilio AI Generator

## Description
None

## Overview
This n8n workflow integrates with the Replicate API to use the vcollos/trefilio model. This AI model can generate high-quality content based on your inputs.

## Features
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters
- **prompt** (string): Prompt for generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

### Optional Parameters
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast a 1.5x multiplier is applied to this value; good performance has generally been seen when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for generated image.

## How to Use
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate content.
4. Access the generated output from the final node.

## API Reference
- Model: vcollos/trefilio
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of the model's generation parameters
by Yang
## Who is this for?
This workflow is perfect for lead generation experts, digital marketers, SEO professionals, and virtual assistants who need to quickly collect local business information based on specific search terms without manually navigating Google Places.

## What problem is this workflow solving?
Manually searching Google Places for business leads is time-consuming and inconsistent. This workflow automates the entire process using Dumpling AI's Google Places search endpoint, helping users collect accurate and structured business data and log it into a Google Sheet automatically.

## What this workflow does
This workflow runs daily at 1 PM. It starts by reading a list of business-related search terms from a Google Sheet (for example, "dentists in Dallas"). Each term is sent to Dumpling AI's search-places endpoint, which returns local business listings from Google Places. The data is split, structured, and logged row by row in a connected Google Sheet.

## Nodes Overview
1. **Run Every Day at 1 PM** – A scheduled trigger that executes the workflow daily.
2. **Google Sheets (Input) – Fetch Search Terms from Sheet** – Pulls a list of search terms from a Google Sheet. Each term should describe a business category and location (e.g., "coffee shops in Atlanta").
3. **HTTP Request – Scrape Google Places via Dumpling AI** – Sends each search term to Dumpling AI's /search-places endpoint, returning data like business names, phone numbers, websites, ratings, and categories.
4. **Split In Batches – Split Places Result** – Breaks the list of businesses returned for each search term into individual items for processing.
5. **Google Sheets (Output) – Save Each Business to Sheet** – Saves the scraped data into a second Google Sheet. Each row contains: title, address, rating, category, phoneNumber, website.

## 📝 Notes
- You must set up Dumpling AI and generate your API key from: Dumpling AI
- You can change the run schedule in the schedule node to fit your needs (e.g., weekly or hourly).
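A rough sketch of the call the HTTP Request node makes for each search term is shown below. The base URL, auth header format, request body, and response field names are assumptions (only the /search-places endpoint name comes from the workflow description), so verify them against Dumpling AI's documentation.

```typescript
// Sketch of the call the HTTP Request node makes per search term.
// The base URL, auth header, body, and response fields are assumptions -- check Dumpling AI's docs.
const DUMPLING_API_KEY = process.env.DUMPLING_API_KEY!;

interface PlaceResult {
  title: string;
  address: string;
  rating: number;
  category: string;
  phoneNumber: string;
  website: string;
}

async function searchPlaces(query: string): Promise<PlaceResult[]> {
  const res = await fetch("https://app.dumplingai.com/api/v1/search-places", { // assumed base URL
    method: "POST",
    headers: {
      Authorization: `Bearer ${DUMPLING_API_KEY}`, // assumed header format
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }), // e.g. "dentists in Dallas"
  });
  if (!res.ok) throw new Error(`Dumpling AI request failed: ${res.status}`);
  const data = await res.json();
  return data.places ?? []; // assumed response field; each place maps to one sheet row
}
```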
by Bela
Sync your Google Sheets data with your Postgres database table, requiring minimal adjustments. Follow these steps:

1. **Retrieve Data:** Pull data from Google Sheets and PostgreSQL.
2. **Compare Datasets:** Identify differences, focusing on new or updated entries.
3. **Update PostgreSQL:** Apply changes to ensure both platforms mirror each other.

Automate this process to regularly synchronize data. Before starting, grant necessary access to both Google Sheets and PostgreSQL, and specify the data details for synchronization. This streamlined workflow enhances data consistency across platforms.

This example is a one-way synchronization from Google Sheets into your Postgres. With small adjustments, you can make it the other way around, or 2-way.
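In the template the comparison and update happen inside n8n nodes, but the core idea is an upsert: insert rows that are new, update rows that changed. A standalone sketch of that step, with hypothetical table and column names:

```typescript
// Sketch of the compare-and-update step: upsert each Google Sheets row into Postgres,
// so new rows are inserted and changed rows are updated. Table/column names are examples.
import { Client } from "pg"; // npm install pg

interface SheetRow { id: number; name: string; email: string }

async function syncRows(rows: SheetRow[]) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    for (const row of rows) {
      await client.query(
        `INSERT INTO contacts (id, name, email)
         VALUES ($1, $2, $3)
         ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, email = EXCLUDED.email`,
        [row.id, row.name, row.email],
      );
    }
  } finally {
    await client.end();
  }
}
```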
by Zacharia Kimotho
# Create new ClickUp Tasks from Slack commands

This workflow makes it easy to create new tasks on ClickUp from ordinary Slack messages using a simple slash command. For example, the command
`/newTask Set task to update new contacts on CRM and assign them to the sales team`
will create a new task on ClickUp with that title and description.

For most teams, getting tasks from Slack to ClickUp involves manually entering the new tasks into ClickUp. What if we could do this with a simple slash command?

## Step 1
Create an endpoint URL for your Slack command by creating an events API app at https://api.slack.com/apps/.

## Step 2
Define the endpoint for your URL. Create a new webhook endpoint in your n8n instance using the POST method and paste that endpoint URL into your events API configuration. This will send all slash commands associated with the app to the desired endpoint.

## Step 3
Log in to the Slack API site (https://api.slack.com/) and create an application. This is the app used to run all automation and commands from Slack. Once your app is ready, navigate to Slash Commands and create a new command. This includes the command itself, the webhook URL, and a description of what the slash command is all about.

Once this is saved, you can run a test by sending a demo task to your endpoint. After you have confirmed the slash command reaches the webhook, add ClickUp API credentials that can be used to create new tasks in ClickUp.

This workflow creates a new task with the start date on ClickUp that can be assigned to the respective team members.

More details about the setup can be found in the document below.

Happy Productivity!
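As a rough sketch of what happens behind the webhook, the code below reads the slash-command text from Slack's form-encoded payload and creates a ClickUp task. The list id is a placeholder, and the exact ClickUp request fields should be verified against the ClickUp API v2 documentation.

```typescript
// Sketch of the webhook + ClickUp step: take the text typed after /newTask from
// Slack's form-encoded payload and create a task from it.
const CLICKUP_TOKEN = process.env.CLICKUP_API_TOKEN!;
const LIST_ID = "YOUR_LIST_ID"; // placeholder: the ClickUp list that receives new tasks

// Slack delivers slash commands as application/x-www-form-urlencoded;
// the `text` field holds everything typed after the command.
async function handleSlashCommand(rawBody: string) {
  const params = new URLSearchParams(rawBody);
  const text = params.get("text") ?? "Untitled task";

  const res = await fetch(`https://api.clickup.com/api/v2/list/${LIST_ID}/task`, {
    method: "POST",
    headers: {
      Authorization: CLICKUP_TOKEN,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: text,             // task title = the Slack message
      description: text,      // same text reused as the description
      start_date: Date.now(), // start date as a millisecond timestamp
    }),
  });
  if (!res.ok) throw new Error(`ClickUp request failed: ${res.status}`);
  return res.json();
}
```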