by Sirisak Chantanate
Workflow Overview: Extracting text from images with AI is worthwhile because it requires no code. The workflow uses the Google Gemini 2.0 Flash model to pull the important text out of an image. If you coded this without AI you would need many conditional branches, which easily introduces bugs; with Google Gemini no coding is needed, and even when a pay slip uses a different layout, Gemini extracts it automatically.

Workflow description:
- The user sends a pay slip image or a text message to the chatbot through the Line Messaging API (create a Line Business ID here: Line Business).
- The workflow classifies whether the message is an image or text.
- If the message is a pay slip image, it is processed with Gemini 2.0 Flash EXP, which extracts the important information and responds in JSON format, with no coding, using the following prompt: "Analyze image and then return in JSON Response that has the only following value: Status, From, To, Date, Amount". You can get a Google AI Studio API key from the following link: Google AI Studio API Key.
- Create a Google Sheet that includes the fields (Status, From, To, Date, Amount) matching the AI prompt, as in the following example:
- If the message is text, it is handled by the Gemini 2.0 Flash EXP model acting as an AI assistant; if the message is an image, the workflow extracts the important fields, replies to the user, and inserts the data into Google Sheets.

Key Features:
- **Extract text from images with no code**: without n8n we would have to write code to extract text from images, but with n8n and Google Gemini 2.0 Flash EXP together no coding is needed, and it handles slips from any vendor as well as other document types.
- **Multipurpose chatbot**: the chatbot accepts both text and images, so there is no need to create separate chatbot accounts.
- **Reduce human error**: the workflow lets any officer verify the document status when the job ends.

Note: You can change the extracted information by adjusting your prompt and the corresponding Google Sheets column names.
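For reference, here is a minimal sketch of the prompt above and the kind of JSON response it asks Gemini to return. The field values are made-up examples; only the five field names come from the prompt, and they should match your Google Sheets column names.

```javascript
// Sketch for an n8n Code node: the prompt sent to Gemini 2.0 Flash and an
// example of the parsed JSON it is asked to return for a pay slip image.
const prompt =
  "Analyze image and then return in JSON Response that has the only following value: " +
  "Status, From, To, Date, Amount";

// Made-up example of a parsed model response; the five keys mirror the prompt
// and the Google Sheets columns.
const example = {
  Status: "Transfer successful",
  From: "John Doe",
  To: "ACME Co., Ltd.",
  Date: "2024-12-01",
  Amount: "1,250.00",
};

return [{ json: example }]; // n8n Code nodes return an array of items
```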
by bangank36
This workflow retrieves all users from n8n, compares them against entries in a Google Sheets spreadsheet, and automatically creates new users when needed. Once new users are created, invitation emails are sent automatically. You can trigger the workflow manually or set it to run on a schedule to ensure continuous synchronization.

Spreadsheet Template: This workflow is designed to work with a Google Sheets structure inspired by Squarespace's newsletter block connection. You can modify the node settings to adapt to a different column format. 👉 Clone the sample sheet here. Suggested columns: Submitted On, Email Address, Name.

Requirements (Credentials): To use this workflow, you need:
- n8n API key – required to list and create users on your n8n instance (a sketch of the underlying API calls is included at the end of this template).
- Google Sheets API credentials – required to read data from the spreadsheet.

Configure Your n8n Instance: To make this workflow work with your n8n instance, update the API endpoint in the 🔧 Edit Global node 👇 by changing n8n_url to match your instance URL. See also: Authentication Guide.

Explore More Templates: 👉 Check out my other n8n templates.
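For orientation, a hedged sketch of what the workflow does under the hood: it lists existing users through the n8n public API and invites any spreadsheet email that is missing. The endpoint paths and payload shape are assumptions that can differ between n8n versions; the real workflow uses the configured n8n API credential and the n8n_url from the Edit Global node.

```javascript
// Sketch only: compare spreadsheet emails against existing n8n users and invite the missing ones.
// Assumes the public API user endpoints are enabled; verify paths and payload for your n8n version.
const n8n_url = "https://your-instance.example.com"; // change to match your instance URL
const headers = { "X-N8N-API-KEY": process.env.N8N_API_KEY, "Content-Type": "application/json" };

async function syncUsers(sheetRows) {
  // sheetRows: [{ "Submitted On": "...", "Email Address": "...", "Name": "..." }, ...]
  const res = await fetch(`${n8n_url}/api/v1/users`, { headers });
  const existing = (await res.json()).data.map((u) => u.email.toLowerCase());

  const missing = sheetRows
    .map((r) => r["Email Address"].toLowerCase())
    .filter((email) => !existing.includes(email));

  for (const email of missing) {
    // Creating a user via the API triggers the invitation email.
    // Depending on the n8n version you may also need to pass a role (e.g. "global:member").
    await fetch(`${n8n_url}/api/v1/users`, {
      method: "POST",
      headers,
      body: JSON.stringify([{ email }]),
    });
  }
}
```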
by PiAPI
What does this workflow do? This workflow converts orthographic three-view drawings into 360° rotation videos through PiAPI's GPT-4o-Image and Kling APIs (unofficial). It can be paired with our 3D Figurine Orthographic Views workflow for generating the input image.

Who is the workflow for?
- **Designers**: turn inspiration into 3D designs and make them spin to examine concrete details in an efficient way.
- **Online shoppers**: view potential products from all angles in videos and preview the overall texture of the models.
- **Content Creators** (including toy bloggers): make fun videos of collectible models.

Step-by-step Instructions:
1. Fill in the basic params with the X-API-Key of your PiAPI account and the three-view image URL.
2. Click "Test workflow".
3. Get the final video from the last node.

Use Case: Input Image / Output Video
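Purely as orientation, here is a sketch of the kind of task-style call the PiAPI nodes make. The endpoint path, model name, and payload fields are assumptions and must be checked against the PiAPI documentation before use; only the X-API-Key header and the image URL input come from the instructions above.

```javascript
// Assumption-heavy sketch: submit an image-to-video task to PiAPI and poll for the result.
// Endpoint path, model identifier, and payload fields are placeholders; verify them in the PiAPI docs.
const PIAPI_BASE = "https://api.piapi.ai"; // placeholder base URL
const headers = { "x-api-key": process.env.PIAPI_KEY, "Content-Type": "application/json" };

async function createRotationVideo(imageUrl) {
  const create = await fetch(`${PIAPI_BASE}/api/v1/task`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      model: "kling",                // placeholder model identifier
      task_type: "video_generation", // placeholder task type
      input: { image_url: imageUrl, prompt: "360 degree turntable rotation of the figure" },
    }),
  });
  const { data } = await create.json();

  // Poll until the backend reports the task as completed, then return the video URL.
  while (true) {
    const res = await fetch(`${PIAPI_BASE}/api/v1/task/${data.task_id}`, { headers });
    const task = (await res.json()).data;
    if (task.status === "completed") return task.output.video_url; // placeholder field name
    await new Promise((r) => setTimeout(r, 10000));
  }
}
```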
by Sarfaraz Muhammad Sajib
This n8n workflow demonstrates how to build an automated AI chat system using OpenRouter.ai. It includes a manual trigger, sets a model and user message, sends a POST request to the OpenRouter chat API, and summarizes the response. Workflow Steps: Manual Trigger – Starts the workflow when executed manually. Set Node – Defines: Model: mistralai/mistral-small-3.2-24b-instruct:free Message: What is the meaning of life? HTTP Request – Sends a POST request to https://openrouter.ai/api/v1/chat/completions using Bearer Token Authentication with the model and message as JSON. Summarize – Extracts and summarizes the AI’s response (choices[0].message.content). Use Cases: AI chatbot automation Content summarization Testing AI prompts in real-time Educational demos using OpenRouter.ai Lightweight conversational tools with no external server
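In plain JavaScript, the HTTP Request and Summarize steps boil down to something like this minimal sketch; the endpoint, model, and message are the ones listed above, and error handling is omitted.

```javascript
// Minimal sketch of the OpenRouter call made by the HTTP Request node.
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "mistralai/mistral-small-3.2-24b-instruct:free",
    messages: [{ role: "user", content: "What is the meaning of life?" }],
  }),
});

const data = await response.json();
// The Summarize step reads the assistant reply from here:
const answer = data.choices[0].message.content;
console.log(answer);
```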
by n8n Team
This workflow performs several data integration and synchronization tasks between Google Sheets and a MySQL database. Here is a step-by-step description of what this workflow does: Manual Trigger: The workflow starts when the user clicks "Execute Workflow." Schedule Trigger: This node schedules the workflow to run at specific intervals on weekdays (Monday to Friday) between 6 AM and 10 PM. It ensures regular data synchronization. Google Sheet Data: This node connects to a specific Google Sheets document and retrieves data from the "Form Responses 1" sheet, filtering by the "DB Status" column. SQL Get inquiries from Google: This node retrieves data from a MySQL database table named "ConcertInquiries" where the "source_name" is "GoogleForm." Rename GSheet variables: This node renames the columns retrieved from Google Sheets and transforms the data into a format suitable for MySQL, assigning a value for "source_name" as "GoogleForm." Compare Datasets: This node compares the data retrieved from Google Sheets and the MySQL database based on timestamp and source_name fields. It identifies changes and updates. No reply too long?: This node checks if there has been no reply within the last four hours, using the "timestamp" field from the Google Sheets data. DB Status assigned?: This node checks if the "DB Status" field is not empty in the compared dataset. Update GSheet status: If conditions are met in the previous nodes, this node updates the "DB Status" field in Google Sheets with the corresponding value from the MySQL dataset. DB Status in sync?: This node checks if the "source_name" field in Google Sheets is not empty. Sync MySQL data: If conditions are met in the previous nodes, this node updates the "source_name" field in the MySQL database to "GoogleFormSync." Send Notifications: If conditions are met in the "No reply too long?" node, this node sends notifications or performs actions as needed. Sticky Notes: These nodes provide additional information and documentation links for users.
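To make the "Rename GSheet variables" step concrete, here is a hedged sketch of the mapping a Code node could perform. The Google Sheets column names on the left are assumptions, while the ConcertInquiries table and the source_name values come from the description above.

```javascript
// Sketch of the GSheet -> MySQL field mapping (spreadsheet column names are assumptions).
// MySQL statements used elsewhere in the workflow, for reference:
//   SELECT * FROM ConcertInquiries WHERE source_name = 'GoogleForm';
//   UPDATE ConcertInquiries SET source_name = 'GoogleFormSync' WHERE id = ?;
return items.map((item) => ({
  json: {
    timestamp: item.json["Timestamp"],   // assumed GSheet column
    name: item.json["Name"],             // assumed GSheet column
    email: item.json["Email"],           // assumed GSheet column
    db_status: item.json["DB Status"],   // column referenced in the description
    source_name: "GoogleForm",           // constant assigned so the datasets can be compared
  },
}));
```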
by Rajeet Nair
📖 Description 🔹 How it works This workflow introduces an AI + Human-in-the-Loop pipeline for employee timesheet management. It combines the power of Google Drive, AI (OCR + LLM), and Gmail with a human review step to ensure accuracy and compliance. AI-Powered File Discovery Scans a Google Drive folder for new or updated timesheet files (PDF, Word, Excel, Images). AI Data Extraction Uses OCR and LLM (Mistral) to intelligently read and extract structured data. Supports multiple formats: PDF, Word (DOC/DOCX), Excel (XLS/XLSX), and Image files (JPG, PNG, scanned documents). Creates clean JSON with file details and timesheet logs (date, hours worked, tasks, notes). Smart Data Formatting Converts AI output into a clear HTML summary table for easy review. Flags potential anomalies (missing hours, duplicate dates, irregular entries). Human-in-the-Loop Verification Sends an approval email via Gmail containing: File metadata AI-generated HTML summary JSON attachment of raw extracted data HR/Managers review the summary and approve/reject before final actions occur. Post-Approval Automation (optional) Approved records can be saved in a separate Google Drive folder. Employees or HR receive confirmation emails. ⚙️ Set up steps Connect Credentials Add Google Drive and Gmail credentials in n8n. Configure Mistral (or any LLM) API credentials. Configure Google Drive In the “Search files and folders” node, replace the folderId with your company’s timesheet folder ID. Customize Extraction Schema Sticky notes explain how JSON output is structured. Adapt it for your organization’s needs (e.g., overtime, project codes). Set Up Human Verification Emails Update Gmail node recipients to your HR or approval team. Customize the email body (AI summary + JSON file attached). Activate & Test Enable the workflow. Upload a sample timesheet to trigger the AI + human verification loop. ⚡ Result: A robust AI + Human-in-the-Loop workflow that reduces repetitive data entry, prevents payroll errors, and gives HR full confidence before final approval.
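As a reference, here is a hedged example of the "clean JSON" the extraction step could produce. Only the general shape (file details plus per-day logs with date, hours worked, tasks, and notes) comes from the description above; the concrete key names and values are illustrative assumptions to adapt via the sticky notes.

```javascript
// Illustrative shape of the extracted timesheet data (key names are assumptions).
const extracted = {
  file: {
    name: "timesheet_jane_doe_2025-03.xlsx",
    type: "xlsx",
    source: "Google Drive",
  },
  employee: "Jane Doe",
  logs: [
    { date: "2025-03-03", hours: 8, tasks: "Client onboarding calls", notes: "" },
    { date: "2025-03-04", hours: 7.5, tasks: "Report preparation", notes: "Left early, approved" },
  ],
  anomalies: ["Missing entry for 2025-03-05"], // flagged for the human reviewer
};
```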
by Eska
Deadlock Match Stats Bot is an automated workflow for n8n designed to send detailed player statistics from the most recent Deadlock match directly to Telegram. When the user sends the /match command to the Telegram bot, the workflow performs the following steps: Loads the HTML content of the player's profile page from deadlocktracker.gg using a preconfigured Steam ID. Extracts the most recent match ID using a regular expression from the embedded JavaScript data. Loads the HTML page for the specified match. Parses the match page using cheerio to extract relevant data for each player, including their nickname, selected hero, and current rank. Formats the collected information into a single message and sends it to the Telegram chat that issued the command.
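A sketch of the scraping logic, for example inside an n8n Code node, is shown below. The regular expression and the cheerio selectors are assumptions about deadlocktracker.gg's markup and need to be checked against the real pages; only the overall flow mirrors the steps above.

```javascript
// Sketch: extract the latest match ID from the profile page, then parse the match page.
// Regex pattern and CSS selectors are assumptions about deadlocktracker.gg's markup.
// (In n8n, external modules such as cheerio must be allowed for require to work.)
const cheerio = require("cheerio");

function extractLatestMatchId(profileHtml) {
  // The match ID is embedded in the page's JavaScript data.
  const match = profileHtml.match(/"match_id"\s*:\s*(\d+)/); // assumed pattern
  return match ? match[1] : null;
}

function parsePlayers(matchHtml) {
  const $ = cheerio.load(matchHtml);
  const players = [];
  $(".player-row").each((_, el) => {            // assumed selector
    players.push({
      nickname: $(el).find(".nickname").text().trim(),
      hero: $(el).find(".hero").text().trim(),
      rank: $(el).find(".rank").text().trim(),
    });
  });
  return players;
}

// Message body sent back to the Telegram chat that issued /match.
function formatMessage(players) {
  return players.map((p) => `${p.nickname} | ${p.hero} (${p.rank})`).join("\n");
}
```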
by Akhil Varma Gadiraju
📬 N8N Contact Form Workflow: Capture, Notify via Email, and Redirect with Confirmation/Error Handling

This n8n workflow facilitates contact form submissions through a customizable form that sends an email notification to support and redirects users based on the submission outcome. It is ideal for embedding a functional "Contact Us" form on websites with automated email notifications.

✨ Features
- Collects first name, last name, email, company name, and a message
- Sends a formatted email notification to the support team
- Displays success or error confirmation to the user
- Customizable UI and form behavior
- Error fallback handling with user-friendly feedback

🧩 Nodes Overview
1. On form submission (Trigger) – **Type:** formTrigger. Displays the contact form to users and triggers the workflow on submission.
2. Send Email to Support – **Type:** emailSend. Sends an HTML email to a support address with the form details. Uses an SMTP credential for sending.
3. If Email Sent – **Type:** if. Checks whether the email was sent successfully using the existence of messageId (see the sketch at the end of this template).
4. Confirmation Form – **Type:** form. Displays a “Thank You” HTML message after a successful submission.
5. Redirect Form – **Type:** form. Redirects the user to a specified URL (e.g., LinkedIn profile).
6. Form (Error) – **Type:** form. Displays an error message if email delivery fails.
7. NoOp Nodes – **End (Success)** and **End (Error)** mark flow terminations cleanly.

⚙️ Customization Options
- Change the form fields, title, or descriptions in the formTrigger node.
- Update the email body or subject in the emailSend node.
- Redirect to a different URL by editing the Redirect Form node.
- Modify the success and error UI with HTML content in the Confirmation Form and Form (Error) nodes.

🧠 Use Cases
- Website "Contact Us" form integration
- Lead generation forms for businesses
- Customer service inquiry collection
- Feedback or support ticket system

🚀 How to Use
1. Import this workflow into your n8n instance.
2. Configure SMTP credentials for the emailSend node.
3. Publish the formTrigger endpoint (e.g., /contact-us) publicly or embed it in your website.
4. Test a submission and confirm email delivery and redirects.

🔐 Notes
- Ensure SMTP credentials are correctly configured in n8n.
- Make sure your n8n webhook URLs are reachable from your website or frontend.

Made with ❤️ using n8n by Akhil.
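For clarity, the check behind the If Email Sent node can be as simple as testing for the messageId field returned by the SMTP send. A minimal sketch follows; adapt the field path to your email node's actual output.

```javascript
// Sketch of the success check behind the "If Email Sent" node:
// the email counts as sent when the emailSend node returned a messageId.
const emailSent = Boolean($json.messageId);
// In the n8n If node this is typically written as the expression:
//   {{ $json.messageId !== undefined }}
return [{ json: { emailSent } }];
```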
by Davide
This workflow dynamically chooses between two new powerful Anthropic models — Claude Opus 4 and Claude Sonnet 4 — to handle user queries based on their complexity and nature, maintaining scalability and context awareness with the Anthropic web search function and the Think tool.

Key Advantages
🔁 **Dynamic Model Selection**: automatically routes each user query to either Claude Sonnet 4 (for routine tasks) or Claude Opus 4 (for complex reasoning), ensuring optimal performance and cost-efficiency.
🧠 **AI Agent with Tool Use**: the AI agent can use a web search tool to retrieve up-to-date information and a Think tool for complex reasoning processes, improving response quality.
📎 **Memory Integration**: uses session-based memory to maintain conversational context, making interactions more coherent and human-like.
🧮 **Built-in Calculation Tool**: handles numeric queries using an integrated calculator tool, reducing the need for external processing.
📤 **Structured Output Parser**: ensures outputs are always well-structured and formatted in JSON, which improves consistency and downstream integrations.
🌐 **Web Search Capability**: supports real-time information retrieval for current events, statistics, or details not available in the AI's base knowledge.

Components Overview
- **Trigger**: listens for new chat messages.
- **Routing Agent**: analyzes the message and returns the best model to use.
- **AI Agent**: handles the conversation and decides when to use tools.
- **Tools**: web_search for internet queries, Think for reasoning, Calculator for math tasks.
- **Models Used**: claude-sonnet-4-20250514, optimized for general and business logic tasks; claude-opus-4-20250514, best for deep, strategic, and analytical queries.

How It Works
Dynamic Model Selection: The workflow begins when a chat message is received. The Anthropic Routing Agent analyzes the user's query to determine the most suitable model (either Claude Sonnet 4 or Claude Opus 4) based on the query's complexity and requirements. The routing agent uses predefined criteria to decide: Claude Sonnet 4 is best for standard tasks like real-time workflow routing, data validation, and routine business logic; Claude Opus 4 is reserved for complex scenarios requiring deep reasoning, advanced analysis, or high-impact decisions.
Query Processing and Response Generation: The selected model processes the query, leveraging tools like web_search for real-time information retrieval, Think for internal reasoning, and Calculator for numerical tasks. The AI Agent coordinates these tools, ensuring the response is accurate and context-aware. A Simple Memory node retains session context for coherent multi-turn conversations. The final response is formatted and returned to the user without intermediate steps or metadata.

Set Up Steps
Node Configuration:
- Trigger: configure the "When chat message received" node to handle incoming user queries.
- Routing Agent: set up the "Anthropic Routing Agent" with the system message defining the model selection logic. Ensure it outputs a JSON object with prompt and model fields (a sketch is shown at the end of this template).
- AI Model Nodes: link the "Sonnet 4 or Opus 4" node to dynamically use the selected model. The "Sonnet 3.7" node powers the routing agent itself.
Tool Integration:
- Attach the "web_search" HTTP tool to enable internet searches, ensuring the API endpoint and headers (e.g., anthropic-version) are correctly configured.
- Connect the auxiliary tools (Think, Calculator) to the "AI Agent" for extended functionality.
- Add the "Simple Memory" node to maintain conversation history.
Credentials: Provide an Anthropic API key to all nodes requiring authentication (e.g., model nodes, web search).
Testing: Activate the workflow and test with sample queries to verify correct model selection (e.g., Sonnet for simple queries, Opus for complex ones), proper tool usage (e.g., web searches trigger when needed), and memory retention across chat turns.
Deployment: Once validated, set the workflow to active for live interactions.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
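For reference, a minimal sketch of the JSON object the routing agent is expected to return, followed by a bare Anthropic Messages API call with the selected model. The example prompt is made up, and the real workflow goes through the n8n Anthropic nodes and attaches the web_search, Think, and Calculator tools rather than calling the API directly.

```javascript
// Expected output of the routing agent: which model to use and the prompt to forward.
const routingDecision = {
  prompt: "Compare our Q3 and Q4 churn and recommend a retention strategy.", // made-up example
  model: "claude-opus-4-20250514", // or "claude-sonnet-4-20250514" for routine queries
};

// Simplified sketch of a downstream call with the selected model
// (the workflow itself uses the n8n Anthropic nodes and attaches tools).
const res = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "x-api-key": process.env.ANTHROPIC_API_KEY,
    "anthropic-version": "2023-06-01",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: routingDecision.model,
    max_tokens: 1024,
    messages: [{ role: "user", content: routingDecision.prompt }],
  }),
});
const reply = (await res.json()).content[0].text;
```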
by WeblineIndia
This smart automation workflow, created by the AI development team at WeblineIndia, helps with the daily collection and storage of weather data. Using the OpenWeatherMap API and Airtable, this solution gathers vital weather details such as temperature, humidity, and wind speed. The automation ensures daily updates, creating a dependable historical record of weather patterns for future reference and analysis.

Steps:
1. Set Schedule Trigger – Configure a Cron node to trigger the workflow daily, for example at 7 AM.
2. Fetch Weather Data (HTTP Request) – Use the HTTP Request node to retrieve weather data from the OpenWeatherMap API. Include your API key and query parameters (e.g., q=London, units=metric) to specify the city and desired units (a sketch of the call is shown at the end of this template).
3. Parse Weather Data – Use a JSON Parse node to extract key weather details, such as temperature, humidity, and wind speed, from the API response.
4. Store Data in Airtable – Use the Airtable node to insert the parsed data into the designated Airtable table. Ensure proper mapping of fields like temperature, humidity, and wind speed.
5. Save and Execute – Save the workflow and activate it to ensure weather data is fetched and stored automatically every day.

Outcome: This robust solution, developed by WeblineIndia, reliably collects and archives daily weather data, providing businesses and individuals with an accessible record of weather trends for analysis and decision-making.

About WeblineIndia: We specialize in creating custom automation solutions and innovative software workflows to help businesses streamline operations and achieve efficiency. This weather data fetcher is just one example of our expertise in delivering value through technology.
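To make the fetch-and-parse steps concrete, here is a minimal sketch of the OpenWeatherMap call and the fields that end up in Airtable; the Airtable column names are assumptions to be matched to your table.

```javascript
// Sketch: fetch current weather for a city and keep the fields stored in Airtable.
const apiKey = process.env.OPENWEATHERMAP_API_KEY;
const city = "London";

const res = await fetch(
  `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&units=metric&appid=${apiKey}`
);
const data = await res.json();

// Column names on the left are assumptions for the Airtable table.
const record = {
  Date: new Date().toISOString().slice(0, 10),
  Temperature: data.main.temp,   // degrees Celsius because units=metric
  Humidity: data.main.humidity,  // percent
  WindSpeed: data.wind.speed,    // m/s with metric units
};
```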
by digi-stud.io
Adobe developer API: Did you know that Adobe provides an API to perform all sorts of manipulations on PDF files?
- Split PDF, combine PDF
- OCR
- Insert page, delete page, replace page, reorder pages
- Content extraction (text content, tables, pictures)
- ...

The free tier allows up to 500 PDF operations per month. As it comes directly from Adobe, it often works better than other alternatives.
Adobe documentation:
https://developer.adobe.com/document-services/docs/overview/pdf-services-api/howtos/
https://developer.adobe.com/document-services/docs/overview/pdf-extract-api/gettingstarted/

What does this workflow do: The API is a bit painful to use. To perform a transformation on a PDF you have to:
1. Authenticate and get a temporary token
2. Register a new asset (file)
3. Upload your PDF to the registered asset
4. Perform a query according to the requested transformation
5. Wait for the query to be processed by the Adobe backend
6. Download the result

This workflow is a generic wrapper that performs all these steps for any transformation endpoint. I usually call it from other workflows with an Execute Workflow node. Examples are given in the workflow.

Example use case: This service is useful, for example, to clean PDF data for an AI / RAG system. My favorite use case is to extract tables as images and forward the images to an AI for image recognition / description, which is often more accurate than feeding raw tabular data to an LLM.
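A hedged sketch of the six steps the wrapper performs follows. The endpoint paths, header names, and response fields are written from memory of the Adobe PDF Services REST documentation linked above and should be verified there before use.

```javascript
// Sketch of the generic Adobe PDF Services sequence wrapped by this workflow.
// Verify every endpoint and field name against the Adobe documentation linked above.
const CLIENT_ID = process.env.ADOBE_CLIENT_ID;
const CLIENT_SECRET = process.env.ADOBE_CLIENT_SECRET;

async function runPdfOperation(pdfBuffer, operationPath, operationBody) {
  // 1. Authenticate and get a temporary access token (token endpoint: check the docs).
  const tokenRes = await fetch("https://pdf-services.adobe.io/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: `client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}`,
  });
  const { access_token } = await tokenRes.json();
  const auth = { Authorization: `Bearer ${access_token}`, "x-api-key": CLIENT_ID };

  // 2. Register a new asset, then 3. upload the PDF to the returned upload URI.
  const assetRes = await fetch("https://pdf-services.adobe.io/assets", {
    method: "POST",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify({ mediaType: "application/pdf" }),
  });
  const { assetID, uploadUri } = await assetRes.json();
  await fetch(uploadUri, {
    method: "PUT",
    headers: { "Content-Type": "application/pdf" },
    body: pdfBuffer,
  });

  // 4. Submit the requested transformation (e.g. operationPath = "/operation/extractpdf").
  const jobRes = await fetch(`https://pdf-services.adobe.io${operationPath}`, {
    method: "POST",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify({ assetID, ...operationBody }),
  });
  const statusUrl = jobRes.headers.get("location");

  // 5. Poll until the Adobe backend finishes, then 6. download the result asset.
  while (true) {
    const poll = await (await fetch(statusUrl, { headers: auth })).json();
    if (poll.status === "done") {
      // The result field name varies by operation (check the docs).
      const downloadUri = poll.content?.downloadUri || poll.asset?.downloadUri;
      return fetch(downloadUri);
    }
    if (poll.status === "failed") throw new Error("Adobe job failed");
    await new Promise((r) => setTimeout(r, 2000));
  }
}
```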
by Sherlockes
Purpose of this Template: This template helps us combine multiple RSS sources, curate the content, and send the result to a Telegram channel for easy access.

How it Works: We start with two RSS sources from which we primarily want to extract the title, link, and publication date. The workflow periodically queries both sources and uses regular expressions and some JavaScript to edit several fields until we have the desired data in the proper format. Once the sources are formatted, we combine them using the "Merge" node. Since the workflow runs twice a day, we discard items older than two days; this is achieved with a "Filter" node. A "Sort" node arranges the items by publication date so that the most recent ones appear first. To create a flat list in "Markdown" format from the resulting items, we use a small JavaScript function (a sketch is shown after the configuration instructions). This function must escape special characters so that they are correctly interpreted. Finally, the list is sent to the appropriate channel using a "Telegram" node.

Configuration Instructions:
- When opening the workflow for the first time, configure the Telegram credential.
- In the "RSS" nodes, insert the URLs of the sources to query.
- In the "Edit Fields" nodes, adjust the regular expressions to obtain the desired result for your RSS sources. In this case the expressions have been adjusted to obtain the name, size, and link of each published file.
- In the "Filter" node, you can modify the maximum age of the elements to send. Here it is set to 2 days (2 * 24 * 60 * 60 * 1000 milliseconds), keeping in mind that Telegram has a maximum message length and will return an error instead of sending the message if it is exceeded.
- Lastly, set the channel ID in the "Telegram" node where the messages will be sent.

Template was created in n8n v1.72.1
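The small JavaScript function mentioned above might look like the following sketch: it drops items older than two days, sorts the rest newest first, and builds an escaped Markdown list for Telegram. The item field names (title, link, pubDate) and the escaped character set are assumptions to adapt to your sources.

```javascript
// Sketch of the filter + sort + Markdown-list logic for an n8n Code node
// (field names are assumptions based on typical RSS node output).
const MAX_AGE_MS = 2 * 24 * 60 * 60 * 1000; // two days, matching the Filter node

// Escape characters that Telegram's MarkdownV2 parser treats as special.
const escapeMd = (text) =>
  String(text).replace(/[_*\[\]()~`>#+\-=|{}.!]/g, (c) => "\\" + c);

const now = Date.now();
const recent = items
  .filter((item) => now - new Date(item.json.pubDate).getTime() <= MAX_AGE_MS)
  .sort((a, b) => new Date(b.json.pubDate) - new Date(a.json.pubDate));

const markdownList = recent
  .map((item) => `- [${escapeMd(item.json.title)}](${item.json.link})`)
  .join("\n");

return [{ json: { text: markdownList } }];
```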