by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for data extraction tasks where attributes are unknown or need to remain flexible. The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template and so can be changed at any time - without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the video demo I did for n8n Studio here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Check out the example Airtable here: https://airtable.com/appAyH3GCBJ56cfXl/shrXzR1Tj99kuQbyL
Looking for the Baserow version? https://n8n.io/workflows/2780-ai-data-extraction-with-dynamic-prompts-and-baserow/

How it works
Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacts to any changes to either the "input" or the fields, and automatically updates the rows accordingly. The key is that Airtable fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. Our n8n template reads these column descriptions, aka "prompts", and uses them as instructions to perform tasks on the "input". In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it - such as full name, address, last position, years of experience etc.

How to use
First publish this template and ensure it's accessible via its webhook URL. You then have to run the "create airtable webhooks" mini-flow to configure your Airtable to send change events to the n8n template. This mini-flow exists in the template but you'll have to update the IDs. Check the template for more instructions.

Requirements
Airtable for Tables/Database
OpenAI for LLM and extraction. Feel free to choose another LLM if preferred.

Customising this workflow
If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single line text.
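As a rough illustration of the pattern, here's a minimal sketch (e.g. for an n8n Code node) that turns a list of field names and their descriptions into per-column extraction instructions for an LLM. The field data, helper names and prompt wording are assumptions for illustration; the actual template reads the descriptions from Airtable's metadata.

```javascript
// Minimal sketch of the Dynamic Prompts idea: each column's "field description"
// becomes the prompt for that column. The field data shown here is hypothetical -
// in the template it comes from the Airtable metadata for the table.
const fields = [
  { name: "Full Name", description: "Extract the candidate's full legal name." },
  { name: "Years of Experience", description: "Total years of professional experience as a number." },
];

// Build one instruction per column; the LLM is asked to return JSON keyed by column name.
const instructions = fields
  .filter((f) => f.description) // only columns that carry a prompt
  .map((f) => `- "${f.name}": ${f.description}`)
  .join("\n");

const systemPrompt = [
  "You are a data extraction assistant.",
  "Given the document text, return a JSON object with the following keys:",
  instructions,
].join("\n");

return [{ json: { systemPrompt } }];
```

Changing what gets extracted then only requires editing a field description in Airtable, never the workflow itself.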
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for data extraction tasks where attributes are unknown or need to remain flexible. The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template and so can be changed at any time - without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the n8n Studio episode here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Community post here: https://community.n8n.io/t/dynamic-prompts-with-n8n-baserow-and-airtable/72052
Looking for the Airtable version? https://n8n.io/workflows/2771-ai-data-extraction-with-dynamic-prompts-and-airtable/

How it works
Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacts to any changes to either the "input" or the fields, and automatically updates the rows accordingly. The key is that Baserow fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. Our n8n template reads these column descriptions, aka "prompts", and uses them as instructions to perform tasks on the "input". In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it - such as full name, address, last position, years of experience etc.

How to use
First publish this template and ensure it's accessible via its webhook URL. You then have to complete the "create Baserow webhooks" steps to configure your Baserow to send change events to the n8n template. Baserow webhooks are created in the Baserow web interface. Check the template for more instructions.

Requirements
Baserow for Tables/Database
OpenAI for LLM and extraction. Feel free to choose another LLM if preferred.

Customising this workflow
If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single line text.
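For illustration only, here is a minimal sketch of reading those field descriptions over Baserow's REST API (the list-fields endpoint). The base URL, table ID, token and the exact shape of the `description` property are assumptions to verify against your Baserow version, not the template's own code.

```javascript
// Sketch (e.g. for an n8n Code node): read Baserow field descriptions so they can
// be used as per-column extraction prompts. The base URL, table ID, token and the
// exact response shape are assumptions - check them against your Baserow instance.
const BASEROW_URL = "https://api.baserow.io";
const TABLE_ID = 123; // hypothetical table ID

const res = await fetch(`${BASEROW_URL}/api/database/fields/table/${TABLE_ID}/`, {
  headers: { Authorization: "Token YOUR_DATABASE_TOKEN" },
});
const fields = await res.json();

// Keep only the columns that carry a prompt in their description.
const prompts = fields
  .filter((f) => f.description)
  .map((f) => ({ column: f.name, prompt: f.description }));

return [{ json: { prompts } }];
```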
by Joseph LePage
This n8n workflow template is designed to integrate a DeepSeek AI agent with Telegram, incorporating long-term memory capabilities for personalized and context-aware responses. Here's a detailed breakdown:

Core Features
Telegram Integration: Uses a webhook to receive messages from Telegram users. Validates user identity and message content before processing.
AI-Powered Responses: Employs DeepSeek's AI models for conversational interactions. Includes memory capabilities to personalize responses based on past interactions.
Error Handling: Sends an error message if the input cannot be processed.

Model Options 🧠
**DeepSeek-V3 Chat**: Handles general conversational tasks.
**DeepSeek-R1 Reasoning**: Provides advanced reasoning capabilities for complex queries.
**Memory Buffer Window**: Maintains session context for ongoing conversations.

Quick Setup 🛠️
Telegram Webhook Configuration: Set up a webhook using the Telegram Bot API: https://api.telegram.org/bot{my_bot_token}/setWebhook?url={url_to_send_updates_to} Replace {my_bot_token} with your bot's token and {url_to_send_updates_to} with your n8n webhook URL. Verify the webhook setup using: https://api.telegram.org/bot{my_bot_token}/getWebhookInfo
DeepSeek API Configuration: Base URL: https://api.deepseek.com Obtain your API key from the DeepSeek platform.

Implementation Details 🔧
User Validation: The workflow validates the user's first name, last name, and ID using data from incoming Telegram messages. Only authorized users proceed to the next steps.
Message Routing: Routes messages based on their type (text, audio, or image) using a switch node. Ensures appropriate handling for each message format.
AI Agent Interaction: Processes text input using the DeepSeek-V3 or DeepSeek-R1 models. Customizable system prompts define the AI's behavior and rules, ensuring user-centric and context-aware responses.
Memory Management: Retrieves long-term memories stored in Google Docs to enhance personalization. Saves new memories based on user interactions, ensuring continuity across sessions.
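If you prefer to script the one-time webhook registration instead of pasting the setWebhook URL into a browser, a small sketch along these lines works on Node 18+; the bot token and n8n URL are placeholders.

```javascript
// One-time Telegram webhook registration, mirroring the setWebhook URL above.
// BOT_TOKEN and N8N_WEBHOOK_URL are placeholders - substitute your own values.
const BOT_TOKEN = "123456:ABC-your-bot-token";
const N8N_WEBHOOK_URL = "https://your-n8n-instance/webhook/telegram-deepseek";

async function registerWebhook() {
  const setRes = await fetch(
    `https://api.telegram.org/bot${BOT_TOKEN}/setWebhook?url=${encodeURIComponent(N8N_WEBHOOK_URL)}`
  );
  console.log(await setRes.json()); // expect { ok: true, result: true, ... }

  // Verify what Telegram currently has registered.
  const infoRes = await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/getWebhookInfo`);
  console.log(await infoRes.json());
}

registerWebhook();
```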
by Yaron Been
LinkedIn Enrichment & Ice Breaker Generator
For SDRs, growth marketers, and founders looking to scale personalized outreach. This workflow enriches LinkedIn profile data using Bright Data and generates AI-powered ice breakers using Claude (Anthropic). It automates research and messaging to help you connect smarter and faster — without manual effort.

🧩 How It Works
This workflow combines Google Sheets, Bright Data, and Claude (Anthropic) to fully automate your outreach research:
Trigger: Manually trigger the workflow or run it on a schedule (via Manual Trigger or Schedule Trigger).
Read Input Sheet: Fetches rows from a Google Sheet. Each row must contain at least a Linkedin_URL_Person and row_number.
Prepare Input: Formats each row for Bright Data's API using Set and SplitInBatches nodes.
Enrich Profile (Bright Data API): Sends LinkedIn URLs to Bright Data's Dataset API via HTTP Request. Waits for the snapshot to be ready using polling logic with Wait, If, and Snapshot Progress nodes (see the sketch at the end of this section). Once ready, retrieves the enriched profile data including: Name, City, Current company, About section, Recent posts.
Update Sheet with Profile Data: Writes the retrieved enrichment data into the corresponding row in Google Sheets (via row_number).
Generate Ice Breaker (Claude AI): Sends enriched profile content to Claude (Anthropic) using a custom prompt. Focuses on recent posts for crafting relevant, respectful, 1–4-line ice breakers.
Update Sheet with Ice Breaker: Writes the generated ice breaker to the Ice Breaker 1 column in the original row.

✅ Requirements
To use this workflow, you must have the following:
Google Sheets: A Google account and a Google Sheet with at least one sheet/tab containing the columns Linkedin_URL_Person and row_number (used for mapping input and output rows).
Bright Data: A Bright Data account with access to the Dataset API, an active dataset that accepts LinkedIn URLs, and an API key with Dataset API access.
Anthropic Claude: An Anthropic API key (for Claude 3.5 Haiku or other Claude models).
n8n Environment: Access to HTTP Request, Set, Wait, SplitInBatches, If, and Google Sheets nodes; access to the Claude integration (via LangChain nodes: @n8n/n8n-nodes-langchain); and a credential manager properly configured with Google Sheets OAuth2 credentials, a Bright Data API key, and an Anthropic API key.

⚙️ Setup Instructions
Step 1: Copy the Google Sheets Template
> 📄 Click here to make a copy
Fill the Linkedin_URL_Person column with the LinkedIn profile URLs you want to enrich. Do not modify headers or add filters to the sheet. Leave the other columns (name, city, about, posts, ice breaker) blank — the workflow fills them.
Step 2: Connect Your Accounts in n8n
Google Sheets: Create a credential under Google Sheets OAuth2 API.
Bright Data: Add your API key as a credential under HTTP Request (Authorization header).
Anthropic: Create a credential for the Anthropic API with your Claude key.
Step 3: Import and Configure the Workflow
Import the workflow into your n8n instance. In each Google Sheets node, select the copied Google Sheet and the correct tab (usually input or Sheet1). In the HTTP Request node to Bright Data, paste your Bright Data dataset ID. In the Claude prompt node, optionally adjust the tone and length of the ice breaker prompt.
Step 4: Run the Workflow
Test it using the Manual Trigger node. For daily automation, enable the Schedule Trigger and configure the interval settings. Watch your Google Sheet populate with enriched data and tailored ice breakers.

🧠 Tips & Best Practices
**Bright Data Delay**: Snapshots may take time. The workflow polls the status until complete.
**Retry Protection**: If and Wait nodes avoid infinite loops by checking snapshot status.
**Mapping via row_number**: Critical to ensure data is updated in the right row.
**Prompt Engineering**: You can fine-tune Claude's behavior by editing the text prompt.

🧾 Output Example
Once complete, each row in your Google Sheet will contain:

| Linkedin_URL_Person | Name | City | Company | Recent Post | Ice Breaker |
|---------------------|------|------|---------|-------------|-------------|
| linkedin.com/... | Jane Doe | NYC | ACME Corp | "Why AI should replace meetings" | "Loved your post about AI and meetings — finally someone said it!" |

💬 Support & Feedback
Questions? Want to tweak the prompt or expand the enrichment?
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
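For reference, the trigger-then-poll pattern that the Wait/If/Snapshot Progress nodes implement looks roughly like this in plain JavaScript. The endpoint paths, dataset ID and API key are illustrative assumptions - verify them against your own Bright Data account rather than copying them verbatim.

```javascript
// Rough sketch of the Bright Data "trigger a snapshot, then poll until ready" loop
// that the Wait / If / Snapshot Progress nodes implement. The dataset ID, API key
// and endpoint paths below are illustrative assumptions.
const API_KEY = "YOUR_BRIGHT_DATA_KEY";
const DATASET_ID = "gd_xxxxxxxx"; // hypothetical dataset ID
const headers = { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" };

async function enrichLinkedInProfile(profileUrl) {
  // 1) Trigger a snapshot for one LinkedIn URL.
  const trigger = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}`,
    { method: "POST", headers, body: JSON.stringify([{ url: profileUrl }]) }
  ).then((r) => r.json());

  // 2) Poll progress until the snapshot is ready (the n8n Wait + If loop does the same).
  let status = "running";
  while (status !== "ready") {
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // wait 30s between checks
    const progress = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${trigger.snapshot_id}`,
      { headers }
    ).then((r) => r.json());
    status = progress.status;
  }

  // 3) Download the enriched profile data (name, city, company, about, posts, ...).
  return fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${trigger.snapshot_id}?format=json`,
    { headers }
  ).then((r) => r.json());
}

enrichLinkedInProfile("https://www.linkedin.com/in/example").then(console.log);
```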
by Agniva Mahata
How it Works:
Trigger: The workflow is triggered by a webhook, initiated by an Airtable automation. This automation sends the Book or Chapter record ID and the desired action (e.g., "Generate Book Details," "Generate Chapters," "Generate Chapter Research," "Generate Chapter Content").
Action Routing: A "Switch" node directs the workflow based on the action query parameter received from the webhook. This determines which part of the book creation process will be executed.
Data Retrieval: The workflow fetches the relevant book or chapter data from Airtable using the provided recordId.
AI Processing:
Book Details Generation: If the action is "Generate Book Details," an AI Agent (powered by a Large Language Model (LLM) such as Google Gemini and the Perplexity search tool) researches the book idea. It focuses on crafting a compelling book description, identifying the target audience, and conducting general book research to maximize bestseller potential. The research brief is then saved back to Airtable.
Chapter Generation: If the action is "Generate Chapters," an LLM generates 7-10 chapter titles and descriptions based on the book idea and previous research. A structured output parser ensures the chapter data is in the correct format. The chapters are then split into individual items and saved as separate records in the "Chapter" table in Airtable, linked to the main book record.
Chapter Research Generation: If the action is "Generate Chapter Research," another AI Agent conducts in-depth research on a specific chapter, using the Perplexity search tool multiple times. It focuses on finding stories, case studies, historical events, and expert perspectives to make the chapter engaging and credible. The research is saved back to the "Chapter" record in Airtable.
Chapter Content Generation: If the action is "Generate Chapter Content," an LLM writes the full content of the chapter, using the research gathered in the previous step, the overall book research, and the chapter description. The generated content is saved back to the "Chapter" record in Airtable.
Airtable Updates: In each of the AI processing steps, the workflow updates the corresponding Airtable record (either "Book" or "Chapter") with the generated results (research, chapter details, or content) and sets the "Action" field back to "Idle."

Set Up Steps:
Airtable Setup (Estimated time: 10-15 minutes):
Copy the Airtable base blueprint: https://airtable.com/appfkz4KUlKvOjtbp/shra78TlDfqLRdSfT. This will create the "Book" and "Chapter" tables with the necessary fields.
In the "Book" table, create three Airtable Automations:
Trigger: When a record matches conditions -> Action is Generate Book Details
Action: Run a script. Use the following script:
let autoRoute = input.config();
await fetch(autoRoute.webhookUrl + "?recordId=" + autoRoute.recordId + "&action=" + autoRoute.action);
In the script action's configuration, add three "Input variables":
webhookUrl (map it to your n8n webhook URL, obtained in the next step)
recordId (map it to the Airtable record ID)
action (map it to Action)
Repeat this process to create the remaining automations in the "Book" table; they are identical except for their trigger condition (e.g., Action is Generate Chapters).
In the "Chapter" table, create two Airtable Automations:
Trigger: When a record matches conditions -> Action is Generate Chapter Research
Action: Run a script (use the same script as above, with the same input variables).
Create a second automation, identical except triggered when Action is Generate Chapter Content.
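If you want the automation script above to be a little more robust when record IDs or action names contain special characters, a variant like the following (same three input variables, just URL-encoding the parameters and logging the response) also works in Airtable's scripting action. Treat it as an optional sketch, not part of the original template.

```javascript
// Optional, slightly hardened variant of the Airtable automation script above.
// Same three input variables (webhookUrl, recordId, action); parameters are URL-encoded.
let autoRoute = input.config();

const url =
  autoRoute.webhookUrl +
  "?recordId=" + encodeURIComponent(autoRoute.recordId) +
  "&action=" + encodeURIComponent(autoRoute.action);

const response = await fetch(url);
console.log("n8n webhook responded with status", response.status);
```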
n8n Setup (Estimated time: 15-20 minutes):
Import the provided JSON workflow into n8n.
Webhook Node: Copy the "Test URL" from the Webhook node. This is the webhookUrl you'll use in the Airtable automations. Important: Once you've tested and are ready to go live, switch to the "Production URL."
Airtable Nodes: Configure all Airtable nodes (there are eight). You'll need to connect your Airtable account using OAuth 2. Select the correct Base ("Book Agency [v1] Cobuild" or whatever you named it) and Table ("Book" or "Chapter") for each node. The field mappings are already defined in the template, but double-check them.
LLM Nodes (Google Gemini & OpenAI): Connect your Google Gemini and OpenAI accounts to the respective LLM nodes. You'll need API keys for both. You may also configure different LLM models.
Perplexity Nodes: Connect your Perplexity AI API to the Perplexity nodes. You'll need an API key for that.
Activate the workflow.

Testing (Estimated time: 5-10 minutes):
Go to your Airtable "Book" table.
Create a new record and fill in the "Idea" field with a book concept.
Change the "Action" field to "Generate Book Details". The Airtable automation should trigger, sending a request to your n8n webhook.
Monitor the n8n execution log to see the workflow in action.
Check the Airtable record to see if the "Research" field is populated.
Repeat the testing for Generate Chapters, Generate Chapter Research and Generate Chapter Content.
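To picture what the Switch node keys on: the webhook call made by the Airtable script arrives with recordId and action as query parameters. A minimal sketch of reading and checking them in an n8n Code node placed after the Webhook node might look like this (assuming the default webhook output shape; the validation logic is illustrative, not the template's own).

```javascript
// Sketch: read the query parameters sent by the Airtable automation script and
// sanity-check the action before the Switch node routes on it. Assumes the default
// "Run Once for All Items" Code node mode and the standard Webhook node output.
const { recordId, action } = $input.first().json.query;

const knownActions = [
  "Generate Book Details",
  "Generate Chapters",
  "Generate Chapter Research",
  "Generate Chapter Content",
];

if (!knownActions.includes(action)) {
  throw new Error(`Unknown action "${action}" for record ${recordId}`);
}

return [{ json: { recordId, action } }];
```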
by Davide
This workflow is designed to analyze YouTube videos by extracting their transcripts, summarizing the content using AI models, and sending the analysis via email. It is ideal for content creators, marketers, or anyone who needs to quickly analyze and summarize YouTube videos for research, content planning, or educational purposes.

How It Works:
Trigger: The workflow starts with a manual trigger, allowing you to test it by clicking "Test workflow." You can also set a YouTube video URL manually or dynamically.
YouTube Video ID Extraction: The workflow extracts the YouTube video ID from the provided URL using a custom JavaScript function. This ID is necessary for fetching the transcript.
Transcript Generation: The video ID is sent via an HTTP request to generate the transcript. You need to replace APIKEY with a free API key from the service.
Transcript Validation: The workflow checks if a transcript exists for the video. If a transcript is available, it proceeds; otherwise, it stops.
Full Text Extraction: If a transcript exists, the workflow combines all transcript segments into a single text variable for further analysis.
AI-Powered Analysis: The full transcript is passed to an AI model (DeepSeek, OpenAI, or OpenRouter) for analysis. The AI generates a structured summary, including a title and key points, formatted in markdown.
Email Notification: The analysis results (title and summary) are sent via email using SMTP credentials. The email contains the structured summary of the video.

Set Up Steps:
YouTube Transcript API: Obtain a free API key from youtube-transcript.io and replace APIKEY in the "Generate transcript" node with your key.
AI Model Configuration: Configure the AI model nodes (DeepSeek, OpenAI, or OpenRouter) with the appropriate API credentials. You can choose one or multiple models depending on your preference.
Email Setup: Configure the "Send Email" node with your SMTP credentials (e.g., Gmail, Outlook, or any SMTP service). Ensure the email settings are correct to send the analysis results.

Key Features:
**Free Tools**: Uses youtube-transcript.io for free transcript generation.
**AI Models**: Supports multiple AI models (DeepSeek, OpenAI, OpenRouter) for flexible analysis.
**Email Notifications**: Sends the analysis results directly to your inbox.
**Customizable**: Easily adapt the workflow to analyze different videos or use different AI models.
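For reference, a video ID extractor of the kind the custom JavaScript function performs can be as simple as the sketch below; the regex and node shape are illustrative and the template's own function may differ.

```javascript
// Minimal sketch of extracting a YouTube video ID from common URL formats.
// Handles watch?v=, youtu.be/ and /embed/ style links; returns null otherwise.
function extractVideoId(url) {
  const match = url.match(
    /(?:youtube\.com\/(?:watch\?(?:.*&)?v=|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
  );
  return match ? match[1] : null;
}

// Example usage, e.g. inside an n8n Code node (example URL only):
const videoId = extractVideoId("https://www.youtube.com/watch?v=_fNAD1u8BZw");
return [{ json: { videoId } }];
```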
by Onur
Turn BBC News Articles into Podcasts using Hugging Face and Google Gemini
Effortlessly transform BBC news articles into engaging podcasts with this automated n8n workflow.

Who is this for?
This template is perfect for:
**Content creators** who want to quickly produce podcasts from current events.
**Students** looking for an efficient way to create audio content for projects or assignments.
**Individuals** interested in generating their own podcasts without technical expertise.

Setup Information
Install n8n: If you haven't already, download and install n8n from n8n.io.
Import the Workflow: Copy the JSON code for this workflow and import it into your n8n instance.
Configure Credentials:
Gemini API: Set up your Gemini API credentials in the workflow's LLM nodes.
Hugging Face Token: Obtain an access token from Hugging Face and add it to the HTTP Request node for the text-to-speech model.
Customize (Optional):
Filtering Criteria: Adjust the News Classifier node to fine-tune the selection of news articles based on your preferences.
Output Options: Modify the workflow to save the generated audio file to a cloud storage service or publish it to a podcast hosting platform.

Prerequisites
An active n8n instance.
Basic understanding of n8n workflows (no coding required).
API credentials for Gemini and a Hugging Face account with an access token.

What problem does it solve?
This workflow eliminates the manual effort involved in creating podcasts from news articles. It automates the entire process, from fetching and filtering news to generating the final audio file.

What are the benefits?
**Time-saving:** Create podcasts in minutes, not hours.
**Easy to use:** No coding or technical skills required.
**Customizable:** Adapt the workflow to your specific needs and preferences.
**Cost-effective:** Leverage free or low-cost services like Gemini and Hugging Face.

How does it work?
The workflow fetches news articles from the BBC website. It filters articles based on their suitability for a podcast. It extracts the full content of the selected articles. It uses the Gemini LLM to create a podcast script. It converts the script to speech using Hugging Face's text-to-speech model. The final podcast audio is ready for use.

Nodes in the Workflow
Fetch BBC News Page: Retrieves the main BBC News page.
News Classifier: Categorizes news articles using the Gemini LLM.
Fetch BBC News Detail: Extracts detailed content from suitable articles.
Basic Podcast LLM Chain: Generates a podcast script using the Gemini LLM.
HTTP Request: Converts the script to speech using Hugging Face.

I'm excited to share this workflow with the n8n community and help content creators and students easily produce engaging podcasts!

Additional Tips
Explore the n8n documentation and community resources for more advanced customization options.
Experiment with different filtering criteria and LLM prompts to achieve your desired podcast style.
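As a rough sketch of the text-to-speech step the HTTP Request node performs, a call to Hugging Face's hosted inference endpoint can look like this. The model name is only an example placeholder - use whichever TTS model your copy of the workflow actually points at, and expect the endpoint details to vary with your Hugging Face setup.

```javascript
// Sketch of the kind of call the HTTP Request node makes to Hugging Face's hosted
// inference endpoint for text-to-speech. The model name is an example placeholder.
const fs = require("fs");

const HF_TOKEN = "hf_your_access_token";
const MODEL = "facebook/mms-tts-eng"; // hypothetical TTS model choice

async function synthesize(text) {
  const response = await fetch(`https://api-inference.huggingface.co/models/${MODEL}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${HF_TOKEN}`, "Content-Type": "application/json" },
    body: JSON.stringify({ inputs: text }),
  });
  // The endpoint returns raw audio bytes for TTS models; save them to a file.
  const audio = Buffer.from(await response.arrayBuffer());
  fs.writeFileSync("podcast.flac", audio);
}

synthesize("Welcome to today's news roundup.");
```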
by Angel Menendez
CallForge - AI Gong Transcript PreProcessor
Transform your Gong.io call transcripts into structured, enriched, and AI-ready data for better sales insights and analytics.

Who is This For?
This workflow is designed for:
✅ Sales teams looking to automate call transcript formatting.
✅ Revenue operations (RevOps) professionals optimizing AI-driven insights.
✅ Businesses using Gong.io that need structured, enriched call transcripts for better decision-making.

What Problem Does This Workflow Solve?
Manually processing raw Gong call transcripts is inefficient and often lacks essential context for AI-driven insights. With CallForge, you can:
✔ Extract and format Gong call transcripts for structured AI processing.
✔ Enhance metadata using sales data from Salesforce.
✔ Classify speakers as internal (sales team) or external (customers).
✔ Identify external companies by filtering out free email domains (e.g., Gmail, Yahoo).
✔ Enrich customer profiles using PeopleDataLabs to identify company details and locations.
✔ Prepare transcripts for AI models by structuring conversations and removing unnecessary noise.

What This Workflow Does
1. Retrieves Gong Call Data
Calls the Gong API to extract call metadata, speaker interactions, and collaboration details. Fetches call transcripts for AI processing.
2. Processes and Cleans Transcripts
Converts call transcripts into structured, speaker-based dialogues. Assigns each speaker as either Internal (Sales Team) or External (Customer).
3. Extracts Company Information
Retrieves Salesforce data to match customers with existing sales opportunities. Filters out free email domains to determine the customer's actual company domain. Calls the PeopleDataLabs API to retrieve additional company data and location details.
4. Merges and Enriches Data
Combines Gong metadata with Salesforce customer details and insights. Ensures all necessary data is available for AI-driven sales insights.
5. Final Formatting for AI Processing
Merges all call transcript data into a single structured format for AI analysis. Extracts the final cleaned, enriched dataset for further AI-powered insights.

How to Set Up This Workflow
1. Connect Your APIs
🔹 Gong API Access – Set up your Gong API credentials in n8n.
🔹 Salesforce Setup – Ensure API access if you want customer enrichment.
🔹 PeopleDataLabs API – Required to retrieve company and location details based on email domains.
🔹 Webhook Integration – Modify the webhook call to push enriched call data to an internal system.

The CallForge series:
CallForge - 01 - Filter Gong Calls Synced to Salesforce by Opportunity Stage
CallForge - 02 - Prep Gong Calls with Sheets & Notion for AI Summarization
CallForge - 03 - Gong Transcript Processor and Salesforce Enricher
CallForge - 04 - AI Workflow for Gong.io Sales Calls
CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync
CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI
CallForge - 07 - AI Marketing Data Processing with Gong & Notion
CallForge - 08 - AI Product Insights from Sales Calls with Notion

How to Customize This Workflow
💡 Modify Data Sources – Connect different CRMs (e.g., HubSpot, Zoho) instead of Salesforce.
💡 Expand AI Analysis – Add another AI model (e.g., OpenAI GPT, Claude) for advanced conversation insights.
💡 Change Speaker Classification Rules – Adjust internal vs. external speaker logic to match your team's structure.
💡 Filter Specific Customers – Modify the free email filtering logic to better fit your company's needs.

Why Use CallForge?
🚀 Automate Gong call transcript processing to save time. 📊 Improve AI accuracy with enriched, structured data. 🛠 Enhance sales strategy by extracting actionable insights from calls. Start optimizing your Gong transcript analysis today!
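To make the speaker-classification and free-email-filtering steps concrete, here is a small sketch of the kind of logic involved; the domain list, internal domain and data shapes are illustrative assumptions, not the template's exact code.

```javascript
// Illustrative sketch of two steps described above: classifying speakers as
// internal vs. external, and filtering out free email domains to find the
// customer's real company domain. Data shapes and the domain list are assumptions.
const FREE_DOMAINS = new Set(["gmail.com", "yahoo.com", "outlook.com", "hotmail.com"]);
const INTERNAL_DOMAIN = "yourcompany.com"; // placeholder for the sales team's domain

const speakers = [
  { name: "Alex Rep", email: "alex@yourcompany.com" },
  { name: "Jane Buyer", email: "jane@acme.io" },
  { name: "Sam Gmail", email: "sam@gmail.com" },
];

const classified = speakers.map((s) => {
  const domain = s.email.split("@")[1].toLowerCase();
  return {
    ...s,
    role: domain === INTERNAL_DOMAIN ? "Internal (Sales Team)" : "External (Customer)",
    companyDomain: FREE_DOMAINS.has(domain) || domain === INTERNAL_DOMAIN ? null : domain,
  };
});

// The first non-free external domain is treated as the customer's company domain,
// which is what gets sent to PeopleDataLabs for enrichment.
const customerDomain = classified.find((s) => s.companyDomain)?.companyDomain ?? null;
console.log(classified, customerDomain);
```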
by Mark Shcherbakov
Video Guide
I prepared a detailed guide that demonstrates the complete process of building a trading agent automation using n8n and Telegram, seamlessly integrating various functions for stock analysis. YouTube Link

Who is this for?
This workflow is perfect for traders, financial analysts, and developers looking to automate stock analysis interactions via Telegram. It's especially valuable for those who want to leverage AI tools for technical analysis without needing to write complex code.

What problem does this workflow solve?
Many traders want real-time analysis of stock data but lack the technical expertise or tools to perform in-depth analysis. This workflow allows users to easily interact with an AI trading agent through Telegram for seamless stock analysis, chart generation, and technical evaluation, all while eliminating the need for manual interventions.

What this workflow does
This workflow utilizes n8n to construct an end-to-end automation process for stock analysis through Telegram communication. The setup involves:
Receiving messages via a Telegram bot.
Processing audio or text messages for trading queries.
Transcribing audio using the OpenAI API for interpretation.
Gathering and displaying charts based on user-specified parameters.
Performing technical analysis on generated charts.
Sending back the analyzed results through Telegram.

Setup
Prepare Airtable: Create a simple table to store tickers.
Prepare Telegram Bot: Ensure your Telegram bot is set up correctly and listening for new messages.
Replace Credentials: Update all nodes with the correct credentials and API keys for the services involved.
Configure API Endpoints: Ensure chart service URLs are correctly set to interact with the corresponding APIs properly.
Start Interaction: Message your bot to initiate analysis; specify ticker symbols and desired chart styles as required.
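As a rough sketch of the "audio or text" routing step, an incoming Telegram update can be inspected like this; the field names follow the Telegram Bot API, while the node shape and routing labels are illustrative assumptions.

```javascript
// Sketch of routing an incoming Telegram update by message type. In the Telegram
// Bot API, voice notes arrive under message.voice and plain text under message.text.
// Assumes the default "Run Once for All Items" Code node mode after a Telegram Trigger.
const update = $input.first().json;

const message = update.message ?? {};
let route;

if (message.voice) {
  route = "audio"; // hand the voice file to OpenAI transcription first
} else if (message.text) {
  route = "text"; // pass the text straight to the trading agent
} else {
  route = "unsupported";
}

return [{ json: { route, chatId: message.chat?.id, text: message.text ?? null } }];
```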
by Jimleuk
This n8n template uses a Telegram chatbot to conduct a Product Satisfaction Survey, fetching questions from and storing answers in a Google Sheet. It uses an AI Agent to ask follow-up questions to engage the user and uncover more insights in their responses. This template is intended to demonstrate how you'd realistically approach a workflow where there is structured conversation (static questions) but you still want to include a free-form element (follow-up questions), which can only be accomplished via AI.

Check out example survey results: https://docs.google.com/spreadsheets/d/e/2PACX-1vQWcREg75CzbZd8loVI12s-DzSTj3NE_02cOCpAh7umj0urazzYCfzPpYvvh7jqICWZteDTALzBO46i/pubhtml?gid=0&single=true

How it works
A chat session is started with the user, who needs to enter the bot command "/next" to start the survey. Once started, the template pulls in questions from a Google Sheet to ask the user. Questions are asked in sequence from the left column to the right column. When the user answers a question, a text classifier node is used to determine if a follow-up question could be asked. If so, a mini conversation is initiated by the AI agent to get more details. If not, the survey proceeds to the next question. All answers and mini-conversations are recorded in the Google Sheet under the respective question. When all questions are answered, the template stops the survey and gives the user a chance to restart.

How to use
You'll need to set up a Telegram bot (see docs).
Create a Google Sheet with an ID column. Populate the rest of the columns with your survey questions (see sample).
Ensure you have a Redis instance to capture state. Either self-host or sign up to Upstash for a free account.
Update the "Set Variable" node with your Google Sheet ID and survey title.
Share your bot to allow others to participate in your survey.

Requirements
Telegram for Chatbot
Google Sheets for Survey questions and answers
Redis for State Management and Chat Memory
Community+ license and above for the Execution Data node - you can remove this node if you don't have this licence.

Customising this workflow
Not using Telegram? This template technically works with other chat apps such as WhatsApp, WeChat and even n8n's hosted chat! This state management pattern can also be applied to other use-cases and scenarios. Try it for other types of surveys!
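For a feel of the state-management pattern, a minimal sketch of tracking each chat's survey position in Redis might look like this. The key naming, the ioredis client and the question array are assumptions for illustration, not the template's exact implementation.

```javascript
// Minimal sketch of per-chat survey state in Redis, using the ioredis client.
// The key format and fields are illustrative; the template's own pattern may differ.
const Redis = require("ioredis");
const redis = new Redis(process.env.REDIS_URL);

async function nextQuestion(chatId, questions) {
  const key = `survey:${chatId}`;
  // The current position defaults to 0 for a fresh "/next" command.
  const index = parseInt((await redis.get(key)) ?? "0", 10);

  if (index >= questions.length) {
    await redis.del(key); // survey finished, allow a restart
    return null;
  }

  await redis.set(key, index + 1);
  return questions[index];
}

// Usage: the questions come from the Google Sheet columns, asked left to right.
// nextQuestion("123456789", ["How satisfied are you?", "What could we improve?"]);
```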
by Joseph LePage
Compare Local Ollama Vision Models for Image Analysis using Google Docs
Process images using locally hosted Ollama vision models to extract detailed descriptions, contextual insights, and structured data. Save results directly to Google Docs for efficient collaboration.

Who is this for?
This workflow is ideal for developers, data analysts, marketers, and AI enthusiasts who need to process and analyze images using locally hosted Ollama vision language models. It's particularly useful for tasks requiring detailed image descriptions, contextual analysis, and structured data extraction.

What problem is this workflow solving? / Use Case
The workflow solves the challenge of extracting meaningful insights from images in exhaustive detail, such as identifying objects, analyzing spatial relationships, extracting textual elements, and providing contextual information. This is especially helpful for applications in real estate, marketing, engineering, and research.

What this workflow does
This workflow:
Downloads an image file from Google Drive.
Processes the image using multiple Ollama vision models (e.g., Granite3.2-Vision, Gemma3, Llama3.2-Vision).
Generates detailed markdown-based descriptions of the image.
Saves the output to a Google Docs file for easy sharing and further analysis.

Setup
Ensure you have access to a local instance of Ollama: https://ollama.com/
Pull the Ollama vision models.
Configure your Google Drive and Google Docs credentials in n8n.
Provide the image file ID from Google Drive in the designated node.
Update the list of Ollama vision models.
Test the workflow by clicking 'Test Workflow' to trigger the process.

How to customize this workflow to your needs
Replace the image source with another provider if needed (e.g., AWS S3 or Dropbox).
Modify the prompts in the "General Image Prompt" node to suit specific analysis requirements.
Add additional nodes for post-processing or integrating results into other platforms like Slack or HubSpot.

Key Features:
**Detailed Image Analysis**: Extracts comprehensive details about objects, spatial relationships, text elements, and contextual settings.
**Multi-Model Support**: Utilizes multiple vision models dynamically for optimal performance.
**Markdown Output**: Formats results in markdown for easy readability and documentation.
**Google Drive Integration**: Seamlessly downloads images and saves results to Google Docs.
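To illustrate what "processing the image using multiple Ollama vision models" boils down to, here is a rough sketch against Ollama's local /api/generate endpoint. The model tags and prompt are examples, the image is assumed to be available locally as base64, and the models are assumed to have been pulled already.

```javascript
// Sketch: ask several locally hosted Ollama vision models to describe the same image
// via Ollama's /api/generate endpoint. Model tags and the prompt are examples only;
// assumes Ollama runs on its default port and the models have already been pulled.
const fs = require("fs");

async function describeImage(path) {
  const imageBase64 = fs.readFileSync(path).toString("base64");
  const models = ["llama3.2-vision", "granite3.2-vision", "gemma3"]; // example tags
  const prompt = "Describe this image in exhaustive detail, formatted as markdown.";

  for (const model of models) {
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, images: [imageBase64], stream: false }),
    });
    const data = await res.json();
    console.log(`--- ${model} ---\n${data.response}\n`);
  }
}

describeImage("photo.jpg");
```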
by Jaruphat J.
Overview
This workflow automatically saves files received via the LINE Messaging API into Google Drive and logs the file details into a Google Sheet. It checks the file type against allowed types, organizes files into date-based folders and (optionally) file type–specific subfolders, and sends a reply message back to the LINE user with the file URL or an error message if the file type is not permitted.

Who is this for?
Developers & IT Administrators: Looking to integrate LINE with Google Drive and Sheets for automated file management.
Businesses & Marketing Teams: That want to automatically archive media files and documents received from users via LINE.
Anyone Interested in No-Code Automation: Users who want to leverage n8n's capabilities without heavy coding.

What Problem Does This Workflow Solve?
Automated File Organization: Files received from LINE are automatically checked for allowed file types, then stored in a structured folder hierarchy in Google Drive (by date and/or file type).
Data Logging: Each file upload is recorded in a Google Sheet, providing an audit trail with file names, upload dates, URLs, and types.
Instant Feedback: Users receive an immediate reply via LINE confirming the file upload, or an error message if the file type is not allowed.

What This Workflow Does
1. Receives Incoming Requests: A webhook node ("LINE Webhook Listener") listens for POST requests from LINE, capturing file upload events and associated metadata.
2. Configuration Loading: A Google Sheets node ("Get Config") reads configuration data (e.g., parent folder ID, allowed file types, folder organization settings, and credentials) from a pre-defined sheet.
3. Data Merging & Processing: The "Merge Event and Config Data" and "Process Event and Config Data" nodes merge and structure the event data with configuration settings. A "Determine Folder Info" node calculates folder names based on the configuration. If Store by Date is enabled, it uses the current date (or a specified date) as the folder name. If Store by File Type is also enabled, it uses the file's type (e.g., image) for a subfolder.
4. Folder Search & Creation: The workflow searches for an existing date folder ("Search Date Folder"). If the date folder is not found, an IF node ("Check Existing Date Folder") routes to a "Create Date Folder" node. Similarly, for file type organization, the workflow uses a "Search FileType Folder" node (with appropriate conditions) to look for a subfolder, or creates it if not found. The "Set Date Folder ID" and "Set Image Folder ID" nodes capture and merge the resulting folder IDs. Finally, the "Config final ParentId" node sets the final target folder ID based on the configuration conditions:
Store by Date: TRUE, Store by File Type: TRUE: Use the file type folder (inside the date folder).
Store by Date: TRUE, Store by File Type: FALSE: Use the date folder.
Store by Date: FALSE, Store by File Type: TRUE: Use the file type folder.
Store by Date: FALSE, Store by File Type: FALSE: Use the Parent Folder ID from the configuration.
5. File Retrieval and Validation: An HTTP Request node ("Get File Binary Content") fetches the file's binary data from the LINE API. A Function node ("Validate File Type") checks if the file's MIME type is included in the allowed list (e.g., "audio|image|video"). If not, it throws an error that is captured for the reply.
6. File Upload and Logging: The "Upload File to Google Drive" node uploads the validated binary file to the final target folder. After a successful upload, the "Log File Details to Google Sheet" node logs details such as file name, upload date, Google Drive URL, and file type into a designated Google Sheet.
7. User Feedback: The "Check Reply Enabled Flag" node checks if the reply feature is enabled. Finally, the "Send LINE Reply Message" node sends a reply message back to the LINE user with either the file URL (if the upload was successful) or an error message (if the file type was not allowed).

Setup Instructions
1. Google Sheets Setup:
Create a Google Sheet with two sheets:
config: Include columns for Parent Folder Path, Parent Folder ID, Store by Date (boolean), Store by File Type (boolean), Allow File Types (e.g., "audio|image|video"), CurrentDate, Reply Enabled, and CHANNEL ACCESS TOKEN.
fileList: Create headers for File Name, Date Uploaded, Google Drive URL, and File Type.
For an example of the required format, check this Google Sheets template: Google Sheet Template
2. Google Drive Credentials: Set up and authorize your Google Drive credentials in n8n.
3. LINE Messaging API: Configure your LINE Developer Console webhook to point to the n8n Webhook URL ("Line Chat Bot" node). Ensure you have the proper Channel Access Token stored in your Google Sheet.
4. n8n Workflow Import: Import the provided JSON file into your n8n instance. Verify node connections and update any credential references as needed.
5. Test the Workflow: Send a test message via LINE to confirm that files are properly validated, uploaded, logged, and that reply messages are sent.

How to Customize This Workflow
Allowed File Types: Adjust the "Allow File Types" value in your config sheet to control which file types are accepted by the "Validate File Type" node.
Folder Structure: Modify the logic in the "Determine Folder Info" and subsequent folder nodes to change how folders are structured (e.g., use different date formats or add additional categorization).
Logging: Update the "Log File Details to Google Sheet" node if you wish to log additional file metadata.
Reply Messages: Customize the reply text in the "Send LINE Reply Message" node to include more detailed information or instructions.
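The validation step can be pictured as a small Function-node check along these lines; the property names and sample values are assumptions for illustration, and the allowed-types string simply mirrors the config example above.

```javascript
// Sketch of the MIME-type check the "Validate File Type" Function node performs.
// Assumes the allowed types come from the config sheet as a pipe-separated string
// (e.g. "audio|image|video") and the downloaded file's MIME type is known.
const allowFileTypes = "audio|image|video"; // from the config sheet (illustrative)
const mimeType = "image/jpeg"; // e.g. taken from the LINE content response headers

const allowed = allowFileTypes
  .split("|")
  .some((type) => mimeType.toLowerCase().startsWith(type.trim() + "/"));

if (!allowed) {
  // The thrown error is what the reply branch reports back to the LINE user.
  throw new Error(`File type "${mimeType}" is not allowed (allowed: ${allowFileTypes})`);
}

return [{ json: { mimeType, allowed: true } }];
```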