by Khairul Muhtadin
This workflow auto-ingests Google Drive documents, parses them with LlamaIndex, and stores Azure OpenAI embeddings in an in-memory vector store—cutting manual update time from ~30 minutes to under 2 minutes per doc.

Why Use This Workflow?

Cost Reduction: eliminates the monthly cloud fees you would otherwise pay just to store knowledge.

Ideal For

- **Knowledge Managers / Documentation Teams:** Automatically keep product docs and SOPs in sync when source files change on Google Drive.
- **Support Teams:** Ensure the searchable KB is always up to date after doc edits, speeding agent onboarding and resolution time.
- **Developer / AI Teams:** Populate an in-memory vector store for experiments, rapid prototyping, or local RAG demos.

How It Works

1. Trigger: the Google Drive Trigger watches a specific document or folder for updates.
2. Data Collection: the updated file is downloaded from Google Drive.
3. Processing: the file is uploaded to LlamaIndex Cloud via an HTTP Request to create a parsing job.
4. Intelligence Layer: the workflow polls the LlamaIndex job status (Wait + Monitor loop). If the parsing status equals SUCCESS, the result is retrieved as Markdown.
5. Output & Delivery: parsed Markdown is loaded into LangChain's Default Data Loader, passed to Azure OpenAI embeddings (deployment "3small"), then inserted into an in-memory vector store.
6. Storage & Logging: the vector store holds embeddings in memory (good for prototyping). Optionally persist to an external vector DB for production.
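The poll-until-SUCCESS loop that the Wait and If nodes implement declaratively can be sketched in plain JavaScript. The endpoint path mirrors the LlamaIndex Cloud parsing API used in this workflow; the host, delay, and attempt limit below are illustrative assumptions.

```javascript
// Sketch of the Wait + Monitor loop: poll the parsing job until it succeeds.
// Endpoint path follows LlamaIndex Cloud's parsing API; verify against your account.
async function waitForParsingJob(jobId, apiKey, fetchImpl = fetch, delayMs = 60000, maxAttempts = 10) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetchImpl(`https://api.cloud.llamaindex.ai/api/parsing/job/${jobId}`, {
      headers: { Authorization: `Bearer ${apiKey}` }, // HTTP Header Auth credential
    });
    const job = await res.json();
    if (job.status === "SUCCESS") return job;        // If node: status equals SUCCESS
    if (job.status === "ERROR") throw new Error(`Parsing failed for job ${jobId}`);
    await new Promise((r) => setTimeout(r, delayMs)); // Wait node
  }
  throw new Error(`Job ${jobId} did not finish after ${maxAttempts} checks`);
}
```

In n8n the same logic is expressed as the Monitor Document Processing, Check Parsing Completion, and Wait nodes wired into a loop.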
Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Execute and import the workflow |
| Google Drive OAuth2 | Essential | Watch and download documents from Google Drive |
| LlamaIndex Cloud API | Essential | Parse and convert documents to structured Markdown |
| Azure OpenAI account | Essential | Generate embeddings (deployment configured with model name "3small") |
| Persistent vector DB (e.g., Pinecone) | Optional | Persist embeddings for production-scale search |

Installation Steps

1. Import the workflow JSON into your n8n instance: open your n8n instance and import the file.
2. Configure credentials:
   - Azure OpenAI: provide the endpoint and API key, and set the deployment name.
   - LlamaIndex API: create an HTTP Header Auth credential in n8n. Header name: Authorization. Header value: Bearer YOUR_API_KEY.
   - Google Drive OAuth2: create OAuth 2.0 credentials in Google Cloud Console, enable the Drive API, and configure the Google Drive OAuth2 credential in n8n.
3. Update environment-specific values: replace the workflow's Google Drive fileId with the file or folder ID you want to watch (do not commit public IDs).
4. Customize settings:
   - Polling interval (Wait node): adjust for faster or slower job-status checks.
   - Target file or folder: toggled on the Google Drive Trigger node.
   - Embedding model: change the Azure OpenAI deployment if needed.
5. Test execution: save changes and trigger a sample file update on Drive. Verify each node runs and the vector store receives embeddings.
Technical Details

Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Knowledge Base Updated Trigger (Google Drive Trigger) | Triggers on file/folder changes | Set trigger type to a specific file or folder; configure the OAuth2 credential |
| Download Knowledge Document (Google Drive) | Downloads the file binary | Operation: download; ensure the OAuth2 credential is selected |
| Parse Document via LlamaIndex (HTTP Request) | Uploads the file to the LlamaIndex parsing endpoint | POST multipart/form-data to /parsing/upload; use the HTTP Header Auth credential |
| Monitor Document Processing (HTTP Request) | Polls the parsing job status | GET /parsing/job/{{jobId}}; check the status field |
| Check Parsing Completion (If) | Branches on job status | Condition: {{$json.status}} equals SUCCESS |
| Retrieve Parsed Content (HTTP Request) | Fetches the parsed Markdown result | GET /parsing/job/{{jobId}}/result/markdown |
| Default Data Loader (LangChain) | Loads parsed Markdown into document format | Use as the document source for embeddings |
| Embeddings Azure OpenAI | Generates embeddings for documents | Credentials: Azure OpenAI; model/deployment: 3small |
| Insert Data to Store (vectorStoreInMemory) | Stores documents + embeddings | Use the memory store for prototyping; switch to a DB for persistence |

Workflow Logic

On a Drive change, the file binary is downloaded and sent to LlamaIndex. The workflow then enters a monitor loop: Monitor Document Processing fetches the job status and the If node checks it. If the status is not SUCCESS, the Wait node delays before the next check. When parsing completes, the workflow retrieves the Markdown, loads the documents, creates embeddings via Azure OpenAI, and inserts the data into an in-memory vector store.

Customization Options

Basic Adjustments:
- Poll delay: set the Wait node (default: every minute) to balance speed vs. API quota.
- Target scope: switch the trigger from a single file to a folder to auto-handle many docs.
- Embedding model: swap the Azure deployment for a different model name as needed.
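When tuning the poll delay, a common refinement is to replace the fixed Wait interval with exponential backoff so early checks are fast and later checks spare your API quota. A minimal sketch; the base and cap values are illustrative, not part of the template:

```javascript
// Exponential backoff with a cap: 5s, 10s, 20s, ... up to maxMs.
// Track the attempt counter in workflow static data (or pass it around the loop)
// and feed the result into the Wait node's duration.
function backoffDelayMs(attempt, baseMs = 5000, maxMs = 300000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```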
Advanced Enhancements:
- Persistent vector DB integration: replace vectorStoreInMemory with Pinecone or Milvus for production search.
- Notifications: add Slack or email nodes to notify when parsing completes or fails.
- Summarization: add an LLM summarization step to generate chunk-level summaries.
- Scaling: batch uploads and chunking to reduce embedding calls; use a queue (Redis or n8n queue patterns) and horizontal workers for high throughput.

Performance & Optimization

| Metric | Expected Performance | Optimization Tips |
|--------|----------------------|-------------------|
| Execution time (per doc) | ~10 s–2 min (depends on file size and LlamaIndex processing) | Chunk large docs; run embeddings in batches |
| API calls (per doc) | 3–8 (upload, poll(s), retrieve, embedding calls) | Increase the poll interval; consolidate requests |
| Error handling | Retries via the Wait loop and If checks | Add exponential backoff, failure notifications, and retry limits |

Troubleshooting

| Problem | Cause | Solution |
|---------|-------|----------|
| Authentication errors | Invalid/missing credentials | Reconfigure n8n credentials; do not paste API keys directly into nodes |
| File not found | Incorrect fileId or permissions | Verify the Drive fileId and OAuth scopes; share the file with the service account if needed |
| Parsing stuck in PENDING | LlamaIndex processing delay or rate limit | Increase the Wait node interval, monitor the LlamaIndex dashboard, add retry limits |
| Embedding failures | Model/deployment mismatch or quota limits | Confirm the Azure deployment name (3small) and subscription quotas |

Created by: khmuhtadin
Category: Knowledge Management
Tags: google-drive, llamaindex, azure-openai, embeddings, knowledge-base, vector-store
Need custom workflows? Contact us
by Frederik Duchi
This n8n template demonstrates how to automatically process feedback on tasks and procedures using an AI agent. Employees provide feedback after completing a task, which the AI then analyzes to suggest improvements to the underlying procedures. Improvements can change how a single task is executed, or split or merge tasks within a procedure. Management reviews the suggestions and decides whether to implement them. This makes it easy to close the loop between execution, feedback, and continuous process improvement.

Use cases are many:
- Marketing (improve the process of approving advertising content)
- Finance (optimize the process of expense reimbursement)
- Operations (refine the process of equipment maintenance)

Good to know

- The automation is based on the Baserow template for handling Standard Operating Procedures. However, it can also be implemented with other databases.
- Baserow authentication is done through a database token. Check the documentation on how to create such a token.
- Tasks are inserted using the HTTP Request node instead of a dedicated Baserow node. This supports batch import instead of importing records one by one.

Requirements

- Baserow account (cloud or self-hosted)
- The Baserow template for handling Standard Operating Procedures, or a similar database with the following tables and fields:
  - Procedures table with general procedure information such as the name or description.
  - Procedure steps table with all the steps associated with a procedure.
  - Tasks table that contains the actual tasks based on the procedure steps. It must have a field to capture feedback, and a boolean field to indicate whether the feedback has been processed. This prevents the same feedback from being used repeatedly.
  - Improvement suggestions table to store the suggestions made by the AI agent.
How it works

- **Set table and field ids**: stores the ids of the involved Baserow database and tables, together with the information needed to make requests to the Baserow API.
- **Feedback processing agent**: the prompt contains a short instruction to check the feedback and suggest improvements to the procedures. The system message is much more extensive, providing as much detail and guidance to the agent as possible. It contains the following sections:
  - Role: giving the agent a clear professional perspective.
  - Goals: allowing the agent to focus on clarity, efficiency, and actionable improvements.
  - Instructions: guiding the agent through a step-by-step flow.
  - Output: showing the agent the expected format and details.
  - Notes: setting guardrails so the agent makes justified and practical suggestions.

  The agent uses the following nodes:
  - OpenAI Chat Model (Model): the template uses the gpt-4.1 model from OpenAI by default, but you can replace it with any LLM.
  - current_procedures (Tool): provides information about all available procedures to the agent.
  - current_procedure steps (Tool): provides information about every step in the procedures to the agent.
  - tasks_feedback (Tool): provides the employees' feedback to the agent.
  - Required output schema (Output parser): forces the agent to output JSON matching the Improvement suggestions table structure, which makes it easy to add the suggestions to the database in the next step.
- **Create improvement suggestions**: calls the API endpoint /api/database/rows/table/{table_id}/batch/ to insert multiple records at once into the Improvement suggestions table. The inserted records are the output generated by the AI agent. Check the Baserow API documentation for further details.
- **Get non-processed feedback**: gets all records from the Tasks table that contain feedback but are not yet marked as processed.
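The batch-insert step above boils down to a single POST against the Baserow endpoint. A sketch using fetch; the base URL, table ID, and token are placeholders, and the row field names must match your Improvement suggestions table:

```javascript
// POST multiple rows to Baserow in one request.
// `rows` is the AI agent's output, already shaped like the table's fields.
async function batchInsertRows(baserowUrl, tableId, token, rows, fetchImpl = fetch) {
  const res = await fetchImpl(
    `${baserowUrl}/api/database/rows/table/${tableId}/batch/?user_field_names=true`,
    {
      method: "POST",
      headers: {
        Authorization: `Token ${token}`, // Baserow database-token auth
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ items: rows }), // Baserow's batch endpoint expects an `items` array
    }
  );
  if (!res.ok) throw new Error(`Baserow batch insert failed: ${res.status}`);
  return res.json();
}
```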
- **Set feedback to processed**: updates the boolean field for each task to true to indicate that the feedback has been processed.
- **Aggregate records for input**: aggregates the data from the previous nodes as an array in a property named items. This matches the Baserow batch API for inserting new records perfectly.
- **Update tasks to processed feedback**: calls the API endpoint /api/database/rows/table/{table_id}/batch/ to update multiple records at once in the Tasks table. The updated records have their processed field set to true. Check the Baserow API documentation for further details.

How to use

- The Manual Trigger node is provided as an example, but you can replace it with other triggers such as a webhook.
- The included Baserow SOP template works perfectly as a base schema to try out this workflow.
- Set the corresponding ids in the Configure settings and ids node.
- Check that the field names for the filters in the tasks_feedback tool node match the ones in your Tasks table.
- Check that the field names for the filters in the Get non-processed feedback node match the ones in your Tasks table.
- Check that the property name in the Set feedback to processed node matches the one in your Tasks table.

Customising this workflow

- You can add a new workflow that updates the procedures based on acceptance or rejection by management.
- There is a lot of customization possible in the system prompt. For example: change the goal to prioritize security, cost savings, or customer experience.
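The "Set feedback to processed" and "Aggregate records for input" steps above can be sketched as one n8n Code node. Assumptions: each incoming item carries a Baserow row id, and the boolean field is called `Processed` (rename to match your Tasks table):

```javascript
// Collect all incoming items into one `items` array, setting each row's
// boolean field to true, ready for the batch PATCH request to Baserow.
function toBatchUpdatePayload(inputItems, processedField = "Processed") {
  return {
    items: inputItems.map((item) => ({
      id: item.json.id,            // Baserow row id
      [processedField]: true,       // mark feedback as handled
    })),
  };
}

// In an n8n Code node: return [{ json: toBatchUpdatePayload($input.all()) }];
```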
by Davide
This workflow creates a voice AI assistant accessible via Telegram that leverages ElevenLabs' powerful voice synthesis technology. Users can either clone their own voice or transform their voice using pre-existing voice models, all through simple voice messages sent to a Telegram bot.

*ONLY FOR STARTER, CREATOR, PRO PLAN*

This workflow allows users to:
- Clone their voice by sending a voice message to a Telegram bot (creates a new voice profile on ElevenLabs)
- Change their voice to a cloned voice and save the output to Google Drive

For Best Results

Important considerations for optimal voice cloning via Telegram voice messages:

1. Recording Quality & Environment
- Record in a quiet room with minimal echo and background noise
- Use a consistent microphone position (10–15 cm from mouth)
- Ensure clear audio without distortion or clipping

2. Content Selection & Variety
- Send voice messages totaling 5–10 minutes of speech
- Include diverse vocal sounds, tones, and a natural speaking cadence
- Use complete sentences rather than isolated words

3. Audio Consistency
- Maintain consistent volume, tone, and distance from the microphone
- Avoid interruptions, laughter, coughs, or background voices
- Speak naturally without artificial effects or filters

4. Technical Preparation
- Ensure Telegram isn't overly compressing the audio (use HQ recording)
- Record all messages in the same session under the same conditions
- Include both neutral speech and varied emotional expressions

How it works

- Trigger: the workflow starts with a Telegram trigger that listens for incoming messages (text, voice notes, or photos).
- Authorization check: a Code node checks whether the sender's Telegram user ID matches your predefined ID. If not, the process stops.
- Message routing: a Switch node routes the message based on its type:
  - Text → not processed further in this flow.
  - Voice message → sent to the "Get audio" node to retrieve the audio file from Telegram.
  - Photo → not processed further in this flow.
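The authorization Code node described above can be as small as comparing the sender's ID against a constant. A sketch; the ID value is a placeholder, matching the "replace XXX" step in the setup instructions:

```javascript
// Stop the workflow unless the message comes from the allowed Telegram user.
const ALLOWED_TELEGRAM_USER_ID = 123456789; // placeholder: your own numeric ID

function isAuthorized(message, allowedId = ALLOWED_TELEGRAM_USER_ID) {
  return Boolean(message && message.from && message.from.id === allowedId);
}

// In the n8n Code node: if (!isAuthorized($json.message)) { return []; }
// Returning an empty array stops the branch for unauthorized senders.
```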
Two main options

From the "Get audio" node, the workflow splits into two possible paths:

- Option 1 – Clone voice: the audio file is sent to ElevenLabs via an HTTP request to create a new cloned voice. The voice ID is returned and can be saved for later use.
- Option 2 – Voice changer: the audio is sent to ElevenLabs for speech-to-speech conversion using a pre-existing cloned voice (the voice ID must be set in the node parameters). The resulting audio is saved to Google Drive.

Output

- Cloned voice ID (for Option 1).
- Converted audio file uploaded to Google Drive (for Option 2).

Set up steps

1. Telegram bot setup: create a bot via BotFather and obtain the API token. Set up the Telegram Trigger node with your bot credentials.
2. Authorization configuration: in the "Sanitaze" Code node, replace XXX with your Telegram user ID to restrict access.
3. ElevenLabs API setup: get an API key from ElevenLabs. Configure the HTTP Request nodes ("Create Cloned Voice" and "Generate cloned audio") with the API key in the Xi-Api-Key header and the appropriate endpoint URLs (including the voice ID for speech-to-speech).
4. Google Drive setup (for Option 2): set up Google Drive OAuth2 credentials in n8n and specify the target folder ID in the "Upload file" node.
5. Voice ID configuration: for voice cloning, the voice name can be customized in the "Create Cloned Voice" node. For voice changing, replace XXX in the "Generate cloned audio" node URL with your ElevenLabs voice ID.
6. Test the workflow: activate the workflow and send a voice note from your authorized Telegram account to trigger cloning or voice conversion.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
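For reference, the "Generate cloned audio" request (Option 2) can be sketched as a request builder. The endpoint path and field names follow ElevenLabs' public speech-to-speech API as I understand it; treat the URL and multipart field name as assumptions and verify them against the ElevenLabs docs:

```javascript
// Build the speech-to-speech request: voice ID in the URL, API key in the header,
// and the Telegram voice note as a multipart audio file.
function buildSpeechToSpeechRequest(voiceId, apiKey, audioBlob) {
  const form = new FormData();
  form.append("audio", audioBlob, "input.ogg"); // Telegram voice notes arrive as OGG
  return {
    url: `https://api.elevenlabs.io/v1/speech-to-speech/${voiceId}`,
    options: {
      method: "POST",
      headers: { "xi-api-key": apiKey }, // the Xi-Api-Key header from the setup steps
      body: form,
    },
  };
}
```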
by Trung Tran
Beginner’s Tutorial: Manage Google Cloud Storage Buckets and Objects with n8n

Watch the demo video below:

Who’s it for

- Beginners who want to learn how to automate Google Cloud Storage (GCS) operations with n8n.
- Developers who want to combine AI image generation with cloud storage management.
- Anyone looking for a simple introduction to working with Buckets and Objects in GCS.

How it works / What it does

This workflow demonstrates end-to-end usage of Google Cloud Storage with AI integration:

1. Trigger: start manually by clicking Execute Workflow.
2. Edit Fields: provide input values (e.g., bucket name or image description).
3. List Buckets: retrieve all existing buckets in the project (branch: view only).
4. Create Bucket: if needed, create a new bucket to store objects.
5. Prompt Generation Agent: use an AI model to generate a creative text prompt.
6. Generate Image: convert the AI-generated prompt into an image.
7. Upload Object: store the generated image as an object in the selected bucket.
8. Delete Object: clean up by removing the uploaded object if required.

This shows the full lifecycle, Bucket → Object (create/upload/delete), combined with AI image generation.

How to set up

1. Trigger the workflow: use the When clicking Execute workflow node to start manually.
2. Provide inputs: in Edit Fields, specify details such as the bucket name or description text for the image.
3. List buckets: use the List Buckets node to see what exists.
4. Create a bucket: use Create Bucket if you want a new storage bucket.
5. Generate prompt & image: the Prompt Generation Agent uses an OpenAI Chat Model to create an image prompt; the Generate an Image node turns this prompt into an actual image.
6. Upload to bucket: use Create Object to upload the generated image into your GCS bucket.
7. Delete object (optional): use Delete Object to remove the file from the bucket as a cleanup step.

Requirements

- An active Google Cloud account with the Cloud Storage API enabled.
- A Service Account Key (JSON) credential added in n8n for GCS.
- An OpenAI API Key configured in n8n for the prompt and image generation nodes.
- Basic familiarity with running workflows in n8n.

How to customize the workflow

- **Different object types:** instead of images, upload PDFs, logs, or text files.
- **Automatic cleanup:** skip the delete step if you want objects to persist.
- **Schedule trigger:** replace manual execution with a weekly or daily schedule.
- **Dynamic prompts:** accept user input from a form or webhook to generate images.
- **Multi-bucket management:** extend the logic to manage multiple buckets across projects.
- **Notifications:** add a Slack/email step after upload to notify your team with the object URL.

✅ By the end of this tutorial, you’ll understand how to:
- Work with Buckets (list, create).
- Work with Objects (upload, delete).
- Integrate AI image generation with Google Cloud Storage.
by Madame AI
Analyze job market data with AI to find matching jobs

This n8n template helps you stay on top of the job market by matching scraped job offers with your resume using an AI Agent. This workflow is perfect for job seekers, recruiters, or market analysts who need to find specific job opportunities without manually sifting through countless listings.

Steps to Take

1. **Create BrowserAct Workflow:** set up the **Job Market Intelligence** template in your BrowserAct account.
2. **Add BrowserAct Token:** connect your BrowserAct account credentials to the **HTTP Request** node.
3. **Update Workflow ID:** change the workflow_id value in the **HTTP Request** node to match the one from your BrowserAct workflow.
4. **Connect Gemini:** add your Google Gemini credentials and update your resume inside the prompt in the **AI Agent** node.
5. **Configure Telegram:** connect your Telegram account and add your Channel ID to the **Send a text message** node.

How it works

- The workflow is triggered manually by clicking "Execute workflow," but you can easily set it to run on a schedule.
- It uses an HTTP Request node to start a web scraping task via the BrowserAct API to collect the latest job offers.
- A series of If and Wait nodes monitor the scraping job, ensuring the full data is ready before proceeding.
- An AI Agent node, powered by Google Gemini, processes the job listings and filters them to find the best matches for your resume.
- A Code node then transforms the AI's output into a clean, readable format.
- Finally, the filtered job offers are sent directly to you via Telegram.

Requirements

- **BrowserAct** API account
- BrowserAct **"Job Market Intelligence"** template
- **Gemini** account
- **Telegram** credentials

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

Workflow Guidance and Showcase

Never Manually Search for a Job Again (AI Automation Tutorial)
by Feras Dabour
AI X (Twitter) Threads Bot with Approval Loop

This n8n workflow transforms your Telegram messenger into a personal assistant for creating and publishing X threads. You can simply send an idea as a text or voice message, collaboratively edit the AI’s suggestion in a chat, and then publish the finished thread directly to X just by saying “Okay.”

What You’ll Need to Get Started

Before you can use this workflow, you’ll need a few prerequisites set up. This workflow connects three different services, so you will need API credentials for each:

- **Telegram Bot API Key**: you can get this by talking to the “BotFather” on Telegram. It will guide you through creating your new bot and provide you with the API token. (New Chat with Telegram BotFather)
- **OpenAI API Key**: this is required for the “Speech to Text” and “AI Agent” nodes. You’ll need an account with OpenAI to generate this key. (OpenAI API Platform)
- **Blotato API Key**: this service is used to publish the final post to X. You’ll need a Blotato account, with your X profile connected there, to get the key. (Blotato platform for social media publishing)

Once you have these keys, you can add them to the corresponding credentials in your n8n instance.

How the Workflow Operates, Step by Step

Here is a detailed breakdown of how the workflow processes your request and handles the publishing.

1. Input & Initial Processing

This phase captures your idea and converts it into usable text.

| Node Name | Role in Workflow |
|-----------|------------------|
| Start: Telegram Message | This Telegram Trigger node initiates the entire process upon receiving any message from you in the bot. |
| Prepare Input | Consolidates the message content, ensuring the AI receives only one clean text input. |
| Check: ist it a Voice? | Checks the incoming message for text. If the text is empty, it proceeds to voice handling. |
| Get Voice File | If a voice note is detected, this node downloads the raw audio file from Telegram. |
| Speech to Text | This node uses the OpenAI Whisper API to convert the downloaded audio file into a text string. |

2. AI Core & Iteration Loop

This is the central dialogue system where the AI drafts the content and engages in the feedback loop.

| Node Name | Role in Workflow |
|-----------|------------------|
| AI: Draft & Revise Post | The main logic agent. It analyzes your request, applies the “System Prompt” rules, drafts the post, and handles revisions based on your feedback. |
| OpenAI Chat Model | Defines the large language model (LLM) used for generating and revising the post. |
| Window Buffer Memory | A memory buffer that stores the last turns of the conversation, allowing the AI to maintain context when you request changes (e.g., “Make it shorter”). |
| Check if Approved | This crucial node detects the specific JSON structure the AI outputs only when you provide an approval keyword (like “ok” or “approved”). |
| Post Suggestion Or Ask For Approval | Sends the AI’s post draft back to your Telegram chat for review and feedback. |

AI Agent System Prompt (Internal Instructions - English)

The agent operates under a strict prompt that dictates its behavior and formatting (found within the AI: Draft & Revise Post node).

3. Publishing & Status Check

Once approved, the workflow handles the publication and monitors the post’s status in real time.

| Node Name | Role in Workflow |
|-----------|------------------|
| Approval: Extract Final Thread Posts | Parses the incoming JSON, extracting only the clean text ready for publishing. |
| Create post with Blotato | Uses the Blotato API to upload the finalized content to your connected X account. |
| Give Blotat 5s :) | A brief pause to allow the publishing service to start processing the request. |
| Check post status | Checks back with Blotato to determine if the post is published, in progress, or failed. |
| Published? | Checks if the status is “published” to send the success message. |
| In Progress? | Checks if the post is still being processed. If so, it loops back to the next wait period. |
| Give Blotat other 5s :) | Pauses the workflow before re-checking the post status, preventing unnecessary API calls. |

Final Notification

| Node Name | Role in Workflow |
|-----------|------------------|
| Send a confirmation message | Sends a confirmation message and the direct link to the published X post. |
| Send an error message | Sends a notification if the post failed to upload or encountered an error during processing. |

🛠️ Personalizing Your Content Bot

The true power of this n8n workflow lies in its flexibility. You can easily modify key components to match your unique brand voice and technical preferences.

1. Tweak the Content Creator Prompt

The personality, tone, and formatting rules for your X content are all defined in the System Prompt.
- Where to find it: inside the AI: Draft & Revise Post node, under the System Message setting.
- What to personalize: adjust the tone, change the formatting rules (e.g., number of hashtags, required emojis), or insert specific details about your industry or target audience.

2. Switch the AI Model or Provider

You can easily swap the language model used for generation.
- Where to find it: the OpenAI Chat Model node.
- What to personalize:
  - Model: swap out the default model for a more powerful or faster alternative (e.g., the gpt-4 family, or models from other providers if you change the node).
  - Provider: you can replace the entire LangChain block (including the AI Model and Window Buffer Memory nodes) with an equivalent block using a different provider’s Chat/LLM node (e.g., Anthropic, Cohere, or Google Gemini), provided you set up the corresponding credentials and context flow.

3. Modify Publishing Behavior (Schedule vs. Post)

The final step is currently set to publish immediately, but you might prefer to schedule posts.
- Where to find it: the Create post with Blotato node.
- What to personalize: consult the Blotato documentation for alternative operations.
Instead of choosing the “Create Post” operation (which often posts immediately), you can typically select a “Schedule Post” or “Add to Queue” operation within the Blotato node. If scheduling, you will need to add a step (e.g., a Set node or another agent prompt) before publishing to calculate and pass a Scheduled Time parameter to the Blotato node.
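The "Check if Approved" step described earlier relies on the agent emitting a specific JSON structure only upon approval. A sketch of that detection; the exact keys (`approved`, `posts`) are assumptions and must match whatever schema your agent's system prompt defines:

```javascript
// Returns the final posts array if the AI output is the approval JSON, else null.
function extractApprovedThread(aiOutput) {
  let parsed;
  try {
    parsed = typeof aiOutput === "string" ? JSON.parse(aiOutput) : aiOutput;
  } catch {
    return null; // plain-text draft or revision, not the approval structure
  }
  if (parsed && parsed.approved === true && Array.isArray(parsed.posts)) {
    return parsed.posts; // clean text ready for the Blotato publishing step
  }
  return null; // loop back: ask for more feedback
}
```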
by Alok Kumar
This n8n workflow shows how to extract website content, index it in Pinecone, and leverage Airtable to power a chat agent for customer Q&A.

Use cases include:
- Building a knowledge base from your website.
- Creating a chatbot that answers customer queries using your own site content.
- Powering RAG workflows for FAQs, support docs, or product knowledge.

How it works

- The workflow starts with a manual trigger or chat message.
- Website content is fetched via HTTP Request.
- The HTML body is extracted and converted into clean Markdown.
- Text is split into chunks (~500 characters with 50 overlap) using the Character Text Splitter.
- **OpenAI embeddings** are generated for each chunk.
- Content and embeddings are stored in Pinecone with namespace separation.
- A Chat Agent (powered by OpenAI or OpenRouter) retrieves answers from Pinecone and Airtable.
- A **memory buffer** allows multi-turn conversations.
- A billing tool (Airtable) provides dynamic billing-related answers when needed.

How to use

- Replace the sample website URL in the HTTP Request node with your own domain or content source.
- Update the Normalize code based on the Markdown content output to remove noise.
- Adjust the chunk size in the Text Splitter for your website's Markdown output. In this example, the Character Text Splitter with separator ###### worked really well. Always check the Markdown output to fine-tune your splitting logic.
- Update the Pinecone namespace to match your project.
- Customize the Chat Agent system prompt to fit your brand voice and response rules.
- Connect your own Airtable schema if you want live billing/payment data access.

Requirements

- **OpenAI account** (for embeddings + chat model).
- **Pinecone account** (vector DB for semantic search).
- **Airtable account** (if using the billing tool).
- (Optional) OpenRouter account (alternative chat model provider).
- n8n self-hosted or cloud.

Need Help? Ask in the n8n Forum! Happy Automating! 🚀
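The ~500-character / 50-overlap chunking used above can be sketched as follows. This is a simplified stand-in for the Character Text Splitter node, ignoring separator-aware splitting:

```javascript
// Split text into ~chunkSize-character chunks, carrying `overlap` characters
// of context over between consecutive chunks so no sentence loses its context.
function splitText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
    start += chunkSize - overlap;                // step forward, keeping the overlap
  }
  return chunks;
}
```

Inspecting the resulting chunk boundaries against your site's Markdown is the quickest way to decide whether a separator such as `######` works better than fixed-size splitting.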
by Trung Tran
Free PDF Generator in n8n – No External Libraries or Paid Services

> A 100% free n8n workflow for generating professionally formatted PDFs without relying on external libraries or paid converters. It uses OpenAI to create Markdown content, Google Docs to format and convert to PDF, and integrates with Google Drive and Slack for archiving and sharing. Ideal for reports, BRDs, proposals, or any document you need directly inside n8n.

Watch the demo video below:

Who’s it for

- Teams that need auto-generated documents (reports, guides, checklists) in PDF format.
- Operations or enablement teams who want files archived in Google Drive and shared in Slack automatically.
- Anyone experimenting with LLM-powered document generation integrated into business workflows.

How it works / What it does

1. A manual trigger starts the workflow.
2. The LLM generates a sample Markdown document (via the OpenAI Chat Model).
3. A Google Drive folder is configured for storage.
4. A Google Doc is created from the generated Markdown content.
5. The document is exported to PDF using Google Drive. (Sample PDF generated from comprehensive Markdown.)
6. The PDF is archived in a designated Drive folder.
7. The archived PDF is downloaded for sharing.
8. A Slack message is sent with the PDF attached.

How to set up

1. Add nodes in sequence:
   - Manual Trigger
   - OpenAI Chat Model (prompt to generate sample Markdown)
   - Set/manual input for Google Drive folder ID(s)
   - HTTP Request or Google Drive Upload (convert to Google Docs)
   - Google Drive Download (PDF export)
   - Google Drive Upload (archive PDF)
   - Google Drive Download (fetch archived file)
   - Slack Upload (send message with attachment)
2. Configure credentials for OpenAI, Google Drive, and Slack.
3. Map output fields:
   - data.markdown → Google Docs creation
   - docId → PDF export
   - fileId → Slack upload
4. Test run to ensure the PDF is generated, archived, and posted to Slack.
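The PDF export step maps to a single Google Drive v3 call; a sketch of the request the HTTP Request node would make (the `docId` and token are placeholders, and note that only Google-native files such as the Google Doc created from the Markdown can be exported this way):

```javascript
// Build the Google Drive v3 export request used for the "convert to PDF" step.
function buildPdfExportRequest(docId, accessToken) {
  const mimeType = encodeURIComponent("application/pdf");
  return {
    url: `https://www.googleapis.com/drive/v3/files/${docId}/export?mimeType=${mimeType}`,
    options: {
      headers: { Authorization: `Bearer ${accessToken}` }, // OAuth2 token from the Drive credential
    },
  };
}
```

In n8n, the Google Drive node's download operation with a conversion mime type performs the equivalent call for you.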
Requirements

**Credentials:**
- OpenAI API key (or a compatible LLM provider)
- Google Drive (OAuth2) with read/write permissions
- Slack bot token with the files:write permission

**Access:**
- Write access to the target Google Drive folders
- Slack bot invited to the target channel

How to customize the workflow

- **Change the prompt** in the OpenAI Chat Model to generate different types of content (reports, meeting notes, checklists).
- **Automate triggering:** replace the Manual Trigger with Cron for scheduled document generation, or use a Webhook Trigger to run on demand from external apps.
- **Modify storage logic:** save both .md and .pdf versions in Google Drive; use separate folders for drafts vs. final versions.
- **Enhance distribution:** send PDFs to multiple Slack channels or via email; integrate with project management tools for automated task creation.
by Madame AI
AI-Powered Top GitHub Talent Sourcing (by Language & Location) to Google Sheet

This n8n template is a powerful talent sourcing engine that finds, analyzes, and scores GitHub contributors using a custom AI formula. This workflow is ideal for technical recruiters, hiring managers, and team leads who want to build a pipeline of qualified candidates based on specific technical skills and location.

Self-Hosted Only

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works

1. The workflow runs on a Schedule Trigger (e.g., hourly) to constantly find new candidates.
2. A BrowserAct node ("Run a workflow task") initiates a scraping job on GitHub based on your criteria (e.g., "Python" developers in "Berlin").
3. A second BrowserAct node ("Get details") waits for the scraping to complete. If the job fails, a Slack alert is sent.
4. A Code node processes the raw scraped data, splitting the list of developers into individual items.
5. An AI Agent, powered by Google Gemini, analyzes each profile. It scores their resume/summary and calculates a final weighted FinalScore based on their followers, repositories, and resume quality.
6. The structured and scored candidate data is then saved to a Google Sheet, using the "Name" column to prevent duplicates.
7. A final Slack message notifies you that the GitHub contributors list has been successfully updated.

Requirements

- **BrowserAct** API account for web scraping
- BrowserAct **"Source Top GitHub Contributors by Language & Location"** template
- **BrowserAct** n8n community node (n8n Nodes BrowserAct)
- **Gemini** account for the AI Agent
- **Google Sheets** credentials for saving leads
- **Slack** credentials for sending notifications

Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase
- Automate Talent Sourcing: Find GitHub Devs with n8n & BrowserAct
by Automate With Marc
📥 Invoice Intake & Notification Workflow

This automated n8n workflow monitors a Google Drive folder for newly uploaded invoice PDFs, extracts essential information (client name, invoice number, amount, due date), logs the data into a Google Sheet for recordkeeping, and sends a formatted Telegram message to notify the billing team.

For step-by-step video builds of workflows like this: https://www.youtube.com/@automatewithmarc

✅ What This Workflow Does
- 🕵️ Watches a Google Drive folder for new invoice files
- 📄 Extracts data from PDF invoices using AI (LangChain Information Extractor)
- 📊 Appends extracted data to a structured Google Sheet
- 💬 Notifies the billing team via Telegram with invoice details
- 🤖 Optionally uses the Claude Sonnet AI model to format human-friendly summaries

⚙️ How It Works – Step-by-Step
1. Trigger: The workflow starts when a new PDF invoice is added to a specific Google Drive folder.
2. Download & Parse: The file is downloaded and its content extracted.
3. Data Extraction: The AI-powered extractor pulls invoice details (invoice number, client, date, amount, etc.).
4. Log to Google Sheets: All extracted data is appended to a predefined Google Sheet.
5. AI Notification Formatting: An Anthropic Claude model formats a clear invoice notification message.
6. Telegram Alert: The formatted summary is sent to a Telegram channel or group to alert the billing team.

🧠 AI & Tools Used
- Google Drive Trigger & File Download
- PDF Text Extraction node
- LangChain Information Extractor
- Google Sheets node (Append Data)
- Anthropic Claude (Telegram message formatter)
- Telegram node (Send Notification)

🛠️ Setup Instructions
1. Google Drive: Set up OAuth2 credentials and specify the folder ID to watch.
2. Google Sheets: Link the workflow to your invoice tracking sheet.
3. Telegram: Set up your Telegram bot and obtain the chat ID.
4. Anthropic & OpenAI: Add your Claude/OpenAI credentials if formatting is enabled.
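To make the data flow concrete, here is a sketch of how the extracted invoice fields might be shaped for the Google Sheets append and the Telegram alert. The field names, column order, and message wording are assumptions for illustration; the real schema is whatever you configure in the Information Extractor:

```javascript
// Illustrative shape of the Information Extractor's output
// (field names are assumptions; define them in the extractor's schema).
const invoice = {
  invoiceNumber: "INV-1042",
  clientName: "Acme Corp",
  amount: 1250.0,
  dueDate: "2024-07-31",
};

// Row appended to the Google Sheet (column order is an assumption).
const sheetRow = [invoice.invoiceNumber, invoice.clientName, invoice.amount, invoice.dueDate];

// A plain-text fallback for the Telegram alert if AI formatting is disabled.
const telegramMessage =
  `New invoice ${invoice.invoiceNumber} from ${invoice.clientName}: ` +
  `$${invoice.amount.toFixed(2)}, due ${invoice.dueDate}`;
```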
💡 Use Cases
- Automated bookkeeping and invoice tracking
- Real-time billing alerts for accounting teams
- AI-powered invoice ingestion and summary
by Simeon Penev
Who's it for
Growth, marketing, sales, and founder teams that want a decision-ready Ideal Customer Profile (ICP) grounded in their own site content.

How it works / What it does
- **On form submission** collects the **Website URL** and **Business Name**, then redirects to the Google Drive folder after the final node.
- **Crawl and Scrape the Website Content** crawls and scrapes **20 pages** from the website.
- **ICP Creator** builds a **Markdown ICP** with: A) Executive Summary, B) One-Pager ICP, C) Tiering & Lead Scoring, D) Demand Gen & ABM Plays, E) Evidence Log, F) Section Confidence — including facts vs. inferences, confidence scores, and tables.
- **Markdown to Google Doc** converts the Markdown into Google Docs batchUpdate requests, which **Update a document** then applies to the empty doc.
- **Create a document** + **Update a document** generate **"ICP for <Business Name>"** in your Drive folder and apply formatting.

How to set up
1) Add credentials: Firecrawl (Authorization header), OpenAI (Chat), Google Docs OAuth2.
2) Replace placeholders: {{API_KEY}}, {{google_drive_folder_id}}, {{google_drive_folder_url}}.
3) Publish and open the Form URL to test.

Requirements
Firecrawl API key • OpenAI API key • Google account with access to the target Drive folder.

Resources
- Google OAuth2 Credentials Setup: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/
- OpenAI API key: https://docs.n8n.io/integrations/builtin/credentials/openai/
- Firecrawl API key: https://take.ms/lGcUp
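The Markdown-to-batchUpdate conversion step can be sketched as follows. This is a minimal illustration, not the template's actual code: it handles only `#` headings and plain paragraphs, using the Docs API's insertText and updateParagraphStyle request shapes:

```javascript
// Minimal sketch: turn Markdown lines into Google Docs batchUpdate requests.
// Only "# " headings and plain paragraphs are handled here (an assumption
// to keep the example short; the real converter covers more Markdown).
function markdownToRequests(markdown) {
  const requests = [];
  let index = 1; // Docs body content starts at index 1
  for (const line of markdown.split("\n")) {
    const isHeading = line.startsWith("# ");
    const text = (isHeading ? line.slice(2) : line) + "\n";
    requests.push({ insertText: { location: { index }, text } });
    if (isHeading) {
      requests.push({
        updateParagraphStyle: {
          range: { startIndex: index, endIndex: index + text.length },
          paragraphStyle: { namedStyleType: "HEADING_1" },
          fields: "namedStyleType",
        },
      });
    }
    index += text.length; // advance past the inserted text
  }
  return requests;
}

const reqs = markdownToRequests("# Executive Summary\nKey facts here.");
```

Because each insertText shifts subsequent indexes, the converter tracks a running `index` so every styling range lands on the text it was meant for.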
by Madame AI
Find & Qualify Funded Leads with BrowserAct & Gemini

This n8n template helps you find new investment leads by automatically scraping articles for funding announcements and analyzing them with an AI Agent. The workflow is ideal for venture capitalists, sales teams, and market researchers who need to automatically track and compile lists of recently funded companies.

Self-Hosted Only
This workflow uses a community node and is designed and tested for self-hosted n8n instances only.

How it works
- The workflow is triggered manually but can be switched to a Cron node to run on a schedule.
- A Google Sheets node loads a list of keywords (e.g., "Series A," "Series B") and geographic locations to search for.
- The workflow loops through each keyword, initiating BrowserAct web scraping tasks to collect relevant articles.
- A second set of BrowserAct nodes monitors the scraping jobs, waiting for them to complete before proceeding.
- Once all articles are collected, they are merged and fed into an AI Agent node, powered by Google Gemini.
- The AI Agent processes the articles to identify companies that recently received funding, extracting the Company Name, the Field of Investment, and the source URL.
- A Code node transforms the AI's JSON output into a clean, itemized format.
- An If node filters out any entries where no company was found, ensuring data quality.
- The qualified leads are added or updated in a Google Sheet, matching on "Company" to prevent duplicates.
- Finally, a Slack message is sent to a channel to notify your team that the lead list has been updated.

Requirements
- **BrowserAct** API account for web scraping
- **BrowserAct** n8n Community Node (n8n Nodes BrowserAct)
- **BrowserAct** "Funding Announcement to Lead List (TechCrunch)" template (or a similar scraping workflow)
- **Gemini** account for the AI Agent
- **Google Sheets** credentials for input and output
- **Slack** credentials for sending notifications

Need Help?
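The itemizing Code node and the If-node filter described above could look roughly like this. The lead field names and output columns are assumptions for illustration, not the template's exact schema:

```javascript
// Assumed shape of the AI Agent's output: a JSON object with an array of leads.
const aiOutput = {
  leads: [
    { company: "Acme AI", field: "Healthcare AI", source: "https://example.com/a" },
    { company: null, field: "", source: "https://example.com/b" },
  ],
};

// Split the array into individual items (n8n's { json: ... } convention)
// and drop entries with no company, mirroring the If-node quality filter.
const items = aiOutput.leads
  .filter((lead) => lead.company)
  .map((lead) => ({
    json: { Company: lead.company, Field: lead.field, URL: lead.source },
  }));
```

Emitting one item per lead lets the downstream Google Sheets node run its "append or update, match on Company" logic row by row.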
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase
- How to Automatically Find Leads from Funding News (n8n Workflow Tutorial)