by Jimleuk
This n8n template scrapes a list of AI grants from grants.gov and qualifies them using AI, determining interest and eligibility for the business. It then sends team members an email alert of interesting items. The template also shows how you can use the "Remove Duplicates" node to simplify deduplication of external listings without the need to manage this yourself.

Not particularly interested in AI grants? This template works for other tender websites as long as you're able to scrape them.

**How it works**

- A scheduled trigger is set to fetch a list of AI grants listed on the grants.gov website in the past day.
- A Remove Duplicates node is used to track Grant IDs and filter out those already processed by the workflow (see the sketch below).
- New grants are summarised and analysed by AI nodes to determine eligibility and interest, which is then saved to an Airtable database.
- Another scheduled trigger starts a little later than the first to collect and summarise the new grants.
- The results are then compiled into an email template using the HTML node, in the form of a newsletter designed to alert and brief team members on new AI grants.
- This email is then sent to a list of subscribers using the Gmail node.

**How to use**

- Make a copy of the sample Airtable here: https://airtable.com/appiNoPRvhJxz9crl/shrRdP6zstgsxjDKL
- The filters for fetching grants are currently set to the "AI" category. Feel free to change this to include more categories.
- Not interested in grants? This template works for other sources of leads too - just change the endpoint and how you define the item ID to track.

**Requirements**

- Airtable for database
- OpenAI for LLM

Note: these are not hard requirements and can be exchanged for services available to you.

**Customising the workflow**

- "Eligibility" criteria at this stage may be better served by identifying hard blockers instead, i.e. certifications, geographical considerations or certain legal checks. Be sure to mention any hard blockers in the Eligibility prompt.
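For illustration, here is a minimal TypeScript sketch of the logic the Remove Duplicates node applies, assuming each grant exposes a stable ID. The `Grant` shape is hypothetical, and n8n persists the seen values between runs for you; this just shows the equivalent filtering.

```typescript
// Keep only grants whose ID hasn't been seen in previous runs.
type Grant = { id: string; title: string };

function filterNewGrants(fetched: Grant[], seenIds: Set<string>): Grant[] {
  const fresh = fetched.filter((g) => !seenIds.has(g.id));
  fresh.forEach((g) => seenIds.add(g.id)); // remember for the next run
  return fresh;
}

// Usage: only "g2" survives, since "g1" was processed in an earlier run.
const seen = new Set<string>(["g1"]);
console.log(
  filterNewGrants(
    [{ id: "g1", title: "Old grant" }, { id: "g2", title: "New grant" }],
    seen,
  ),
);
```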
by Mark Shcherbakov
**Video Guide**

I prepared a comprehensive guide detailing how to automate the parsing of invoices using n8n and LlamaParse, seamlessly capturing and storing vital billing information.

Youtube Link

**Who is this for?**

This workflow is ideal for finance teams, accountants, and business operations managers who need to streamline invoice processing. It is particularly helpful for organizations seeking to reduce manual entry errors and improve efficiency in managing billing information.

**What problem does this workflow solve?**

Manually processing invoices can be time-consuming and error-prone. This automation eliminates the need for manual data entry by capturing invoice details directly from uploaded documents and storing structured data efficiently. This enhances productivity and accuracy across financial operations.

**What this workflow does**

The workflow leverages n8n and LlamaParse to automatically detect new invoices in a designated Google Drive folder, parse essential billing details, and store the extracted data in a structured format. The key functionalities include:

- Real-time detection of new invoices via Google Drive triggers.
- Automated HTTP requests to initiate parsing through LlamaCloud.
- Structured storage of invoice details and line items in a database for future reference.

1. **Google Drive Integration**: Monitors a specific folder in Google Drive for new invoice uploads.
2. **Parsing with LlamaParse**: Automatically sends invoices for parsing and processes results through webhooks.
3. **Data Storage in Airtable**: Creates records for invoices and their associated line items, allowing for detailed tracking.

**Setup**

N8N Workflow

1. **Google Drive Trigger**: Set up a trigger to detect new files in a specified folder dedicated to invoices.
2. **File Upload to LlamaParse**: Create an HTTP request that sends the invoice file to LlamaParse for parsing, including the relevant header settings and a webhook URL (see the sketch below).
3. **Webhook Processing**: Establish a webhook node to handle parsed results from LlamaParse, extracting the needed invoice details.
4. **Invoice Record Creation**: Create initial records for invoices in your database using the parsed details received from the webhook.
5. **Line Item Processing**: Transform string data into structured line item arrays and create individual records for each item linked to the main invoice.
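As a rough sketch of step 2, this is what the upload request to LlamaParse could look like outside n8n. The endpoint path and the `webhook_url` form field are assumptions based on LlamaCloud's parsing API; verify both against the current docs before relying on them.

```typescript
// Hypothetical sketch of the "File Upload to LlamaParse" HTTP request.
async function uploadInvoice(fileBytes: Uint8Array, fileName: string): Promise<string> {
  const form = new FormData();
  form.append("file", new Blob([fileBytes], { type: "application/pdf" }), fileName);
  // Assumed: LlamaParse calls this URL back when parsing finishes.
  form.append("webhook_url", "https://your-n8n-host/webhook/llamaparse-result");

  const res = await fetch("https://api.cloud.llamaindex.ai/api/parsing/upload", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.LLAMA_CLOUD_API_KEY}` },
    body: form,
  });
  const { id } = await res.json();
  return id; // parsing job id, used to correlate the webhook payload with the invoice
}
```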
by Angel Menendez
**Analyze & Sort Suspicious Email Contents with ChatGPT and Jira**

**Who is this for?**

This workflow is tailored for IT security teams, managed service providers (MSPs), and organizations aiming to streamline the detection and reporting of phishing emails. It's especially useful for teams handling high email volumes and requiring quick, automated analysis.

**What problem is this workflow solving?**

Phishing emails pose a significant cybersecurity threat, and manual review processes are time-consuming and prone to human error. This workflow automates the identification of malicious emails, provides AI-driven insights, and generates structured reports, enabling faster and more efficient responses to email-based threats.

**What this workflow does**

This workflow integrates Gmail or Microsoft Outlook to monitor and capture incoming emails. It processes the email content and headers, converts the email's body to a visual screenshot for clarity, and uses ChatGPT's advanced AI to analyze the email for phishing indicators. Based on the analysis, it categorizes emails as potentially malicious or benign, creating detailed Jira tickets for each case. Attachments, including the email body and screenshots, are automatically uploaded for comprehensive reporting.

Key steps include:

- **Email Integration**: Captures emails from Gmail or Microsoft Outlook.
- **Content Processing**: Extracts and organizes email content and metadata.
- **AI Analysis**: Uses ChatGPT to evaluate email content and headers (see the sketch below).
- **Classification**: Categorizes emails as malicious or benign.
- **Automated Reporting**: Creates Jira tickets with detailed analysis and attachments.

**Setup**

1. **Authentication**: Configure Gmail or Microsoft Outlook credentials in n8n.
2. **API Keys**: Add credentials for the HTML screenshot service (hcti.io) and OpenAI.
3. **Jira Configuration**: Set up project and issue types in the Jira nodes.
4. **Customization**: Update sticky notes and nodes to fit your organizational requirements, such as modifying the AI prompt or Jira ticket fields.

**How to customize this workflow to your needs**

- Adjust email triggers to include or exclude specific senders or subjects.
- Refine the AI prompt in the ChatGPT node to tailor phishing detection criteria.
- Modify Jira ticket content to include additional fields or match specific workflows.

This workflow is ideal for automating email threat detection, reducing response times, and enhancing overall cybersecurity processes. By leveraging AI-powered insights, it helps organizations stay ahead of phishing attacks.
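A hedged sketch of the AI analysis step: the JSON schema and indicator list below are illustrative, not the template's actual prompt, which lives in the ChatGPT node and should be tuned to your environment.

```typescript
// Ask the model for a structured phishing verdict on one email.
async function classifyEmail(subject: string, headers: string, body: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      response_format: { type: "json_object" }, // force parseable output
      messages: [
        {
          role: "system",
          content:
            "You are a phishing analyst. Return JSON: " +
            '{"malicious": boolean, "indicators": string[], "summary": string}. ' +
            "Check for sender/reply-to mismatch, urgency cues, and suspicious links.",
        },
        { role: "user", content: `Subject: ${subject}\nHeaders:\n${headers}\n\nBody:\n${body}` },
      ],
    }),
  });
  const json = await res.json();
  return JSON.parse(json.choices[0].message.content); // { malicious, indicators, summary }
}
```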
by Mark Shcherbakov
**Video Guide**

I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary.

Youtube Link

**Who is this for?**

This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

**What problem does this workflow solve?**

Managing action items from meetings can often lead to missed tasks and poor follow-up. This automation alleviates that issue by automatically generating tasks from meeting transcripts, keeping everyone informed about their responsibilities and streamlining communication.

**What this workflow does**

The workflow leverages n8n to create a Smart Agent that listens for completed meeting transcripts, processes them using AI, and generates tasks in Airtable. Key functionalities include:

- Capturing completed meeting events through webhooks.
- Extracting relevant meeting details such as transcripts and participants using API calls.
- Generating structured tasks from meeting discussions and sending notifications to clients.

1. **Webhook Integration**: Listens for meeting completion events to trigger subsequent actions.
2. **API Requests for Data**: Pulls necessary details like transcripts and participant information from Fireflies (see the sketch below).
3. **Task and Notification Generation**: Automatically creates tasks in Airtable and notifies clients of their responsibilities.

**Setup**

N8N Workflow

1. **Configure the Webhook**: Set up a webhook to capture meeting completion events and integrate it with Fireflies.
2. **Retrieve Meeting Content**: Use GraphQL API requests to extract meeting details and transcripts, ensuring appropriate authentication through Bearer tokens.
3. **AI Processing Setup**: Define system messages for AI tasks and configure connections to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. **Task Creation Logic**: Create structured tasks based on AI output, ensuring necessary details are captured and records are created in Airtable.
5. **Client Notifications**: Use an email node to notify clients about their tasks, ensuring communications are client-specific.
6. **Scheduling Follow-Up Calls**: Set up Google Calendar events if follow-up meetings are required, populating details from the original meeting context.
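For reference, the Fireflies retrieval in step 2 boils down to a GraphQL POST with Bearer auth. The query shape and field names here are assumptions; check Fireflies' GraphQL schema for the exact fields available to your plan.

```typescript
// Hypothetical sketch of the "Retrieve Meeting Content" GraphQL request.
async function fetchTranscript(meetingId: string): Promise<unknown> {
  const query = `
    query Transcript($id: String!) {
      transcript(id: $id) {
        title
        participants
        sentences { speaker_name text }
      }
    }`;
  const res = await fetch("https://api.fireflies.ai/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.FIREFLIES_API_KEY}`, // Bearer token, as in the setup notes
    },
    body: JSON.stringify({ query, variables: { id: meetingId } }),
  });
  const json = await res.json();
  return json.data?.transcript; // fed to the AI node for task extraction
}
```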
by Reyhan
Automatically clean up your Gmail inbox by deleting unwanted emails, validated by Gemini AI. Ideal for anyone tired of manual inbox cleanup, this workflow helps you save time while staying in control, with full transparency via Telegram alerts.

**How it works**

- Scans your Gmail inbox in adjustable 2-week batches (see the query sketch below).
- Uses Gemini AI to decide if an email should be deleted or skipped.
- Applies a label to skipped emails to avoid rechecking them in future runs.
- Deletes unwanted emails and sends a Telegram message with the AI's reasoning.
- Also notifies on skipped emails, with the explanation included.

**Set up steps**

- Connect your Gmail, Gemini AI, and Telegram accounts.
- Adjust the AI baseline to control sensitivity (e.g. how strict the filtering should be).
- Set your batch range (default: last 2 weeks, adjustable).
- Define your Telegram chat/channel for notifications.

Note: Thanks to n8n's modular design, you can easily switch Gemini for another AI model (like OpenAI, Claude, etc.) or replace Telegram with Discord, Slack, or even email. No code changes needed - just swap the nodes.
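As a small illustration of the batching idea, a 2-week window plus a skip label can be expressed as a single Gmail search query. The label name `ai-skipped` is an assumption; substitute whatever label your workflow applies to skipped emails.

```typescript
// Builds the Gmail search query for one batch: a relative time window plus
// exclusion of already-reviewed mail via the skip label.
function batchQuery(weeksBack: number): string {
  return `in:inbox newer_than:${weeksBack * 7}d -label:ai-skipped`;
}

console.log(batchQuery(2)); // "in:inbox newer_than:14d -label:ai-skipped"
```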
by Yaron Been
**Scrape Glassdoor with Bright Data**

Designed for sales teams, recruiters, and marketers aiming to automate job discovery and prospecting. This workflow scrapes Glassdoor job listings using Bright Data and automatically generates targeted pitches using AI, streamlining lead identification and outreach.

**How It Works**

This automation leverages n8n, Bright Data, Google Sheets, and OpenAI:

1. **Trigger**: Starts with a custom form input (Location, Keyword, Country).
2. **Bright Data Job Scrape**: Triggers a Bright Data dataset snapshot via HTTP Request, polls snapshot progress using a Wait node to ensure data readiness, and retrieves the full job listings dataset once ready (see the sketch below).
3. **Google Sheets Integration**: Writes detailed job data (company, role, location, overview, metrics) into a Google Sheet, using a pre-built template for organized data storage.
4. **Automated Pitch Generation (AI)**: Splits listings into actionable parts (company name, title, and description), sends the data to OpenAI (via LangChain) to generate relevant pitches or icebreakers, and saves the generated content back into the same sheet for easy access.

**Requirements**

Ensure you have the following:

- **Google Sheets**: A Google account and a template sheet with columns for job details and AI-generated pitches.
- **Bright Data**: An active account with Dataset API access, plus an API key and dataset ID.
- **OpenAI**: A valid OpenAI API key for GPT models.
- **n8n Environment**: Nodes: HTTP Request, Wait, If, Google Sheets, Split Out, LangChain (OpenAI). Credentials: Google Sheets OAuth2, Bright Data API credentials, OpenAI API key.

**Setup Instructions**

Step 1: Prepare Google Sheets
- Copy the provided Google Sheets template.
- Do not change the headers.

Step 2: Import & Configure Workflow in n8n
- Import the workflow JSON file.
- Set the Google Sheets node: link it to your copied sheet and confirm the correct tab name.

Step 3: Configure Bright Data
- Replace <YOUR_BRIGHT_DATA_API_KEY> with your real key.
- Set your dataset ID in all HTTP Request nodes.

Step 4: Configure OpenAI (LangChain)
- Connect your OpenAI API key to the LangChain node.
- Customize the prompt to match your tone and outreach style.

Step 5: Testing & Scheduling
- Test via the manual form trigger.
- Schedule runs or leave the form enabled for on-demand use.

**Tips & Best Practices**

- Use specific keywords and locations for better results.
- Adjust polling intervals based on dataset size.
- Refine AI prompts regularly to improve pitch quality.
- Clean unused columns from your sheet to boost performance.

**Support & Feedback**

For help or customization:
- Email: Yaron@nofluff.online
- YouTube: @YaronBeen
- LinkedIn: linkedin.com/in/yaronbeen
- Bright Data Docs: docs.brightdata.com/introduction
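The trigger-then-poll pattern in step 2 looks roughly like this outside n8n. The endpoint paths follow Bright Data's dataset API as commonly documented, but treat them as assumptions and confirm them in the Bright Data docs.

```typescript
// Sketch of the snapshot lifecycle the workflow builds from HTTP Request,
// Wait, and If nodes: trigger, poll until ready, then download.
const BASE = "https://api.brightdata.com/datasets/v3";
const HEADERS = { Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}` };

async function scrapeJobs(datasetId: string, inputs: object[]): Promise<unknown[]> {
  // 1. Trigger a snapshot for the form inputs (location, keyword, country).
  const trigger = await fetch(`${BASE}/trigger?dataset_id=${datasetId}&format=json`, {
    method: "POST",
    headers: { ...HEADERS, "Content-Type": "application/json" },
    body: JSON.stringify(inputs),
  });
  const { snapshot_id } = await trigger.json();

  // 2. Poll until the snapshot is ready (the Wait/If loop in n8n).
  for (let attempt = 0; attempt < 40; attempt++) {
    const progress = await fetch(`${BASE}/progress/${snapshot_id}`, { headers: HEADERS });
    const { status } = await progress.json();
    if (status === "ready") {
      // 3. Download the finished dataset for the Google Sheets step.
      const snapshot = await fetch(`${BASE}/snapshot/${snapshot_id}?format=json`, {
        headers: HEADERS,
      });
      return snapshot.json();
    }
    await new Promise((r) => setTimeout(r, 30_000)); // wait 30s between polls
  }
  throw new Error("Snapshot did not become ready in time");
}
```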
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for certain types of data extraction tasks where attributes are unknown or need to remain flexible. The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template and so can be changed at any time - without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the n8n Studio Episode here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Community post here: https://community.n8n.io/t/dynamic-prompts-with-n8n-baserow-and-airtable/72052
Looking for the Airtable version? https://n8n.io/workflows/2771-ai-data-extraction-with-dynamic-prompts-and-airtable/

**How it works**

Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacts to any changes to either the "input" or the fields, and automatically updates the rows accordingly.

The key is that Baserow fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. Our n8n template reads these column descriptions, aka "prompts", as instructions for tasks to perform on the "input" (see the sketch below). In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it - such as full name, address, last position, years of experience etc.

**How to use**

- First publish this template and ensure it's accessible via its webhook URL.
- You then have to complete the "create Baserow webhooks" steps to configure your Baserow to send change events to the n8n template. Baserow webhooks are created in the Baserow web interface. Check the template for more instructions.

**Requirements**

- Baserow for Tables/Database
- OpenAI for LLM and extraction. Feel free to choose another LLM if preferred.

**Customising this workflow**

- If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single line text.
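A minimal sketch of the dynamic-prompts pattern: each column description becomes the extraction instruction for that column. The `askLLM` helper is hypothetical - in the template that role is played by an OpenAI node, and the row is written back via Baserow's row-update API.

```typescript
// Each Baserow field carries its own prompt in the "field description".
type Field = { name: string; description: string };

async function extractRow(
  input: string, // e.g. the text of a resume/CV
  fields: Field[],
  askLLM: (prompt: string) => Promise<string>,
): Promise<Record<string, string>> {
  const row: Record<string, string> = {};
  for (const field of fields) {
    // e.g. description = "Extract the candidate's full name from the resume."
    row[field.name] = await askLLM(`${field.description}\n\nINPUT:\n${input}`);
  }
  return row; // the prompts live in Baserow, so changing them needs no template edit
}
```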
by Chris Carr
**Use Case**

When building chatbots that interface through applications such as Telegram and WhatsApp, users often send multiple shorter messages in quick succession instead of a single, longer message. This workflow accounts for that behaviour.

**What it Does**

This workflow allows users to send several messages in quick succession and treats them as one coherent conversation instead of separate messages requiring individual responses.

**How it Works**

- When messages arrive, they are stored in a Supabase PostgreSQL table.
- The system waits briefly to see if additional messages arrive.
- If no new messages arrive within the waiting period, all queued messages are combined and processed as a single conversation, responded to with one unified reply, and deleted from the queue (see the sketch below).

**Setup**

- Create a table in Supabase called message_queue. It needs the following columns: user_id (uint8), message (text), and message_id (uint8).
- Add your Telegram, Supabase, OpenAI, and PostgreSQL credentials.
- Activate the workflow and test by sending multiple messages to the Telegram bot in one go.
- Wait ten seconds, after which you will receive a single reply to all of your messages.

**How to Modify it to Your Needs**

- Change the value of Wait Amount in the Wait 10 Seconds node to modify the buffering window.
- Add a System Message to the AI Agent to tailor it to your specific use case.
- Replace the OpenAI sub-node to use a different language model.
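A sketch of the buffering check, using the message_queue table from the setup steps and the supabase-js client. The flush logic mirrors what the workflow's nodes do after the wait; the helper name and details are illustrative.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// After waiting, reply only if no newer message arrived for this user.
async function flushIfQuiet(userId: number, lastMessageId: number): Promise<string | null> {
  const { data: queued } = await supabase
    .from("message_queue")
    .select("message, message_id")
    .eq("user_id", userId)
    .order("message_id", { ascending: true });

  // A newer message_id means the user is still typing; let that run handle it.
  if (!queued?.length || queued[queued.length - 1].message_id !== lastMessageId) return null;

  // Clear the queue and hand one combined conversation to the AI agent.
  await supabase.from("message_queue").delete().eq("user_id", userId);
  return queued.map((m) => m.message).join("\n");
}
```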
by Joseph LePage
This n8n workflow template is designed to integrate a DeepSeek AI agent with Telegram, incorporating long-term memory capabilities for personalized and context-aware responses. Here's a detailed breakdown:

**Core Features**

- **Telegram Integration**: Uses a webhook to receive messages from Telegram users. Validates user identity and message content before processing.
- **AI-Powered Responses**: Employs DeepSeek's AI models for conversational interactions. Includes memory capabilities to personalize responses based on past interactions.
- **Error Handling**: Sends an error message if the input cannot be processed.

**Model Options**

- **DeepSeek-V3 Chat**: Handles general conversational tasks.
- **DeepSeek-R1 Reasoning**: Provides advanced reasoning capabilities for complex queries.
- **Memory Buffer Window**: Maintains session context for ongoing conversations.

**Quick Setup**

Telegram Webhook Configuration

- Set up a webhook using the Telegram Bot API: https://api.telegram.org/bot{my_bot_token}/setWebhook?url={url_to_send_updates_to}
- Replace {my_bot_token} with your bot's token and {url_to_send_updates_to} with your n8n webhook URL.
- Verify the webhook setup using: https://api.telegram.org/bot{my_bot_token}/getWebhookInfo

DeepSeek API Configuration

- Base URL: https://api.deepseek.com
- Obtain your API key from the DeepSeek platform (see the call sketch below).

**Implementation Details**

- **User Validation**: The workflow validates the user's first name, last name, and ID using data from incoming Telegram messages. Only authorized users proceed to the next steps.
- **Message Routing**: Routes messages based on their type (text, audio, or image) using a switch node, ensuring appropriate handling for each message format.
- **AI Agent Interaction**: Processes text input using the DeepSeek-V3 or DeepSeek-R1 models. Customizable system prompts define the AI's behavior and rules, ensuring user-centric and context-aware responses.
- **Memory Management**: Retrieves long-term memories stored in Google Docs to enhance personalization, and saves new memories based on user interactions, ensuring continuity across sessions.
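Outside n8n, a direct DeepSeek call looks like this. The API is OpenAI-compatible, and `deepseek-chat` (V3) / `deepseek-reasoner` (R1) are the documented model names at the time of writing; verify against DeepSeek's docs, and treat the system prompt as illustrative.

```typescript
// Minimal DeepSeek chat-completion call against the documented base URL.
async function askDeepSeek(userText: string, reasoning = false): Promise<string> {
  const res = await fetch("https://api.deepseek.com/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
    },
    body: JSON.stringify({
      // V3 for general chat, R1 for complex reasoning - mirroring the two nodes.
      model: reasoning ? "deepseek-reasoner" : "deepseek-chat",
      messages: [
        { role: "system", content: "You are a helpful Telegram assistant." },
        { role: "user", content: userText },
      ],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}
```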
by Yaron Been
**LinkedIn Enrichment & Ice Breaker Generator**

For SDRs, growth marketers, and founders looking to scale personalized outreach. This workflow enriches LinkedIn profile data using Bright Data and generates AI-powered ice breakers using Claude (Anthropic). It automates research and messaging to help you connect smarter and faster, without manual effort.

**How It Works**

This workflow combines Google Sheets, Bright Data, and Claude (Anthropic) to fully automate your outreach research:

1. **Trigger**: Manually trigger the workflow or run it on a schedule (via Manual Trigger or Schedule Trigger).
2. **Read Input Sheet**: Fetches rows from a Google Sheet. Each row must contain at least a Linkedin_URL_Person and row_number.
3. **Prepare Input**: Formats each row for Bright Data's API using Set and SplitInBatches nodes.
4. **Enrich Profile (Bright Data API)**: Sends LinkedIn URLs to Bright Data's Dataset API via HTTP Request, waits for the snapshot to be ready using polling logic with Wait, If, and Snapshot Progress nodes, and once ready retrieves the enriched profile data including name, city, current company, about section, and recent posts.
5. **Update Sheet with Profile Data**: Writes the retrieved enrichment data into the corresponding row in Google Sheets (via row_number).
6. **Generate Ice Breaker (Claude AI)**: Sends enriched profile content to Claude (Anthropic) using a custom prompt, focusing on recent posts to craft relevant, respectful, 1-4 line ice breakers (see the sketch below).
7. **Update Sheet with Ice Breaker**: Writes the generated ice breaker to the Ice Breaker 1 column in the original row.

**Requirements**

To use this workflow, you must have the following:

- **Google Sheets**: A Google account and a Google Sheet with at least one sheet/tab containing the columns Linkedin_URL_Person and row_number (used for mapping input and output rows).
- **Bright Data**: A Bright Data account with access to the Dataset API, an active dataset that accepts LinkedIn URLs, and an API key with Dataset API access.
- **Anthropic Claude**: An Anthropic API key (for Claude 3.5 Haiku or other Claude models).
- **n8n Environment**: Access to HTTP Request, Set, Wait, SplitInBatches, If, and Google Sheets nodes; access to the Claude integration (via LangChain nodes: @n8n/n8n-nodes-langchain); and a credential manager properly configured with Google Sheets OAuth2 credentials, a Bright Data API key, and an Anthropic API key.

**Setup Instructions**

Step 1: Copy the Google Sheets Template
- Click here to make a copy.
- Fill the Linkedin_URL_Person column with the LinkedIn profile URLs you want to enrich.
- Do not modify the headers or add filters to the sheet.
- Leave the other columns (name, city, about, posts, ice breaker) blank; the workflow fills them.

Step 2: Connect Your Accounts in n8n
- Google Sheets: Create a credential under Google Sheets OAuth2 API.
- Bright Data: Add your API key as a credential under HTTP Request (Authorization header).
- Anthropic: Create a credential for the Anthropic API with your Claude key.

Step 3: Import and Configure the Workflow
- Import the workflow into your n8n instance.
- In each Google Sheets node: select the copied Google Sheet and the correct tab (usually input or Sheet1).
- In the HTTP Request node to Bright Data: paste your Bright Data dataset ID.
- In the Claude prompt node: optionally adjust the tone and length of the ice breaker prompt.

Step 4: Run the Workflow
- Test it using the Manual Trigger node.
- For daily automation, enable the Schedule Trigger and configure the interval settings.
- Watch your Google Sheet populate with enriched data and tailored ice breakers.

**Tips & Best Practices**

- **Bright Data Delay**: Snapshots may take time. The workflow polls the status until complete.
- **Retry Protection**: If and Wait nodes avoid infinite loops by checking the snapshot status.
- **Mapping via row_number**: Critical to ensure data is updated in the right row.
- **Prompt Engineering**: You can fine-tune Claude's behavior by editing the text prompt.

**Output Example**

Once complete, each row in your Google Sheet will contain:

| Linkedin_URL_Person | Name | City | Company | Recent Post | Ice Breaker |
|---------------------|------|------|---------|-------------|-------------|
| linkedin.com/... | Jane Doe | NYC | ACME Corp | "Why AI should replace meetings" | "Loved your post about AI and meetings - finally someone said it!" |

**Support & Feedback**

Questions? Want to tweak the prompt or expand the enrichment?
- Email: Yaron@nofluff.online
- YouTube: @YaronBeen
- LinkedIn: linkedin.com/in/yaronbeen
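For step 6, here is what the Claude call looks like when made directly against Anthropic's Messages API (the template wraps this in LangChain nodes). The model alias and prompt wording are illustrative; adjust both in the Claude prompt node.

```typescript
// Generate a short, respectful ice breaker from enriched profile data.
async function iceBreaker(profile: { name: string; recentPost: string }): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-haiku-latest", // assumed alias; any Claude model works
      max_tokens: 200,
      messages: [
        {
          role: "user",
          content:
            `Write a respectful 1-4 line ice breaker for ${profile.name}, ` +
            `referencing their recent LinkedIn post:\n"${profile.recentPost}"`,
        },
      ],
    }),
  });
  const json = await res.json();
  return json.content[0].text; // written to the "Ice Breaker 1" column
}
```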
by Yatharth Chauhan
**How it works**

This workflow automates the process of handling incoming emails by:

1. Receiving emails via IMAP.
2. Converting the email to Markdown for better AI understanding.
3. Summarizing the email using an AI model.
4. Drafting a professional reply with AI, based on the summary (see the prompt sketch below).
5. Requesting human approval for the AI-generated response.
6. Sending the approved reply back to the original sender.

**Set up steps**

Estimated time: 10-20 minutes (excluding credential setup)

What you'll need:
- IMAP credentials for your email inbox
- SMTP credentials for sending emails
- An OpenAI (or compatible) API key for the AI steps

Setup outline:
- Add your IMAP and SMTP credentials to the workflow.
- Connect your OpenAI (or compatible) account for AI summarization and reply generation.
- Deploy the workflow in n8n and activate it.
- Test by sending an email to your connected inbox.

Note: Detailed configuration tips and explanations are included as sticky notes inside the workflow for each step.
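The two AI steps amount to a pair of prompts along these lines. Both are illustrative sketches (including the sign-off); the real prompts live in the workflow and can be edited per the sticky notes.

```typescript
// Step 3: condense the Markdown-converted email into a brief summary.
const summarizePrompt = (markdownEmail: string): string =>
  `Summarize the following email in 3 bullet points, ` +
  `noting any requests, questions, or deadlines:\n\n${markdownEmail}`;

// Step 4: draft the reply from the summary, ready for human approval.
const draftReplyPrompt = (summary: string): string =>
  `You are a professional correspondent. Based on this summary, draft a ` +
  `courteous reply that addresses every request. Sign off as "The Support Team".\n\n` +
  `Summary:\n${summary}`;
```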
by inderjeet Bhambra
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Who is this for?**

IT teams and support organizations looking to automate Level 1 support with AI-powered assistance while maintaining proper ticket management workflows.

**What problem does this solve?**

Eliminates repetitive manual support tasks by providing instant, context-aware assistance that references organizational knowledge and creates structured tickets when needed.

**What this workflow does**

- **RAG Pipeline**: Processes PDF/CSV documents into a searchable vector database.
- **Intelligent Slack Bot**: An AI helpdesk assistant that handles support requests with thread-aware conversations.
- **Vector Knowledge Search**: Searches embedded knowledge base articles and historical case data (see the sketch below).
- **JIRA Integration**: Creates, searches, and manages support tickets automatically.
- **Emoji Reactions**: Users can trigger actions (create tickets, escalate) via emoji reactions.

**Requirements**

Required accounts:
- n8n Cloud or self-hosted instance
- Slack workspace with admin access
- Supabase account (vector database)
- JIRA Cloud instance
- OpenAI API key

Technical prerequisites:
- Basic n8n workflow knowledge
- Slack app creation experience
- Understanding of vector databases

**Setup Steps**

1. Slack App Configuration
- Create a new Slack app with Bot Token Scopes: app_mentions:read, channels:history, channels:read, groups:history, groups:read, im:history, im:read, mpim:history, mpim:read, users:read
- Configure Event Subscriptions: app_mention, message.channels, message.groups, reaction_added
- Set the Request URL to your n8n Slack Trigger webhook

2. Supabase Vector Database Setup
- Create a new Supabase project
- Enable the pgvector extension
- Create a documents table with a vector column (1536 dimensions for OpenAI embeddings)
- Configure RLS policies for secure access

3. JIRA Configuration
- Generate an API token from JIRA Cloud
- Create a helpdesk project with appropriate issue types
- Note the project ID and issue type IDs for workflow configuration

4. n8n Workflow Configuration
- Import the workflow and configure credentials
- Update the Slack channel IDs in the trigger nodes
- Set the OpenAI API key in all OpenAI nodes
- Configure the Supabase connection in the vector store nodes
- Update the JIRA project settings in the MCP server nodes

5. Knowledge Base Data Format
- Supported file formats: PDF, CSV
- CSV structure: structure your data with columns including, but not limited to: Ticket#, Issue Description, Issue Summary, Resolution Provided, Case Status, Contact User
- PDF content: technical documentation, troubleshooting guides, policy documents
- Upload documents via the form trigger to automatically embed them in the vector database

**Customization Options**

AI Agent Behavior
- Modify the system prompt in the AIHelpdesk Agent node
- Adjust the conversation memory window (default: 20 messages)
- Change the AI model (GPT-4o, GPT-3.5-turbo, etc.)

Reaction Mappings
- Customize emoji-to-action mappings in the Reaction Handler code
- Add new reaction types for department-specific workflows
- Configure escalation rules and priority levels

JIRA Integration
- Customize ticket templates and fields
- Add auto-assignment rules based on issue type
- Configure SLA and priority mappings

Vector Search
- Adjust similarity thresholds for knowledge retrieval
- Modify search result limits and relevance scoring
- Add metadata filtering for departmental knowledge bases

**Advanced Features**

- Thread-aware conversation memory
- Automatic bot loop prevention
- Context-preserving ticket creation
- Multi-modal file processing (PDF + CSV)
- Scalable MCP architecture for tool integration

**Use Cases**

- **Level 1 IT Support**: Automate common troubleshooting workflows
- **Employee Onboarding**: Answer policy and procedure questions
- **Internal Help Desk**: Route and track internal service requests
- **Knowledge Management**: Make organizational knowledge searchable and actionable

**Template includes**

- Complete Slack integration with thread support
- RAG pipeline for document processing
- Vector similarity search implementation
- JIRA ticket lifecycle management
- Emoji reaction-based user interactions
- Comprehensive error handling and validation
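A sketch of the vector knowledge search, assuming the documents table from setup step 2 plus a `match_documents` SQL function - the convention used by the LangChain/Supabase vector store that n8n's nodes build on, which is an assumption worth verifying against your own setup.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

// Retrieve the k most similar knowledge-base chunks for an embedded query.
async function searchKnowledge(queryEmbedding: number[], k = 5) {
  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: queryEmbedding, // 1536-dim OpenAI embedding, per step 2
    match_count: k,
  });
  if (error) throw error;
  return data; // rows of content + similarity, passed to the AI agent as context
}
```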