by Yaron Been
Scrape Indeed Job Listings for Hiring Signals Using Bright Data and LLMs

How the flow runs
1. Fill in the form with the job position you're hunting for.
2. Bright Data's scraper scrapes Indeed based on your requirements.
3. The workflow waits for the snapshot, and the data returns as JSON.
4. Jobs are appended to Google Sheets.
5. Each row goes to an LLM, which analyzes whether you're a good fit for the job (based on your prompts).
6. The LLM writes YES or NO next to each job opportunity, helping you find the posts that are relevant to you.

What you need
- Google Sheets with our template.
- Bright Data dataset and API key.
- OpenAI key for GPT-4o mini (or any other LLM).
- n8n with the required nodes.

Form fields to fill
- **Job Location** – city or region.
- **Keyword** – role or skills.
- **Country** – two-letter code.

Setup steps
1. Copy the sheet template link.
2. Import the JSON workflow.
3. Add your credentials in the nodes.
4. Test the form manually.
5. Add a schedule if desired.

Bright Data filter example

```json
[
  {
    "country": "US",
    "domain": "indeed.com",
    "keyword_search": "Growth Marketer",
    "location": "Miami",
    "date_posted": "Last 24 hours"
  }
]
```

Tips
- Choose "Last 24 hours" often.
- Increase the wait time for big snapshots (see the polling sketch below).
- Narrow keywords to save credits.

**Need help?** Email me anytime: Yaron@nofluff.online
YouTube: @YaronBeen
LinkedIn: https://www.linkedin.com/in/yaronbeen/
Bright Data Docs: https://docs.brightdata.com/introduction
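For those curious what the Bright Data nodes do under the hood, here is a minimal sketch of the trigger-and-poll cycle as a standalone TypeScript script. The endpoint paths and the dataset ID are assumptions based on Bright Data's dataset API – verify them against the Bright Data docs linked above before relying on this.

```typescript
// Sketch: trigger a Bright Data dataset scrape and poll for the snapshot.
// Endpoint paths and DATASET_ID are assumptions -- check docs.brightdata.com.
const API_KEY = process.env.BRIGHTDATA_API_KEY!;
const DATASET_ID = "gd_XXXXXXXX"; // hypothetical Indeed dataset ID

async function scrapeIndeed(keyword: string, location: string, country: string) {
  const trigger = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" },
      body: JSON.stringify([
        { country, domain: "indeed.com", keyword_search: keyword, location, date_posted: "Last 24 hours" },
      ]),
    },
  );
  const { snapshot_id } = (await trigger.json()) as { snapshot_id: string };

  // Poll until the snapshot is ready; big snapshots may need a longer wait.
  while (true) {
    await new Promise((r) => setTimeout(r, 30_000));
    const res = await fetch(
      `https://api.brightdata.com/datasets/v3/snapshot/${snapshot_id}?format=json`,
      { headers: { Authorization: `Bearer ${API_KEY}` } },
    );
    if (res.status === 200) return res.json(); // array of job listings
  }
}
```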
by Jimleuk
This n8n workflow demonstrates a simple approach to improving chat UX by staggering an AI agent's reply when users send a sequence of partial messages in short bursts.

How it works
- A Twilio webhook receives the user's messages, which are recorded in a message stack powered by Redis.
- The execution is immediately paused for 5 seconds, and then the message stack is checked again for the latest message. This check lets us know whether the user is still sending more messages or is waiting for a reply.
- The execution is aborted if the latest message on the stack differs from the incoming message, and continues if they are the same. In the latter case, the agent receives the buffered messages up to that point and is able to respond to them in a single reply.

Requirements
- A Twilio account and an SMS-enabled phone number to receive messages.
- A Redis instance for the message stack.
- An OpenAI account for the language model.

Customising the workflow
This workflow should work for other common messaging platforms such as WhatsApp and Telegram. 5 seconds too long or too short? Adjust the wait threshold to suit your customers.
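For reference, here is a minimal sketch of the stack-and-wait logic described above, using the node-redis client; the 5-second window mirrors the description, while the key naming and reply handling are illustrative.

```typescript
import { createClient } from "redis";

// Sketch of the message-stack debounce: push each incoming message, wait,
// then only reply if no newer message has arrived in the meantime.
const redis = createClient();
await redis.connect();

async function onIncomingSms(userId: string, message: string): Promise<string | null> {
  const key = `chat:${userId}:stack`;
  await redis.rPush(key, message);

  // Give the user a 5-second window to keep typing (adjust to taste).
  await new Promise((r) => setTimeout(r, 5_000));

  const latest = await redis.lIndex(key, -1);
  if (latest !== message) return null; // a newer message arrived: abort and let it handle the reply

  // We hold the latest message: hand the whole buffer to the agent in one go.
  const buffered = await redis.lRange(key, 0, -1);
  await redis.del(key);
  return buffered.join("\n"); // send this to the AI agent as a single prompt
}
```

Because every incoming message runs the same check, only the execution holding the newest message survives the wait, which is what collapses a burst of partial messages into one reply.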
by Leonardo Grigorio
Youtube Video

This n8n workflow is designed to assist YouTube content creators in identifying trending topics within a specific niche. By leveraging YouTube's search and data APIs, it gathers and analyzes video performance metrics from the past two days to provide insights into what content is gaining traction. Here's how the workflow operates:

1. Trigger Setup: The workflow begins when a user sends a query through the chat_message_received node. If no niche is provided, the AI prompts the user to select or input one.
2. AI Agent (Language Model): The central node utilizes a GPT-based AI agent to:
   - Understand the user's niche or content preferences.
   - Generate tailored search terms related to the niche.
   - Process YouTube API responses and summarize trends using insights such as common themes, tags, and audience engagement metrics (views, likes, and comments).
3. YouTube Search: The youtube_search node runs a secondary workflow to query YouTube for relevant videos published within the last two days. It retrieves basic video data such as video IDs, relevance scores, and publication dates.
4. Video Details Retrieval: The workflow fetches additional details for each video:
   - Video Snippet: Metadata like title, description, and tags.
   - Video Statistics: Metrics such as views, likes, and comments.
   - Content Details: Video duration, ensuring only content longer than 3 minutes and 30 seconds is analyzed.
5. Data Processing: Video metadata is cleaned, sanitized, and stored in memory. Tags, titles, and descriptions are analyzed to identify patterns and trends across multiple videos.
6. Output: The workflow compiles insights and presents them to the user, highlighting:
   - The most common themes or patterns within the niche.
   - URLs to trending videos and their respective channels.
   - Engagement statistics, helping the user understand the popularity of the content.

Key Notes for Setup:
- **API Keys**: Ensure valid YouTube API credentials are configured in the get_videos, find_video_snippet, find_video_statistics, and find_video_data nodes.
- **Memory Buffer**: The window_buffer_memory node ensures the AI agent retains context during analysis, enhancing the quality of the generated insights.
- **Search Term Customization**: The AI agent dynamically creates search terms based on the user's niche to improve search precision.

Use Case: This workflow is ideal for YouTubers or marketers seeking data-driven inspiration for creating content that aligns with current trends, maximizing the potential to engage their audience.

Example Output: For the niche "digital marketing":
- Trending Topic: Videos about "mental triggers" and "psychological marketing."
- Tags: "SEO," "Conversion Rates," "Social Proof."
- Engagement: Videos with over 200K views and high likes/comment ratios are leading trends.
- Video links: https://www.youtube.com/watch?v=video_id1, https://www.youtube.com/watch?v=video_id2
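As a rough sketch of what the search and video-detail nodes boil down to, the calls below hit the public YouTube Data API v3 search and videos endpoints; the 3-minute-30-second duration cutoff comes from the description above, while the result count and field handling are illustrative.

```typescript
const KEY = process.env.YOUTUBE_API_KEY!;

// Search for niche videos published in the last two days, then keep only
// those longer than 3m30s (210 seconds).
async function trendingVideos(searchTerm: string) {
  const publishedAfter = new Date(Date.now() - 2 * 24 * 3600 * 1000).toISOString();
  const search = await fetch(
    `https://www.googleapis.com/youtube/v3/search?part=snippet&type=video&order=viewCount` +
      `&publishedAfter=${publishedAfter}&q=${encodeURIComponent(searchTerm)}&maxResults=25&key=${KEY}`,
  ).then((r) => r.json());

  const ids = search.items.map((i: any) => i.id.videoId).join(",");
  const videos = await fetch(
    `https://www.googleapis.com/youtube/v3/videos?part=snippet,statistics,contentDetails&id=${ids}&key=${KEY}`,
  ).then((r) => r.json());

  // contentDetails.duration is ISO 8601, e.g. "PT4M13S".
  const toSeconds = (d: string) => {
    const m = d.match(/PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?/) ?? [];
    return +(m[1] ?? 0) * 3600 + +(m[2] ?? 0) * 60 + +(m[3] ?? 0);
  };
  return videos.items.filter((v: any) => toSeconds(v.contentDetails.duration) > 210);
}
```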
by simonscrapes
What this workflow does:
This flow uses an AI node to generate seed keywords to focus your SEO efforts on, based on your ideal customer profile (ICP). You can use these keywords to form part of your SEO strategy.

Outputs:
- A list of 20 seed keywords.

Setup
1. Fill in the Set Ideal Customer Profile (ICP) node.
2. Connect your AI credentials.
3. Replace the Connect to your own database node with your own database.

Pre-requisites / Dependencies
- You know your ideal customer profile (ICP).
- An AI API account (either OpenAI or Anthropic recommended).

More templates and n8n workflows >>> @simonscrapes
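A minimal sketch of the AI step, assuming OpenAI's chat completions endpoint is used; the prompt wording below is illustrative, not the template's exact prompt.

```typescript
// Ask the model for 20 seed keywords based on the ICP set in the workflow.
// The prompt text is illustrative; adapt it to your own ICP fields.
async function seedKeywords(icp: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{
        role: "user",
        content: `You are an SEO strategist. Given this ideal customer profile:\n${icp}\n` +
          `Return exactly 20 seed keywords they would search for, one per line, no numbering.`,
      }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim().split("\n");
}
```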
by David Olusola
🤖 AI-Powered Lead Enrichment with Explorium MCP & Telegram

Who it's for
Sales reps, agencies, and growth teams who want to turn basic company info into qualified leads with automated research. Perfect for B2B prospecting.

What it does
This workflow lets you send a company name or domain via Telegram, and instantly returns:
✅ An enriched company profile (industry, size, tech, pain points)
✅ A clean, structured JSON — ready for your CRM or sales tools

How it works
💬 Send company info to your Telegram bot
🔎 The workflow pulls data from Explorium MCP + Tavily
🧠 AI analyzes the business model, tools, pain points & goals
📤 A JSON response is sent back via Telegram or logged to your database

Requirements
🔐 OpenAI API (GPT-4)
🧠 Explorium MCP API
🌐 Tavily Web Search API
🤖 Telegram Bot API
🗃️ PostgreSQL (for memory/logging)

How to set up
1. Add API keys in n8n
2. Connect the Telegram bot to the webhook
3. Set up PostgreSQL for memory persistence
4. Customize prompts (tone, niche, etc.)
5. Test by sending a company name via Telegram

Customization Options
🎯 Focus enrichment on specific industries or keywords
💬 Adjust the email sequence structure & style
🧩 Add extra data sources (e.g. Clearbit, Crunchbase)
🧾 Format the JSON to match your CRM schema (see the sketch below)
⚙️ Add an approval step before sending emails

Highlights
✅ Uses multi-source enrichment
✅ Works 100% from Telegram
✅ Integrates into any sales pipeline
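The exact output schema depends on your prompt, but the enriched profile could look something like the shape below; every field name here is a hypothetical example, meant to be renamed to match your CRM.

```typescript
// Hypothetical shape of the enriched-company JSON returned to Telegram;
// adjust field names to match your CRM schema.
interface EnrichedCompany {
  name: string;
  domain: string;
  industry: string;
  employeeCount?: number;
  techStack: string[];  // e.g. ["Salesforce", "AWS"]
  painPoints: string[]; // inferred by the AI from Explorium/Tavily data
  goals: string[];
  sources: string[];    // URLs the enrichment was based on
}
```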
by Daniel Shashko
This workflow enables you to automate the daily monitoring of how an AI model (like ChatGPT) responds to specific queries relevant to your market. It identifies mentions of your brand and predefined competitors, logs detailed interactions in Google Sheets, and delivers a comprehensive email report.

Main Use Cases
- Monitor how your brand is mentioned by AI in response to relevant user queries.
- Track mentions of key competitors to understand AI's comparative positioning.
- Gain insights into AI's current knowledge and portrayal of your brand and market landscape.
- Automate daily intelligence gathering on AI-driven brand perception.

How it works
The workflow operates as a scheduled process, organized into these stages:

1. Configuration & Scheduling: Triggers daily (or can be run manually). Key variables are defined within the workflow: your brand name (e.g., "YourBrandName"), a list of queries to ask the AI, and a list of competitor names to track in responses.
2. AI Querying: For each predefined query, the workflow sends a request to the OpenAI ChatGPT API (via an HTTP Request node).
3. Response Analysis: Each AI response is processed by a Code node (sketched at the end of this description) to:
   - Check if your brand name is mentioned (case-insensitive).
   - Identify if any of the listed competitors are mentioned (case-insensitive).
   - Extract the core AI response content (limited to 500 characters for brevity in logs/reports).
4. Data Logging to Google Sheets: Detailed results for each query—including timestamp, date, the query itself, query index, your brand name, the AI's response, whether your brand was mentioned, and any errors—are appended to a specified Google Sheet.
5. Email Report Generation: A comprehensive HTML email report is compiled, summarizing:
   - Total queries processed, the number of times your brand was mentioned, total competitor mentions, and any errors encountered.
   - A summary of competitor mentions, listing each competitor and how many times they were mentioned.
   - A detailed table listing each query, whether your brand was mentioned, and which competitors (if any) were mentioned in the AI's response.
6. Automated Reporting: The generated HTML email report is sent to the specified recipients, providing a daily snapshot of AI interactions.

Summary Flow:
Schedule/Workflow Trigger → Initialize Brand, Queries, Competitors (in Code node) → For each Query: Query ChatGPT API → Process AI Response (Check for Brand & Competitor Mentions) → Log Results to Google Sheets → Generate Consolidated HTML Email Report → Send Email Notification

Benefits:
- Fully automated daily monitoring of AI responses concerning your brand and competitors.
- Provides objective insights into how AI models are representing your brand in user interactions.
- Delivers actionable competitive intelligence by tracking competitor mentions.
- Centralized logging in Google Sheets for historical analysis and trend spotting.
- Easily customizable with your specific brand, queries, competitor list, and reporting recipients.
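A sketch of what the response-analysis Code node might look like; the function and variable names are illustrative, but the logic (case-insensitive matching, 500-character truncation) follows the description above.

```typescript
// Analyze one AI response for brand and competitor mentions.
function analyzeResponse(response: string, brand: string, competitors: string[]) {
  const lower = response.toLowerCase();
  return {
    brandMentioned: lower.includes(brand.toLowerCase()),
    competitorsMentioned: competitors.filter((c) => lower.includes(c.toLowerCase())),
    excerpt: response.slice(0, 500), // keep logs and reports short
    timestamp: new Date().toISOString(),
  };
}

// Example:
// analyzeResponse("Acme and Globex both offer...", "Acme", ["Globex", "Initech"])
// -> { brandMentioned: true, competitorsMentioned: ["Globex"], ... }
```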
by Sidetool
Hello there! This is a supporting workflow for an Airtable Base that handles recurring tasks. The objective of the workflow is to create tasks on a recurring basis, depending on the Airtable setup. You can access the Airtable template here for complete context: Airtable Universe. The functionality of the workflow can be easily adapted to any data source. Feel free to contact us with any doubts or questions at http://sidetool.co. Use this as is, or adapt it to your existing Airtable Base – embrace automated simplicity! 🚀🌟
by Yosua Surojo
Who it's for
This workflow is for anyone who wants to build an automated, AI-enhanced reading list. Ideal for:
- Knowledge workers and researchers who collect and organize articles
- Students managing study materials
- Productivity hackers who use Telegram and Notion for personal knowledge management
- Anyone using the AI-Enhanced Knowledge Base Tracker Notion Template

How it works
This workflow takes any article link sent to your Telegram bot and automatically:
1. Parses the article into a clean title and body
2. Uses OpenAI to generate a 1–2 sentence highlight and topic tag
3. Saves it into your Notion database
4. Sends a confirmation message with the highlight and Notion link back to Telegram

Main steps:
- Telegram Trigger – Listens for incoming messages containing an article link.
- Fetch Article Title & Content – Calls the article-parser-api deployed on Vercel to fetch and parse the article content into structured JSON (title and content), as sketched below.
- Generate Highlight + Tag (AI Agent) – Processes the parsed content to generate Highlight and Type tag values.
- Structured Metadata for Notion – Adjusts the extracted data before saving it to Notion.
- Save Article to Notion Database – Inserts the article and generated metadata into your Notion knowledge base.
- Confirm Save via Telegram – Sends a confirmation message and the Notion page link back to the Telegram bot chat after the entry is created.

Setup
1. Create and connect your API credentials: Telegram Bot, OpenAI API Key, Notion Integration.
2. Deploy the article parser: use this repo: article-parser-api. Deploy it to Vercel or any serverless environment.
3. Link your Notion database: duplicate the AI-Enhanced Knowledge Base Tracker, copy the database URL, and connect it in the Notion node.
4. Test your workflow: click Execute workflow and send an article link to your Telegram bot. Once verified, activate the workflow so it runs automatically.

Requirements
- Telegram bot token
- OpenAI API key
- Notion integration and shared database
- A deployed article parser (e.g., article-parser-api)

Optional customization
- Edit the AI Agent prompt to change tone or tagging style
- Add filtering or additional fields in the Edit Fields node
- Trigger from other sources (e.g., Slack or Email)
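For reference, the call to the deployed parser from the Fetch Article Title & Content step boils down to something like the sketch below. The /api/parse route and query parameter are assumptions – check the article-parser-api repo for the actual route and response shape.

```typescript
// Hypothetical call to the deployed article-parser-api; verify the route
// and response shape against the repo before relying on this.
async function parseArticle(url: string): Promise<{ title: string; content: string }> {
  const endpoint = `https://your-parser.vercel.app/api/parse?url=${encodeURIComponent(url)}`;
  const res = await fetch(endpoint);
  if (!res.ok) throw new Error(`Parser failed: ${res.status}`);
  return res.json(); // expected: { title, content }
}
```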
by Niklas Hatje
Use case
When working with multiple teams, bugs must get in front of the right team as quickly as possible to be resolved. Normally this involves manual grooming of new bugs that have arrived in your ticketing system (in our case, Linear). We found this way too time-consuming. That's why we built this workflow.

What this workflow does
This workflow triggers every time a Linear issue is created or updated within a certain team. For us at n8n, we created one general team called Engineering where all bugs get added in the beginning. The workflow then checks if the issue meets the criteria to be auto-moved to a certain team. In our case, that means that the description is filled, that it has the bug label, and that it's in the Triage state. The workflow then classifies the bug using OpenAI's GPT-4 model before updating the team property of the Linear issue (a rough sketch of this step follows below). If the AI fails to classify a team, the workflow sends an alert to Slack.

Setup
1. Add your Linear and OpenAI credentials.
2. Change the team in the Linear Trigger to match your needs.
3. Customize your teams and their areas of responsibility in the Set me up node. Please use the format Teamname. Also, make sure that the team names match the names in Linear exactly.
4. Change the Slack channel in the Set me up node to your Slack channel of choice.

How to adjust it to your needs
- Play around with the context that you're giving to OpenAI, to make sure the model has enough knowledge about your teams and their areas of responsibility.
- Adjust the handling of AI failures to your needs.

How to enhance this workflow
At n8n we use this workflow in combination with some others. E.g. we have the following on top: an automation that enables everyone to add new bugs easily with the right data via a /bug command in Slack (check out this template if that's interesting to you).

This workflow was built using n8n version 1.30.0.
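A rough sketch of the classification step described above, assuming a plain chat-completions call; the team names and responsibility descriptions are placeholders for whatever you configure in the Set me up node.

```typescript
// Classify a bug report into one of your Linear teams. Team descriptions
// here are placeholders for the ones you define in the "Set me up" node.
const TEAMS: Record<string, string> = {
  Cloud: "hosting, deployments, scaling",
  Editor: "canvas, node UI, keyboard shortcuts",
  Integrations: "third-party nodes and credentials",
};

async function classifyBug(title: string, description: string): Promise<string | null> {
  const context = Object.entries(TEAMS).map(([t, d]) => `${t}: ${d}`).join("\n");
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [{
        role: "user",
        content: `Teams and their responsibilities:\n${context}\n\nBug: ${title}\n${description}\n` +
          `Reply with exactly one team name from the list, or NONE if unsure.`,
      }],
    }),
  });
  const answer = (await res.json()).choices[0].message.content.trim();
  return TEAMS[answer] !== undefined ? answer : null; // null -> send the Slack alert
}
```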
by Jimleuk
This n8n workflow demonstrates how to automate the often time-consuming form-filling tasks in the early stages of the tendering process: the Request for Proposal document, or "RFP". It does this by utilising a company's knowledgebase to generate question-and-answer pairs using Large Language Models.

How it works
- A buyer's RFP is submitted to the workflow as a digital document that can be parsed.
- Our first AI agent scans and extracts all questions from the document into list form.
- The supplier sets up an OpenAI assistant beforehand, preloaded with company brand, marketing and technical documents.
- The workflow loops through each of the buyer's questions and poses them to the OpenAI assistant (sketched below). The assistant's answers are captured until all questions are satisfied and are then exported into a new document for review.
- A sales team member is then able to use this document to respond quickly to the RFP before their competitors.

Example Webhook Request

```bash
curl --location 'https://<n8n_webhook_url>' \
--form 'id="RFP001"' \
--form 'title="BlueChip Travel and StarBus Web Services"' \
--form 'reply_to="jim@example.com"' \
--form 'data=@"k9pnbALxX/RFP Questionnaire.pdf"'
```

Requirements
An OpenAI account to use AI services.

Customising the workflow
OpenAI assistants are only one approach to hosting a company knowledgebase for AI to use. Exploring different solutions, such as building your own RAG-powered database, can sometimes yield better results in terms of control over how the data is managed and cost.
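The question-and-answer loop reduces to something like the sketch below. It uses plain chat completions as a simplified stand-in for the preloaded OpenAI assistant, so the knowledgebase retrieval step is omitted here; in the real workflow each question goes to the assistant instead.

```typescript
// Simplified stand-in for the Q&A loop: the real workflow poses each question
// to an OpenAI assistant preloaded with company documents; this only shows
// the looping shape using chat completions.
async function answerRfpQuestions(questions: string[]): Promise<{ q: string; a: string }[]> {
  const pairs: { q: string; a: string }[] = [];
  for (const q of questions) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "gpt-4o",
        messages: [
          { role: "system", content: "Answer RFP questions using our company knowledgebase." },
          { role: "user", content: q },
        ],
      }),
    });
    pairs.push({ q, a: (await res.json()).choices[0].message.content });
  }
  return pairs; // export these Q&A pairs into the review document
}
```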
by InfoGrab
This is a chatbot that responds in public channels through slash commands. I explain it in more detail in the YouTube video, but it's only available in Korean.

How it works
When you invoke the created slash command in Slack, the request arrives at the webhook. The Switch node then branches according to each slash command's request. Here, a slash command called /ask is connected to the chatbot, and the chatbot generates answers to the questions asked. The final node responds to the channel. A sketch of this branching follows below.

Set up steps
1. Create a Slack app.
2. Add the chat:write permission in Slack OAuth & Permissions > Scopes.
3. Create a Command in the Slack Slash Commands menu and enter the n8n Webhook node's URL.
4. Finish creating the slash command.
5. Enter the created command in the Switch node.
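Slack sends slash command invocations as form-encoded fields including command, text, and response_url, so the webhook branching amounts to something like this sketch; replying via response_url is standard Slack behaviour, while the chatbot call itself is a placeholder.

```typescript
// Sketch of the webhook branch: Slack sends `command`, `text` and
// `response_url` with every slash command invocation.
interface SlashPayload { command: string; text: string; response_url: string; }

declare function askChatbot(question: string): Promise<string>; // your LLM call

async function handleSlashCommand(p: SlashPayload) {
  switch (p.command) {
    case "/ask": {
      const answer = await askChatbot(p.text);
      // Post the answer back to the channel via Slack's response_url.
      await fetch(p.response_url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ response_type: "in_channel", text: answer }),
      });
      break;
    }
    default:
      // Unknown command: add more cases as you register more slash commands.
      break;
  }
}
```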
by Paul
AI Database Assistant with Smart Queries & PostgreSQL Integration

Description:
🚀 Transform Your Database into an Intelligent AI Assistant
This workflow creates a smart database assistant that safely handles natural language queries without crashing your system. It features a dual-agent architecture with built-in query limits and PostgreSQL optimization – perfect for commercial applications!

✅ Ideal for:
- SaaS developers building database search features 🔍
- Database administrators providing safe AI access 🛡️
- Business teams needing user-friendly data queries 📊
- Anyone wanting ChatGPT-like database interaction 🤖

🔧 How It Works
1️⃣ User asks a question – "Show me the top 10 popular products"
2️⃣ Main AI Agent – Interprets the request and enforces safety limits
3️⃣ SQL Sub-Agent – Generates precise PostgreSQL queries
4️⃣ Database executes – Returns formatted, limited results safely

⚡ Setup Instructions
1️⃣ Prepare Your Database
- Ensure PostgreSQL is accessible from n8n
- Note your table structure and column names
- Set up database connection credentials

2️⃣ Customize the Templates
- Replace [YOUR_TABLE_NAME] with your actual table name
- Update [YOUR_FIELDS] with your column names
- Modify the examples to match your use case
- **Important**: Keep all LIMIT clauses intact!

3️⃣ Configure the Agents
- Copy the Main Agent system message to your primary AI node
- Copy the Sub-Agent system message to your SQL generator node
- Connect the sub-workflow between both agents

4️⃣ Test & Deploy
- Test with sample queries like "Show me 5 recent items"
- Verify the query limits work (max 50 results)
- Deploy and monitor performance

🎯 Why Use This Workflow?
✔️ System Protection – Built-in limits prevent crashes from large queries
✔️ Natural Language – Users ask questions in plain English
✔️ Commercial Ready – Generic templates work with any database
✔️ Dual-Agent Safety – Smart interpretation + precise SQL generation
✔️ PostgreSQL Optimized – Handles complex schemas and data types

🚨 Critical Features
- **Query Limits**: Default 10, maximum 50 results (can be modified; see the sketch below)
- **Error Prevention**: No unlimited data retrieval
- **Smart Routing**: Natural language → Safe SQL → Formatted results
- **Customizable**: Works with any PostgreSQL database schema

🔗 Start building your AI database assistant today – safe, smart, and scalable!
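The safety limit comes down to clamping whatever the user (or the model) asks for before the query runs. A minimal sketch, assuming the pg client and a placeholder query; in the workflow itself the limits live in the agent system messages.

```typescript
import { Pool } from "pg";

// Clamp result sizes so a vague question can never trigger an unbounded scan.
// Table and column names are placeholders -- substitute your own schema.
const pool = new Pool(); // reads PG* environment variables
const DEFAULT_LIMIT = 10;
const MAX_LIMIT = 50;

async function runLimitedQuery(baseQuery: string, requestedLimit?: number) {
  const limit = Math.min(Math.max(requestedLimit ?? DEFAULT_LIMIT, 1), MAX_LIMIT);
  // Parameterize the limit; never interpolate user text into SQL directly.
  return pool.query(`${baseQuery} LIMIT $1`, [limit]);
}

// e.g. runLimitedQuery("SELECT name, views FROM products ORDER BY views DESC", 100)
// executes with LIMIT 50 -- the clamp silently enforces the ceiling.
```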