by Nukeador
## Who is this for?
BlueSky users who want to send a "welcome message" to new followers as a private message.

## What this workflow does
This workflow checks for new followers on BlueSky every 60 minutes and sends a private message to each new one; the follower diff is sketched at the end of this entry.

## Setup
1. Create a BlueSky app password with private-message access.
2. Fill in your credentials and the message text in the corresponding nodes (see sticky notes).
3. Manually run the `Save followers to file` node once to generate your initial followers list.
4. Enable the workflow.

## How to customize this workflow to your needs
You can adjust the check frequency, but be careful to avoid hitting BlueSky's rate limit of 100 `createSession` calls per day.

## Feedback or comments
You can leave comments, feedback, or suggested improvements for this workflow on the n8n forums.
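The core of the check is a set difference between the saved follower list and the latest fetch. Below is a minimal n8n Code node sketch of that diff; the input field names (`saved`, `current`) are assumptions for this sketch and should be mapped to your own nodes' output.

```javascript
// n8n Code node: find followers that are not yet in the saved list.
// Assumes the previous nodes output { saved: string[], current: string[] }
// of follower DIDs; these field names are placeholders for this sketch.
const { saved, current } = $input.first().json;

const known = new Set(saved);
const newFollowers = current.filter((did) => !known.has(did));

// Emit one item per new follower for the "send private message" branch.
return newFollowers.map((did) => ({ json: { did } }));
```

Note that each run that logs in fresh consumes one of the 100 daily `createSession` calls, which is why the check frequency above matters.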
by Angel Menendez
# Phishing Email Detection and Reporting with n8n

## Who is this for?
This workflow is designed for IT teams, security professionals, and managed service providers (MSPs) looking to automate the process of detecting, analyzing, and reporting phishing emails.

## What problem is this workflow solving?
Phishing emails are a significant cybersecurity threat, and manually detecting and reporting them is time-consuming and prone to errors. This workflow streamlines the process by automating email analysis, generating detailed reports, and logging incidents in a centralized system like Jira.

## What this workflow does
This workflow automates phishing email detection and reporting by integrating Gmail and Microsoft Outlook email triggers, analyzing the content and headers of incoming emails, and generating Jira tickets for flagged phishing emails. Here's what happens:

- **Email Triggers**: Captures incoming emails from Gmail or Microsoft Outlook.
- **Email Analysis**: Extracts email content, headers, and metadata for analysis (a header-signal sketch closes this entry).
- **HTML Screenshot**: Converts the email's HTML body into a visual screenshot.
- **AI Phishing Detection**: Leverages ChatGPT to analyze the email and detect potential phishing indicators.
- **Jira Integration**: Automatically creates a Jira ticket with detailed analysis and attaches the email screenshot for review by the security team.
- **Customizable Reports**: Includes options to customize ticket descriptions and adapt the workflow to organizational needs.

## Setup
- **Authentication**: Set up Gmail and Microsoft Outlook OAuth credentials in n8n to access your email accounts securely.
- **API Keys**: Add API credentials for the HTML screenshot service (hcti.io) and ChatGPT.
- **Jira Integration**: Configure your Jira project and issue types in the workflow.
- **Workflow Configuration**: Update sticky notes and nodes to include any additional setup or configuration details unique to your system.

## How to customize this workflow to your needs
- **Email Filters**: Modify email triggers to filter specific subjects or sender addresses.
- **Analysis Scope**: Adjust the ChatGPT prompt to refine phishing detection logic.
- **Integration**: Replace Jira with your preferred ticketing system or modify the ticket fields to include additional information.

This workflow provides an end-to-end automated solution for phishing email management, enhancing efficiency and reducing security risks. It's perfect for teams looking to minimize manual effort and improve incident response times.
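To give the AI stronger signals to work with, a small pre-processing step can flag common header inconsistencies before the ChatGPT call. The sketch below assumes fields named `headers` and `from`; map them to the actual output of your Gmail or Outlook trigger.

```javascript
// n8n Code node: pre-compute simple header signals to enrich the
// ChatGPT prompt. Field names (headers, from) are assumptions; map
// them to your email trigger's real output.
const { headers = {}, from = '' } = $input.first().json;

const replyTo = headers['reply-to'] || '';
const authResults = headers['authentication-results'] || '';

const domain = (addr) => (addr.match(/@([\w.-]+)/) || [])[1] || '';

return [{
  json: {
    fromDomain: domain(from),
    replyToDomain: domain(replyTo),
    replyToMismatch: !!replyTo && domain(from) !== domain(replyTo),
    spfPass: /spf=pass/i.test(authResults),
    dkimPass: /dkim=pass/i.test(authResults),
  },
}];
```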
by Angel Menendez
# Analyze & Sort Suspicious Email Contents with ChatGPT and Jira

## Who is this for?
This workflow is tailored for IT security teams, managed service providers (MSPs), and organizations aiming to streamline the detection and reporting of phishing emails. It's especially useful for teams handling high email volumes and requiring quick, automated analysis.

## What problem is this workflow solving?
Phishing emails pose a significant cybersecurity threat, and manual review processes are time-consuming and prone to human error. This workflow automates the identification of malicious emails, provides AI-driven insights, and generates structured reports, enabling faster and more efficient responses to email-based threats.

## What this workflow does
This workflow integrates Gmail or Microsoft Outlook to monitor and capture incoming emails. It processes the email content and headers, converts the email's body to a visual screenshot for clarity, and uses ChatGPT's advanced AI to analyze the email for phishing indicators. Based on the analysis, it categorizes emails as potentially malicious or benign, creating detailed Jira tickets for each case. Attachments, including the email body and screenshots, are automatically uploaded for comprehensive reporting.

Key steps include:
- **Email Integration**: Captures emails from Gmail or Microsoft Outlook.
- **Content Processing**: Extracts and organizes email content and metadata.
- **AI Analysis**: Uses ChatGPT to evaluate email content and headers.
- **Classification**: Categorizes emails as malicious or benign (a verdict-parsing sketch closes this entry).
- **Automated Reporting**: Creates Jira tickets with detailed analysis and attachments.

## Setup
- **Authentication**: Configure Gmail or Microsoft Outlook credentials in n8n.
- **API Keys**: Add credentials for the HTML screenshot service (hcti.io) and OpenAI.
- **Jira Configuration**: Set up project and issue types in the Jira nodes.
- **Customization**: Update sticky notes and nodes to fit your organizational requirements, such as modifying the AI prompt or Jira ticket fields.

## How to customize this workflow to your needs
- Adjust email triggers to include or exclude specific senders or subjects.
- Refine the AI prompt in the ChatGPT node to tailor phishing detection criteria.
- Modify Jira ticket content to include additional fields or match specific workflows.

This workflow is ideal for automating email threat detection, reducing response times, and enhancing overall cybersecurity processes. By leveraging AI-powered insights, it helps organizations stay ahead of phishing attacks.
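For the classification step, it helps to prompt ChatGPT to return structured JSON and to parse it defensively before the malicious/benign branch. A sketch, assuming the model was instructed to reply with `{"malicious": true, "confidence": 0.92, "indicators": [...]}`; the exact field path into the OpenAI node's output varies, so adjust it to what your node returns:

```javascript
// n8n Code node: parse the model's verdict defensively before the If node.
// The path message.content is an assumption about the upstream node output.
const raw = $input.first().json.message?.content ?? '{}';

let verdict;
try {
  verdict = JSON.parse(raw);
} catch {
  // Treat unparseable AI output as suspicious so it gets manual review.
  verdict = { malicious: true, confidence: 0, indicators: ['unparseable AI output'] };
}

return [{ json: verdict }];
```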
by Yaron Been
# 🔍 Scrape Glassdoor with Bright Data

Designed for sales teams, recruiters, and marketers aiming to automate job discovery and prospecting. This workflow scrapes Glassdoor job listings using Bright Data and automatically generates targeted pitches using AI, streamlining lead identification and outreach.

## 🧩 How It Works
This automation leverages n8n, Bright Data, Google Sheets, and OpenAI:

1. **Trigger**
   - Starts with a custom form input (Location, Keyword, Country).
2. **Bright Data Job Scrape**
   - Triggers a Bright Data dataset snapshot via HTTP Request.
   - Polls snapshot progress using a Wait node, ensuring data readiness (the readiness check is sketched at the end of this entry).
   - Retrieves the full job listings dataset once ready.
3. **Google Sheets Integration**
   - Writes detailed job data (company, role, location, overview, metrics) into a Google Sheet.
   - Uses a pre-built template for organized data storage.
4. **Automated Pitch Generation (AI)**
   - Splits listings into actionable parts: company name, title, and description.
   - Sends data to OpenAI (via LangChain) to generate relevant pitches or icebreakers.
   - Saves generated content back into the same sheet for easy access.

## ✅ Requirements
Ensure you have the following:

- **Google Sheets**
  - Google account
  - Template sheet with columns for job details and AI-generated pitches
- **Bright Data**
  - Active account with Dataset API access
  - API key and dataset ID
- **OpenAI**
  - Valid OpenAI API key for GPT models
- **n8n Environment**
  - Nodes: HTTP Request, Wait, If, Google Sheets, Split Out, LangChain (OpenAI)
  - Credentials: Google Sheets OAuth2, Bright Data API credentials, OpenAI API key

## ⚙️ Setup Instructions
**Step 1: Prepare Google Sheets**
- Copy the provided Google Sheets template.
- Do not change the headers.

**Step 2: Import & Configure Workflow in n8n**
- Import the workflow JSON file.
- In the Google Sheets node, link to your copied sheet and confirm the correct tab name.

**Step 3: Configure Bright Data**
- Replace `<YOUR_BRIGHT_DATA_API_KEY>` with your real key.
- Set your dataset ID in all HTTP Request nodes.

**Step 4: Configure OpenAI (LangChain)**
- Connect your OpenAI API key to the LangChain node.
- Customize the prompt to match your tone and outreach style.

**Step 5: Testing & Scheduling**
- Test via the manual form trigger.
- Schedule runs or leave the form enabled for on-demand use.

## 🧠 Tips & Best Practices
- Use specific keywords and locations for better results.
- Adjust polling intervals based on dataset size.
- Refine AI prompts regularly to improve pitch quality.
- Clean unused columns from your sheet to boost performance.

## 💬 Support & Feedback
For help or customization:
- 📧 Email: Yaron@nofluff.online
- 📺 YouTube: @YaronBeen
- 🔗 LinkedIn: linkedin.com/in/yaronbeen
- 📚 Bright Data Docs: docs.brightdata.com/introduction
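The poll-and-wait loop in step 2 boils down to one decision per iteration: is the snapshot ready yet? A minimal Code node sketch of that check follows; the `status` values reflect Bright Data's dataset API documentation at the time of writing, so verify them against docs.brightdata.com.

```javascript
// n8n Code node placed after the "check snapshot progress" HTTP Request.
// The If node routes on isReady: true -> fetch the dataset,
// false -> loop back through the Wait node.
const { status, snapshot_id } = $input.first().json;

return [{
  json: {
    snapshot_id,
    isReady: status === 'ready', // assumed status value; confirm in the docs
  },
}];
```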
by Mihai Farcas
This n8n workflow creates a financial analysis tool that generates reports on a company's quarterly earnings using the capabilities of OpenAI GPT-4o-mini, Google's Gemini AI, and Pinecone's vector search. By analyzing PDFs of any company's earnings reports from their Investor Relations page, this workflow can answer complex financial questions and automatically compile findings into a structured Google Doc.

## How it works

### Data loading and indexing
- Fetches links to PDF earnings documents from a Google Sheet containing a list of file links.
- Downloads the PDFs from Google Drive.
- Parses the PDFs, splits the text into chunks (sketched at the end of this entry), and generates embeddings using the Embeddings Google AI node (text-embedding-004 model).
- Stores the embeddings and corresponding text chunks in a Pinecone vector database for semantic search.

### Report generation with AI agent
- Uses an AI Agent node with a specifically crafted system prompt; the agent orchestrates the entire process.
- The agent uses a Vector Store Tool to access and retrieve information from the Pinecone database.

### Report delivery
- Saves the generated report as a Google Doc in a specified Google Drive location.

## Set up steps
1. **Google Cloud project & Vertex AI API**: Create a Google Cloud project and enable the Vertex AI API for it.
2. **Google AI API key**: Obtain a Google AI API key from Google AI Studio.
3. **Pinecone account and API key**: Create a free account on the Pinecone website, obtain your API key from your Pinecone dashboard, and create an index named `company-earnings` in your Pinecone project.
4. **Google Drive (download and save financial documents)**: Pick a company you want to analyze, download their quarterly earnings PDFs, save them to Google Drive, and create a Google Sheet that stores a list of file URLs pointing to those PDFs.
5. **Configure credentials** in your n8n environment for: Google Sheets OAuth2, Google Drive OAuth2, Google Docs OAuth2, Google Gemini (PaLM) API (using your Google AI API key), and Pinecone API (using your Pinecone API key).
6. **Import and configure the workflow**: Import this workflow into your n8n instance, update the List Of Files To Load (Google Sheets) node to point to your Google Sheet, update the Download File From Google Drive node to point to the column containing the file URLs, and update the Save Report to Google Docs node to point to the Google Doc where you want the report saved.
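The chunking step is worth understanding even though a text splitter node handles it for you: fixed-size character windows with overlap keep context from being lost at chunk boundaries. A standalone JavaScript sketch with illustrative sizes:

```javascript
// Sketch of what the text-splitting step does: fixed-size character
// windows with overlap so context carries across chunk boundaries.
// chunkSize and overlap are illustrative; tune them to your documents.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// Each chunk is then embedded (text-embedding-004) and upserted,
// together with its text, into the "company-earnings" Pinecone index.
const chunks = chunkText('...full text parsed from an earnings PDF...');
```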
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for certain types of data extraction tasks where attributes are unknown or need to remain flexible.

The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template, so they can be changed at any time without needing to edit the template. This seriously cuts down on maintenance requirements and makes the template reusable for any number of tables at little cost.

Check out the video demo I did for n8n Studio here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Check out the example Airtable here: https://airtable.com/appAyH3GCBJ56cfXl/shrXzR1Tj99kuQbyL
Looking for the Baserow version? https://n8n.io/workflows/2780-ai-data-extraction-with-dynamic-prompts-and-baserow/

## How it works
Given an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacting to any changes to either the "input" or the fields and automatically updating the rows accordingly.

The key is that Airtable fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. The n8n template reads these column descriptions, aka "prompts", as instructions for the tasks to perform on the "input" (a prompt-building sketch closes this entry).

In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it, such as full name, address, last position, and years of experience.

## How to use
First publish this template and ensure it's accessible via its webhook URL. You then have to run the "create airtable webhooks" mini-flow to configure your Airtable to send change events to the n8n template. This mini-flow exists in the template, but you'll have to update the IDs. Check the template for more instructions.

## Requirements
- Airtable for tables/database
- OpenAI for the LLM and extraction. Feel free to choose another LLM if preferred.

## Customising this workflow
If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single-line text.
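To make the pattern concrete, here is a hedged Code node sketch that folds the column descriptions into a single extraction prompt. It assumes a prior node has fetched the table schema as `{ fields: [{ name, description }] }`; Airtable's base schema endpoint exposes field descriptions, but verify the exact response shape for your base.

```javascript
// n8n Code node: turn Airtable field descriptions into extraction
// instructions for the LLM. The input shape is an assumption for this
// sketch; map it to your schema-fetching node's real output.
const { fields = [] } = $input.first().json;

const instructions = fields
  .filter((f) => f.description) // only columns that carry a prompt
  .map((f) => `- "${f.name}": ${f.description}`)
  .join('\n');

return [{
  json: {
    systemPrompt:
      'Extract the following attributes from the attached resume. ' +
      'Return JSON keyed by column name.\n' + instructions,
  },
}];
```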
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for certain types of data extraction tasks where attributes are unknown or need to remain flexible.

The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template, so they can be changed at any time without needing to edit the template. This seriously cuts down on maintenance requirements and makes the template reusable for any number of tables at little cost.

Check out the n8n Studio episode here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Community post here: https://community.n8n.io/t/dynamic-prompts-with-n8n-baserow-and-airtable/72052
Looking for the Airtable version? https://n8n.io/workflows/2771-ai-data-extraction-with-dynamic-prompts-and-airtable/

## How it works
Given an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacting to any changes to either the "input" or the fields and automatically updating the rows accordingly.

The key is that Baserow fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. The n8n template reads these column descriptions, aka "prompts", as instructions for the tasks to perform on the "input".

In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it, such as full name, address, last position, and years of experience.

## How to use
First publish this template and ensure it's accessible via its webhook URL. You then have to complete the "create Baserow webhooks" steps to configure your Baserow to send change events to the n8n template (a payload-filtering sketch closes this entry). Baserow webhooks are created in the Baserow web interface. Check the template for more instructions.

## Requirements
- Baserow for tables/database
- OpenAI for the LLM and extraction. Feel free to choose another LLM if preferred.

## Customising this workflow
If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single-line text.
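Because Baserow delivers change events over webhooks, a small filtering step keeps the template from reacting to irrelevant events. A sketch, with the caveat that the payload field names used here (`event_type`, `items`) are assumptions to verify against a real delivery from your Baserow instance:

```javascript
// n8n Code node: only re-run extraction for meaningful Baserow events.
// The payload shape (event_type, items) is an assumption; inspect a
// real webhook delivery before relying on it.
const body = $input.first().json.body ?? $input.first().json;

const relevant = ['rows.created', 'rows.updated'].includes(body.event_type);
const rows = relevant ? (body.items ?? []) : [];

// One n8n item per changed row; downstream nodes re-run the extraction.
return rows.map((row) => ({ json: row }));
```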
by Chris Carr
# Split Test Agent Prompts with Supabase and OpenAI

## Use Case
It's often useful to test different settings for a large language model in production against various metrics. Split testing is a good method for doing this.

## What it Does
This workflow randomly assigns chat sessions to one of two prompts: the baseline and the alternative. The agent uses the same prompt for all interactions in a given chat session.

## How it Works
- When a message arrives, a table containing the session ID and which prompt to use is checked to see if the chat session already exists.
- If it does not, the session ID is added to the table and a prompt is randomly assigned (sketched at the end of this entry).
- These values are then used to generate a response.

## Setup
1. Create a table in Supabase called `split_test_sessions`. It needs the following columns: `session_id` (text) and `show_alternative` (bool).
2. Add your Supabase, OpenAI, and PostgreSQL credentials.
3. Modify the Define Path Values node to set the baseline and alternative prompt values.
4. Activate the workflow and test by sending messages through n8n's built-in chat.
5. Experiment with different chat sessions to see both prompts in action.

## Next Steps
- Modify the workflow to test different LLM settings, such as temperature.
- Add a method to measure the efficacy of the two prompts.
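The assignment itself is one line; what matters is persisting it so every later message in the session reads the stored flag instead of re-rolling. A minimal Code node sketch, assuming the chat trigger exposes a `sessionId` field:

```javascript
// n8n Code node: assign a prompt variant to a brand-new chat session.
// Column names match the split_test_sessions table from the Setup section.
const sessionId = $input.first().json.sessionId; // field name is an assumption

// 50/50 split between baseline and alternative.
const showAlternative = Math.random() < 0.5;

// Insert this row via the Supabase node; later messages in the session
// look the flag up instead of re-rolling, so the prompt stays stable.
return [{ json: { session_id: sessionId, show_alternative: showAlternative } }];
```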
by Mark Shcherbakov
## Video Guide
I prepared a detailed guide that demonstrates the complete process of building a trading agent automation using n8n and Telegram, seamlessly integrating various functions for stock analysis.

Youtube Link

## Who is this for?
This workflow is perfect for traders, financial analysts, and developers looking to automate stock analysis interactions via Telegram. It's especially valuable for those who want to leverage AI tools for technical analysis without needing to write complex code.

## What problem does this workflow solve?
Many traders want real-time analysis of stock data but lack the technical expertise or tools to perform in-depth analysis. This workflow lets users interact with an AI trading agent through Telegram for seamless stock analysis, chart generation, and technical evaluation, all without manual intervention.

## What this workflow does
This workflow uses n8n to build an end-to-end automation for stock analysis through Telegram:

- Receiving messages via a Telegram bot.
- Processing audio or text messages for trading queries (a routing sketch closes this entry).
- Transcribing audio using the OpenAI API for interpretation.
- Generating and retrieving charts based on user-specified parameters.
- Performing technical analysis on the generated charts.
- Sending the analyzed results back through Telegram.

## Setup
1. **Prepare Airtable**: Create a simple table to store tickers.
2. **Prepare the Telegram bot**: Ensure your Telegram bot is set up correctly and listening for new messages.
3. **Replace credentials**: Update all nodes with the correct credentials and API keys for the services involved.
4. **Configure API endpoints**: Ensure the chart service URLs are set correctly so they interact with the corresponding APIs properly.
5. **Start interacting**: Message your bot to initiate an analysis; specify ticker symbols and desired chart styles as required.
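Routing text versus voice messages is the first branch of the flow. The Telegram Bot API puts voice notes under `message.voice` and plain text under `message.text`, so a small Code node can set a flag for a Switch node; everything beyond those two standard fields is an assumption to adapt to your workflow.

```javascript
// n8n Code node: classify the incoming Telegram update for a Switch node.
// message.text and message.voice are standard Telegram Bot API fields.
const msg = $input.first().json.message ?? {};

let route = 'unsupported';
if (msg.text) route = 'text';
else if (msg.voice) route = 'audio'; // voice.file_id -> download -> OpenAI transcription

return [{
  json: {
    route,
    chatId: msg.chat?.id,
    text: msg.text ?? null,
    fileId: msg.voice?.file_id ?? null,
  },
}];
```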
by Joseph LePage
# Compare Local Ollama Vision Models for Image Analysis using Google Docs

Process images using locally hosted Ollama vision models to extract detailed descriptions, contextual insights, and structured data. Save results directly to Google Docs for efficient collaboration.

## Who is this for?
This workflow is ideal for developers, data analysts, marketers, and AI enthusiasts who need to process and analyze images using locally hosted Ollama vision language models. It's particularly useful for tasks requiring detailed image descriptions, contextual analysis, and structured data extraction.

## What problem is this workflow solving? / Use Case
The workflow solves the challenge of extracting meaningful insights from images in exhaustive detail, such as identifying objects, analyzing spatial relationships, extracting textual elements, and providing contextual information. This is especially helpful for applications in real estate, marketing, engineering, and research.

## What this workflow does
This workflow:
- Downloads an image file from Google Drive.
- Processes the image using multiple Ollama vision models (e.g., Granite3.2-Vision, Gemma3, Llama3.2-Vision); the request format is sketched at the end of this entry.
- Generates detailed markdown-based descriptions of the image.
- Saves the output to a Google Docs file for easy sharing and further analysis.

## Setup
1. Ensure you have access to a local instance of Ollama: https://ollama.com/
2. Pull the Ollama vision models.
3. Configure your Google Drive and Google Docs credentials in n8n.
4. Provide the image file ID from Google Drive in the designated node.
5. Update the list of Ollama vision models.
6. Test the workflow by clicking 'Test Workflow' to trigger the process.

## How to customize this workflow to your needs
- Replace the image source with another provider if needed (e.g., AWS S3 or Dropbox).
- Modify the prompts in the "General Image Prompt" node to suit specific analysis requirements.
- Add additional nodes for post-processing or for integrating results into other platforms like Slack or HubSpot.

## Key Features
- **Detailed Image Analysis**: Extracts comprehensive details about objects, spatial relationships, text elements, and contextual settings.
- **Multi-Model Support**: Utilizes multiple vision models dynamically for optimal performance.
- **Markdown Output**: Formats results in markdown for easy readability and documentation.
- **Google Drive Integration**: Seamlessly downloads images and saves results to Google Docs.
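For reference, this is roughly the request sent to a local Ollama instance for each model in the list. The `/api/generate` endpoint and the base64 `images` array are part of Ollama's documented API; the model name and prompt here are just examples.

```javascript
// Sketch of one vision-model call against a local Ollama instance.
const imageBase64 = '...base64-encoded image downloaded from Google Drive...';

const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2-vision',  // swapped per model when comparing
    prompt: 'Describe this image in exhaustive detail, formatted as markdown.',
    images: [imageBase64],     // Ollama accepts base64-encoded images here
    stream: false,
  }),
});

const { response } = await res.json(); // the model's markdown description
```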
by lin@davoy.tech
This workflow template, "Chinese Translator via Line x OpenRouter (Text & Image)", is designed to provide seamless Chinese translation services directly within the LINE messaging platform. By integrating with OpenRouter.ai and advanced language models like Qwen, this workflow translates text or images containing Chinese characters into pinyin and English, making it an invaluable tool for language learners, travelers, and businesses operating in multilingual environments.

## Who is this for?
This template is ideal for:
- **Language Learners** who want to practice Chinese by receiving instant translations of text or images.
- **Travelers** looking for quick translations of Chinese signs, menus, or documents while abroad.
- **Educators** teaching Chinese language courses who need tools to assist students with translations.
- **Businesses** operating in multilingual markets that require efficient communication tools.
- **Automation Enthusiasts** seeking to build intelligent chatbots that can handle language translation tasks.

## What Problem Does This Workflow Solve?
Translating Chinese text or images into English and pinyin can be challenging, especially for beginners or those without access to reliable translation tools. This workflow solves that problem by:
- Automatically detecting and translating text or images containing Chinese characters.
- Providing accurate translations in both pinyin and English for better comprehension.
- Supporting multiple input formats (text, images) to cater to diverse user needs.
- Sending replies directly to users via the LINE messaging platform, ensuring accessibility and ease of use.

## What This Workflow Does
1. **Receive messages via the LINE webhook.** The workflow is triggered when a user sends a message (text, image, or other types) to the LINE bot.
2. **Display a loading animation.** A loading animation reassures the user that their request is being processed.
3. **Route input types.** A Switch node determines the type of input (text, image, or unsupported formats). Text is sent to the OpenRouter.ai API for translation; for images, the workflow extracts the image content, converts it to base64, and sends it to the API; unsupported formats trigger a polite response indicating the limitation.
4. **Translate content using OpenRouter.ai.** The workflow leverages Qwen models: for text inputs it returns Chinese characters, pinyin, and English translations; for images it extracts and translates the text using the Qwen-VL model, which accepts image input.
5. **Reply with translations.** The translated content is formatted and sent back to the user via the LINE Reply API, sketched below.

## Setup Guide

### Pre-Requisites
- Access to the LINE Developers Console to configure your webhook and channel access token.
- An OpenRouter.ai account with credentials to access Qwen models.
- Basic knowledge of APIs, webhooks, and JSON formatting.

### Step-by-Step Setup
1. **Configure the LINE webhook**: In the LINE Developers Console, set up a webhook to receive incoming messages. Copy the webhook URL from the Line Webhook node and paste it into the LINE Console. Remove any "test" configurations when moving to production.
2. **Set up OpenRouter.ai**: Create an account on OpenRouter.ai, obtain your API credentials, and connect them to the OpenRouter nodes in the workflow.
3. **Test the workflow**: Send text or images to the LINE bot to verify that translations are processed and replied to correctly.
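The final reply step uses LINE's documented `/v2/bot/message/reply` endpoint. A minimal sketch, where the reply token comes from the incoming webhook event and the channel access token is the one from the LINE Developers Console; the translation string is a placeholder:

```javascript
// Sketch of the LINE Reply API call made in the last step.
const replyToken = '...replyToken from the incoming webhook event...';
const channelAccessToken = '...your LINE channel access token...';
const translation = '你好 / nǐ hǎo / Hello';

await fetch('https://api.line.me/v2/bot/message/reply', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${channelAccessToken}`,
  },
  body: JSON.stringify({
    replyToken,
    messages: [{ type: 'text', text: translation }],
  }),
});
```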
## How to Customize This Workflow to Your Needs
- **Add More Languages**: Extend the workflow to support additional languages by modifying the API calls.
- **Enhance Image Processing**: Integrate more advanced OCR tools to improve text extraction from complex images.
- **Customize Responses**: Modify the reply format to include additional details, such as grammar explanations or cultural context.
- **Expand Use Cases**: Adapt the workflow for specific industries, such as tourism or e-commerce, by tailoring the translations to relevant vocabulary.

## Why Use This Template?
- **Real-Time Translation**: Provides instant translations of text and images, improving user experience and accessibility.
- **Multimodal Support**: Handles both text and image inputs, catering to diverse user needs.
- **Scalable**: Easily integrates into existing systems or scales to support multiple users and workflows.
- **Customizable**: Tailor the workflow to suit your specific audience or industry requirements.
by Abdullah Maftah
# Auto Source LinkedIn Candidates with GPT-4 Boolean Search & Google X-ray

## How It Works
1. **User Input**: The user pastes a job description or ideal candidate specifications into the workflow.
2. **Boolean Search String Generation**: OpenAI processes the input and generates a precise LinkedIn Boolean search string formatted as `site:linkedin.com/in ("Job Title" AND "Skill1" AND "Skill2")`. This search string is optimized to find relevant LinkedIn profiles matching the provided criteria.
3. **Google Sheet Creation**: A new Google Sheet is automatically created within a specified document to store extracted LinkedIn profile URLs.
4. **Google Search Execution**: The workflow sends a search request to Google using an HTTP node with the generated Boolean string.
5. **Iterative Search & Data Extraction**: The workflow retrieves the first 10 results from Google. If the desired number of LinkedIn profiles has not been reached, the workflow loops, fetching the next set of 10 results until the If condition is met.
6. **Data Storage**: The workflow extracts LinkedIn profile URLs from the search results (sketched at the end of this entry) and saves them to the newly created Google Sheet for further review.

## Setup Steps

### 1. API Key Configuration
Under "Credentials", add your OpenAI API key from your OpenAI account settings. This key is used to generate the LinkedIn Boolean search string.

### 2. Adjust Search Parameters
Navigate to the "If" node and update the condition to define the desired number of LinkedIn profiles to extract. The default is 50, but you can set it to any number based on your needs.

### 3. Establish Google Sheets Connection
- Connect your **Google Sheets account** to the workflow.
- **Create a document** to store the sourced LinkedIn profiles.
- The workflow automatically creates a new sheet for each new search, so no manual setup is needed.

### 4. Authenticate Google Search
- Google search **requires authentication** for better results.
- Use the Cookie-Editor browser extension to export your header string and enable authenticated Google searches within the workflow.

### 5. Run the Workflow
**Execute** the workflow and monitor the **Google Sheet** for newly added LinkedIn profiles.

## Benefits
- ✅ Automates profile sourcing, reducing manual search time.
- ✅ Generates precise LinkedIn Boolean search strings tailored to job descriptions.
- ✅ Extracts and saves LinkedIn profiles efficiently for recruitment efforts.

This solution leverages OpenAI and advanced search techniques to enhance your talent sourcing process, making it faster and more accurate! 🚀
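The loop in steps 4 through 6 comes down to paginating Google with the `&start=` offset and pulling `linkedin.com/in` URLs out of each results page. A hedged Code node sketch of the extraction; the regex is deliberately simple and may need tightening if your result pages wrap links in tracking redirects:

```javascript
// n8n Code node: extract LinkedIn profile URLs from the Google results
// HTML returned by the HTTP Request node (the field name `data` is an
// assumption; match it to your node's actual output).
const html = $input.first().json.data ?? '';

const matches = [...html.matchAll(/https?:\/\/[\w.]*linkedin\.com\/in\/[\w%.-]+/g)]
  .map((m) => m[0]);

// Deduplicate before appending to the Google Sheet. The If node compares
// the running total against the target (default 50) and, if short, loops
// back to fetch the next page with &start=10, 20, 30, ...
return [...new Set(matches)].map((url) => ({ json: { profileUrl: url } }));
```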