by lin@davoy.tech
The Chinese Translator workflow automates the translation of text into Chinese characters, pinyin, and English via the Line Messaging API. It leverages OpenRouter.ai to call advanced language models such as Qwen for accurate translations, and keeps the interaction smooth with loading animations and timely replies.

Purpose
This workflow aims to:
- Provide users with real-time translations of input text into Chinese characters, pinyin, and English.
- Deliver a seamless user experience through interactive features such as loading animations and quick replies.
- Enable easy integration with the Line Messaging API for scalable deployment.

Key Features
- Real-Time Translation: Translates user-inputted text instantly using OpenRouter.ai's standardized API.
- Comprehensive Output: Delivers Chinese characters, pinyin, and an English translation for each word or phrase.
- Interactive User Experience: Shows a loading animation so users know the workflow is processing their request.
- Line Integration: Uses Line webhooks and the Reply API to connect users with the translation service.

Data Flow
1. Receiving Input (Node: Line Webhook): Captures incoming messages from Line users and extracts the text content and reply token from the webhook payload (see the sketch below).
2. Loading Animation (Node: Line Loading Animation): Sends a loading animation back to the user, indicating that the workflow is processing the request.
3. Translation Processing (Node: Use OpenRouter): Sends the extracted text to OpenRouter.ai's API, using the Qwen model to produce Chinese characters, pinyin, and an English translation.
4. Sending Response (Node: Line Reply): Formats the translation results into a readable text message and sends it back to the user via Line's Reply API.

Setup Instructions
Prerequisites
- Line Developer Account: Create a Line channel to obtain the credentials needed for webhooks and messaging.
- OpenRouter.ai Account: Set up an account and configure access to their language models.

Steps to Configure
1. Set Up Line Webhook: Copy the webhook URL from the Line Webhook node in n8n and register it in the Line Developers Console.
2. Configure OpenRouter.ai: Obtain API credentials from OpenRouter.ai and add them to the Use OpenRouter node within the workflow.
3. Adjust Workflow Settings: Ensure the timezone is set to Asia/Bangkok, and verify that all nodes are correctly connected and configured with the appropriate credentials.

Intended Audience
This workflow is ideal for:
- Language Learners: Seeking quick translations and pronunciation guides for Chinese language studies.
- Travelers: Looking to communicate effectively while traveling in Chinese-speaking regions.
- Businesses: Aiming to provide multilingual support to customers and clients.

Benefits
- Enhanced Learning: Comprehensive translations, including pinyin, aid language acquisition.
- User-Friendly Interface: Real-time loading animations and prompt replies ensure a smooth user experience.
- Scalable Deployment: Integrates easily with Line's extensive user base for widespread accessibility.
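For orientation, the extraction in step 1 of the data flow can be pictured as a small Code-node snippet. This is a minimal sketch assuming the standard Line webhook body shape, not the template's exact code:

```typescript
// Sketch of an n8n Code node that pulls the user's text and reply token out
// of a Line Messaging API webhook payload. Assumes the standard Line body:
// { events: [{ replyToken, message: { type, text } }] }.
// $input is the n8n Code-node global.
const body = $input.first().json.body;
const event = body.events?.[0];

if (!event || event.message?.type !== 'text') {
  // Ignore stickers, images, etc.; only text messages get translated.
  return [];
}

return [{ json: { text: event.message.text, replyToken: event.replyToken } }];
```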
by Jimleuk
This n8n template monitors active support issues in Linear.app, tracking the mood of the ongoing conversation between reporter and assignee using sentiment analysis. When sentiment dips into the negative, a notification is sent via Slack to alert the team.

How it works
- A scheduled trigger fetches recently updated issues from Linear using the GraphQL node.
- Each issue's comment thread is passed into a simple Information Extractor node to identify the overall sentiment.
- The resulting sentiment analysis, combined with some issue details, is uploaded to Airtable for review.
- When the template is re-run at a later date, each issue is re-analysed for sentiment.
- Each issue's new sentiment state is saved to Airtable, while its previous state is moved to the "previous sentiment" column.
- An Airtable trigger watches for recently updated rows.
- Each matching Airtable row is filtered to check whether it had a non-negative previous state but now has a negative current sentiment (see the sketch after this section).
- The results are sent as a notification to a team Slack channel for prioritisation.

Check out the sample Airtable here: https://airtable.com/appViDaeaFw4qv9La/shrq6HgeYzpW6uwXL

How to use
- Modify the GraphQL filter to fetch issues for a relevant issue type, team, or person.
- Update the Slack channel so messages reach the correct location or people.
- The Airtable also gives a snapshot of sentiment across support tickets for a given period, which can be used to assess daily operations.

Requirements
- Linear for issue tracking (feel free to use another system if preferred)
- Airtable for the database
- OpenAI for the LLM and sentiment analysis

Customising the workflow
- Add more granular levels of sentiment to reduce the number of alerts.
- Explore different types of sentiment based on issue and customer types; this may help prioritise alerts and responses.
- Run across teams or categories of issues to get an overview of sentiment across the support organisation.
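To illustrate the filtering step, the "dip" check on each updated row could look roughly like this in a Code node. The column names are assumptions based on the description above, so adjust them to your Airtable schema:

```typescript
// Sketch: keep only rows whose sentiment has just turned negative.
// Field names ("Sentiment", "Previous Sentiment") are assumed from the
// template description. $input is the n8n Code-node global.
const rows = $input.all();

const dipped = rows.filter((row) => {
  const current = row.json['Sentiment'];
  const previous = row.json['Previous Sentiment'];
  return current === 'Negative' && previous !== 'Negative';
});

return dipped;
```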
by Davide
This automated workflow takes a static image and a textual prompt and turns them into an animated video using the MiniMax Hailuo 02 model. It then uploads the generated video to YouTube and TikTok, and updates a Google Sheet with the relevant links and metadata.

Benefits of This Workflow
- **Fully Automated Pipeline**: From prompt to video to social media publication, all without manual intervention.
- **Scalable Content Creation**: Generate and distribute dozens of videos per hour with minimal human input.
- **Cross-Platform Posting**: Automatically pushes content to YouTube and TikTok simultaneously.
- **SEO Optimization**: Uses AI to generate catchy, keyword-rich video titles that improve visibility.
- **Easy Integration**: Based on Google Sheets for input/output, making it accessible to non-technical users.
- **Time-Efficient**: Batch processing with scheduled runs every few minutes.
- **Customizable Duration**: Video duration can be adjusted (the default is 6 seconds).

How It Works
1. Trigger & Data Fetching: The workflow starts either manually or via a scheduled trigger (e.g., every 5 minutes). It checks a Google Sheet for new entries where the "VIDEO" column is empty, indicating pending video-generation tasks.
2. Video Creation: For each entry, the workflow extracts the image URL and prompt from the Google Sheet and sends them to the MiniMax Hailuo 02 API. The API processes the image and prompt, optimizes the prompt, and creates a short video (default: 6 seconds).
3. Status Monitoring: The workflow polls the API every 60 seconds until the video is COMPLETED (see the polling sketch below). Once ready, it retrieves the video URL and uploads the file to Google Drive.
4. YouTube & TikTok Upload: The video is sent to YouTube and TikTok via the Upload-Post.com API (the free plan allows uploads to all platforms except TikTok; upgrade to a paid plan to enable it). A GPT-generated, SEO-optimized title is created for the video, and the Google Sheet is updated with the video URL and YouTube link.

Set Up Steps
1. Google Sheet Setup: Create a Google Sheet with the columns IMAGE (input image URL), PROMPT (video description), VIDEO (auto-filled), and YOUTUBE_URL (auto-filled). Link the sheet to the workflow using the Google Sheets node.
2. API Keys: Obtain a fal.run API key (for MiniMax Hailuo) and configure the "Authorization" header in the "Create video" node. Get an Upload-Post.com API key (10 free uploads/month) and set it in the "Upload on YouTube/TikTok" nodes.
3. Workflow Configuration: Replace YOUR_USERNAME in the Upload-Post nodes with your profile name (e.g., "test1"). Adjust the video duration (6 or 10 seconds) in the "Create video" node. Set the Schedule Trigger interval (e.g., 5 minutes) to automate checks for new tasks.
4. Execution: Run the workflow manually or let the scheduler process new rows automatically. The system handles video generation, uploads, and Google Sheet updates end to end.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
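The status-monitoring step boils down to a poll-and-wait loop. Here is a minimal sketch assuming a fal.run-style status endpoint and response shape; check the MiniMax Hailuo docs for the exact URL and schema:

```typescript
// Sketch: poll a generation endpoint until the job reports COMPLETED.
// STATUS_URL and the response shape ({ status, video: { url } }) are
// assumptions, not the documented fal.run schema.
const STATUS_URL = 'https://queue.fal.run/<request-status-endpoint>'; // hypothetical

async function waitForVideo(apiKey: string): Promise<string> {
  for (;;) {
    const res = await fetch(STATUS_URL, {
      headers: { Authorization: `Key ${apiKey}` },
    });
    const job = await res.json();

    if (job.status === 'COMPLETED') return job.video.url;
    if (job.status === 'FAILED') throw new Error('Video generation failed');

    // Mirror the workflow's Wait node: check again in 60 seconds.
    await new Promise((r) => setTimeout(r, 60_000));
  }
}
```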
by Mark Shcherbakov
Video Guide
I prepared a detailed guide explaining how to build an AI-powered meeting assistant that provides real-time transcription and insights during virtual meetings. Youtube Link

Who is this for?
This workflow is ideal for business professionals, project managers, and team leaders who need reliable meeting transcription for better documentation and note-taking. It is particularly useful for those who run frequent virtual meetings across platforms like Zoom and Google Meet.

What problem does this workflow solve?
Transcribing meetings manually is tedious and error-prone. This workflow automates transcription in real time, ensuring that key discussions and decisions are accurately captured and easy to review later, improving productivity and clarity of communication.

What this workflow does
The workflow employs an AI-powered assistant that joins virtual meetings and captures discussions through real-time transcription. Key functionalities include:
- Automatically joining meetings on platforms like Zoom and Google Meet, with real-time transcription.
- Integration with transcription APIs (e.g., AssemblyAI) for seamless, accurate capture of dialogue.
- Structuring and storing transcriptions efficiently in a database for easy retrieval and analysis.

- Real-Time Transcription: The assistant captures audio during meetings and transcribes it in real time, letting participants focus on the discussion.
- Keyword Recognition: Key phrases can trigger specific actions, such as noting important points or prompting the assistant.
- Structured Data Management: The assistant maintains a database of transcriptions linked to meeting details for organized storage and quick access later.

Setup
Preparation
1. Create a Recall.ai API key.
2. Set up a Supabase account and create the table:

```sql
create table public.data (
  id uuid not null default gen_random_uuid (),
  date_created timestamp with time zone not null default (now() at time zone 'utc'::text),
  input jsonb null,
  output jsonb null,
  constraint data_pkey primary key (id)
) tablespace pg_default;
```

3. Create an OpenAI API key.

Development
1. Bot Creation: Use a node to create the bot that will join meetings. Provide the meeting URL and set transcription options within the API request.
2. Authentication: Configure authentication via a Bearer token for interacting with your transcription service.
3. Webhook Setup: Create a webhook to receive real-time transcription updates, ensuring timely data capture during meetings.
4. Join Meeting: Have the bot join the specified meeting and actively listen to capture conversations.
5. Transcription Handling: Combine transcription fragments into cohesive sentences and manage dialog arrays for coherence (see the sketch below).
6. Trigger Actions on Keywords: Set up keyword recognition that can initiate requests to the OpenAI API for additional interactions based on captured dialogue.
7. Output and Summary Generation: Produce insights and summary notes from the transcriptions and store them back in the database for future reference.
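The fragment-combining in development step 5 can be sketched as a small merge function. The fragment shape here is an assumption, so adapt it to the actual Recall.ai webhook payload:

```typescript
// Sketch: merge streaming transcript fragments into per-speaker utterances.
// The { speaker, text } shape is assumed from the description above.
interface Fragment { speaker: string; text: string; }

function mergeFragments(dialog: Fragment[], incoming: Fragment): Fragment[] {
  const last = dialog[dialog.length - 1];
  if (last && last.speaker === incoming.speaker) {
    // Same speaker is still talking: extend the current utterance.
    last.text = `${last.text} ${incoming.text}`.trim();
  } else {
    dialog.push({ ...incoming });
  }
  return dialog;
}
```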
by Agent Studio
Who is it for
- Customer service or support teams who want to use their Zendesk articles in other tools.
- Content/knowledge managers consolidating or migrating knowledge bases.
- Ops/automation specialists who want Markdown versions of articles (could be adapted to Notion, Google Sheets, or any Markdown-friendly system).

How to get started
1. Download the template and install it on your instance.
2. Set Zendesk and Airtable credentials.
3. Modify the Zendesk base_url and Airtable's table and base.
4. Run the workflow once manually to fetch your existing articles.
5. Finally, modify the Schedule Trigger (by default it runs every 30 days) and activate the workflow.

Prerequisites
- **Airtable base** set up using this template. It includes the fields Title, Content, URL and Article ID.
- **Zendesk account** with API access (read permissions for help center articles).
- **Zendesk API credentials** (see instructions below).
- **Airtable API credentials** (see instructions below).

Getting Your Credentials
Airtable:
1. Sign up or log in to Airtable.
2. Go to your account settings and generate a Personal Access Token (recommended scopes: data.records:read, data.records:write).
3. In n8n, create new Airtable credentials using this token.

Zendesk:
1. Log in to your Zendesk dashboard.
2. Go to Admin Center > Apps and Integrations > Zendesk API.
3. Enable "Token Access" and create an API token.
4. In n8n, add Zendesk credentials with your Zendesk domain, email, and the API token.

How it works
1. Triggers
- **Manual**: For first setup, use the Manual Trigger to fetch **all** existing articles.
- **Scheduled**: Automatically runs every N days to fetch only **new or updated** articles since the last run.
2. Fetch Articles from Zendesk
- Calls the Zendesk Help Center API, using pagination to handle large volumes (a minimal pagination sketch appears at the end of this description).
3. Extract and Prepare Data
- Splits out each article, then collects the fields id, url, title, and body.
- Converts the article body from HTML to Markdown (for portability and easier reuse).
4. Upsert Into Airtable
- Inserts new articles, or updates existing ones (using Article ID as the unique key).
- Fields stored: Title, Content (Markdown), URL, Article ID.

Airtable Template
Use this Airtable template as your starting point. Make sure the table has the columns Title, Content, URL, and Article ID. You can add more depending on your needs.

Example Use Cases
- Migrating Zendesk articles to another knowledge base.
- Building an internal knowledge hub in Airtable or Notion.
- Creating Markdown backups for compliance or versioning.

Service
If you need help implementing the template or modifying it, just reach out 💌
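The pagination in step 2 follows Zendesk's next_page convention. A minimal sketch, assuming the standard Help Center articles endpoint:

```typescript
// Sketch: page through Zendesk Help Center articles via next_page links.
// "auth" is a prebuilt Authorization header value; with Zendesk API tokens
// that is Basic auth over "email/token:api_token".
async function fetchAllArticles(subdomain: string, auth: string) {
  const articles: unknown[] = [];
  let url: string | null =
    `https://${subdomain}.zendesk.com/api/v2/help_center/articles.json?per_page=100`;

  while (url) {
    const res = await fetch(url, { headers: { Authorization: auth } });
    const page: any = await res.json();
    articles.push(...page.articles);
    url = page.next_page; // null once the last page is reached
  }
  return articles;
}
```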
by Lucas Peyrin
How it works
This template provides a complete, ready-to-use web application for generating high-quality AI prompts. It features a user-friendly web form where you can describe your goal, and it leverages an AI model (Google Gemini) to create a structured, reusable prompt for you.

The workflow is a full-stack application built entirely within n8n:
- Frontend (The Form): A Form Trigger node creates a beautiful, public-facing web form. Here, a user describes the prompt they need and selects which structural components to include (like system instructions, examples, or input variables).
- Backend (The AI Logic): A LangChain Chain node takes the user's request and constructs a "meta-prompt": a set of instructions for the AI on how to generate the final prompt. The Google Gemini node executes this meta-prompt, creating a well-structured output with clear sections and tags.
- The Result (The Webpage): After generation, the user is automatically redirected to a new URL. This URL is handled by another Webhook node, which serves a custom-coded HTML page. This beautiful, dark-themed webpage displays the generated prompt and includes a one-click "Copy" button, making it easy to use the result immediately.

This template is a perfect example of how to build interactive web tools with n8n, combining a user interface, backend logic, and a dynamic web response in a single workflow.

Set up steps
Setup time: ~1-3 minutes

This workflow requires a Google AI credential to function.
1. Configure Google AI Credentials: This workflow uses a Google Gemini model, so you will need a Google AI API key. In n8n, go to Credentials and click Add credential. Search for Google Gemini and enter your API key. Back in the workflow, open the Gemini 2.5 Flash node and select your newly created credential from the dropdown.
2. Activate the Workflow: Click the Active toggle in the top-right corner to turn the workflow on.
3. Access Your Prompt Maker: Open the Prompt Request (Form Trigger) node and copy the Public URL provided. This is the link to your new web application! Open it in your browser, fill out the form, and see the magic happen.

Note: This workflow uses environment variables like {{ $env.WEBHOOK_URL }} to build the redirect URL. These are typically set automatically by n8n and should work out-of-the-box on most standard n8n setups.
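As a rough illustration of how such a redirect can be assembled (the webhook path and query parameter here are hypothetical placeholders, not the template's exact values):

```typescript
// Sketch (n8n Code node): assembling the redirect URL for the result page.
// "prompt-result" is a hypothetical webhook path; use the path configured on
// your result Webhook node. $env and $execution are n8n Code-node globals.
const baseUrl = $env.WEBHOOK_URL; // usually set automatically by n8n
const resultId = $execution.id;   // something the result page can look up

return [{ json: { redirectUrl: `${baseUrl}/webhook/prompt-result?id=${resultId}` } }];
```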
by Belgacem Dhiflaoui
Description
What Problem Does This Solve? 🛠️
This workflow automates the extraction of key information from resumes received as email attachments and stores that data in a structured format in a Supabase database. It eliminates the manual effort of reviewing each resume, identifying relevant details, and entering them into a database, streamlining the hiring process for recruiters and HR professionals.

Target audience: Recruiters, HR departments, and talent acquisition teams.

What Does It Do? 🌟
- Monitors a designated email inbox for new messages with resume attachments.
- Extracts key information such as name, contact details, education, work experience, and skills from the attached resumes.
- Cleans and formats the extracted data.
- Stores the processed data securely in a Supabase database.

Key Features 📋
- Automatic email monitoring for resume attachments.
- Intelligent data extraction from various resume formats (e.g., PDF, DOC, DOCX).
- Customizable data fields to capture specific information.
- Seamless integration with Supabase for data storage.
- Uses OpenRouter to streamline API key management for services such as AI-powered parsing.

Setup Instructions
Prerequisites ⚙️
- **n8n Instance**: Self-hosted or cloud instance of n8n.
- **Email Account**: Gmail account with Gmail API access for receiving resumes.
- **Supabase Account**: A Supabase project with a database/table ready to store extracted resume data. You'll need the Supabase URL and API key.
- **OpenRouter Account**: For managing AI model API keys centrally when using LLM-based resume parsing.

Installation Steps 📦
1. Import the Workflow: Copy the exported workflow JSON and import it into your n8n instance via "Import from File" or "Import from URL".
2. Configure Credentials: In n8n > Credentials, add credentials for:
   - Email account (Gmail API): Provide the Client ID and Client Secret from the Google Cloud Platform.
   - Supabase: Provide the Supabase URL and the anon public API key.
   - OpenRouter (optional): Add your OpenRouter API key for use with any AI-powered resume parsing nodes.
   Assign these credentials to their respective nodes: Gmail Trigger → Email credentials; Supabase Insert → Supabase credentials; AI Parsing Node → OpenRouter credentials.
3. Set Up Supabase Table: Create a table in Supabase with columns such as name, email, phone, education, experience, skills, received_date, etc. Make sure the field names align with the structure used in your workflow (see the insert sketch below).
4. Customize Nodes: **Parsing node(s)**: Optionally modify the workflow to use an OpenAI model directly for field extraction, replacing the Basic LLM Chain node that uses OpenRouter.
5. Test the Workflow: Send a test email with a resume attachment. Check n8n's execution log to confirm the workflow triggered, parsed the data, and inserted it into Supabase. Verify data integrity in your Supabase table.

How It Works
High-Level Workflow 🔍
1. Email Monitoring: Triggered when a new email with an attachment is received (via the Gmail API).
2. Attachment Check: Verifies the email contains at least one attachment.
3. Prepare Data: Extracts the attachment and prepares it for analysis.
4. Data Extraction: Uses an OpenRouter-powered LLM (if configured) to extract structured information from the resume.
5. Data Storage: Saves the structured information into the Supabase database.

Node Names and Actions (Example)
- **Gmail Trigger**: Fires when a new email is received.
- **IF**: Checks whether the received email includes any attachments.
- **Get Attachments**: Retrieves attachments from the triggering email.
- **Prepare Data**: Prepares the attachment content for processing.
- **Basic LLM Chain**: Uses an AI model via OpenRouter to extract relevant resume data and returns it as structured fields.
- **Supabase-Insert**: Inserts the structured resume data into your Supabase database.
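To make the table setup in step 3 concrete, here is a minimal supabase-js sketch of the kind of insert the Supabase-Insert node performs. The table name "resumes" and the example values are assumptions; match them to your actual schema:

```typescript
// Sketch: inserting one parsed resume into Supabase with supabase-js.
// Table name and columns mirror installation step 3 above; adjust both.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!,
);

const { error } = await supabase.from('resumes').insert({
  name: 'Jane Doe',                 // extracted by the LLM chain
  email: 'jane@example.com',
  phone: '+1 555 0100',
  education: 'BSc Computer Science',
  experience: '5 years backend development',
  skills: 'Python, SQL, n8n',
  received_date: new Date().toISOString(),
});

if (error) throw error;
```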
by Akhil Varma Gadiraju
n8n Workflow: Sync Workflows with GitLab

How It Works
This workflow keeps your self-hosted n8n workflows version-controlled in a GitLab repository. It compares each current workflow from n8n with its stored counterpart in GitLab; if any differences are detected, the GitLab file is updated with the latest version.

Core Logic:
1. Retrieve Workflows – Fetch all workflows from the n8n REST API.
2. Compare with GitLab – For each workflow, fetch the corresponding file from GitLab and compare the JSON.
3. Update if Changed – If differences exist, commit the updated workflow to GitLab using its API (see the sketch after this section).

Setup
Before using the workflow, ensure the following:

Prerequisites:
- **n8n**: Self-hosted instance with access to the /rest/workflows API.
- **GitLab**: A repository where workflows will be stored, and a Personal Access Token (PAT) with api and write_repository permissions.
- **n8n Nodes Required**:
  - HTTP Request (to call the n8n and GitLab APIs)
  - Code or Function nodes (for diffing and formatting)
  - Looping (SplitInBatches or similar)

Configuration:
Set environment variables or workflow credentials for:
- GITLAB_TOKEN
- GITLAB_REPO
- GITLAB_BRANCH (e.g., main)
- GITLAB_FILE_PATH_PREFIX (e.g., n8n-workflows/)

How to Use
1. Import the workflow into your n8n instance.
2. Configure GitLab API credentials: set the GitLab PAT as a header in the HTTP Request node: Private-Token: {{ $env.GITLAB_TOKEN }}
3. Map workflows to GitLab paths: use the workflow name or ID to create the file path, for example n8n-workflows/workflow-name.json.
4. Trigger the workflow: it can be run manually or scheduled to run at intervals (e.g., daily).
5. Review commits in GitLab: each updated workflow is committed with a message like "Update workflow: Sample Workflow".

Disclaimer
- This workflow does not handle merge conflicts or manual edits made directly in GitLab. Ensure proper coordination if multiple sources modify workflows.
- Only structural changes are tracked. Non-functional metadata (like timestamps or IDs) may trigger false positives unless filtered.
- Use at your own risk. Test in a safe environment before applying to production workflows.
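The "Update if Changed" step maps onto GitLab's repository Files API. A minimal sketch using the environment variables from the Configuration section (GITLAB_REPO must be the numeric project ID or the URL-encoded project path):

```typescript
// Sketch: update a workflow file in GitLab via the Files API.
// PUT updates an existing file; creating a brand-new file uses POST on the
// same path. Endpoint per GitLab's documented repository files API.
async function commitWorkflow(name: string, workflowJson: object) {
  const filePath = encodeURIComponent(
    `${process.env.GITLAB_FILE_PATH_PREFIX}${name}.json`,
  );
  const url = `https://gitlab.com/api/v4/projects/${process.env.GITLAB_REPO}/repository/files/${filePath}`;

  const res = await fetch(url, {
    method: 'PUT',
    headers: {
      'Private-Token': process.env.GITLAB_TOKEN!,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      branch: process.env.GITLAB_BRANCH ?? 'main',
      content: JSON.stringify(workflowJson, null, 2),
      commit_message: `Update workflow: ${name}`,
    }),
  });
  if (!res.ok) throw new Error(`GitLab commit failed: ${res.status}`);
}
```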
by Ranjan Dailata
Disclaimer
This template is only available on n8n self-hosted, as it makes use of the community node for the MCP Client.

Who this is for?
The Chat Conversations with Bright Data MCP Search Engines & Google Gemini workflow is designed for users who need real-time, AI-enhanced conversations powered by live search engine results. This workflow is tailored for:
- Data Analysts - who want live, search-based data fused with AI reasoning.
- Marketing Researchers - seeking up-to-the-minute market or competitor insights via conversational AI.
- Product Managers - exploring user needs, market trends, and competitor analysis in real time.
- AI Developers - building dynamic applications that combine live search data with intelligent conversation agents.
- Growth Hackers - who need fast, conversational research tools for campaign ideation, outreach, or content creation.

What problem is this workflow solving?
Traditional chatbots and AI systems often rely on static, outdated data. This workflow lets AI agents fetch live search engine data and converse intelligently about it, making interactions dynamic, accurate, and highly contextual. It addresses three major gaps:
- Outdated knowledge: regular chatbots lack up-to-date information from live web searches.
- Manual search fatigue: manually searching for information and interpreting it is time-consuming.
- Context bridging: connecting search results into meaningful, conversational replies requires human-level reasoning.

What this workflow does
1. Accepts a user's conversational query input.
2. Triggers a search request to Bright Data's MCP Search Engines API (Google, Bing, etc.) based on the query.
3. Waits for the search task to complete.
4. Retrieves real-time search results.
5. Feeds the search results and the original question into Google Gemini.
6. Generates a human-like, contextually accurate AI response combining live information and conversational flow.
7. Outputs the response back into a chat app.

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp.
- You need to install the n8n-nodes-mcp community node.

Setup
1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine, and complete the "Account Setup" described in the @brightdata/mcp URL.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP Server (see the sketch below). Make sure to set the Bright Data Web Unlocker API token in the Environments textbox as API_TOKEN=<your-token>.
7. Update the HTTP Request for Webhook Notification node to send webhook notifications for chat responses.

How to customize this workflow to your needs
- Change search engines: add or remove Search Engine MCP tools based on Bright Data MCP Server updates.
- Expand outputs: send AI chat responses to Slack, Discord, custom chat UIs, WhatsApp, or CRM systems.
- Store conversation logs in a database (PostgreSQL, MongoDB, etc.) for future audits or training.
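For orientation, the STDIO client configuration in setup step 6 reduces to a command, arguments, and environment variables, roughly like this (field names may differ slightly in the n8n-nodes-mcp credential form):

```typescript
// Sketch: the shape of an MCP STDIO client configuration for @brightdata/mcp.
// In n8n-nodes-mcp the same values go into the Command, Arguments, and
// Environments credential fields.
const brightDataMcp = {
  command: 'npx',
  args: ['-y', '@brightdata/mcp'],
  env: {
    API_TOKEN: '<your-token>', // the Web Unlocker API token from Bright Data
  },
};
```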
by Alfonso Corretti
Who is this for?
Everyone! Did you ever dream of asking an AI "what hotel did I stay in for holidays last summer?" or "what were my marks last semester like?" Dream no more: vector similarity search and this workflow are the foundations that make it possible (as long as the information appears in your e-mails 😅).

100% Local and Open Source!
This workflow is designed to use locally-hosted open source software: Ollama as the LLM provider, nomic-embed-text as the embeddings model, and pgvector as the vector database engine, on top of Postgres.

Structured AND Vectorized
This workflow combines structured and semantic search over your e-mail (see the query sketch below). No need for enterprise setups! Leverage the convenience of n8n and open source to get a bleeding-edge solution.

Setup
1. You will need a PGVector database with embeddings for all your email. Use my other template, Gmail to Vector Embeddings with PGVector and Ollama, to set it up in a breeze!
2. Make a copy of my Email Assistant: Convert Natural Language to SQL Queries with Phi4-mini and PostgreSQL; you will need it for structured searches.
3. Install this template and modify the Call the SQL composer Workflow step to point at your copy of the SQL workflow.
4. Adjust the remaining steps as needed: Telegram Trigger, AI Chat model, AI Embeddings...
5. Activate the workflow and chat away!
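The semantic half of the search rests on a pgvector similarity query. A minimal sketch, assuming a table and columns like those created by the Gmail-to-PGVector template (names are placeholders; match them to your schema):

```typescript
// Sketch: a pgvector similarity search over embedded e-mails.
// Table and column names (emails, embedding, body) are assumptions.
import { Client } from 'pg';

async function searchEmails(queryEmbedding: number[], limit = 5) {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();

  // "<=>" is pgvector's cosine-distance operator; smaller means more similar.
  const { rows } = await db.query(
    `SELECT body, embedding <=> $1::vector AS distance
       FROM emails
      ORDER BY distance
      LIMIT $2`,
    [JSON.stringify(queryEmbedding), limit],
  );

  await db.end();
  return rows;
}
```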
by Yaron Been
🔍 Competitor Review Scraper & Ad Copy Generator (Trustpilot + Bright Data + GPT-4o-mini)

📌 Who It's For
Marketers, business owners, and agencies looking to:
- Analyze competitor pain points
- Generate high-impact Facebook ad copy
- Automate manual data processing

🧩 How It Works
This n8n-based workflow combines Bright Data, Google Sheets, and OpenAI to scrape, process, and transform Trustpilot reviews into ready-to-use ad copy.

🔹 Step-by-Step Breakdown
1. Trigger (Manual Form Submission). Inputs required: the competitor's Trustpilot URL and a review timeframe (30d, 3m, 6m, 12m).
2. Fetch Reviews. Calls Bright Data's Dataset API with the URL and timeframe, then polls until the snapshot is ready.
3. Retrieve & Store. Extracts all reviews and saves them into a structured Google Sheet.
4. Filter & Aggregate. Filters down to 1–2 star reviews and summarizes common negative feedback (see the sketch after this section).
5. Generate Ad Copy. Sends the summary to OpenAI GPT-4o-mini, which produces 3 variations of ad copy targeting the pain points.
6. Distribute Insights. Sends the ad copy and summary via email to the marketing team.

✅ Requirements
- LLM account
- Google Sheets (copy this sheet: https://docs.google.com/spreadsheets/d/1Zi758ds2_aWzvbDYqwuGiQNaurLgs-leS9wjLWWlbUU/edit?gid=0#gid=0)
- Bright Data account

⚙️ Setup Instructions
Step 1: Google Sheets
- Copy the Google Sheets template above.
- Do not change the column headers.

Step 2: n8n Credential Setup
- Google Sheets: OAuth2
- Bright Data: Authorization header
- OpenAI: API key for GPT-4o-mini

Step 3: Import Workflow
- Import the .json file into n8n.
- Configure your sheet and dataset ID.
- Adjust the GPT prompts as needed.

Step 4: Run the Workflow
- Trigger it via the form.
- Receive ad copy and review insights via email.

🧠 Tips & Best Practices
- Bright Data snapshots may take time; polling is handled by the workflow.
- Focusing on 1–2 star reviews yields the most actionable pain points.
- You can customize the GPT-4o-mini prompts for tone or vertical.

💬 Support & Feedback
Need help or customization?
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
📚 Bright Data Docs: docs.brightdata.com/introduction
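Step 4 (filter and aggregate) can be pictured as a small digest function. The review shape here is an assumption; align it with the columns in the Google Sheet:

```typescript
// Sketch: keep only 1-2 star reviews and build a compact summary input for
// the GPT prompt. The { rating, text } shape is assumed from the description.
interface Review { rating: number; text: string; }

function negativeReviewDigest(reviews: Review[], maxReviews = 50): string {
  return reviews
    .filter((r) => r.rating <= 2)
    .slice(0, maxReviews) // keep the prompt within the model's context budget
    .map((r, i) => `${i + 1}. (${r.rating}★) ${r.text}`)
    .join('\n');
}
```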
by Ranjan Dailata
Who this is for?
Indeed Data Scraper & Summarization with Airtable, Bright Data and Google Gemini is an automated workflow that extracts company profile information from Indeed using Bright Data Web Unlocker, transforms the data using Google Gemini's LLM, and forwards the transformed response, with its summary, to a specified webhook for downstream use.

This workflow is tailored for:
- Recruiters and HR teams who want quick summaries of companies listed on Indeed.
- Market researchers and analysts needing structured insights into businesses.
- Founders, investors, and consultants scouting potential competitors, partners, or clients.
- No-code enthusiasts looking to automate data extraction and enrichment pipelines without manual scraping or parsing.

What problem is this workflow solving?
Manually gathering structured information about companies on Indeed is time-consuming and inconsistent. Pages vary in structure, and extracting clean, digestible summaries can require technical scraping expertise. This workflow automates:
- Extracting company data from Indeed reliably using Bright Data Web Unlocker.
- Cleaning and summarizing the extracted content using the Google Gemini LLM.
- Storing structured insights directly in Airtable for easy access and further workflows.
It eliminates manual research, saves hours, and produces AI-enhanced, easily searchable records.

What this workflow does
1. Triggers on demand.
2. Pulls company page URLs from Airtable.
3. Scrapes content from each Indeed company profile using Bright Data Web Unlocker (see the request sketch below).
4. Sends the raw HTML to Google Gemini for extraction and summarization.
5. Sends the summarized data to other platforms via a webhook notification mechanism.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials for Bright Data. The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. In n8n, configure the Airtable Personal Access Token account under Credentials.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

How to customize this workflow to your needs
This workflow is built to be flexible, whether you're a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to your specific use case:
- Extend the scraper: modify the Bright Data targets to pull job listings, salaries, or employee reviews via the Airtable data source.
- Customize the summary prompt: ask Gemini to extract different attributes, such as hiring trends or practices.
- Route the output to different destinations: send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
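The scraping step uses Bright Data's Web Unlocker request API. A minimal sketch, with the zone name as a placeholder (use the zone you created in setup step 2):

```typescript
// Sketch: fetching an Indeed company page through Bright Data Web Unlocker.
// Endpoint and body follow Bright Data's /request API; "web_unlocker1" is a
// placeholder zone name.
async function fetchCompanyPage(pageUrl: string, token: string): Promise<string> {
  const res = await fetch('https://api.brightdata.com/request', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      zone: 'web_unlocker1', // your Web Unlocker zone name
      url: pageUrl,
      format: 'raw',         // return the raw HTML for Gemini to summarize
    }),
  });
  if (!res.ok) throw new Error(`Web Unlocker request failed: ${res.status}`);
  return res.text();
}
```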