by Artem Makarov
## About this template

This template demonstrates how to trace observations per execution ID in Langfuse via the ingestion API.

## Good to know

- Endpoint: https://cloud.langfuse.com/api/public/ingestion
- Auth is a Generic Credential type with Basic Auth: username = your_public_key, password = your_secret_key.

## How it works

1. **Trigger**: the workflow is executed by another workflow after an AI run finishes (input parameter `execution_id`).
2. **Remove duplicates**: ensures we only process each `execution_id` once (optional but recommended).
3. **Wait to get execution data**: delays 60-80 seconds so totals and per-step metrics are available.
4. **Get execution**: fetches workflow metadata and token totals.
5. **Code: structure execution data**: normalizes your run into an array of `perModelRuns` with model, tokens, latency, and text previews.
6. **Split Out → Loop Over Items**: iterates over each run step.
7. **Code: prepare JSON for Langfuse**: builds a batch with:
   - `trace-create` (stable id `trace-<executionId>`, grouped into `session-<workflowId>`)
   - `generation-create` (model, input/output, usage, timings from latency)
8. **HTTP Request to Langfuse**: posts the batch, with an optional short Wait between sends.

## Requirements

- Langfuse Cloud project and API keys
- n8n instance with the HTTP Request node

## Customizing

- Add `span-create` and set `parentObservationId` on the generation to nest it under spans.
- Add scores or feedback later via `score-create`.
- Replace the `sessionId` strategy (per workflow, per user, etc.).
- If some steps don't produce tokens, compute and set `usage` yourself before sending.
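As a sketch of what the "Code: prepare JSON for Langfuse" node assembles, the snippet below builds a minimal two-event batch. The event shapes (`trace-create`, `generation-create`, `id`/`type`/`timestamp`/`body`) follow the Langfuse ingestion API; the `run` field names and deterministic event ids are illustrative assumptions, not the template's exact code.

```javascript
// Build a minimal Langfuse ingestion batch for one model run.
// Assumption: `run` carries the normalized fields produced earlier
// (model, token counts, text previews); adapt names to your data.
function buildLangfuseBatch(executionId, workflowId, run) {
  const now = new Date().toISOString();
  return {
    batch: [
      {
        id: `evt-trace-${executionId}`, // deterministic id keeps re-sends idempotent
        type: "trace-create",
        timestamp: now,
        body: {
          id: `trace-${executionId}`, // stable trace id per execution
          sessionId: `session-${workflowId}`, // grouped per workflow
          name: "n8n-execution",
        },
      },
      {
        id: `evt-gen-${executionId}`,
        type: "generation-create",
        timestamp: now,
        body: {
          traceId: `trace-${executionId}`,
          model: run.model,
          input: run.inputPreview,
          output: run.outputPreview,
          usage: { input: run.inputTokens, output: run.outputTokens },
        },
      },
    ],
  };
}
```

The resulting object is what the HTTP Request node would POST to the ingestion endpoint with Basic Auth.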
by Paul Abraham
This n8n template demonstrates how to automatically generate accurate subtitles from any video and optionally translate them into other languages. By combining FFmpeg, OpenAI Whisper, and LibreTranslate, this workflow turns video audio into ready-to-use .srt subtitle files that can be delivered via email.

## Use cases

- Auto-generate subtitles for training or educational videos
- Translate videos into multiple languages for global reach
- Create accessibility-friendly content with minimal effort
- Build a backend for media platforms to process subtitles automatically

## Good to know

- This workflow requires a self-hosted n8n instance since it uses the Execute Command node.
- FFmpeg is used for audio extraction and must be installed on the host machine.
- OpenAI Whisper (local) is used for transcription, providing highly accurate speech-to-text results.
- LibreTranslate is used for translating subtitles into other languages.

## How it works

1. Webhook Trigger - starts when a video URL is received.
2. Download Video - fetches the video file from the provided link.
3. Extract Audio (FFmpeg) - separates audio from the video file.
4. Run Whisper (Local) - transcribes the extracted audio into text subtitles.
5. Read SRT File - loads the generated .srt subtitle file.
6. Merge Paths - combines the original and translated subtitle flows.
7. Translate Subtitles (LibreTranslate) - translates the .srt file into the target language.
8. Write Translated SRT - creates a translated .srt file for delivery.
9. Send a Message (Gmail) - sends the final subtitle file (original or translated) via email.

## How to use

1. Clone this workflow into your self-hosted n8n instance.
2. Ensure FFmpeg and Whisper are installed and available via your server's shell path.
3. Add your LibreTranslate service credentials for translation.
4. Configure Gmail (or another email service) to send subtitle files.
5. Trigger the workflow by sending a video URL to the webhook, and receive subtitle files in your inbox.
## Requirements

- Self-hosted n8n instance
- FFmpeg installed and available on the server
- OpenAI Whisper (local) installed and callable via the command line
- LibreTranslate service with API credentials
- Gmail (or any email integration) for delivery

## Customising this workflow

- Replace Gmail with Slack, Telegram, or Drive uploads for flexible delivery.
- Switch LibreTranslate for DeepL or Google Translate for higher-quality translations.
- Add post-processing steps such as formatting .srt files or embedding subtitles back into the video.
- Use the workflow as a foundation for building a multi-language subtitle automation pipeline.
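For orientation, here is a hypothetical sketch of the shell commands the two Execute Command nodes might run. The paths, sample rate, and Whisper model size are assumptions to adjust for your server, not the template's exact settings.

```javascript
// Command strings for the FFmpeg and Whisper Execute Command nodes.
// All paths and parameters below are illustrative assumptions.
const videoPath = "/tmp/input.mp4";
const audioPath = "/tmp/audio.wav";

// Extract mono 16 kHz audio (a common input format for Whisper)
const extractCmd = `ffmpeg -y -i ${videoPath} -vn -ar 16000 -ac 1 ${audioPath}`;

// Transcribe to an .srt file in /tmp using the openai-whisper CLI
const whisperCmd = `whisper ${audioPath} --model base --output_format srt --output_dir /tmp`;
```

Larger Whisper models (`small`, `medium`) improve accuracy at the cost of transcription time.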
by Automate With Marc
# Automate Video Generation with Sora 2 & Veo 3.1 - For Beginners

Kick-start AI video generation in n8n with two popular paths: OpenAI Sora 2 and Google Veo 3.1 (via Wavespeed). This beginner-friendly template lets you type a prompt in the Chat Trigger, then automatically:

- Call Sora 2 to generate a short 720p clip and email the MP4.
- Call Veo 3.1 through Wavespeed, poll until ready, and email the result link.

🎥 Watch the full step-by-step build: https://www.youtube.com/watch?v=qjFWfYvHxlY

## What this template does

Chat input → two video routes run in parallel:

- **Sora 2 route**: HTTP Request (POST /v1/videos) → HTTP Request (GET /videos/{id}/content) → Gmail (attach MP4).
- **Veo 3.1 route (Wavespeed)**: HTTP Request (POST /google/veo3.1/text-to-video) → poll the result with HTTP Request (GET /predictions/{id}/result) inside a Wait + IF loop → Gmail (send result URL).

The template includes a pinned sample prompt for a marketing use case, and uses Sticky Notes to explain each lane and link to the tutorial.

## Why it's useful (for beginners)

- No code required: fill in credentials, paste your prompt, run.
- Side-by-side comparison: learn how the Sora 2 and Veo 3.1 flows differ (direct file vs polled URL).
- Email delivery built in: get outputs in your inbox for quick review.

## Requirements

- OpenAI API key with access to the Sora 2 video endpoint.
- Wavespeed API key (for Veo 3.1).
- Google Gmail OAuth2 (to send yourself the results).
- n8n (self-hosted or Cloud) with internet access.

Note: Replace any placeholder or demo credentials after import. Never commit real keys to public templates.

## Credentials to set in n8n

- **HTTP Header Auth - OpenAI Sora 2**: Header Name: Authorization, Value: Bearer YOUR_OPENAI_API_KEY
- **HTTP Header Auth - Wavespeed**: Header Name: Authorization, Value: Bearer YOUR_WAVESPEED_API_KEY
- **Gmail OAuth2**: connect your Google account. In the two Gmail nodes, change "sendTo" to your email.

## How it works (nodes overview)

**When chat message received (Chat Trigger)**: receives your chatInput prompt.
**Veo 3.1 lane (top)**

1. HTTP Request — Post to Wavespeed: POST https://api.wavespeed.ai/api/v3/google/veo3.1/text-to-video. Body params (editable): aspect_ratio, duration, generate_audio, prompt, resolution.
2. Wait 1 Minute → HTTP Request — Get result: polls GET /api/v3/predictions/{id}/result.
3. IF (status === "completed"): true → Gmail — Send Video Result to Email (sends the output URL); false → wait another 60 seconds → back to Wait 1 Minute (poll again).

**Sora 2 lane (bottom)**

1. HTTP Request — POST /v1/videos (OpenAI): multipart form with prompt={{$json.chatInput}}, model="sora-2-pro", size="1280x720", seconds="8".
2. HTTP Request — GET /v1/videos/{id}/content: downloads the MP4 as binary (video.mp4).
3. Gmail — Send Video to Email: attaches video.mp4 and emails it to you.

## Setup (step-by-step)

1. Import the template into n8n.
2. Open each HTTP Request node and select the correct HTTP Header Auth credentials.
3. Open both Gmail nodes: select your Gmail OAuth2 credential and change "sendTo" to your email.
4. (Optional) Tweak defaults:
   - Sora 2: size (e.g., 1920x1080), seconds (e.g., 5-10).
   - Veo 3.1 via Wavespeed: aspect_ratio (9:16, 16:9, 1:1), duration, resolution.
   - Wait timing and max polling passes (add an extra IF to stop after N attempts).
5. Execute Workflow and enter your prompt in the Chat window.

## Customization tips

- Different delivery: swap Gmail for Google Drive, Slack, or Telegram nodes.
- Approval loop: after the email, branch to a Slack approval step before archiving the file.
- Brand presets: store common prompt fragments (tone, overlays, logo instructions) in an Item Lists/Set node and merge them into the main prompt.
- Content length: Sora and Veo costs/latency grow with duration, so start short (5-8 s), then iterate.

## Troubleshooting

- 403/401 errors: re-check API keys and header names (Authorization: Bearer …).
- Empty Veo output: confirm the polling loop runs until status = "completed"; inspect the GET result JSON path used in the email (data.outputs[0]).
- Sora content fetch fails: ensure the second GET uses the returned id path and outputs binary video.mp4.
- Email not received: verify the Gmail OAuth2 scope and recipient; check n8n executions for errors.
- NSFW/safety blocks: refine the prompt to avoid restricted content.
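The Wait + IF polling loop in the Veo lane can be sketched as a single function. This is a minimal sketch, not the template's actual nodes: `getResult` stands in for the "Get result" HTTP Request, and the response shape (`data.status`, `data.outputs[0]`) follows the JSON path the troubleshooting section mentions.

```javascript
// Poll a prediction until it completes, mirroring the Wait 1 Minute + IF loop.
// Assumptions: `getResult` performs GET /api/v3/predictions/{id}/result and
// resolves to { data: { status, outputs } }.
async function pollUntilCompleted(getResult, { intervalMs = 60_000, maxAttempts = 10 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await getResult();
    if (res.data.status === "completed") {
      return res.data.outputs[0]; // the generated video URL
    }
    // Equivalent of the Wait node between polling passes
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Veo 3.1 generation did not complete in time");
}
```

Capping `maxAttempts` is the "extra IF to stop after N attempts" suggested in the setup steps; without it, a failed generation would poll forever.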
by WeblineIndia
# Weekly hiring-manager snapshot from Breezy HR to email (pipeline, next-week interviews, stuck)

This workflow sends each hiring manager a single weekly email with an overview of their open roles: pipeline totals, a per-position summary, interviews scheduled next week, and stuck candidates (no movement ≥ 7 days). It queries Breezy HR using your API token, detects the HM via each position's team (falling back to a simple map when unavailable), aggregates results, and emails one digest per HM. The job runs Mondays at 07:30 Asia/Kolkata, includes a DRY_RUN preview, and rate-limits sends.

## Who's it for

- Hiring managers who want a once-a-week snapshot instead of many separate updates.
- Talent/recruiting teams using Breezy HR who need pipeline hygiene and next-week readiness at a glance.
- Ops partners who prefer a standardized email with HTML tables and a text fallback.

## How it works

1. Cron (Mon 07:30 IST) triggers weekly.
2. Breezy → Positions: fetch open positions (configurable).
3. Find HM: for each position, read the position team and look for a member with a "hiring manager" role; if none, use the fallback map.
4. Candidates per position: fetch candidates, compute stage counts, and mark candidates as stuck where last activity ≥ STUCK_AFTER_DAYS.
5. Events per position: fetch events and keep those that look like interviews scheduled for the next calendar week.
6. Aggregate per position → group by hiring manager → build one HTML digest per HM.
7. DRY_RUN? If true, output a preview only; otherwise send emails with a small delay between each.

## How to set up

Credentials in n8n:

- HTTP Header Auth for Breezy HR: set Authorization: Bearer <YOUR_TOKEN> in a credential (don't hardcode it in the node).
- SMTP (Gmail) to send digests.
Open "Set: Config" (the single place to edit):

- BREEZY_API_BASE = https://api.breezy.hr/v3
- COMPANY_ID = your Breezy company ID
- TIMEZONE = Asia/Kolkata
- INCLUDE_ONLY_OPEN = true (use STATE_FILTER = open)
- USE_BREEZY_HM_DETECTION = true
- HM_FALLBACK_MAP_JSON = e.g., { "Default": "hm@example.com", "Java TL": "javatl@company.com" }
- STUCK_AFTER_DAYS = 7
- INTERVIEW_EVENT_KEYWORDS_CSV = interview
- SMTP_FROM = sender address
- SUBJECT_TEMPLATE = Weekly HM snapshot {{range}} — {{positions_count}} roles, {{candidates_count}} candidates
- INTRO_TEMPLATE / OUTRO_TEMPLATE
- DRY_RUN = false (set true to preview)
- RATE_LIMIT_EMAIL_SECONDS = 5

Then activate the workflow.

## Requirements

- Breezy HR API token with access to positions, teams, candidates, and events.
- SMTP (Gmail) account to send emails.
- n8n (cloud or self-hosted) with HTTP Header Auth and SMTP credentials.

## How to customize

- **Schedule:** change the Cron to your preferred day/time.
- **Scope:** set INCLUDE_ONLY_OPEN=false to include other position states.
- **Interview detection:** edit INTERVIEW_EVENT_KEYWORDS_CSV to match your account's event labels.
- **Stuck threshold:** adjust STUCK_AFTER_DAYS (e.g., 10 or 14).
- **Templates:** update SUBJECT_TEMPLATE, INTRO_TEMPLATE, OUTRO_TEMPLATE.
- **Fallback mapping:** expand HM_FALLBACK_MAP_JSON for positions lacking team data.

## Add-ons

- **Slack delivery:** post a weekly summary to a channel or DM the HM alongside the email.
- **CSV attachments:** attach a per-HM CSV of positions/candidates for offline work.
- **Manager CC:** CC a recruiting lead or HRBP for visibility on key roles.
- **Writeback:** log weekly metrics to Google Sheets for dashboards.
- **Custom windows:** use a different future window (e.g., interviews in the next 2 weeks).

## Use case examples

- **Busy HMs** who want one email showing where to focus for the coming week.
- **Recruiting Ops** keeping tabs on pipeline health and stalled candidates.
- **Leadership reviews** where weekly snapshots feed into Monday stand-ups.
## Common troubleshooting

| Issue | Possible Cause | Solution |
|---|---|---|
| No emails sent | DRY_RUN=true | Set DRY_RUN=false to send. |
| Missing HM recipients | Team endpoint returned no "hiring manager" and fallback map not configured | Add position→email entries in HM_FALLBACK_MAP_JSON (or set a better default). |
| Interviews list is empty | Different event label in your Breezy account | Add your labels to INTERVIEW_EVENT_KEYWORDS_CSV (comma-separated). |
| Stuck candidates not flagged | Threshold too high or activity timestamps missing | Lower STUCK_AFTER_DAYS or verify candidate activity data. |
| API errors | Bad token or insufficient scopes | Recreate the Breezy credential with a valid Bearer token. |
| Emails fail to send | SMTP auth/quota issues | Check SMTP credentials/from-address permissions and provider limits. |

## Need help?

If you'd like help tuning the interview filters, changing the grouping logic, or adding Slack/CSV writebacks, feel free to reach out to our n8n experts at WeblineIndia. We'll be happy to help you tailor this to your stack.
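The "stuck candidate" rule can be sketched as a small check. This is a sketch under assumptions: the `lastActivityAt` field name is illustrative, as Breezy's actual candidate payload may name its activity timestamp differently.

```javascript
// Flag a candidate as stuck when their last activity is at least
// STUCK_AFTER_DAYS ago. Missing timestamps are not flagged (they cannot
// be compared), which matches the troubleshooting row above.
const STUCK_AFTER_DAYS = 7;

function isStuck(candidate, now = new Date()) {
  if (!candidate.lastActivityAt) return false; // no timestamp → cannot flag
  const idleDays = (now - new Date(candidate.lastActivityAt)) / (1000 * 60 * 60 * 24);
  return idleDays >= STUCK_AFTER_DAYS;
}
```

Lowering `STUCK_AFTER_DAYS` (the config value) widens the net, which is the first fix to try when no candidates are being flagged.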
by Sènato Julien KOUKPONOU
## Who's it for

This workflow is ideal for community managers, event organizers, and businesses that regularly manage multiple WhatsApp groups. If you have a growing list of invitation codes stored in Google Sheets, this automation helps you automatically join groups, update statuses, and track results without manual work.

## How it works / What it does

The workflow connects Google Sheets with WhatsApp through an automation sequence:

1. Reads the list of invitation codes from a Google Sheet.
2. Processes the first 50 unused codes per run.
3. Validates group links via a Fetch Groups node.
4. Attempts to join each group using the Join Group node.
5. Updates the sheet with the join status (success or failure).
6. Logs successful joins in a tracking list for easy follow-up.

This ensures a fully automated way to manage WhatsApp group invitations while keeping your data organized in Google Sheets.

## How to set up

1. Prepare a Google Sheet with invitation codes and a status column.
2. Configure the Google Sheets node with read and write access.
3. Set up your fetch-groups and join-group credentials.
4. Adjust the Schedule Trigger to define how often the workflow should run.
5. Test with a few sample codes before scaling.

## Requirements

- n8n (self-hosted or cloud).
- Google Sheets API credentials.
- WhatsApp integration (via Evolution API or another community node — self-hosted only).

## How to customize the workflow

- Change the batch size (default: 50 codes per run).
- Add error handling or retry logic for invalid links.
- Send real-time notifications (Slack, email, or Telegram) after each join.
- Extend your Google Sheet schema with extra details (e.g., group category, campaign, date joined).
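The batch-selection step above can be sketched in a few lines. Assumption: the column names `code` and `status` are illustrative, so match them to your own sheet's headers.

```javascript
// Take the first N rows whose status column is still empty
// (unused invitation codes), matching the "first 50 per run" rule.
function selectBatch(rows, batchSize = 50) {
  return rows
    .filter((row) => !row.status) // unused = no status written yet
    .slice(0, batchSize);
}
```

Changing `batchSize` here is the "change the batch size" customization mentioned above.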
by Easy8.ai
# Sync New Calendly Bookings to Esko CRM with Comment, Sales Activity, and Outlook Email

## Intro / Overview

This workflow automatically syncs newly scheduled Calendly meetings into Easy Redmine CRM. When a meeting is booked via Calendly, it finds the associated lead in Easy Redmine CRM using the invitee's email address and logs both a comment and a sales activity in the lead's record. In addition, the workflow sends an email via Microsoft Outlook, ensuring the account manager knows about the meeting. Ideal for sales teams and customer success managers, this workflow keeps your CRM and calendar updated with every new meeting, with no manual entry required.

## How it works

- **Calendly Trigger**
  - Watches for invitee.created events (newly booked meetings)
  - Authenticates with Calendly via OAuth2
- **Get ID from email**
  - Sends a GET request to Easy Redmine CRM to search for a lead using the invitee's email
  - Extracts the lead ID, required for further CRM actions
- **Add Comment**
  - Posts a comment to the lead's record in Easy Redmine CRM
  - The comment includes the invitee's full name, the scheduled date/time of the meeting, and the meeting description
- **Sales Activity POST**
  - Creates a new sales activity in Easy Redmine CRM under the same lead
  - Includes structured details: invitee's name, meeting date and time, and a description of the meeting
- **Send an email**
  - Sends a notification email via Microsoft Outlook
  - Maps the name, start time, and description using dynamic values from the Calendly booking
  - Ensures the account manager is informed about the scheduled meeting

## How to use

- **Import the workflow**
  - Copy or download the JSON of this workflow
  - In your n8n editor, click Import Workflow and paste/upload the JSON
- **Set up credentials**
  - Calendly API: use OAuth2 credentials and ensure proper scopes for reading event data. Follow n8n's setup guide: https://docs.n8n.io/integrations/builtin/credentials/calendly/
  - Easy Redmine CRM: add credentials using an API token or OAuth2, depending on your CRM setup
  - Microsoft Outlook: set up OAuth2
    credentials and connect your Outlook email
- **Assign and configure nodes**
  - Calendly Trigger: confirm it's set to listen for invitee.created
  - Get ID from email: update the GET request URL to match your Easy Redmine CRM lead search endpoint, and ensure it uses the invitee's email from the trigger
  - Add Comment & Sales Activity POST: use your Easy Redmine CRM's correct endpoints and map dynamic fields like lead ID, name, meeting time, and description
  - Send email notification: set up your preferred Outlook email and map meeting details into the email body (e.g., name, time, description)
- **Timezone and formatting**
  - If needed, use a Code node to convert the meeting time if your CRM or Outlook expects a specific format or timezone
- **Test the workflow**
  - Schedule a test meeting via Calendly using an email tied to an existing CRM lead
  - Check Easy Redmine CRM to confirm a comment was added and a sales activity was created under the correct lead
  - Check Outlook to confirm an email notification was received

## Example use cases

- **Sales enablement**: log every discovery or demo call automatically, helping reps track all activity in one place, and block time in calendars without double-booking risk
- **Customer success**: keep onboarding and check-in meetings logged, improving visibility across the team, and ensure calendar events are created for every scheduled session
- **Lead qualification**: identify engaged leads based on bookings and trigger follow-up sequences

## Requirements

- Calendly account with API access
- Easy Redmine CRM user with API access
- Microsoft Outlook account with email access

## Customization (optional)

- **Add filters**: use IF or Switch nodes to handle only specific event types (e.g., demo calls vs intro calls)
- **Extend comments**: include additional data from Calendly, like the meeting location or answers to custom questions
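The optional timezone Code node mentioned above could look like the following sketch. Assumptions: the ISO timestamp field and the target timezone are illustrative, so use the field your Calendly payload actually provides and the zone your CRM expects.

```javascript
// Reformat an ISO meeting start time into a human-readable string in a
// given timezone, suitable for the CRM comment or the Outlook email body.
function formatMeetingTime(isoString, timeZone = "Europe/Prague") {
  return new Intl.DateTimeFormat("en-GB", {
    timeZone,
    dateStyle: "medium",
    timeStyle: "short",
  }).format(new Date(isoString));
}
```

Using `Intl.DateTimeFormat` avoids manual offset arithmetic and handles daylight-saving transitions for you.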
by Lucas Peyrin
## How it works

This workflow is your automated sales assistant, designed to intelligently qualify incoming leads and route them to the most appropriate follow-up channel. It uses the powerful BANT (Budget, Authority, Need, Timing) framework, powered by Google Gemini AI, to score leads as 'hot', 'mid', or 'cold', ensuring your sales team focuses on the most promising opportunities.

Here's a step-by-step breakdown:

1. **Lead Capture**: a public Form Trigger collects essential lead information, including their name, email, what they want to build, their budget, desired start time, and job role. These questions are specifically designed to gather BANT data.
2. **AI Lead Scoring**: the collected data is sent to Google Gemini. A detailed prompt instructs the AI to act as a Lead Scoring Expert, evaluating each BANT component individually and then assigning an overall 'hot', 'mid', or 'cold' score based on predefined criteria.
3. **Intelligent Routing**: a Switch node acts as the central router. Based on the AI's 'hot', 'mid', or 'cold' score, the workflow directs the lead down one of three distinct follow-up paths.
4. **Hot Leads (Calendar Booking)**: for highly qualified 'hot' leads, the workflow immediately redirects them to your calendar booking link, making it easy for them to schedule a direct conversation.
5. **Mid Leads (WhatsApp Engagement)**: for 'mid' priority leads, Google Gemini generates a personalized, pre-filled WhatsApp message summarizing their inquiry. The lead is then redirected to a WhatsApp chat with your sales team, allowing for quick, informal engagement.
6. **Cold Leads (Nurturing Email)**: for 'cold' leads who might be in an early research phase, Google Gemini crafts a helpful, non-salesy follow-up email. This email provides valuable resources (like templates or community links) and is sent via Gmail, keeping them engaged without pressure.
## Set up steps

Setup time: ~10-15 minutes. This workflow requires connecting your Google AI and Gmail accounts and customizing several nodes to fit your sales process.

1. **Get your Google AI API key**: visit Google AI Studio at aistudio.google.com/app/apikey. Click "Create API key in new project" and copy the key. In the workflow, select the Score Lead node, click the Credential dropdown, select + Create New Credential, paste your key into the API Key field, and save. Repeat this for the Write Placeholder WA Message and Write Follow up Email nodes, selecting the credential you just created.
2. **Connect your Gmail account**: select the Send Follow up Email with Gmail node. Click the Credential dropdown and select + Create New Credential to connect your Google account. Follow the prompts to grant n8n access.
3. **Customize lead scoring criteria**: go to the Score Lead node. In the Text parameter, carefully review and adapt the BANT Criteria Mapping and Scoring Logic to align with your specific sales process and ideal customer profile. This is crucial for accurate lead qualification.
4. **Configure follow-up channels**:
   - Hot leads: select the Calendar Booking Link node and update the redirectUrl parameter with your personal or team's calendar booking link (e.g., Calendly, Chili Piper).
   - Mid leads: select the Phone Number node and set the whatsapp_phone value to your company's WhatsApp phone number (e.g., +15551234567). You can also customize the pre-filled WhatsApp message by adjusting the prompt in the Write Placeholder WA Message node.
   - Cold leads: select the Redirect to Website node and update the redirectUrl parameter to your company's main website or a relevant resource page (e.g., a free templates page, a blog post). You can also customize the email content and resources shared by adjusting the prompt in the Write Follow up Email node.
5. **Activate and test**: activate the workflow using the toggle at the top right, then go to the Lead Contact Form node and click the "Open Form URL" button.
Submit several test applications with different answers (e.g., one "hot" lead, one "mid", one "cold") to ensure the AI scores them correctly and they are routed to the appropriate follow-up action. Start qualifying and engaging your leads more effectively!
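The Switch node's routing rule boils down to a three-way branch on the AI's score. A minimal sketch, assuming the parsed Gemini output exposes a `score` field and using illustrative names for the three destinations:

```javascript
// Route a lead to one of three follow-up channels based on the AI score,
// mirroring the Switch node's hot/mid/cold branches.
function routeLead(score) {
  switch (score) {
    case "hot":
      return "calendar-booking"; // redirect to the booking link
    case "mid":
      return "whatsapp"; // pre-filled WhatsApp chat with sales
    case "cold":
      return "nurture-email"; // helpful Gmail follow-up
    default:
      throw new Error(`Unexpected score: ${score}`);
  }
}
```

The `default` branch is worth keeping in the real workflow too: if the model ever returns an unexpected label, you want a visible failure rather than a silently dropped lead.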
by Adrian Bent
Part two of the Indeed Job Scraper, Filter, and Enrichment workflow. This workflow takes information about the scraped and filtered Indeed job listings (collected via Apify and stored in Google Sheets) and generates a customized, five-line email icebreaker that makes the rest of the email feel personalized. Personalized IJSFE (Indeed Job Scraper For Enrichment): I am an engineering student, so I love my acronyms.

## Benefits

- **Instant icebreaker generation**: converts hours of research, copywriting, and personalization into seconds, automatically
- **Live integration**: generate and send personalized icebreakers whenever, wherever
- **Virtually complete automation**: from research into the company and job description to a personalized response, this workflow does it in one click
- **Professional presentation**: because the chatbot has context about the company and the job listing, it generates an icebreaker that makes the reader think real research was done

## How it works

1. **Google Sheets search**
   - The Google Sheets node fetches all rows where the icebreaker column is empty
   - Each returned item is a row containing information about the company and the job listing
2. **AI personalization**
   - Uses sophisticated GPT-4 prompting
   - Converts the job posting and company information into a customized, five-line personalized email icebreaker
   - Applies a consistent, casual tone automatically to seem more human-written
3. **Database update**
   - Updates all rows fetched in the search, writing only the icebreaker column with the new personalized icebreaker
   - Each item is returned as a row containing the company information, the job listing, and the icebreaker

## Required template setup

**Google Sheets template**: create a Google Sheet with these columns:

- jobUrl - unique identifier for job listings
- title - position title
- descriptionText - description of the job listing
- hiringDemand/isHighVolumeHiring - are they hiring at high volume?
- hiringDemand/isUrgentHire - are they hiring with high urgency?
- isRemote - is this job remote?
- jobType/0 - job type: in person, remote, part-time, etc.
- companyCeo/name - CEO name collected from Tavily's search
- icebreaker - AI-generated icebreaker for each job listing
- scrapedCeo - CEO name collected from the Apify scraper
- email - email listed on the job listing
- companyName - name of the company that posted the job
- companyDescription - description of the company that posted the job
- companyLinks/corporateWebsite - website of the company that posted the job
- companyNumEmployees - number of employees the company lists
- location/country - location where the job takes place
- salary/salaryText - salary on the job listing

**Setup instructions: Google Sheets search & update**

1. Create a new Google Sheet with these column headers in the first row
2. Name the sheet whatever you please
3. Connect your Google Sheets OAuth credentials in n8n
4. Update the document ID in the workflow nodes

The search logic in the first Google Sheets node relies on the ID column for icebreaker generation, so this structure is essential for the workflow to function correctly. Feel free to reach out for additional help or clarification at my Gmail: terflix45@gmail.com, and I'll get back to you as soon as I can.
**AI icebreaker generation setup**

1. Configure the OpenAI API for sophisticated proposal writing
2. Implement example-based training with input/output pairs for more specific output
3. Set up JSON formatting for structure (personally, I think JSON is easier to handle as an output)

## Setup steps

1. **Search & fetch rows setup**
   - Create a Google Sheets database with the provided column structure
   - Connect Google Sheets OAuth credentials
   - Configure the filter on the Get Rows node to only include rows with an empty icebreaker column
2. **Set up AI personalization**
   - Add OpenAI API credentials for personalized icebreaker generation
   - Customize the AI prompts for your specific niche, requirements, or interests
3. **Update Google Sheets setup**
   - Remember to map all items to their respective columns based on the row number
   - All fields in the Update Sheet node should carry the same values as the first Sheets node, with the icebreaker field taking the ChatGPT output as its value
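The update-mapping rule in step 3 can be sketched as a one-liner: copy every column through unchanged and overwrite only the icebreaker. Assumptions: `row_number` as the Sheets matching key and `aiOutput.icebreaker` as the parsed JSON field are illustrative names.

```javascript
// Build the row sent to the Update Sheet node: identical to the fetched
// row except the icebreaker column, which takes the AI output.
function buildUpdateRow(row, aiOutput) {
  return { ...row, icebreaker: aiOutput.icebreaker };
}
```

Spreading the original row first guarantees no other column is blanked out by the update.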
by Maksudur Rahman
## How it works

1. **Trigger**: fires when a new meeting is booked in Cal.com.
2. **Date check**: the workflow calculates how many days remain before the meeting date.
3. **Email scheduling**: depending on the time left, it sends a series of pre-written "warm-up" emails via Gmail, designed to set expectations and build interest in your offering.
4. **Timing control**: emails are automatically spaced out to ensure natural engagement before the meeting.

## How to set up

1. Connect your Cal.com API key to authenticate and trigger on new bookings.
2. Connect your Google account to enable Gmail email sending.
3. Customize the email messages in the Set or Send Email nodes to match your brand voice and tone.
4. Test with internal bookings to ensure correct timing and delivery before activating for clients.

## Requirements

- Cal.com account with API access.
- Google account connected to the Gmail node.
- Active n8n instance (self-hosted or cloud).

## How to customize

- Adjust email spacing or timing by modifying the Wait nodes.
- Edit the email copy for different purposes (e.g., sales, onboarding, consultation).
- Add conditional logic to send different warm-up sequences for specific meeting types or durations.
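The date check in step 2 reduces to computing whole days between now and the booking's start time. A minimal sketch, assuming the booking's start time arrives as an ISO string (the field name in your Cal.com payload may differ):

```javascript
// Whole days remaining before the booked meeting; 0 means the meeting
// is today, and the count drives which warm-up emails still fit.
function daysUntilMeeting(startTimeIso, now = new Date()) {
  const msPerDay = 1000 * 60 * 60 * 24;
  return Math.floor((new Date(startTimeIso) - now) / msPerDay);
}
```

A branch on this value (e.g., ≥ 7 days → full sequence, 2-6 days → shortened sequence, otherwise a single reminder) is one way to implement the "depending on the time left" logic.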
by Atta
Stop manually searching Google for sales leads. Start listening to the internet.

This advanced workflow automatically identifies, qualifies, and enriches high-value leads by searching for their digital footprints (i.e., specific technology use or public directories). It uses a robust Find/Create/Conditional Update database pattern to prevent duplicates and ensure you only spend credits on enriching incomplete records. The workflow provides a fully persistent lead record, updating the same Airtable row as new data is found across multiple search steps.

## ✨ Key Features

- **Persistent data integrity**: uses a dedicated Loop Over Items structure to run the Find/Create/Update logic sequentially for every lead, guaranteeing no data is lost or duplicated.
- **Conditional enrichment**: a smart gate checks the Airtable record: if the high-value email field is empty, the workflow proceeds to the expensive scraping steps; if it is already complete, it skips the scrape.
- **Targeted scraping**: executes precise Google Dorks (via Decodo) to find initial leads, then targets the specific Contact Us page for deep email extraction.
- **Database as a state machine**: Airtable acts as the single source of truth, logging the initial lead status and updating the same row across several enrichment phases.
- **Final output**: delivers the fully enriched lead data (domain, primary email, contact page URL) to a final notification channel.

## ⚙️ How it Works (The Find/Create/Update Loop)

1. **Search & filter**: the workflow is manually triggered and uses the Config variables to execute a wide-scope Google Search via Decodo. The results are filtered into a clean array of unique domains.
2. **Loop & check**: the Loop Over Items node starts. Inside, the Airtable Read node checks the database for the current lead's domain.
3. **Create/update**: if the lead is new, the workflow creates a record (Airtable: Create Lead); if the lead exists, the record is updated (Airtable: Update Lead).
4. **Data merger**: the Data Merger: ID Finalizer node consolidates the two branches, ensuring the unique Airtable record ID is passed to the next step regardless of whether the lead was created or updated.
5. **Conditional enrichment**: the If: Enrichment Needed? node checks the existing Primary Email status. If it's empty, the item proceeds to the deep scraping pipeline (Decodo: Email Search → Decodo: Scrape Contact Page).
6. **Final update**: the final node updates the Airtable record with the high-quality email address found by the deep scrape.

## 📥 Decodo Node Installation

The Decodo node is used three times in this workflow for precision scraping and searching.

1. Find the node: click the + button on your n8n canvas.
2. Search: search for the Decodo node and select it.
3. Credentials: when configuring the first Decodo node, use your API key (obtained with the 80% discount coupon).

## 🎁 Exclusive Deal for n8n Users

To run this workflow, you need a robust scraping provider. We have secured a massive discount for Decodo users: get 80% OFF the 23k Advanced Scraping API plan.

- Coupon code: ATTAN8N
- Sign up here: Claim 80% Discount on Decodo

## 🛠️ Setup Instructions

This template requires specific node configuration and Airtable fields.

1. **Credentials**: obtain API keys for Decodo (using the coupon above) and Airtable.
2. **Airtable setup (schema)**: create an Airtable base with a 'Leads' table. It must include these fields for mapping:
   - Domain (Single Line Text - primary field)
   - Primary Email (Email)
   - Contact Page URL (URL)
   - Source URL (URL)
   - Lead Type (Single Select: Paid Ad Lead, Organic Lead)
   - Status (Single Select: New Lead, Updated, Enrichment Complete)
3. **Global configuration**: open the Config: Set Search Params node and customize the following fields:
   - tech_footprint: e.g., "We use Klaviyo"
   - target_industry: e.g., site:promarketer.ca

## ➕ How to Adapt the Template

- **Change database**: replace the Airtable nodes with Postgres, Notion, or Google Sheets for logging, adapting the field mappings.
- **Final notification**: add a Slack or Gmail node to alert the sales team immediately upon successful enrichment.
- **Multi-step enrichment**: integrate a service like Hunter.io or Clearbit to find key employee names and titles before the final database update.
- **Adjust scoring**: add an If node after the deep scrape to set a lead score based on whether a direct email (sales@) was found versus a general contact page link.
- **Add AI lead scoring**: integrate a Gemini or OpenAI node after the deep scraping step to assign an "AI Score" (1-100) based on lead quality (e.g., domain authority, quality of the extracted email) before the final update.
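The Find/Create/Update decision plus the enrichment gate can be sketched together. This is a sketch under assumptions: `existing` stands for the Airtable Read result (null when no row matches the domain), and the field names mirror the schema listed above.

```javascript
// Decide the database action for one lead and whether it needs the
// expensive Decodo deep-scrape pipeline.
function decideAction(existing, domain) {
  if (!existing) {
    // No row for this domain yet → create it (Airtable: Create Lead)
    return { action: "create", fields: { Domain: domain, Status: "New Lead" } };
  }
  // Row exists → update it, and only enrich if the email is still missing
  const needsEnrichment = !existing.fields["Primary Email"];
  return {
    action: "update",
    id: existing.id, // the record ID the Data Merger passes downstream
    enrich: needsEnrichment,
  };
}
```

This is exactly what makes the workflow credit-efficient: leads with a Primary Email already on file bypass the scraping steps entirely.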
by Masaki Go
## About This Template

This workflow automatically fetches the Nikkei 225 closing price every weekday and sends a formatted message to a list of users on LINE. It is perfect for individuals or teams who need to track the market's daily performance without checking the data manually.

## How It Works

1. **Schedule Trigger:** Runs the workflow automatically every weekday at 4 PM JST (Tokyo time), just after the market closes.
2. **Get Data:** An HTTP Request node fetches the latest Nikkei 225 data (closing price, change, %) from a data API.
3. **Prepare Payload:** A Code node formats this data into a user-friendly message and prepares the JSON payload for the LINE Messaging API, including a list of user IDs.
4. **Send to LINE:** An HTTP Request node sends the formatted message to all specified users via the LINE multicast API endpoint.

## Who It's For

- Anyone who wants to receive daily stock market alerts.
- Teams that need to share financial data internally.
- Developers looking for a simple example of an API-to-LINE workflow.

## Requirements

- An n8n account.
- A LINE Official Account and a Messaging API access token.
- An API endpoint that returns Nikkei 225 data. (The one in the template is a temporary example.)

## Setup Steps

1. **Add LINE credentials:** In the "Send to LINE via HTTP" node, edit the "Authorization" header to include your own LINE Messaging API Bearer token.
2. **Add user IDs:** In the "Prepare LINE API Payload" (Code) node, edit the `userIds` array to add all the LINE user IDs you want to message.
3. **Update the data API:** The URL in the "Get Nikkei 225 Data" node is a temporary example. Replace it with your own persistent API URL (e.g. from a public provider or your own server).

## Customization Options

- **Change the schedule:** Edit the "Every Weekday at 4 PM JST" node to run at a different time. (Note: 4 PM JST is 07:00 UTC, which is what the cron expression `0 7 * * 1-5` means.)
- **Change the message format:** Edit the `message` variable inside the "Prepare LINE API Payload" (Code) node to change the text of the LINE message.
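The payload the "Prepare LINE API Payload" Code node builds might look like the sketch below. The helper name, message wording, and sample values are illustrative assumptions; the LINE multicast endpoint accepts a `to` array of user IDs and a `messages` array:

```javascript
// Hypothetical sketch of the "Prepare LINE API Payload" Code node.
// Replace userIds with the real LINE user IDs from your Official Account.
function buildMulticastPayload(userIds, close, change, changePct) {
  const sign = change >= 0 ? "+" : "";
  const message =
    `Nikkei 225 Close: ${close}\n` +
    `Change: ${sign}${change} (${sign}${changePct}%)`;
  return {
    to: userIds, // LINE's multicast endpoint caps this at 500 IDs per call
    messages: [{ type: "text", text: message }],
  };
}

// Example with placeholder IDs and sample market data:
const payload = buildMulticastPayload(["U-placeholder-1", "U-placeholder-2"], 39500.25, 120.5, 0.31);
```

The downstream HTTP Request node would then POST this object to `https://api.line.me/v2/bot/message/multicast` with your Bearer token in the `Authorization` header.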
by vinci-king-01
# Deep Research Agent with AI Analysis and Multi-Source Data Collection

## 🎯 Target Audience

- Market researchers and analysts
- Business intelligence teams
- Academic researchers and students
- Content creators and journalists
- Product managers conducting market research
- Consultants performing competitive analysis
- Data scientists gathering research data
- Marketing teams analyzing industry trends

## 🚀 Problem Statement

Manual research processes are time-consuming, inconsistent, and often miss critical information from multiple sources. This template solves that by automating comprehensive research across web, news, and academic sources while providing AI-powered analysis and actionable insights.

## 🔧 How It Works

This workflow automatically conducts deep research on any topic using AI-powered web scraping, collects data from multiple source types, and provides comprehensive analysis with actionable insights.

### Key Components

1. **Webhook Trigger** - Receives research requests and initiates the automated research process.
2. **Research Configuration Processor** - Validates and processes research parameters and generates search queries.
3. **Multi-Source AI Scraping** - Uses ScrapeGraphAI to collect data from web, news, and academic sources.
4. **Data Processing Engine** - Combines and structures data from all sources for analysis.
5. **AI Research Analyst** - Uses GPT-4 to provide comprehensive analysis and insights.
6. **Data Storage** - Stores all research findings in Google Sheets for historical tracking.
7. **Response System** - Returns structured research results via webhook response.

## 📊 Google Sheets Column Specifications

The template creates the following columns in your Google Sheet:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| sessionId | String | Unique research session identifier | "research_1703123456789" |
| query | String | Research query that was executed | "artificial intelligence trends" |
| timestamp | DateTime | When the research was conducted | "2024-01-15T10:30:00Z" |
| analysis | Text | AI-generated comprehensive analysis | "Executive Summary: AI trends show..." |
| totalSources | Number | Total number of sources analyzed | 15 |

## 🛠️ Setup Instructions

Estimated setup time: 20-25 minutes

### Prerequisites

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- OpenAI API account and credentials
- Google Sheets account with API access

### Step-by-Step Configuration

1. **Install community nodes:** Install the required community node with `npm install n8n-nodes-scrapegraphai`.
2. **Configure ScrapeGraphAI credentials:**
   - Navigate to Credentials in your n8n instance.
   - Add new ScrapeGraphAI API credentials.
   - Enter your API key from the ScrapeGraphAI dashboard.
   - Test the connection to ensure it is working.
3. **Set up OpenAI credentials:**
   - Add OpenAI API credentials.
   - Enter your API key from the OpenAI dashboard.
   - Ensure you have access to the GPT-4 model.
   - Test the connection to verify API access.
4. **Set up the Google Sheets connection:**
   - Add Google Sheets OAuth2 credentials.
   - Grant the necessary permissions for spreadsheet access.
   - Create a new spreadsheet for research data.
   - Configure the sheet name (default: "Research_Data").
5. **Configure research parameters:**
   - Update the webhook endpoint URL.
   - Customize the default research parameters in the configuration processor.
   - Set appropriate search-query generation logic.
   - Configure research depth levels (basic, detailed, comprehensive).
6. **Test the workflow:**
   - Send a test webhook request with research parameters.
   - Verify data collection from all source types.
   - Check Google Sheets for proper data storage.
   - Validate the quality of the AI analysis output.

## 🔄 Workflow Customization Options

### Modify Research Sources

- Add or remove source types (web, news, academic).
- Customize search queries for specific industries.
- Adjust source credibility scoring algorithms.
- Implement custom data extraction patterns.

### Extend Analysis Capabilities

- Add industry-specific analysis frameworks.
- Implement comparative analysis between sources.
- Create custom insight generation rules.
- Add sentiment analysis for news sources.

### Customize Data Storage

- Add more detailed metadata tracking.
- Implement research versioning and history.
- Create multiple sheet tabs for different research types.
- Add data export capabilities.

### Output Customization

- Create custom response formats.
- Add research summary generation.
- Implement citation and source tracking.
- Create executive dashboard integration.

## 📈 Use Cases

- **Market research:** Comprehensive industry and competitor analysis
- **Academic research:** Literature reviews and citation gathering
- **Content creation:** Research for articles, reports, and presentations
- **Business intelligence:** Strategic decision-making support
- **Product development:** Market validation and trend analysis
- **Investment research:** Due diligence and market analysis

## 🚨 Important Notes

- Respect website terms of service and robots.txt files.
- Implement appropriate delays between requests to avoid rate limiting.
- Monitor API usage to manage costs effectively.
- Keep your credentials secure and rotate them regularly.
- Consider data-privacy and compliance requirements.
- Validate research findings against multiple sources.

## 🔧 Troubleshooting

Common issues:

- **ScrapeGraphAI connection errors:** Verify your API key and account status.
- **OpenAI API errors:** Check your API key and model access permissions.
- **Google Sheets permission errors:** Check the OAuth2 scope and permissions.
- **Research data quality issues:** Review the search-query generation logic.
- **Rate limiting:** Adjust request frequency and implement delays.
- **Webhook response errors:** Check the response format and content.

Support resources:

- ScrapeGraphAI documentation and API reference
- OpenAI API documentation and model specifications
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
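A minimal sketch of the test webhook request from setup step 6. The path `/webhook/deep-research` and the parameter names (`topic`, `depth`, `sources`) are assumptions to adapt to whatever your Webhook node and configuration processor actually expect:

```javascript
// Hypothetical request builder for the research webhook; align the field
// names with your own configuration processor before using.
function buildResearchRequest(topic, depth = "detailed") {
  return {
    topic,                                // the research query
    depth,                                // basic | detailed | comprehensive
    sources: ["web", "news", "academic"], // source types to scrape
  };
}

// Fire the request and return the structured result the workflow sends
// back (sessionId, query, analysis, totalSources, ...).
async function runResearch(baseUrl, topic) {
  const res = await fetch(`${baseUrl}/webhook/deep-research`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildResearchRequest(topic)),
  });
  if (!res.ok) throw new Error(`Webhook failed: ${res.status}`);
  return res.json();
}
```

Run it against your instance, e.g. `runResearch("https://your-n8n-host", "artificial intelligence trends")`, then check the Google Sheet for the stored session row.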