by Naveen Choudhary
Who is this for?
Marketing, content, and enablement teams that need a quick, human-readable summary of every new video published by the YouTube channels they care about—without leaving Slack.

What problem does this workflow solve?
Manually checking multiple channels, skimming long videos, and pasting the highlights into Slack wastes time. This template automates the whole loop: detect a fresh upload from your selected channels → pull subtitles → distill the key takeaways with GPT-4o-mini → drop a neatly formatted digest in Slack.

What this workflow does
- A Schedule Trigger fires every 10 minutes, then grabs a list of YouTube RSS feeds from a Google Sheet.
- HTTP Request + XML nodes fetch and parse each feed; only brand-new videos continue.
- The YouTube API fetches the title and description; RapidAPI grabs English subtitles.
- Code nodes build an AI payload; OpenAI returns a JSON summary + article.
- A formatter turns that JSON into Slack Block Kit, and Slack posts it (see the sketch at the end of this description).
- Processed links are appended back to the "Video Links" sheet to prevent duplicates.

Setup
1. Make a copy of this Google Sheet and connect a Google Sheets OAuth2 credential with edit rights.
2. Slack App: create → add chat:write, channels:read, app_mention; enable Event Subscriptions; install and store the Bot OAuth token in an n8n Slack credential.
3. RapidAPI key for https://yt-api.p.rapidapi.com/subtitles (300 free calls/mo) → save as HTTP Header Auth.
4. OpenAI key → save in an OpenAI credential.
5. Add your RSS feed URLs to the "RSS Feed URLs" tab; press Execute Workflow.

How to customise
- Adjust the schedule interval or freshness window in "If newly published".
- Swap the OpenAI model or prompt for shorter/longer digests.
- Point the Slack node at a different channel or DM.
- Extend the AI payload to include thumbnails or engagement stats.

Use-case ideas
- **Product marketing**: Instantly brief sales & CS teams when a competitor uploads a feature demo.
- **Internal learning hub**: Auto-summarise conference talks and share bullet-point notes with engineers.
- **Social media managers**: Get ready-to-post captions and key moments for re-purposing across platforms.
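For reference, a minimal sketch of what the Block Kit formatting step might look like as an n8n Code node, assuming the OpenAI step returns a JSON object with title, summary, and url fields (field names here are illustrative, not the template's exact schema):

```javascript
// Hypothetical Code node: turn the AI's JSON summary into Slack Block Kit.
// Assumes the previous node outputs { title, summary, url }; adjust to your payload.
const { title, summary, url } = $input.first().json;

return [{
  json: {
    blocks: [
      { type: 'header', text: { type: 'plain_text', text: title } },
      { type: 'section', text: { type: 'mrkdwn', text: summary } },
      { type: 'context', elements: [{ type: 'mrkdwn', text: `<${url}|Watch on YouTube>` }] },
    ],
  },
}];
```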
by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

What you'll find:

1️⃣ Naive Sequential Chaining
The simplest but least efficient approach - connecting LLM nodes in a direct sequence. Easy to set up for beginners but becomes unwieldy and slow as your chain grows.

2️⃣ Agent-Based Processing with Memory
Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.

3️⃣ Parallel Processing for Maximum Speed
Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context (a sketch of the prompt-splitting step follows this description).

Setup Instructions:
- API Credentials: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can modify the model in each Anthropic Chat Model node, or pick an entirely different LLM.
- For Cloud Users: If using the parallel processing method (section 3), replace {{ $env.WEBHOOK_URL }} in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
- Test Data: The workflow fetches content from the n8n blog by default. You can modify this part to use different content or another data source.
- Customization: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows!

Follow me on LinkedIn for more tips on AI automation and n8n workflows!
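As a rough illustration of the prompt-splitting step in section 3, a Code node along these lines can emit one item per prompt so that the downstream HTTP Request node runs once per prompt (the prompts shown are placeholders):

```javascript
// Sketch of a "split prompts" Code node: emit one n8n item per prompt so the
// "LLM steps - parallel" HTTP Request node can fire all calls concurrently.
const prompts = [
  'Summarize the article in two sentences.',
  'List three key takeaways.',
  'Suggest a follow-up question.',
];

return prompts.map((prompt) => ({ json: { prompt } }));
```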
by Gleb D
This n8n workflow template automates the process of collecting and analyzing Twitter (X) posts for any public profile, then generates a clean, AI-powered summary including key metrics, interests, and activity trends.

🚀 What It Does
- Accepts a user's full name and date range through a public form.
- Automatically finds the person's X (formerly Twitter) profile using a Google search.
- Uses Bright Data to retrieve full post data from the X.com profile.
- Extracts key post metrics like views, likes, reposts, hashtags, and mentions.
- Uses Google Gemini (PaLM) to generate a personalized summary: tone, themes, popularity, and sentiment.
- Stores both raw data and the AI summary in a connected Google Sheet for further review or team collaboration.

🛠️ Step-by-Step Setup
1. Deploy the public form to collect full name and date range.
2. Build a Google search query using the name to find their X profile.
3. Scrape the search results via Bright Data (Web Unlocker zone).
4. Parse the page content using the HTML node.
5. Use Gemini AI to extract the correct X profile URL.
6. Pull full post data via the Bright Data dataset snapshot API.
7. Transform post data into clean structured fields: date_posted, description, hashtags, likes, views, quoted_post.date_posted, quoted_post.description, replies, reposts, quotes, and tagged_users.profile_name (see the sketch after this description).
8. Analyze all posts using Google Gemini for interest detection and persona generation.
9. Save results to a Google Sheet: structured post data + AI-written summary.
10. Show success or fallback messages depending on profile detection or scraping status.

🧠 How It Works: Workflow Overview
- Trigger: When user submits form
- Search & Match: Google search → HTML parse → Gemini filters matching X profile
- Data Gathering: Bright Data → Poll for snapshot completion → Fetch post data
- Transformation: Extract and restructure key fields via Code node
- AI Summary: Use Gemini to analyze tone, interests, and trends
- Export: Save results to Google Sheet
- Fallback: Display custom error message if no X profile found

📨 Final Output
A record in your Google Sheet with:
- Clean post-level data
- Profile-level engagement summary
- An AI-written overview including tone, common topics, and post popularity

🔐 Credentials Used
- **Bright Data account** (for search & post scraping)
- **Google Gemini (PaLM)** or Gemini Flash via OpenAI/Google Vertex API
- **Google Sheets (OAuth2) account** (for result storage)

⚠️ Community Node Dependency
This workflow uses a custom community node: n8n-nodes-brightdata. Install it via the UI (Settings → Community Nodes → Install).
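The transformation in step 7 might look roughly like the Code node below; the raw field names are assumptions based on typical Bright Data snapshot output, so check your own snapshot's schema:

```javascript
// Sketch of the step-7 Code node: flatten raw Bright Data post records into
// the structured fields listed above. Raw field names are assumptions.
return $input.all().map((item) => {
  const post = item.json;
  return {
    json: {
      date_posted: post.date_posted,
      description: post.description,
      hashtags: post.hashtags ?? [],
      likes: post.likes ?? 0,
      views: post.views ?? 0,
      'quoted_post.date_posted': post.quoted_post?.date_posted ?? null,
      'quoted_post.description': post.quoted_post?.description ?? null,
      replies: post.replies ?? 0,
      reposts: post.reposts ?? 0,
      quotes: post.quotes ?? 0,
      'tagged_users.profile_name': (post.tagged_users ?? []).map((u) => u.profile_name),
    },
  };
});
```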
by Ranjan Dailata
Disclaimer
This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

Who this is for?
The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server, then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted to disk.

This workflow is tailored for:
- **Data Analysts**: Who require structured LinkedIn datasets for analytics and reporting.
- **Marketing and Sales Teams**: Looking to enrich lead databases, track company updates, and identify market trends.
- **Recruiters and Talent Acquisition Specialists**: Who want to automate candidate sourcing and company research.
- **AI Developers**: Integrating real-time professional data into intelligent applications.
- **Business Intelligence Teams**: Needing current and comprehensive LinkedIn data to drive strategic decisions.

What problem is this workflow solving?
Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow solves:
- Reliable web scraping using Bright Data MCP Server LinkedIn tools.
- LinkedIn person and company web scraping with AI Agents set up with the Bright Data MCP Server tools.
- Data extraction and transformation with the Google Gemini LLM.
- Persisting the LinkedIn person and company info to disk.
- Sending a webhook notification with the LinkedIn person and company info.

What this workflow does
This n8n workflow performs the following steps:
1. Trigger: Start manually.
2. Input URL(s): Specify the LinkedIn person and company URL.
3. Web Scraping (Bright Data): Use Bright Data's MCP Server LinkedIn tools to extract the person and company data.
4. Data Transformation & Aggregation: Uses the Google Gemini LLM for handling the data transformation.
5. Store / Output: Saves results to disk and also sends a webhook notification.

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol.
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp.
- You need to install the n8n-nodes-mcp community node.

Setup
1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
6. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
7. In n8n, configure the credentials to connect an MCP Client (STDIO) account with the Bright Data MCP Server (a sample configuration sketch appears at the end of this description). Make sure to set the Bright Data API_TOKEN within the Environments textbox as API_TOKEN=<your-token>.
8. Update the LinkedIn person and company URLs in the workflow.
9. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
10. Update the file name and path to persist on disk.

How to customize this workflow to your needs
- **Different Inputs**: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- **Data Extraction**: Modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- **Outputs**: Update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
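For reference, the MCP Client (STDIO) credential typically boils down to three fields along these lines; the exact values depend on your environment, so treat this as an assumption to verify against the Bright Data MCP README:

```
Command:      npx
Arguments:    @brightdata/mcp
Environments: API_TOKEN=<your-bright-data-api-token>
```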
by Joseph LePage
This n8n workflow template is designed to integrate a DeepSeek AI agent with Telegram, incorporating long-term memory capabilities for personalized and context-aware responses. Here's a detailed breakdown:

Core Features
- Telegram Integration: Uses a webhook to receive messages from Telegram users. Validates user identity and message content before processing.
- AI-Powered Responses: Employs DeepSeek's AI models for conversational interactions. Includes memory capabilities to personalize responses based on past interactions.
- Error Handling: Sends an error message if the input cannot be processed.

Model Options 🧠
- **DeepSeek-V3 Chat**: Handles general conversational tasks.
- **DeepSeek-R1 Reasoning**: Provides advanced reasoning capabilities for complex queries.
- **Memory Buffer Window**: Maintains session context for ongoing conversations.

Quick Setup 🛠️
Telegram Webhook Configuration
1. Set up a webhook using the Telegram Bot API:
   https://api.telegram.org/bot{my_bot_token}/setWebhook?url={url_to_send_updates_to}
   Replace {my_bot_token} with your bot's token and {url_to_send_updates_to} with your n8n webhook URL (a small sketch of this call appears at the end of this description).
2. Verify the webhook setup using:
   https://api.telegram.org/bot{my_bot_token}/getWebhookInfo

DeepSeek API Configuration
- Base URL: https://api.deepseek.com
- Obtain your API key from the DeepSeek platform.

Implementation Details 🔧
- User Validation: The workflow validates the user's first name, last name, and ID using data from incoming Telegram messages. Only authorized users proceed to the next steps.
- Message Routing: Routes messages based on their type (text, audio, or image) using a switch node, ensuring appropriate handling for each message format.
- AI Agent Interaction: Processes text input using the DeepSeek-V3 or DeepSeek-R1 models. Customizable system prompts define the AI's behavior and rules, ensuring user-centric and context-aware responses.
- Memory Management: Retrieves long-term memories stored in Google Docs to enhance personalization, and saves new memories based on user interactions, ensuring continuity across sessions.
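If you prefer to register the webhook from a script rather than the browser, a small Node.js sketch (Node 18+, run as an ES module) makes the same setWebhook call; the token and URL are placeholders:

```javascript
// Register the Telegram webhook; expects BOT_TOKEN and N8N_WEBHOOK_URL in the environment.
const BOT_TOKEN = process.env.BOT_TOKEN;
const N8N_WEBHOOK_URL = process.env.N8N_WEBHOOK_URL;

const res = await fetch(
  `https://api.telegram.org/bot${BOT_TOKEN}/setWebhook?url=${encodeURIComponent(N8N_WEBHOOK_URL)}`
);
console.log(await res.json()); // expect { ok: true, result: true, ... }
```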
by Ranjan Dailata
Who this is for
The Async Structured Bulk Data Extract with Bright Data Web Scraper workflow is designed for data engineers, market researchers, competitive intelligence teams, and automation developers who need to programmatically collect and structure high-volume data from the web using Bright Data's dataset and snapshot capabilities.

This workflow is built for:
- **Data Engineers** - Building large-scale ETL pipelines from web sources
- **Market Researchers** - Collecting bulk data for analysis across competitors or products
- **Growth Hackers & Analysts** - Mining structured datasets for insights
- **Automation Developers** - Needing reliable snapshot-triggered scrapers
- **Product Managers** - Overseeing data-backed decision-making using live web information

What problem is this workflow solving?
Web scraping at scale often requires asynchronous operations, including waiting for data preparation and snapshots to complete. Manual handling of this process can lead to timeouts, errors, or inconsistencies in results. This workflow automates the entire process of submitting a scraping request, waiting for the snapshot, retrieving the data, and notifying downstream systems, all in a structured, repeatable fashion.

It solves:
- Asynchronous snapshot completion handling
- Reliable retrieval of large datasets using Bright Data
- Automated delivery of scraped results via webhook
- Disk persistence for traceability or historical analysis

What this workflow does
1. Set Bright Data Dataset ID & Request URL: Takes in the dataset ID and the Bright Data API endpoint used to trigger the scrape job.
2. HTTP Request: Sends an authenticated request to the Bright Data API to start a scraping snapshot job.
3. Wait Until Snapshot is Ready: Implements a loop or wait mechanism that checks the snapshot status (e.g., polling every 30 seconds) until it reaches the ready state (a polling sketch follows this description).
4. Download Snapshot: Downloads the structured dataset snapshot once ready.
5. Persist Response to Disk: Saves the dataset to disk for archival, review, or local processing.
6. Webhook Notification: Sends the final result, or a summary of it, to an external webhook.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. Update the Set Dataset Id and Request URL nodes to set the brand content URL.
5. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- **Polling Strategy**: Adjust the polling interval (e.g., every 15–60 seconds) based on snapshot complexity.
- **Input Flexibility**: Accept the dataset ID and request URL dynamically via a webhook trigger or input form.
- **Webhook Output**: Send notifications to internal APIs (for use in dashboards) or Zapier/Make (for multi-step automation).
- **Persistence**: Save output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
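As a sketch of the waiting step in isolation (in the workflow itself this is modeled with Wait and If nodes), the polling logic amounts to something like the following; the endpoint path and status values reflect Bright Data's dataset snapshot API as of writing, so verify them against your account's documentation:

```javascript
// Standalone polling sketch: check snapshot progress until it is ready.
const API_TOKEN = process.env.BRIGHTDATA_API_TOKEN;

async function waitForSnapshot(snapshotId, intervalMs = 30000) {
  for (;;) {
    const res = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`,
      { headers: { Authorization: `Bearer ${API_TOKEN}` } }
    );
    const { status } = await res.json();
    if (status === 'ready') return;                 // snapshot can now be downloaded
    if (status === 'failed') throw new Error('Snapshot failed');
    await new Promise((r) => setTimeout(r, intervalMs)); // back off, then poll again
  }
}
```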
by Billy Christi
Who is this for?
This workflow is perfect for:
- Digital marketers who need to scale SEO-optimized content production
- Bloggers and content creators who want to maintain consistent publishing schedules
- Small business owners who need regular blog content but lack writing resources

What problem is this workflow solving?
Creating high-quality, SEO-optimized blog content consistently is time-consuming and resource-intensive. This workflow solves that by:
- Automating the content generation process from topic to final draft
- Ensuring quality control through human-in-the-loop approval
- Managing topic queues and preventing duplicate content creation
- Streamlining the revision process based on human feedback
- Organizing and archiving all generated content for future reference

What this workflow does
From topics stored in Google Sheets, this workflow:
1. Automatically retrieves pending topics from your Google Sheets tracking document (a small sketch of this status filtering appears after this description)
2. Generates SEO-optimized blog posts (800-1200 words) using OpenAI GPT-4 with structured prompts
3. Sends content for human approval via email with custom approval forms
4. Handles revision requests by incorporating feedback while maintaining SEO best practices
5. Updates topic status to prevent duplicate processing
6. Adds approved generated content to Google Sheets for easy access and management
7. Routes the workflow based on approval decisions (approve, revise, or cancel)

Setup
1. Copy the Google Sheet template here: 👉 Automate Blog Content Creation – Google Sheet Template
2. Connect Google Sheets with your topic tracking document (requires "Topic List" and "Generated Content" sheets)
3. Add your OpenAI API key to the AI agent nodes for content generation
4. Configure Gmail for the approval notification system
5. Set up your topic list in Google Sheets with "Topic" and "Status" columns
6. Customize the schedule trigger to run at your preferred intervals
7. Update the email recipient in the approval node to your email address
8. Test with a sample topic marked as "Pending" in your Google Sheet

How to customize this workflow to your needs
- **Adjust content length**: modify the word count requirements in the AI agent prompts
- **Change writing style**: customize the copywriter prompts for different tones (formal, casual, technical)
- **Add multiple reviewers**: extend the approval system to include additional stakeholders
- **Integrate with CMS**: add nodes to automatically publish approved content to WordPress, Webflow, or other platforms
- **Include keyword research**: add Ahrefs or SEMrush nodes to incorporate keyword data
- **Add image generation**: integrate DALL-E or Midjourney for automatic featured image creation
- **Customize approval criteria**: modify the approval form to include specific feedback categories
- **Add content scoring**: integrate readability checkers or SEO analysis tools before approval
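The duplicate prevention in step 1 amounts to filtering rows on the Status column. A minimal sketch as a Code node (in n8n this could equally be an If or Filter node; row_number is the row index the Google Sheets node returns):

```javascript
// Keep only topics still marked "Pending" so already-processed rows are skipped.
// Column names ("Topic", "Status") follow the template sheet described above.
return $input.all()
  .filter((item) => item.json.Status === 'Pending')
  .map((item) => ({ json: { topic: item.json.Topic, row: item.json.row_number } }));
```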
by Yaron Been
LinkedIn Enrichment & Ice Breaker Generator

For SDRs, growth marketers, and founders looking to scale personalized outreach. This workflow enriches LinkedIn profile data using Bright Data and generates AI-powered ice breakers using Claude (Anthropic). It automates research and messaging to help you connect smarter and faster — without manual effort.

🧩 How It Works
This workflow combines Google Sheets, Bright Data, and Claude (Anthropic) to fully automate your outreach research:
1. Trigger: Manually trigger the workflow or run it on a schedule (via Manual Trigger or Schedule Trigger).
2. Read Input Sheet: Fetches rows from a Google Sheet. Each row must contain at least a Linkedin_URL_Person and row_number.
3. Prepare Input: Formats each row for Bright Data's API using Set and SplitInBatches nodes.
4. Enrich Profile (Bright Data API): Sends LinkedIn URLs to Bright Data's Dataset API via HTTP Request (a request sketch follows this description). Waits for the snapshot to be ready using polling logic with Wait, If, and Snapshot Progress nodes. Once ready, retrieves the enriched profile data, including: name, city, current company, about section, and recent posts.
5. Update Sheet with Profile Data: Writes the retrieved enrichment data into the corresponding row in Google Sheets (via row_number).
6. Generate Ice Breaker (Claude AI): Sends enriched profile content to Claude (Anthropic) using a custom prompt. Focuses on recent posts for crafting relevant, respectful, 1–4-line ice breakers.
7. Update Sheet with Ice Breaker: Writes the generated ice breaker to the Ice Breaker 1 column in the original row.

✅ Requirements
To use this workflow, you must have the following:

Google Sheets
- A Google account
- A Google Sheet with at least one sheet/tab containing:
  - Column: Linkedin_URL_Person
  - Column: row_number (used for mapping input and output rows)

Bright Data
- A Bright Data account with access to the Dataset API
- An active dataset that accepts LinkedIn URLs
- An API key with Dataset API access

Anthropic Claude
- An Anthropic API key (for Claude 3.5 Haiku or other Claude models)

n8n Environment
- Access to HTTP Request, Set, Wait, SplitInBatches, If, and Google Sheets nodes
- Access to the Claude integration (via LangChain nodes: @n8n/n8n-nodes-langchain)
- Credential manager properly configured with:
  - Google Sheets OAuth2 credentials
  - Bright Data API key
  - Anthropic API key

⚙️ Setup Instructions

Step 1: Copy the Google Sheets Template
> 📄 Click here to make a copy
- Fill the Linkedin_URL_Person column with LinkedIn profile URLs you want to enrich
- Do not modify headers or add filters to the sheet
- Leave other columns (name, city, about, posts, ice breaker) blank — the workflow fills them

Step 2: Connect Your Accounts in n8n
- Google Sheets: Create a credential under Google Sheets OAuth2 API
- Bright Data: Add your API key as a credential under HTTP Request (Authorization header)
- Anthropic: Create a credential for the Anthropic API with your Claude key

Step 3: Import and Configure the Workflow
1. Import the workflow into your n8n instance.
2. In each Google Sheets node: select the copied Google Sheet and the correct tab (usually input or Sheet1).
3. In the HTTP Request node to Bright Data: paste your Bright Data dataset ID.
4. In the Claude prompt node: optionally adjust the tone and length of the ice breaker prompt.

Step 4: Run the Workflow
- Test it using the Manual Trigger node
- For daily automation, enable the Schedule Trigger and configure interval settings
- Watch your Google Sheet populate with enriched data and tailored ice breakers

🧠 Tips & Best Practices
- **Bright Data Delay**: Snapshots may take time. The workflow polls the status until complete.
- **Retry Protection**: If and Wait nodes avoid infinite loops by checking snapshot status.
- **Mapping via row_number**: Critical to ensure data is updated in the right row.
- **Prompt Engineering**: You can fine-tune Claude's behavior by editing the text prompt.

🧾 Output Example
Once complete, each row in your Google Sheet will contain:

| Linkedin_URL_Person | Name | City | Company | Recent Post | Ice Breaker |
|---------------------|------|------|---------|-------------|-------------|
| linkedin.com/... | Jane Doe | NYC | ACME Corp | "Why AI should replace meetings" | "Loved your post about AI and meetings — finally someone said it!" |

💬 Support & Feedback
Questions? Want to tweak the prompt or expand the enrichment?
- 📧 Email: Yaron@nofluff.online
- 📺 YouTube: @YaronBeen
- 🔗 LinkedIn: linkedin.com/in/yaronbeen
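For orientation, the enrichment request in step 4 is roughly the call below. The endpoint shape follows Bright Data's Dataset API trigger call as of writing; DATASET_ID, the token, and the example profile URL are placeholders to verify in your own account:

```javascript
// Sketch of the trigger request the HTTP Request node sends to Bright Data.
const res = await fetch(
  `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${process.env.DATASET_ID}&include_errors=true`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.BRIGHTDATA_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([{ url: 'https://www.linkedin.com/in/example/' }]),
  }
);
const { snapshot_id } = await res.json(); // poll this ID until the snapshot is ready
```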
by Brian Coyle
Description
Candidate Engagement | Resume Screening | AI Voice Interviews | Applicant Insights

This intelligent n8n workflow automates the process of extracting and scoring resumes received through a company career page, populating a Notion database with AI insights where the recruiter or hiring manager can automatically invite the applicant to an instant interview with an ElevenLabs AI voice agent. After the agent conducts the behavior-based interview, the workflow scores the overall interview against customizable evaluation criteria and updates the Notion database with AI insights about the applicant.

AI-powered resume screening and voice AI that interviews like a recruiter, with AI insights in a Notion dashboard.

Who is this for?
HR teams, recruiters, and talent acquisition professionals. This workflow is ideal for anyone looking for a foundational, extensible framework to automate early-stage recruiting. Whether you're exploring AI for the first time or scaling automation across your hiring process, this template provides a base for screening, interviewing, and tracking candidates—powered entirely by n8n, ElevenLabs, Notion, and LLM integrations. Be sure to consult state and country regulations with respect to AI compliance, AI bias audits, AI risk assessment, and disclosure requirements.

What problem is this workflow solving?
Manually screening resumes and conducting initial interviews slows down hiring. This template automates:
- Resume assessment against the job description.
- Scheduling first and second round interviews.
- First-round AI-led behavioral interviews with AI scoring assessment.
- Centralized tracking of AI assessments in Notion.

What this does
This customizable tool, configured to manage 3 requisitions in parallel, automates the application process, resume screen, and first-round behavioral interviews.

Pre-screen Applicants with AI
Immediately screens and scores the applicant's resume against the job description. The AI Agent generates a score and an AI assessment, adding both to the applicant's profile in Notion. Notion automatically notifies the hiring manager when a resume receives a score of 8 or higher.

Voice AI that Interviews like a Recruiter
The AI voice agent adapts probing questions based on the applicant's responses and intelligently dives deeper into skills and experience to assess answers against a scoring rubric for each question.

AI Applicant Insights in Notion
Get detailed post-interview AI analysis, including interview recordings and question-by-question scoring breakdowns, to help identify who you should advance to the next stage in the process. AI insight is provided in the Notion ATS dashboard, with drag and drop to advance top candidates to the next interview stage.

How it works
Link to Notion Template
- Notion Career Page: Notion career page published to the web; can be integrated with your preferred job board posting system.
- Notion Job Posting: Gateway for applicants to apply to active requisitions with a 'Click to Apply' button.
- Application Form: n8n web form embedded into the Notion job posting captures applicant information and routes it for AI processing.

AI Agent evaluates resume against job description
The AI Agent evaluates the resume against the job description stored in Notion and scores the applicant on a scale of 1 to 10, providing a rationale for the score.

Creates ATS record in Notion with assessment and score
The workflow creates the applicant record in the Notion ATS, where recruiters and hiring managers see applicants in a filtered view, sorted by AI-generated resume score. Users can automatically advance applicants to the next step in the process (AI Conversation interview) with drag-and-drop functionality.

Invites applicant to an Instant AI Interview
Dragging the applicant to the AI Interview step in the Notion ATS dashboard triggers a Notion automation that sends the applicant an email with a link to the ElevenLabs Conversational AI agent. The agent is provided with instructions on how to conduct the behavior-based interview, including probing questions, for the specific role.

AI Conversation Agent Behavior-Based Interview
The email link resolves to an ElevenLabs Conversational AI agent that has been instructed to interview applicants using pre-defined interview questions, a scoring rubric, the job description, and a company profile. The agent assesses the applicant on a scale of 1 to 5 for each interview question and provides an overall assessment of the interview based on established evaluation criteria.

Click to hear the AI voice agent in action. Example:
- Role: IT Support Analyst
- Mark: ElevenLabs AI agent instructed to interview applicants for the specific role
- Gemini: Google AI coached to answer questions as an IT Support Analyst being interviewed

Updates Notion record with Interview Assessment and Score
All results, including the conversation transcript, interview scores, and rationale for the assessment, are automatically added back to the applicant's profile in Notion, where the hiring manager can validate the AI assessment by skimming through the embedded audio file.
- AI Interview Overall Score: 1 to 5, based on responses to all questions and probes. The AI Agent confirms that it was able to evaluate the interview using the assigned rubric.
- AI Interview Criteria Score: Success/Failure based on the response to each individual interview question.

Invites applicant to second interview with Hiring Manager
Dragging the applicant to the 'Hiring Manager Interview' step in the Notion ATS dashboard triggers a Notion automation that sends an email with a link to the hiring manager's calendar scheduling solution.

Configuration and Set Up

Accounts & API Keys
You will need accounts and credentials for:
- n8n (hosted or self-hosted)
- ElevenLabs (for the AI Conversation Agent)
- Gemini (for LLM model access)
- Google Drive (to back up applicant data)
- Calendly (to automate interview scheduling)
- Gmail (to automate interview scheduling)

Data / Documents to implement
- Job descriptions for each role
- Interview questions for each role
- Evaluation criteria for each interview question

Notion Set Up
Customize your Notion Career Page. Link to the free Notion template that enables the workflow. Update the Notion job description database:
- Update the job description(s) for each role.
- Add interview questions to the job description database page in Notion.
- Add evaluation criteria to the job description database page in Notion.
- Edit each 'Click to Apply' button in the job description template so it resolves to the corresponding n8n 'Application Form' web form production URL (detail provided below).

Notion Applicant Tracker
In the Applicant Tracker database, update position titles and tab headings in the custom database view (Notion) so they reflect the title of the position you are posting. Edit the filter for each tab so it matches the position title.

Notion Email Automation
Update the Notion automation templates used to invite applicants to the AI Interview and Hiring Manager interview. Note: trigger the email automation by dragging the applicant profile to the next Applicant Comm Status in the Applicant Tracker.
- AI Interview invite template: revise the position title to reflect the title of the role you are posting; include the link to your Conversational AI agent for that role in the email body. Note: each unique role uses an ElevenLabs AI conversation agent designed for that role.
- Hiring Manager Interview invite template: revise the position title to reflect the title of the role you are posting; include the link to your Calendly page or a similar scheduling solution to automate interview scheduling.

N8N Configuration

Workflow 1
- Application Forms (3 nodes, one for each job): Update the n8n form title and description to match the job description you configured in Notion. Confirm the Job Code in the Applicant Form node matches the Job Code in Notion for that position. Edit the Form Response to customize the message displayed after the applicant clicks submit.
- Upload CV - Google Drive: Authenticate your Google Drive account and select the folder that will be used to store resumes.
- Get Job Description - Notion: Authenticate your Notion account and select your Career Page from the list of databases that contain your job descriptions.
- Applicant Data Backup - Google Sheet: Create a Google Sheet where you will track applicant data for AI compliance reporting requirements. Open the node in n8n and use the field names in the node as Google Sheet column headings.

Workflow 2
- Elevenlabs Web Hook (Node 1): Edit the Web Hook POST node and copy the production URL displayed in the node. This URL is entered into the ElevenLabs post-call webhook described below (a sketch of the incoming payload handling follows this description).
- AI Agent: Authenticate your LLM model (Gemini in this example) and add your Notion database as a tool to pull the evaluation_criteria hosted in Notion for the specific role.
- Extract Audio: Create an ElevenLabs API key for your conversation agent and enter that key as a JSON header for the Extract Audio node.
- Upload Audio to Drive - Google Drive: Authenticate your Google Drive account and select the folder that will be used to store the audio file.

Elevenlabs Configuration
1. Create an ElevenLabs account.
2. Create a Conversational AI agent.
3. Add First Message and System Prompt: Design the 'First Message' and 'System Prompt' that guide the AI agent conducting the interview. Tip: provide an instruction that limits the number of probes per interview question.
4. Knowledge Base: Upload your role-specific interview questions and job description, using the same text that is stored in your Notion Career Page for the role. You can also add a document about your company and instruct the agent to answer questions about culture, strategy, and company growth.
5. Analysis - Evaluation Criteria: Add your evaluation criteria, under 2000 characters, for each interview question / competency.
6. Analysis - Data Collection: Add the following elements, using the exact character strings represented below.
   - phone_number_AI_screen: "capture applicant's phone number provided at the start of the conversation and share this as a string, integers only."
   - full_name: "capture applicant's full name provided at the start of the conversation."
7. Advanced - Max Duration: Set the max duration for the interview in seconds. The AI agent will time out at the max duration.
8. Conversation AI Widget: Customize your AI Conversation Agent landing page, including the position title and company name.
9. AI Conversation Agent URL: Copy the AI Conversation Agent URL and add it to the Notion email template triggered by the AI Interview email automation. Use a custom agent URL for each distinct job description.
10. Enable the ElevenLabs post-call webhook for your Conversation Agent: Log into your ElevenLabs account, go to Conversational AI Settings, and click on Post-Call Web Hook. This is where you enter the production URL from the n8n Web Hook node (Workflow 2). This sends the AI voice agent output to your n8n workflow, which feeds back to your Notion dashboard.
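As a loose sketch of handling the post-call payload in Workflow 2: the field paths below are assumptions about the general shape of ElevenLabs' post-call transcription payload, so inspect a real execution of the Web Hook node and adjust before relying on them:

```javascript
// Hypothetical Code node placed after the Web Hook POST node.
const body = $input.first().json.body ?? $input.first().json;
const data = body.data ?? {};

return [{
  json: {
    conversationId: data.conversation_id,
    summary: data.analysis?.transcript_summary,
    criteriaResults: data.analysis?.evaluation_criteria_results, // per-question success/failure
    fullName: data.analysis?.data_collection_results?.full_name?.value,
    phone: data.analysis?.data_collection_results?.phone_number_AI_screen?.value,
  },
}];
```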
by Yaron Been
Transform chaotic support requests into organized, actionable insights automatically. This intelligent workflow captures support tickets from forms, uses AI to categorize and analyze sentiment, stores everything in organized databases, and delivers comprehensive analytics reports to your team - eliminating manual sorting while providing valuable business intelligence.

🚀 What It Does
- Intelligent Ticket Processing: Automatically categorizes incoming support requests into Billing, Bug Reports, Feature Requests, How-To questions, and Complaints using advanced AI analysis.
- Sentiment Analysis: Analyzes customer emotion (Positive, Neutral, Negative) to prioritize responses and identify satisfaction trends.
- Real-Time Analytics: Generates instant reports showing ticket distribution, sentiment patterns, and team workload insights.
- Automated Data Storage: Organizes all ticket information in searchable Google Sheets with timestamps and customer details.
- Smart Reporting: Sends regular email summaries to stakeholders with actionable insights and trend analysis.

🎯 Key Benefits
✅ Save 10+ Hours Weekly: Eliminate manual ticket sorting and categorization
✅ Improve Response Times: Prioritize tickets based on category and sentiment
✅ Boost Customer Satisfaction: Never miss urgent issues or complaints
✅ Track Performance: Monitor support trends and team effectiveness
✅ Scale Operations: Handle increasing ticket volume without additional staff
✅ Data-Driven Decisions: Make informed improvements based on real patterns

🏢 Perfect For

Customer Support Teams
- SaaS companies managing user inquiries and bug reports
- E-commerce stores handling order and product questions
- Service businesses organizing client communications
- Startups scaling support operations efficiently

Business Applications
- **Help Desk Management**: Organize and prioritize incoming support requests
- **Customer Success**: Monitor satisfaction levels and identify improvement areas
- **Product Development**: Track feature requests and bug report patterns
- **Team Management**: Distribute workload based on ticket categories and urgency

⚙️ What's Included
- Complete Workflow Setup: Ready-to-use n8n workflow with all nodes configured
- AI Integration: Google Gemini-powered classification and sentiment analysis
- Form Integration: Works with Typeform (easily adaptable to other platforms)
- Data Management: Automated Google Sheets organization and storage
- Email Reporting: Professional summary reports sent to your team
- Documentation: Step-by-step setup and customization guide

🔧 Technical Requirements
- **n8n Platform**: Cloud or self-hosted instance
- **Google Gemini API**: For AI classification (free tier available)
- **Typeform Account**: For support form creation (alternatives supported)
- **Google Workspace**: For Sheets data storage and Gmail reporting
- **SMTP Email**: For automated report delivery

📊 Sample Output
Daily Support Summary Email:

📧 Support Ticket Summary - March 15, 2024

📊 TICKET BREAKDOWN:
• Billing: 12 tickets (30%)
• Bug Report: 8 tickets (20%)
• Feature Request: 6 tickets (15%)
• How-To: 10 tickets (25%)
• Complaint: 4 tickets (10%)

😊 SENTIMENT ANALYSIS:
• Positive: 8 tickets (20%)
• Neutral: 22 tickets (55%)
• Negative: 10 tickets (25%)

⚡ PRIORITY ACTIONS:
• 4 complaints requiring immediate attention
• 3 billing issues escalated to finance team
• 6 feature requests for product backlog review

🎨 Customization Options
- Categories: Easily modify ticket categories for your specific business needs
- Form Platforms: Adapt to Google Forms, JotForm, Wufoo, or custom webhooks
- Reporting Frequency: Set daily, weekly, or real-time report delivery
- Team Notifications: Configure alerts for urgent tickets or negative sentiment
- Data Visualization: Create custom dashboards and charts in Google Sheets
- Integration Extensions: Connect to CRM, project management, or chat platforms

🔄 How It Works
1. Customer submits a support request via your form
2. AI analyzes the message content and assigns a category + sentiment (a parsing sketch follows this description)
3. Data is automatically stored in organized Google Sheets
4. System generates real-time analytics on all historical tickets
5. A professional report is emailed to your support team
6. Team can prioritize responses based on urgency and sentiment

💡 Use Case Examples
- SaaS Company: Automatically route billing questions to finance, bugs to development, and feature requests to the product team
- E-commerce Store: Prioritize shipping complaints, categorize product questions, and track customer satisfaction trends
- Consulting Firm: Organize client requests by service type, monitor project-related issues, and ensure timely responses
- Healthcare Practice: Sort appointment requests, billing inquiries, and medical questions while maintaining HIPAA compliance

📈 Expected Results
- **80% reduction** in manual ticket sorting time
- **50% faster** initial response times through better prioritization
- **25% improvement** in customer satisfaction scores
- **100% visibility** into support trends and team performance
- **Unlimited scalability** as your business grows

📞 Get Help & Learn More

🎥 Free Video Tutorials
YouTube Channel: https://www.youtube.com/@YaronBeen/videos

💼 Professional Support
LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Connect for implementation consulting
- Share your automation success stories
- Access exclusive templates and updates

📧 Direct Support
Email: Yaron@nofluff.online
- Technical setup assistance
- Custom workflow modifications
- Integration with existing systems
- Response within 24 hours

🏆 Why Choose This Workflow
- Proven Results: Successfully deployed across 100+ businesses worldwide
- Expert Created: Built by an automation specialist with 10+ years of experience
- Continuously Updated: Regular improvements and new features added
- Money-Back Guarantee: Full refund if not satisfied within 30 days
- Lifetime Support: Ongoing help and updates included with purchase
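A sketch of step 2's parsing, assuming the Gemini node is prompted to reply with JSON such as {"category": "Billing", "sentiment": "Negative"} (the input field name text and the fallback defaults are illustrative):

```javascript
// Parse the classifier's reply defensively; fall back to defaults on bad output.
const raw = $input.first().json.text ?? '';
const CATEGORIES = ['Billing', 'Bug Report', 'Feature Request', 'How-To', 'Complaint'];

let category = 'How-To';
let sentiment = 'Neutral';
try {
  // Strip any prose around the JSON object before parsing.
  const parsed = JSON.parse(raw.slice(raw.indexOf('{'), raw.lastIndexOf('}') + 1));
  if (CATEGORIES.includes(parsed.category)) category = parsed.category;
  if (['Positive', 'Neutral', 'Negative'].includes(parsed.sentiment)) sentiment = parsed.sentiment;
} catch (e) {
  // keep defaults if the model returned something unparseable
}

return [{ json: { category, sentiment, receivedAt: new Date().toISOString() } }];
```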
by Evoort Solutions
🚀 AI-Powered LinkedIn Post Automation

🧩 How It Works
This workflow automatically generates LinkedIn posts based on a user-submitted topic, including both content creation and image generation, then publishes the post to LinkedIn. Ideal for marketers, content creators, or businesses looking to streamline their social media activity without the need for manual post creation.

High-Level Workflow:
1. Trigger: The workflow is triggered when a user submits a form with a topic for the LinkedIn post.
2. Data Mapping: The topic is mapped and prepared for the AI model.
3. AI Content Generation: Calls the Google Gemini AI model to generate engaging post content and a visual image prompt.
4. Image Creation: Sends the image prompt to the external API, gen-imager, to generate a professional image matching the topic.
5. Post Creation: Publishes the text and image to LinkedIn, automatically updating the user's feed.

⚙️ Set Up Steps (Quick Overview)
🕐 Estimated Setup Time: ~10–20 minutes
1. Connect Google Gemini: Set up your Google Gemini API credentials to interact with the AI model for content creation.
2. Set Up External Image API: Configure the external image generation API (gen-imager) for visual creation based on the post prompt.
3. Connect LinkedIn: Set up OAuth2 credentials to authenticate your LinkedIn account and allow publishing posts.
4. Form Submission Setup: Create a simple web form for users to submit the topic for LinkedIn posts.
5. Activate the Workflow: Once everything is connected, activate the workflow. It will trigger automatically upon receiving form submissions.

💡 Important Notes:
- The flow uses Google Gemini (PaLM) for generating content based on the user's topic.
- Text to Image: The image generation process involves creating a professional, LinkedIn-appropriate image based on the post's topic using the **gen-imager** API.
- You can customize the visual elements of the posts and adjust the tone of the generated content based on preferences.

🛠 Detailed Node Breakdown:
1. On Form Submission (Trigger): Captures the user-submitted topic and initializes the workflow by gathering the topic information.
2. Mapper (Field Mapping): Maps the captured topic to a variable that is passed along for content generation.
3. AI Agent (Content Generation): Calls Google Gemini to generate professional LinkedIn post content and an image prompt based on the submitted topic. Outputs content in a structured form — post text and image prompt.
4. Google Gemini Chat Model: AI model that generates actionable insights, engaging copy, and an image prompt for the LinkedIn post.
5. Normalizer (Data Cleanup): Cleans the output from the AI model to ensure the content and image prompt are correctly formatted for the next steps.
6. Text to Image (Image Generation): Sends the image prompt to the gen-imager API, which returns a custom image based on the post's topic.
7. Decoder (Base64 Decoding): Decodes the image from base64 format for easier uploading to LinkedIn (a decoding sketch appears at the end of this description).
8. LinkedIn (Post Creation): Publishes the generated text and image to LinkedIn, automatically creating a polished post for the user's feed.

⏱ Execution Time Breakdown:
- Total Estimated Execution Time: ~15–40 seconds per workflow run.
- On Form Submission: Instant (Trigger)
- Mapper (Field Mapping): ~1–2 seconds
- AI Content Generation: ~5–10 seconds (depending on server load)
- Text to Image: ~5–15 seconds (depends on external API)
- LinkedIn Post Creation: ~2–5 seconds

🚀 Ready to Get Started?
Let's get you started with automating your LinkedIn posts! Create your free n8n account and set up the workflow using this link.

📝 Notes & Customizations
- **Form Fields**: Customize the form to gather more specific information for the LinkedIn posts (like audience targeting, post category, etc.).
- **Image API Customization**: Adjust the image generation prompt to fit your brand's style, or change the color palette as needed.
- **Content Tone**: The tone can be adjusted by modifying the system message sent to Google Gemini for content generation.
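The Decoder step in the node breakdown amounts to converting the API's base64 string into n8n binary data. A minimal sketch as a Code node, assuming the image arrives in an image_base64 field (the field and file names are illustrative):

```javascript
// Convert the base64 image into binary data the LinkedIn node can upload.
const b64 = $input.first().json.image_base64;
const buffer = Buffer.from(b64, 'base64');

return [{
  json: { fileName: 'linkedin-post.png' },
  binary: {
    data: await this.helpers.prepareBinaryData(buffer, 'linkedin-post.png', 'image/png'),
  },
}];
```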
by Leonard
Open Deep Research - AI-Powered Autonomous Research Workflow

Description
This workflow automates deep research by leveraging AI-driven search queries, web scraping, content analysis, and structured reporting. It enables autonomous research with iterative refinement, allowing users to collect, analyze, and summarize high-quality information efficiently.

How it works
- 🔹 User Input: The user submits a research topic via a chat message.
- 🧠 AI Query Generation: A Basic LLM generates up to four refined search queries to retrieve relevant information.
- 🔎 SERPAPI Google Search: The workflow loops through each generated query and retrieves top search results using the SerpAPI API.
- 📄 Jina AI Web Scraping: Extracts and summarizes webpage content from the URLs obtained via SerpAPI (a fetch sketch follows this description).
- 📊 AI-Powered Content Evaluation: An AI Agent evaluates the relevance and credibility of the extracted content.
- 🔁 Iterative Search Refinement: If the AI finds insufficient or low-quality information, it generates new search queries to improve results.
- 📜 Final Report Generation: The AI compiles a structured markdown report, including sources with citations.

Set Up Instructions
🚀 Estimated setup time: ~10-15 minutes

✅ **Required API Keys:**
- SerpAPI → For Google Search results
- Jina AI → For text extraction
- OpenRouter → For AI-driven query generation and summarization

⚙️ **n8n Components Used:**
- AI Agents with memory buffering for iterative research
- Loops to process multiple search queries efficiently
- HTTP Requests for direct API interactions with SerpAPI and Jina AI

📝 **Recommended Enhancements:**
- Add sticky notes in n8n to explain each step for new users
- Implement a Google Drive or Notion integration to save reports automatically

🎯 Ideal for:
✔️ Researchers & Analysts - Automate background research
✔️ Journalists - Quickly gather reliable sources
✔️ Developers - Learn how to integrate multiple AI APIs into n8n
✔️ Students - Speed up literature reviews

🔗 Completely free and open-source! 🚀
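For orientation, the Jina AI step boils down to a call like the one below: Jina's Reader endpoint (https://r.jina.ai/) returns a text/markdown rendering of a page. The API key header and target URL here are placeholders:

```javascript
// Fetch a readable text rendering of a page via Jina's Reader endpoint.
const targetUrl = 'https://example.com/article';

const res = await fetch(`https://r.jina.ai/${targetUrl}`, {
  headers: { Authorization: `Bearer ${process.env.JINA_API_KEY}` },
});
const pageText = await res.text(); // feed this to the evaluation agent
```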