by Marian Tcaciuc
**Manage Calendar with Voice & Text Commands using GPT-4, Telegram & Google Calendar**

This n8n workflow transforms your Telegram bot into a personal AI calendar assistant, capable of understanding both voice and text commands in Romanian, and managing your Google Calendar using the GPT-4 model via LangChain. Whether you want to create, update, fetch, or delete events, you can simply speak or write your request to your Telegram bot, and the assistant takes care of the rest.

**🚀 Features**
- Voice command support using Telegram voice messages (.ogg)
- Transcription using OpenAI Whisper
- Natural language understanding with GPT-4 via LangChain
- Google Calendar integration: ✅ Create Events, 🔁 Update Events, ❌ Delete Events, 📅 Fetch Events
- Responses sent back via Telegram

**🛠️ Step-by-Step Setup Instructions**

1. Create a Telegram Bot
   - Go to @BotFather on Telegram.
   - Send /newbot and follow the instructions.
   - Save the Bot Token.
2. Configure the Telegram Trigger Node
   - Paste the Telegram token into the Telegram Trigger and Telegram nodes.
   - Set updates to ["message"].
3. Set up OpenAI Credentials
   - Get an OpenAI API key from https://platform.openai.com
   - Create a credential in n8n for OpenAI. This is used for both transcription and AI reasoning.
4. Set up Google Calendar
   - In Google Cloud Console: enable the Google Calendar API, set up OAuth2 credentials, and add your n8n redirect URI (usually https://yourdomain/rest/oauth2-credential/callback).
   - Create a credential in n8n using Google Calendar OAuth2.
   - Grant access to your calendar (e.g., "Family" calendar).

**⚙️ Customization Options**
- 🗣️ Change Language or Locale: the transcription node uses "en" for English. Change it to another locale (e.g., "ro" for Romanian) if needed.
- ✏️ Edit Prompt: you can modify the prompt in the AI Agent node to include your name, work schedule, or specific behavior expectations.
- 📆 Change Calendar Logic: adjust time ranges or filters in the Get Events node, and add custom logic before Create Event (e.g., validation, conflict checks).

**📚 Helpful Tips**
- Make sure n8n has HTTPS enabled to receive Telegram updates.
- You can test the flow first using only text, then voice (a sketch of the voice/text routing is shown at the end of this section).
- Use AI memory or vector stores (like Supabase) if you want context-aware planning in the future.
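To illustrate how voice and text messages can be told apart before transcription, here is a minimal sketch of a Code node that could sit right after the Telegram Trigger. The node placement and field names are assumptions for illustration, not part of the template itself:

```javascript
// Hypothetical Code node placed after the Telegram Trigger.
// It flags whether the incoming message is a voice note (to be downloaded and
// sent to Whisper) or plain text (sent straight to the AI Agent).
const message = $json.message ?? {};

return [{
  json: {
    chatId: message.chat?.id,                     // needed later to reply via Telegram
    isVoice: Boolean(message.voice),              // Telegram voice notes arrive as .ogg files
    voiceFileId: message.voice?.file_id ?? null,  // used to fetch the audio file
    text: message.text ?? null,                   // present only for text commands
  },
}];
```

An IF or Switch node can then route on `isVoice` so that only voice messages pass through the Whisper transcription step.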
by Belgacem Dhiflaoui
**Description**

**What Problem Does This Solve? 🛠️**
This workflow automates the process of extracting key information from resumes received as email attachments and storing that data in a structured format within a Supabase database. It eliminates the manual effort of reviewing each resume, identifying relevant details, and entering them into a database. This streamlines the hiring process, making it faster and more efficient for recruiters and HR professionals.

Target audience: recruiters, HR departments, and talent acquisition teams.

**What Does It Do? 🌟**
- Monitors a designated email inbox for new messages with resume attachments.
- Extracts key information such as name, contact details, education, work experience, and skills from the attached resumes.
- Cleans and formats the extracted data.
- Stores the processed data securely in a Supabase database.

**Key Features 📋**
- Automatic email monitoring for resume attachments.
- Intelligent data extraction from various resume formats (e.g., PDF, DOC, DOCX).
- Customizable data fields to capture specific information.
- Seamless integration with Supabase for data storage.
- Uses OpenRouter to streamline API key management for services such as AI-powered parsing.

**Setup Instructions**

**Prerequisites ⚙️**
- **n8n Instance**: self-hosted or cloud instance of n8n.
- **Email Account**: Gmail account with Gmail API access for receiving resumes.
- **Supabase Account**: a Supabase project with a database/table ready to store extracted resume data. You'll need the Supabase URL and API key.
- **OpenRouter Account**: for managing AI model API keys centrally when using LLM-based resume parsing.

**Installation Steps 📦**
1. **Import the Workflow**: copy the exported workflow JSON and import it into your n8n instance via "Import from File" or "Import from URL".
2. **Configure Credentials**: in n8n > Credentials, add credentials for:
   - Email account (Gmail API): provide the Client ID and Client Secret from Google Cloud Platform.
   - Supabase: provide the Supabase URL and the anon public API key.
   - OpenRouter (optional): add your OpenRouter API key for use with any AI-powered resume parsing nodes.
   Assign these credentials to their respective nodes:
   - Gmail Trigger → Email credentials
   - Supabase Insert → Supabase credentials
   - AI Parsing Node → OpenRouter credentials
3. **Set Up Supabase Table**: create a table in Supabase with columns such as name, email, phone, education, experience, skills, received_date, etc. Make sure the field names align with the structure used in your workflow (a field-mapping sketch is shown after the node list below).
4. **Customize Nodes**: parsing node(s): you can modify the workflow to use an OpenAI model directly for field extraction, replacing the Basic LLM Chain node that utilizes OpenRouter.
5. **Test the Workflow**: send a test email with a resume attachment, check n8n's execution log to confirm the workflow triggered, parsed the data, and inserted it into Supabase, and verify data integrity in your Supabase table.

**How It Works**

**High-Level Workflow 🔍**
1. Email Monitoring: triggered when a new email with an attachment is received (via the Gmail API).
2. Attachment Check: verifies the email contains at least one attachment.
3. Prepare Data: extracts the attachment and prepares it for analysis.
4. Data Extraction: uses an OpenRouter-powered LLM (if configured) to extract structured information from the resume.
5. Data Storage: the structured information is saved into the Supabase database.

**Node Names and Actions (Example)**
- **Gmail Trigger**: triggers when a new email is received.
- **IF**: checks whether the received email includes any attachments.
- **Get Attachments**: retrieves attachments from the triggering email.
- **Prepare Data**: prepares the attachment content for processing.
- **Basic LLM Chain**: uses an AI model via OpenRouter to extract relevant resume data and returns it as structured fields.
- **Supabase-Insert**: inserts the structured resume data into your Supabase database.
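As a reference for step 3 of the installation, here is a minimal sketch of how the LLM output could be mapped onto the Supabase columns listed above. The exact field names returned by the Basic LLM Chain are assumptions; adjust them to match your table:

```javascript
// Hypothetical Code node between the Basic LLM Chain and the Supabase insert.
// It maps the structured fields returned by the LLM onto the column names of
// the Supabase table (name, email, phone, education, experience, skills, received_date).
const parsed = $json;   // structured output of the LLM chain (assumed shape)

return [{
  json: {
    name: parsed.name ?? null,
    email: parsed.email ?? null,
    phone: parsed.phone ?? null,
    education: parsed.education ?? null,
    experience: parsed.experience ?? null,
    skills: Array.isArray(parsed.skills) ? parsed.skills.join(', ') : (parsed.skills ?? null),
    received_date: new Date().toISOString(),
  },
}];
```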
by scrapeless official
**AI-Powered Web Data Pipeline with n8n**

**How It Works**
This n8n workflow builds an AI-powered web data pipeline that automates the entire process of:
- **Extraction**
- **Structuring**
- **Vectorization**
- **Storage**

It integrates multiple advanced tools to transform messy web pages into clean, searchable vector databases.

**Integrated Tools**
- **Scrapeless**: bypasses JavaScript-heavy websites and anti-bot protections to reliably extract HTML content.
- **Claude AI**: uses LLMs to analyze unstructured HTML and generate clean, structured JSON data.
- **Ollama Embeddings**: generates local vector embeddings from structured text using the all-minilm model.
- **Qdrant Vector DB**: stores semantic vector data for fast and meaningful search capabilities.
- **Webhook Notifications**: sends real-time updates when workflows complete or errors occur.

From messy webpages to structured vector data, this pipeline is perfect for building intelligent agents, knowledge bases, or research automation tools.

**Setup Steps**

1. Install n8n

> Requires Node.js v18 / v20 / v22

```bash
npm install -g n8n
n8n
```

After installation, access the n8n interface at http://localhost:5678.

2. Set Up Scrapeless
- Register at Scrapeless
- Copy your API token
- Paste the token into the HTTP Request node labeled "Scrapeless Web Request"

3. Set Up Claude API (Anthropic)
- Sign up at the Anthropic Console
- Generate your Claude API key
- Add the API key to the following nodes: Claude Extractor, AI Data Checker, Claude AI Agent

4. Install and Run Ollama

macOS:
```bash
brew install ollama
```

Linux:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: download the installer from https://ollama.com

Start the Ollama server and pull the embedding model:
```bash
ollama serve
ollama pull all-minilm
```

5. Install Qdrant (via Docker)
```bash
docker pull qdrant/qdrant
docker run -d \
  --name qdrant-server \
  -p 6333:6333 -p 6334:6334 \
  -v $(pwd)/qdrant_storage:/qdrant/storage \
  qdrant/qdrant
```

Test whether Qdrant is running:
```bash
curl http://localhost:6333/healthz
```

6. Configure the n8n Workflow
- Modify the trigger (manual or scheduled)
- Input your target URLs and collection name in the designated nodes (see the sketch at the end of this section)
- Paste all required API tokens / keys into their corresponding nodes
- Ensure your Qdrant and Ollama services are running

**Ideal Use Cases**
- Custom AI Chatbots
- Private Search Engines
- Research Tools
- Internal Knowledge Bases
- Content Monitoring Pipelines
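For step 6, here is a minimal sketch of what the input-configuration node could look like if written as an n8n Code node. The field names (targetUrls, collectionName) and the URLs are assumptions; match them to whatever the designated nodes in the workflow actually expect:

```javascript
// Hypothetical input-configuration node for the pipeline.
// Downstream nodes read these fields to know which pages to scrape and which
// Qdrant collection the resulting embeddings should be written into.
return [{
  json: {
    targetUrls: [
      'https://example.com/pricing',
      'https://example.com/blog/latest-post',
    ],
    collectionName: 'web_research',   // Qdrant collection for the vectors
  },
}];
```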
by Andrew
**Who is this for?**
This workflow is ideal for n8n self-hosted users, DevOps engineers, and automation developers who want to automatically back up their n8n workflows to GitHub on a regular basis.

**What problem is this workflow solving?**
Manually backing up n8n workflows can be time-consuming and prone to human error. This workflow automates the backup process, ensuring that all workflows are safely stored in a version-controlled GitHub repository every 24 hours.

**What this workflow does**
This automation runs daily to back up all workflows from your n8n instance to a specified GitHub repository. Each workflow is saved as a .json file using its unique ID, organized into a folder path defined by repo_path (a sketch of how that path is assembled is shown below). The workflow is designed to manage memory usage efficiently by recursively calling itself. Once the backup is complete, it optionally sends a Slack notification to confirm success.

**Setup**
1. Configure the Config node in the subworkflow to set:
   - GitHub Repo Owner
   - GitHub Repo Name
   - Main folder path (repo_path)
2. Connect your GitHub and (optionally) Slack credentials.
3. Set the workflow to run on a daily cron schedule.
4. Test the workflow manually to confirm the GitHub integration works.

Sign up for a free consultation and find out how n8n can help you.
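To make the Config values and file layout concrete, here is a minimal sketch of how the backup path for each workflow could be assembled. The variable names and the Code node itself are illustrative assumptions; the template's own Config node may structure this differently:

```javascript
// Hypothetical Code node assembling the GitHub path for one workflow backup.
const repoOwner = 'your-github-user';
const repoName = 'n8n-backups';
const repoPath = 'workflows';            // main folder inside the repository

const workflow = $json;                  // one workflow object from the n8n API

return [{
  json: {
    owner: repoOwner,
    repository: repoName,
    // each workflow ends up as <repo_path>/<workflow id>.json
    filePath: `${repoPath}/${workflow.id}.json`,
    content: JSON.stringify(workflow, null, 2),
  },
}];
```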
by Javier Hita
**Who is this for?**
This workflow is perfect for sales teams, business development professionals, recruitment agencies, and fractional CFO service providers who need to identify and qualify companies actively hiring. Whether you're prospecting for new clients, building a database of potential customers, or researching market opportunities, this automated solution saves hours of manual research while delivering high-quality, AI-analyzed leads.

**What problem is this workflow solving?**
Finding qualified prospects in the finance sector is time-consuming and often inefficient. Traditional methods involve:
- Manually browsing LinkedIn job postings for hours
- Difficulty distinguishing between genuine opportunities and recruitment spam
- Inconsistent lead categorization and qualification
- Risk of contacting the same companies multiple times
- Lack of structured data for sales team follow-up

This workflow automates the entire lead generation process, from data collection to AI-powered qualification, ensuring you focus only on the most promising opportunities.

**What this workflow does**
This comprehensive lead generation system performs six key functions:
1. **Automated LinkedIn Job Scraping**: uses Apify's reliable LinkedIn Jobs Scraper to extract detailed job postings for finance positions, including company information, job descriptions, and contact details.
2. **Smart Data Processing**: removes duplicates, filters companies by size, and structures data for consistent analysis across all leads.
3. **Intelligent Lead Categorization**: compares new leads against your existing database to optimize processing and avoid duplicate work.
4. **AI-Powered Qualification**: leverages OpenAI's GPT-4 Mini to analyze each lead and determine:
   - Company Category: Consumer companies, Fractional CFO services, Recruiting agencies, or Other
   - Finance Role Validation: confirms the position is genuinely finance-related
   - Seniority Level: Entry, Mid, Senior, Director, or C-Level classification
   - Job Summary: concise description for quick sales team review
5. **Automated Database Management**: stores qualified leads in Airtable with comprehensive profiles, preventing duplicates while maintaining data integrity.
6. **Lead Scoring & Routing**: prioritizes leads based on processing status and qualification results for efficient sales team follow-up.
**Setup**

**Prerequisites**
You'll need accounts for three services:
- **Airtable** (free tier supported) - for lead storage and management
- **Apify** (14-day free trial available) - for LinkedIn job scraping
- **OpenAI** (pay-per-use) - for AI-powered lead analysis

**Step 1: Create Required Credentials**

Apify API credential:
1. Sign up for an Apify account at apify.com
2. Navigate to Settings → Integrations → API tokens
3. Create a new API token
4. In n8n, create a new Apify API credential with your token

OpenAI API credential:
1. Create an account at platform.openai.com
2. Generate an API key in the API section
3. In n8n, create a new OpenAI credential with your key

Airtable personal access token:
1. Go to airtable.com/create/tokens
2. Create a personal access token with the following scopes: data.records:read, data.records:write, schema.bases:read
3. In n8n, create a new Airtable Personal Access Token credential

**Step 2: Set Up Airtable Base**
Create a new Airtable base with the following structure.

Table name: Qualified Leads

Required fields:
- Company Name (Single line text)
- Job Title (Single line text)
- Is Finance Job (Checkbox)
- Seniority Level (Single select: Entry, Mid, Senior, Director, C-Level)
- Company Category (Single select: Consumer, Recruiting, Fractional CFO, Other)
- Job Summary (Long text)
- Company LinkedIn (URL)
- Job Link (URL)
- Posted Date (Date)
- Location (Single line text)
- Industry (Single line text)
- Company Employees (Number)

**Step 3: Configure the Workflow**
1. Import the workflow: copy the JSON and import it into your n8n instance.
2. Update credentials: replace placeholder credential IDs with your actual credential IDs in:
   - "Scrape LinkedIn Jobs" node (Apify credential)
   - "OpenAI GPT-4 Mini" node (OpenAI credential)
   - "Save to Airtable" and "Get Existing Leads" nodes (Airtable credential)
3. Configure the Airtable connection: update the base ID and table ID in both Airtable nodes.
4. Set search parameters: in the "Edit Variables" node, configure (a sketch of this node is shown after the customization options below):
   - linkedinUrls: your target LinkedIn job search URLs
   - maxEmployees: maximum company size filter (default: 200)
   - batchSize: processing batch size for API efficiency (default: 5)

**Step 4: Test the Workflow**
1. Start with a small test by setting count: 50 in the HTTP Request node
2. Use a specific LinkedIn job search URL (e.g., "CFO jobs in New York")
3. Execute the workflow manually and verify results in your Airtable base
4. Review the AI categorization accuracy and adjust prompts if needed

**How to customize this workflow to your needs**

Targeting different roles: modify the LinkedIn search URLs in the "Edit Variables" node to target different positions:
- "https://www.linkedin.com/jobs/search/?keywords=Controller"
- "https://www.linkedin.com/jobs/search/?keywords=Finance%20Director"
- "https://www.linkedin.com/jobs/search/?keywords=VP%20Finance"

Adjusting company size filters: change the maxEmployees parameter to focus on different company segments:
- Startups: 1-50 employees
- SMBs: 51-500 employees
- Enterprise: 500+ employees

Customizing AI analysis: enhance the GPT-4 prompt in the "AI Lead Analyzer" node to include:
- Industry-specific criteria
- Geographic preferences
- Technology stack requirements
- Company growth stage indicators

Integration options: extend the workflow by adding:
- **Slack notifications** for new qualified leads
- **Email alerts** for high-priority prospects
- **CRM integration** (Salesforce, HubSpot, Pipedrive)
- **Lead enrichment** with additional data sources

Scheduling automation: set up the workflow to run automatically:
- **Daily**: for active prospecting campaigns
- **Weekly**: for ongoing market research
- **Monthly**: for periodic database updates
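For reference, here is a minimal sketch of the "Edit Variables" values described in Step 3, written as an n8n Code node for readability. The parameter names come from the setup instructions above; the URLs are placeholders:

```javascript
// Hypothetical contents of the "Edit Variables" node.
return [{
  json: {
    linkedinUrls: [
      'https://www.linkedin.com/jobs/search/?keywords=CFO&location=New%20York',
      'https://www.linkedin.com/jobs/search/?keywords=Finance%20Director',
    ],
    maxEmployees: 200,   // skip companies larger than this
    batchSize: 5,        // leads processed per API batch
  },
}];
```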
**Performance & Cost Optimization**
- **API Efficiency**: the workflow processes leads in batches to optimize API usage
- **Smart Deduplication**: avoids re-processing existing leads to reduce costs
- **Configurable Limits**: adjust batch sizes and employee count filters based on your needs
- **Expected Costs**: approximately $0.05-$0.20 per 100 analyzed leads (OpenAI costs)

**Troubleshooting**

Common issues:
- **Rate Limiting**: increase delays between API calls if you encounter rate limits
- **Data Quality**: review LinkedIn search URLs for relevance to your target market
- **AI Accuracy**: adjust prompts if categorization doesn't match your criteria
- **Airtable Errors**: verify field names match exactly between the workflow and the base structure

Support resources:
- Apify LinkedIn Scraper Documentation
- OpenAI API Documentation
- Airtable API Reference

Transform your lead generation process with this powerful, AI-driven workflow that delivers qualified prospects ready for immediate outreach.
by InfraNodus
This template can be used to find the content gaps in your competitors' discourse: identifying the topics they are not yet connecting and giving you an opportunity to fill in this gap with your content and product ideas. It will also generate research questions that will help bridge the gaps and generate new ideas.

The template showcases the use of multiple n8n nodes and processes:
- enriching a Google Sheets file with new data
- data extraction
- content enhancement using the GraphRAG approach
- content gap / research question generation

This approach can be very useful for research, marketing, and SEO applications, as you can quickly get an overview of the main topics that are available online for a certain niche and understand what is missing.

**What are Content Gaps in Marketing and SEO?**
In the context of SEO, content gaps are usually understood as the topics that your competitors rank for but you do not. However, it's hard to rank for these topics because there's very high competition. A much more effective way is to identify the gaps between the topics your competitors are talking about that are not yet bridged in their discourse. If you address these gaps in your content, you will increase the informational gain that your content offers and also offer a novel perspective while touching upon the topics that are relevant in your field.

For example, if we analyze the top websites for "body and physical practices, fitness, etc.", we will see that most of them are talking about the health and fitness aspects, and another big topic is the community aspect. However, there is a gap between the two topics, which means that most of the websites (companies) that talk about this topic don't mention the two in the same context. This might be an opportunity: bridging the gap between health and fitness while also emphasizing the community aspect that comes with a collective practice.

**How it works**
This template consists of two stages:
1. Data enrichment of a Google Sheets file with a list of your competitors, using InfraNodus' GraphRAG to generate topical summaries and graph summaries for every URL you're analyzing.
2. Insight generation, using InfraNodus to identify the main topical clusters and gaps in those summaries; these insights are then added to the Google Sheets file.

Additionally, it contains a sub-workflow that you can activate and launch to ask a Perplexity model to conduct market research, find the companies that operate in your field, and populate the original Google Sheets file.

Here's a description step by step:
- Step 0: Populate the Google Sheets file with the company data (either manually, using the sub-workflow provided, or with Manus AI / Deep Research)
- Steps 1-2: Trigger and launch the workflow, extracting the company URL from the Google Sheets row
- Step 3: Scrape the URL content from the companies' websites and clean the data (a sketch of the cleaning step is shown at the end of this section)
- Steps 5-7: Use the InfraNodus GraphRAG Content Enhancer to get a topical summary and graph summary
- Steps 8-10: Use InfraNodus AI to generate insight advice and research questions based on the content gaps

**How to use**
You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep the other settings intact or learn more about them at the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

**Requirements**
- An InfraNodus account and API key
- A Google Sheets account and an authorization key

Note: an OpenAI key is not required. But you might want to get a Perplexity AI key if you'd like to use the sub-workflow that populates the Google Sheet with your competitors' website addresses (if you don't have this list yet).

**Customizing this workflow**
You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).

Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/

Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20234254556828-Find-Content-Gaps-in-Websites-Market-Research-and-SEO-n8n-Workflow

Also check the full tutorial with a conceptual explanation at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow

Also check out the video tutorial with a demo.

For support and help with this workflow, please contact us at https://support.noduslabs.com
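For step 3, here is a minimal sketch of how the scraped page content could be cleaned before it is sent to InfraNodus. The input field name and the regex-based approach are assumptions; the template's own cleaning node may differ:

```javascript
// Hypothetical Code node: strip tags and collapse whitespace from the scraped
// competitor page before forwarding it to the InfraNodus API.
const html = $json.data ?? '';     // raw HTML from the scraping step (assumed field)

const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, ' ')   // drop inline scripts
  .replace(/<style[\s\S]*?<\/style>/gi, ' ')     // drop inline styles
  .replace(/<[^>]+>/g, ' ')                      // remove remaining tags
  .replace(/\s+/g, ' ')                          // collapse whitespace
  .trim();

return [{ json: { text } }];
```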
by Billy Christi
**Who is this for?**
This workflow is ideal for:
- **Finance teams** that need to process incoming invoices faster with minimal errors
- **Small to mid-sized businesses** that want to automate invoice intake, review, and storage
- **Operations managers** who require approval workflows and centralized record-keeping

**What problem is this workflow solving?**
Manually processing invoices is time-consuming, error-prone, and often lacks structure. This workflow solves those challenges by:
- **Automating the intake of invoices** from multiple sources (email, Google Drive, web form)
- **Extracting invoice data using AI**, eliminating manual data entry
- **Implementing an email-based approval system** to add human oversight
- **Automatically storing approved invoice data** in Google Sheets for easy access and reporting
- **Notifying stakeholders** when invoices are approved or rejected

**What this workflow does**
This end-to-end invoice processing workflow includes:
- Three invoice input methods: Google Drive folder monitor, Gmail attachments, and web form uploads
- PDF-to-text extraction for each input method using native PDF parsing
- AI-powered invoice analysis with GPT-4 to extract structured fields such as vendor, total, and due date
- Dynamic categorization of invoice type (e.g., Travel, Software, Utilities) via AI
- Email-based approval workflow with embedded forms to collect decisions and notes
- Automated Google Sheets logging of all invoice data, approval status, and reviewer feedback
- Rejection notifications sent automatically to your finance team for transparency and follow-up

**Setup**
1. Copy the Google Sheet template here: 👉 PDF Invoice Parser with Approval Workflow – Google Sheet Template
2. Connect your Google Drive account and specify the invoice folder ID
3. Set up Gmail to monitor incoming invoices with PDF attachments
4. Enable your form trigger to accept direct uploads from your internal or external users
5. Enter your OpenAI API key in the AI processing node for data extraction
6. Configure Google Sheets with a target spreadsheet to store invoice data
7. Set recipient email addresses for invoice approvals and rejection notifications
8. Test with a sample invoice to ensure the end-to-end flow is working

**How to customize this workflow to your needs**
- **Change input sources**: replace Gmail with Outlook or use Slack uploads instead
- **Add validation steps**: include regex or keyword checks before AI analysis
- **Customize the AI schema**: modify the expected JSON structure based on your internal finance system (a sample structure is sketched below)
- **Integrate with accounting tools**: add Xero, QuickBooks, or custom API nodes to push data
- **Route based on category**: add conditional logic to handle invoices differently based on vendor or category
- **Multi-level approvals**: add additional email steps if higher-level signoff is needed
- **Audit logging**: use a database or Google Sheets to maintain a historical log of approvals and rejections
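As a starting point for customizing the AI schema, here is a hypothetical example of the JSON structure the extraction step could be asked to return. The field names are illustrative assumptions; align them with the columns in your Google Sheet and your internal finance system:

```javascript
// Hypothetical example payload produced by the AI invoice analysis step.
const exampleInvoice = {
  vendor: 'Acme Hosting GmbH',
  invoiceNumber: 'INV-2024-0187',
  invoiceDate: '2024-05-02',
  dueDate: '2024-06-01',
  currency: 'EUR',
  total: 249.0,
  category: 'Software',      // e.g. Travel, Software, Utilities
  lineItems: [
    { description: 'Cloud hosting (May)', quantity: 1, amount: 249.0 },
  ],
};

return [{ json: exampleInvoice }];
```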
by InfraNodus
This template can be used to upload the files in your Google Drive to an InfraNodus knowledge graph. The InfraNodus graph will then reveal the main topics and ideas in your collection of documents and show the content gaps in them. You can also use the built-in AI to converse with the documents.

You can also access the InfraNodus graphs via its GraphRAG API to re-use them in your other n8n workflows for high-quality content retrieval and knowledge base optimization.

The template showcases the use of multiple n8n nodes and processes:
- extracting documents from a Google Drive folder
- text extraction
- optional: high-quality PDF conversion using ConvertAPI
- InfraNodus knowledge graph generation

Note: if you want to **sync** your Google Drive to an InfraNodus graph, check out our other workflow.

**How it works**
Here's a description of this workflow step by step:
1. Find all the files in a specific Google Drive folder.
2. For each file found, reiterate the workflow and identify the type of the file (TXT, PDF, Markdown).
3. For TXT and Markdown files, extract the text data.
4. For PDF files, use a special PDF-to-text converter to extract the text data (optional: use ConvertAPI for better-quality PDF conversion).
5. Forward everything to the InfraNodus graphAndStatements API endpoint with the name of the new graph, the text field with the text data, the text settings, and doNotSave=false to create a new graph (a sketch of this request body is shown at the end of this section).
6. Reiterate through another file.

**How to use**
You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Use that API key to set up authorization for the InfraNodus tool in the workflow.
4. If you want to upload the files to an existing graph, copy its name from InfraNodus. Otherwise you can specify any name you want.

**Requirements**
- An InfraNodus account and API key
- A Google Drive account and authorization (you will need to set it up via Google Cloud using the n8n instructions provided in the Google Drive node)

**Customizing this workflow**
You can use Dropbox instead of Google Drive. You can also modify this workflow slightly to make it sync with a Google Drive folder when new files appear in it.

Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20267019838108-Upload-Sync-Your-Google-Drive-Folder-with-InfraNodus-using-n8n
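For reference, here is a minimal sketch of the body sent to the graphAndStatements endpoint, limited to the fields named in the step-by-step description above. The graph name is a placeholder, and any other text settings should be left as configured in the template:

```javascript
// Hypothetical Code node preparing the body for the InfraNodus HTTP Request node.
const body = {
  name: 'my-document-collection',   // new graph name, or the name of an existing graph
  text: $json.text,                 // text extracted from the current file
  doNotSave: false,                 // false = persist the statements into the graph
};

return [{ json: body }];
```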
by Ranjan Dailata
**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**
The Brave Search Structured Data Extractor workflow is designed for professionals and teams that need high-quality, structured insights from Brave search results in real time. Whether you're performing market research, tracking competitors, training AI models, or powering content engines, this workflow offers a robust and automated solution.

This workflow is tailored for:
- **Market researchers** who analyze trends across multimedia channels
- **AI developers** who require clean, structured datasets for model fine-tuning
- **SEO & content analysts** looking to monitor visibility across news, images, and videos
- **Media researchers** curating timely and relevant information across formats
- **Automation engineers** integrating search insights into downstream workflows

**What problem is this workflow solving?**
Traditional web scraping and search result parsing is fragmented, inconsistent, and prone to errors, especially when dealing with multimedia (images, videos, news) data from search engines. This workflow provides:
- Centralized Brave search data extraction across all content types
- Search execution that switches based on the search type that is set (e.g. news, images, videos, all)
- Automated structured data transformation using Google Gemini
- Unified output persistence and notification across disk, webhook, and Google Sheets

**What this workflow does**
1. Input Configuration
   - Define your Brave search query
   - Set the search type: videos, images, news, or all
   - Configure your Bright Data MCP zone
2. Bright Data MCP Search Execution
   - Initiates a Brave search via Bright Data MCP using the correct URL pattern for each search type (see the sketch at the end of this section)
   - Returns the raw HTML of the search results
3. Google Gemini LLM Structured Data Extraction
   - Transforms raw results into structured data (e.g., title, URL, source, snippet)
4. Output Handling
   - Save to disk (e.g., as a JSON or CSV file)
   - Send a webhook notification with the structured data (e.g., to Slack or internal dashboards)
   - Store in Google Sheets for team-wide access or dashboarding

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

**Setup**
1. Make sure to set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect with the MCP Client (STDIO) account for the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.

**How to customize this workflow to your needs**
- **Enhance output analysis**: add additional LLM prompts for topic classification, sentiment scoring, or trend forecasting.
- **Output format options**: choose to output CSV, Markdown, or HTML reports based on your integration target.
- **Schedule automation**: trigger the workflow on a schedule (daily/weekly) to keep monitoring topical content.
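To illustrate how the search type can be switched into a search URL before it is handed to the Bright Data MCP client, here is a minimal sketch. The URL patterns below are assumptions for illustration; the template's own patterns may differ:

```javascript
// Hypothetical Code node mapping the configured search type to a Brave URL.
const query = encodeURIComponent($json.searchQuery ?? 'n8n automation');
const searchType = ($json.searchType ?? 'all').toLowerCase();

const patterns = {
  all: `https://search.brave.com/search?q=${query}`,
  news: `https://search.brave.com/news?q=${query}`,
  images: `https://search.brave.com/images?q=${query}`,
  videos: `https://search.brave.com/videos?q=${query}`,
};

return [{ json: { url: patterns[searchType] ?? patterns.all, searchType } }];
```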
by InfraNodus
This template can be used to sync the files in your Google Drive to a new or existing InfraNodus knowledge graph. The InfraNodus graph will then reveal the main topics and ideas in your collection of documents and show the content gaps in them. You can also use the built-in AI to converse with the documents.

You can also access the InfraNodus graphs via its GraphRAG API to re-use them in your other n8n workflows for high-quality content retrieval and knowledge base optimization.

The template showcases the use of multiple n8n nodes and processes:
- syncing documents from a Google Drive folder / extracting them
- text extraction from files
- optional: high-quality PDF conversion using ConvertAPI
- InfraNodus knowledge graph generation

Note: if you want to **upload** files from your Google Drive to an InfraNodus graph, check out our other workflow.

**How it works**
Here's a description of this workflow step by step:
1. Wait for new file(s) to appear in the Google Drive folder.
2. Reiterate through each file.
3. Retrieve the new file from Google Drive.
4. For each file found, identify the type of the file (TXT, PDF, Markdown) (a sketch of this check is shown at the end of this section).
5. For TXT and Markdown files, extract the text data.
6. For PDF files, use a special PDF-to-text converter to extract the text data (optional: use ConvertAPI for better-quality PDF conversion).
7. Forward everything to the InfraNodus graphAndStatements API endpoint with the name of the new graph, the text field with the text data, the text settings, and doNotSave=false to create a new graph.
8. Reiterate through another file.

**How to use**
You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Use that API key to set up authorization for the InfraNodus tool in the workflow.
4. If you want to upload the files to an existing graph, copy its name from InfraNodus. Otherwise you can specify any name you want.

**Requirements**
- An InfraNodus account and API key
- A Google Drive account and authorization (you will need to set it up via Google Cloud using the n8n instructions provided in the Google Drive node)

**Customizing this workflow**
You can use Dropbox instead of Google Drive. You can also modify this workflow slightly to make it upload the files from a Google Drive folder when new files appear in it.

Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20267019838108-Upload-Sync-Your-Google-Drive-Folder-with-InfraNodus-using-n8n
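For step 4, here is a minimal sketch of how the file-type check could be expressed in a Code node. The field names coming from the Google Drive node and the branching strategy are assumptions; the template may use an IF/Switch node instead:

```javascript
// Hypothetical Code node: flag which extraction branch (TXT, Markdown, or PDF)
// should run for the file retrieved from Google Drive.
const name = ($json.name ?? '').toLowerCase();
const mimeType = $json.mimeType ?? '';

let fileType = 'other';
if (name.endsWith('.txt') || mimeType === 'text/plain') fileType = 'txt';
else if (name.endsWith('.md') || mimeType === 'text/markdown') fileType = 'markdown';
else if (name.endsWith('.pdf') || mimeType === 'application/pdf') fileType = 'pdf';

return [{ json: { ...$json, fileType } }];
```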
by Nasser
**For Who?**
- Content Creators
- YouTube Automation
- Marketing Teams

**How it works**
1. Enter your content idea in the Edit Fields node in a "raw" format. Example: Boil Eggs Perfectly.
2. An LLM creates 3 keyword requests based on the idea, and Apify scrapes the YouTube search results.
3. Wait until the dataset is completed in Apify.
4. Retrieve the dataset from Apify, calculate an approximation of CTR, and filter the top-performing videos (a sketch of the CTR approximation is shown at the end of this section).
5. An LLM analyzes the patterns of the best-performing titles and creates a prompt based on them. Another LLM creates 5 titles based on these criteria.
6. An LLM analyzes the patterns of the best-performing thumbnails and creates a prompt based on them. Another LLM creates 1 thumbnail based on these criteria.
7. Return the titles and thumbnail in an HTML page.

📺 YouTube Video Tutorial:

**Setup**

Input (content idea):
- Enter a keyword related to the niche you want. The trigger can be replaced with anything, as long as you retrieve a content idea: a form submission, a database entry, etc.
- If you want to change the number of keywords, update the data accordingly in the "Create Keywords" LLM Chain node ➡️ Structured Output Parser AND in the "YTB Search Scrape" HTTP Request node in Body ➡️ JSON ➡️ searchQueries.
- If you want to change the number of scraped videos for each keyword, update the data accordingly in the "Create Videos Dataset" HTTP Request node in Body ➡️ JSON ➡️ maxResults.
- If you want to adjust the CTR calculation, update it in the Code node ➡️ follow the comments (after "//") to find what you're looking for.
- If you want to adjust the level of virality of the videos kept for analysis, go to the Filter node ➡️ Value.

Output (HTML page):
- You can also replace this part with any type of storage, for example an Airtable database, Google Drive / Google Sheets, an email, etc.

APIs:
For the following third-party integrations, replace ==[YOUR_API_TOKEN]== with your API token, or connect your account via Client ID / Secret to your n8n instance:
- Apify: https://docs.apify.com/api/v2/getting-started
- OpenAI: https://platform.openai.com/docs/overview (base URL: https://api.openai.com/v1) OR OpenRouter: https://openrouter.ai/docs/quickstart (base URL: https://openrouter.ai/api/v1)
- HuggingFace (FLUX.1): https://huggingface.co/docs

👨💻 More Workflows: https://n8n.io/creators/nasser/
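To give an idea of what a CTR-style score can look like in the Code node, here is a minimal sketch. The field names and the formula (views normalized by channel size and video age) are assumptions for illustration; the template's own Code node may compute this differently:

```javascript
// Hypothetical sketch of an approximate "CTR" score for ranking scraped videos.
const items = $input.all().map((item) => {
  const v = item.json;
  const views = Number(v.viewCount ?? 0);                              // assumed field
  const subscribers = Math.max(Number(v.channelSubscriberCount ?? 1), 1);
  const daysOnline = Math.max(
    (Date.now() - new Date(v.publishedAt ?? Date.now()).getTime()) / 86_400_000,
    1,
  );

  // views per subscriber per day as a rough outperformance proxy
  const ctrApprox = views / subscribers / daysOnline;
  return { json: { ...v, ctrApprox } };
});

return items;
```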
by Billy Christi
**Who is this for?**
This workflow is perfect for:
- **HR professionals** seeking to automate employee and department management
- **Startups and SMBs** that want an AI-powered HR assistant on Telegram
- **Internal operations teams** that want to simplify onboarding and employee data tracking

**What problem is this workflow solving?**
Managing employee databases manually is error-prone and inefficient, especially for growing teams. This workflow solves that by:
- Enabling natural language-based HR operations directly through Telegram
- Automating the creation, retrieval, and deletion of employee records in Airtable
- Dynamically managing related data such as departments and job titles
- Handling data consistency and linking across relational tables automatically
- Providing a conversational interface backed by OpenAI for smart decision-making

**What this workflow does**
Using Telegram as the interface and Airtable as the backend database, this intelligent HR workflow allows users to:
- Chat in natural language (e.g. "Show me all employees" or "Create employee: Sarah, Marketing…")
- Interpret and route requests via an AI Agent that acts as the orchestrator
- Query employee, department, and job title data from Airtable
- Create or update records as needed:
  - Add new departments and job titles automatically if they don't exist (a sketch of this find-or-create logic is shown at the end of this section)
  - Create new employees and link them to the correct department and job title
- Delete employees based on ID
- Respond directly in Telegram, providing user-friendly feedback

**Setup**
1. View and copy the Airtable base here: 👉 Employee Database Management – Airtable Base Template
2. Telegram bot: set up a Telegram bot and connect it to the Telegram Trigger node.
3. Airtable: prepare three Airtable tables:
   - Employees, with links to Departments and Job Titles
   - Departments, with Name & Description
   - Job Titles, with Title & Description
4. Connect your Airtable API key and base/table IDs in the appropriate Airtable nodes.
5. Add your OpenAI API key to the AI Agent nodes.
6. Deploy both workflows: the main chatbot workflow and the employee creation sub-workflow.
7. Test with sample messages like:
   - "Create employee: John Doe, john@company.com, Engineering, Software Engineer"
   - "Remove employee ID rec123xyz"

**How to customize this workflow to your needs**
- **Switch databases**: replace Airtable with Notion, PostgreSQL, or Google Sheets if desired
- **Enhance security**: add authentication and validation before allowing deletion
- **Add approval flows**: integrate Telegram button-based approvals for sensitive actions
- **Multi-language support**: expand system prompts to support multiple languages
- **Add logging**: store every user action in a log table for auditability
- **Expand capabilities**: integrate payroll, time tracking, or Slack notifications

**Extra Tips**
- This is a two-workflow setup. Make sure the sub-workflow is deployed and accessible from the main agent.
- Use Simple Memory per chat ID to preserve context across user queries.
- You can expand the orchestration logic by adding more tools to the main agent, such as "Get active employees only" or "List employees by job title."
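To illustrate the find-or-create behavior for departments, here is a minimal sketch of a Code node that could sit between the Airtable lookup and the record-creation step in the sub-workflow. The node name "Get Departments" and the field names are assumptions; adapt them to your own base:

```javascript
// Hypothetical Code node: decide whether the requested department already
// exists before linking a new employee record to it.
const requested = ($json.department ?? '').trim().toLowerCase();
const existing = $('Get Departments').all().map((i) => i.json);   // assumed node name

const match = existing.find(
  (d) => (d.Name ?? '').trim().toLowerCase() === requested,
);

return [{
  json: {
    departmentExists: Boolean(match),
    departmentId: match ? match.id : null,   // used to link the employee record
    departmentName: $json.department,        // passed to a create step if needed
  },
}];
```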