by InfraNodus
This template can be used to generate research ideas from PDF scientific papers based on the content gaps found in the text, using the InfraNodus GraphRAG knowledge graph representation. Simply upload several PDF files (research papers, corporate or market reports, etc.) and the template will generate a research question, which will then be sent as an AI prompt to the InfraNodus GraphRAG system that will extract the answer from the documents. As a result, you find a gap in a collection of research papers and bridge it in a few seconds. The template is useful for: advancing scientific research, generating AI prompts that drive research further, finding the right questions to ask to bridge blind spots in a research field, and avoiding the generic bias of LLM models by focusing on what's important in your particular context. Using Content Gaps for Generating Research Questions Knowledge graphs represent any text as a network: the main concepts are the nodes, their co-occurrences are the connections between them. Based on this representation, we build a graph and apply network science metrics to rank the most important nodes (concepts) that serve as the crossroads of meaning and also the main topical clusters that they connect. Naturally, some of the clusters will be disconnected and will have gaps between them. These are the topics (groups of concepts) that exist in this context (the documents you uploaded) but that are not very well connected. Addressing those gaps can help you see which groups of concepts you could connect with your own ideas. This is exactly what InfraNodus does: builds the structure, finds the gaps, then uses the built-in AI to generate research questions that bridge those gaps (see the simplified co-occurrence sketch below). How it works 1) Step 1: First, you upload your PDF files using an online web form, which you can run from n8n or even make publicly available. 2) Steps 2-4: The documents are processed using the Code and PDF to Text nodes to extract plain text from them. 3) Step 5: This text is then sent to the InfraNodus GraphRAG node that creates a knowledge graph, identifies structural gaps in this graph, and then uses built-in AI to generate research questions, which are then used as AI prompts. 4) Step 6: The research question is sent to the InfraNodus GraphRAG system that represents the PDF documents you submitted as a knowledge graph and then uses the generated research question to come up with an answer based on the content you uploaded. 5) Step 7: The ideas are then shown to the user in the same web form. Optionally, you can derive the answers from a different set of papers, so the question is generated from one batch, but the answer is generated from another. If you'd like to sync this workflow to PDF files in a Google Drive folder, you can copy our Google Drive PDF processing workflow for n8n. How to use You need an InfraNodus GraphRAG API account and key to use this workflow. Create an InfraNodus account Get the API key at https://infranodus.com/api-access and create a Bearer authorization key. Add this key into the InfraNodus GraphRAG HTTP node(s) you use in this workflow. You do not need any OpenAI keys for this to work. Optionally, you can change the settings in Step 4 of this workflow and force it to always use the biggest gap it identifies. Requirements An InfraNodus account and API key Note: OpenAI key is not required. You will have direct access to the InfraNodus AI with the API key.
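To make the knowledge-graph idea concrete, here is a minimal JavaScript sketch of the co-occurrence principle described above: words become nodes, words that appear within a small window become weighted edges, and weighted degree serves as a rough importance score. This is only an illustration of the concept; InfraNodus's actual graph construction, centrality metrics, and gap detection happen inside its API, not in this code.

```javascript
// Conceptual sketch only: build a word co-occurrence graph and rank concepts
// by weighted degree. Real tools (like InfraNodus) add lemmatization,
// stopword removal, community detection, and betweenness centrality.
function buildCooccurrenceGraph(text, windowSize = 4) {
  const words = text.toLowerCase().match(/[a-z]{3,}/g) || [];
  const edges = new Map(); // "wordA|wordB" -> co-occurrence count

  for (let i = 0; i < words.length; i++) {
    for (let j = i + 1; j < Math.min(i + windowSize, words.length); j++) {
      if (words[i] === words[j]) continue;
      const key = [words[i], words[j]].sort().join('|');
      edges.set(key, (edges.get(key) || 0) + 1);
    }
  }

  // Weighted degree as a crude stand-in for "crossroads of meaning".
  const degree = new Map();
  for (const [key, weight] of edges) {
    const [a, b] = key.split('|');
    degree.set(a, (degree.get(a) || 0) + weight);
    degree.set(b, (degree.get(b) || 0) + weight);
  }

  const topConcepts = [...degree.entries()]
    .sort((x, y) => y[1] - x[1])
    .slice(0, 10);

  return { edges, topConcepts };
}
```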
Customizing this workflow You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way). Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/ Also check out the full tutorial with a conceptual explanation at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow Also check out the video introduction to InfraNodus to better understand how knowledge graphs and content gaps work. For support and help with this workflow, please contact us at https://support.noduslabs.com
by Davide
This workflow is designed to analyze YouTube videos by extracting their transcripts, summarizing the content using AI models, and sending the analysis via email. This workflow is ideal for content creators, marketers, or anyone who needs to quickly analyze and summarize YouTube videos for research, content planning, or educational purposes. How It Works: Trigger: The workflow starts with a manual trigger, allowing you to test it by clicking "Test workflow." You can also set a YouTube video URL manually or dynamically. YouTube Video ID Extraction: The workflow extracts the YouTube video ID from the provided URL using a custom JavaScript function. This ID is necessary for fetching the transcript. Transcript Generation: The video ID is sent via an HTTP request to generate the transcript. You need to replace APIKEY with a free API key from the service. Transcript Validation: The workflow checks if a transcript exists for the video. If a transcript is available, it proceeds; otherwise, it stops. Full Text Extraction: If a transcript exists, the workflow combines all transcript segments into a single text variable for further analysis. AI-Powered Analysis: The full transcript is passed to an AI model (DeepSeek, OpenAI, or OpenRouter) for analysis. The AI generates a structured summary, including a title and key points, formatted in markdown. Email Notification: The analysis results (title and summary) are sent via email using SMTP credentials. The email contains the structured summary of the video. Set Up Steps: YouTube Transcript API: Obtain a free API key from youtube-transcript.io and replace APIKEY in the "Generate transcript" node with your key. AI Model Configuration: Configure the AI model nodes (DeepSeek, OpenAI, or OpenRouter) with the appropriate API credentials. You can choose one or multiple models depending on your preference. Email Setup: Configure the "Send Email" node with your SMTP credentials (e.g., Gmail, Outlook, or any SMTP service). Ensure the email settings are correct to send the analysis results. Key Features: Free Tools: Uses **youtube-transcript.io for free transcript generation. AI Models**: Supports multiple AI models (DeepSeek, OpenAI, OpenRouter) for flexible analysis. Email Notifications**: Sends the analysis results directly to your inbox. Customizable**: Easily adapt the workflow to analyze different videos or use different AI models.
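For reference, the video-ID extraction step can look roughly like the following n8n Code node snippet. This is a sketch rather than the template's exact code: the input field name videoUrl is an assumption, and the regex covers the common URL shapes (watch?v=, youtu.be/, /shorts/, /embed/) but may not match every variant.

```javascript
// n8n Code node sketch: pull the 11-character video ID out of common
// YouTube URL formats. "videoUrl" is an assumed input field name.
const url = $json.videoUrl || '';
const match = url.match(
  /(?:youtube\.com\/(?:watch\?(?:.*&)?v=|shorts\/|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

if (!match) {
  throw new Error(`Could not extract a video ID from: ${url}`);
}

return [{ json: { videoId: match[1] } }];
```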
by Stefan
Track n8n Node Definitions from GitHub and Export to Google Sheets Overview This workflow automatically retrieves and processes metadata from the official n8n GitHub repository, filters all available .node.json files, parses their structure, and appends structured information into a Google Sheet. Perfect for developers, community managers, and technical writers who need to maintain up-to-date information about n8n's evolving node ecosystem. Setup Instructions Prerequisites Before setting up this workflow, ensure you have: A GitHub account with API access A Google account with Google Sheets access An active n8n instance (cloud or self-hosted) Step 1: GitHub API Configuration Navigate to GitHub Settings → Developer Settings → Personal Access Tokens Generate a new token with public_repo permissions Copy the generated token and store it securely In n8n, create a new "GitHub API" credential Paste your token in the credential configuration and save Step 2: Google Sheets Setup Create a new Google Sheets document Set up the following column headers in the first row: node (Column A) - Node identifier/name nodeVersion (Column B) - Version of the node codexVersion (Column C) - Codex version number categories (Column D) - Node categories credentialDocumentation (Column E) - Credential documentation URL primaryDocumentation (Column F) - Primary documentation URL Note down the Google Sheets document ID from the URL Configure Google Sheets OAuth2 credentials in n8n Step 3: Workflow Configuration Import the workflow into your n8n instance Update the following placeholder values: Replace YOUR_GOOGLE_SHEETS_DOCUMENT_ID with your actual document ID Replace YOUR_WEBHOOK_ID if using webhook functionality Configure the GitHub API credentials in the HTTP Request nodes Set up Google Sheets credentials in the Google Sheets nodes Share your Google Sheets document with the email address associated with your Google OAuth2 credentials Grant "Editor" permissions to allow the workflow to write data Google Sheets Template Details The workflow creates a structured dataset with these columns: node**: Node identifier (e.g., n8n-nodes-base.slack) nodeVersion**: Version of the node (e.g., 1.0.0) codexVersion**: Codex version number (e.g., 1.0.0) categories**: Node categories (e.g., Communication, Productivity) credentialDocumentation**: URL to credential documentation primaryDocumentation**: URL to primary node documentation Customization Options Modifying Data Extraction You can customize the "Format Data" node to extract additional fields: Add new assignments in the Set node Modify the column mapping in the Google Sheets node Update your spreadsheet headers accordingly Changing Update Frequency To run this workflow on a schedule: Replace the Manual Trigger with a Cron node Set your desired schedule (e.g., daily, weekly) Configure appropriate timing to avoid API rate limits Adding Filters Customize the "Filter Node Files" code node to: Filter specific node types Include/exclude certain categories Process only recently updated nodes Features Fetches all node definitions from the n8n-io/n8n repository Filters for .node.json files only Downloads and parses metadata automatically Extracts key fields like node names, versions, categories, and documentation URLs Appends structured data to Google Sheets with batch processing Includes error handling and retry mechanisms Clears existing data before appending new information for fresh results Use Cases This workflow is ideal for: Track changes in official n8n node definitions over 
time; Audit node categories and documentation links for completeness; Build custom dashboards from node metadata; Community management and documentation maintenance; Integration planning and compatibility analysis
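As a reference for the "Filter Node Files" customization mentioned above, a Code node that keeps only .node.json entries could look like the sketch below. It assumes the file list comes from GitHub's Git Trees API (an object with a tree array of { path, type, url } entries); adjust the input shape if you fetch the list differently.

```javascript
// n8n Code node sketch: keep only *.node.json blobs from a Git Trees response.
const tree = $json.tree || [];

const nodeFiles = tree.filter(
  (entry) => entry.type === 'blob' && entry.path.endsWith('.node.json')
);

// Emit one item per matching file so downstream nodes can fetch and parse each.
return nodeFiles.map((entry) => ({
  json: { path: entry.path, url: entry.url },
}));
```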
by MRJ
:car: Business Value Proposition Accelerates ISO 26262 compliance for automotive/industrial systems by automating safety analysis while maintaining rigorous audit standards. :gear: How It Works graph TD A[Engineer uploads system description] --> B(LLM identifies hazards) B --> C(LLM scores risks per ISO 26262) C --> D(Generates mitigation strategies) D --> E(Produces audit-ready reports) :chart_with_upwards_trend: Key Benefits Time 50-70% faster than manual HAZOP/FMEA sessions Instant report generation vs. weeks of documentation Risk Mitigation Pre-validated templates reduce human error Auto-generated traceability simplifies audits :warning: Governance Controls Human-in-the-loop: All LLM outputs require engineer sign-off Version tracking: Full history of modifications Audit mode: Export all decision rationales :computer: Technical Requirements Runs on existing n8n instances Docker deployment (<1hr setup) Integrates with JAMA/DOORS (optional) :wrench: Setup and Usage Prerequisites Docker (Install Guide) Docker Compose (Install Guide) n8n instance (Free Self-Hosted or Cloud - Paid) OpenAI API key (Get Key) Enterprise-ready deployment: When supported by IT infrastructure teams, this solution transforms into a scalable AI safety assistant, providing real-time HARA guidance akin to engineering Co-pilot tools. :arrow_down: Installation and :play_or_pause_button: Running the Workflow For installation procedures and usage of the workflow, refer to the repository :warning: Validation & Limitations AI-Assisted Analysis Considerations

| Advantage | Mitigation Strategy | Implementation Example |
|-----------|---------------------|------------------------|
| Rapid hazard identification | Human validation layer | Manual review nodes in workflow |
| Consistent S/E/C scoring | Rule-based validation | ASIL-D → Redundancy check |
| Edge case coverage | Cross-reference with historical data | Integration with incident databases |

Critical Validation Steps AI Output Review node in n8n Example: (by code) { "type": "function", "parameters": { "functionCode": "if ($input.item.json.ASIL === 'D' && !$input.item.json.redundancy) throw new Error('ASIL D requires redundancy');" } } Version Control Prompt versions tied to ISO standard editions (e.g., ISO26262:2018-v1.2) Git-tracked changes to ai_models/training_data/ Audit trails Providing a log structure for audit trails Log structure
/logs/
└── YYYY-MM-DD/
    ├── hazards_approved.log
    └── hazards_rejected.log
by David Olusola
🎯 JavaScript Master Class - Interactive Code Tutorial 📚 How It Works This tutorial is designed as a self-paced learning experience where you explore working JavaScript code examples. Unlike traditional tutorials, you learn by examining real implementations and understanding how they work. 🔍 The Learning Method: Execute first - See the workflow in action Open each node - This is where the real learning happens! Study the code - Read JavaScript implementations and comments Understand the flow - See how data transforms between nodes Experiment - Modify code to test your understanding 🎮 The "Game" Concept: It's not a real game - it's a gamified learning experience Uses RPG elements (XP, levels, achievements) to make learning engaging Simulates progression through 3 difficulty levels Main learning happens when you open nodes and read the code!** 🚀 Setup Steps Step 1: Import the Template Copy the JSON template provided Open your n8n instance Create a new workflow Press Ctrl+A (or Cmd+A on Mac) to select all Press Ctrl+V (or Cmd+V) to paste the JSON Click "Save" and name it: JavaScript Master Class - Interactive Tutorial Step 2: Execute the Workflow Click "Test workflow" or "Execute workflow" Watch it run through all nodes automatically See the final results and progression simulation Step 3: Start Learning (The Important Part!) Now the real learning begins - you must open each node manually: 🔍 For Each Code Node: Double-click the node to open it Read the JavaScript code carefully Study the comments - they explain key concepts Understand the logic - how input becomes output Note the techniques used in each challenge 📖 For Each Sticky Note: Read the explanations and context Understand the learning objectives Note the skills being taught 🎯 Learning Path Level 1: Data Warrior (Beginner) 📂 Open Node: 🎲 Level 1: Data Warrior Focus:** Data deduplication using filter() and findIndex() Key Skills:** Array methods, duplicate detection What to Study:** How the deduplication algorithm works Level 2: API Ninja (Intermediate) 📂 Open Node: ⚔️ Level 2: API Ninja Focus:** Data transformation and validation Key Skills:** String manipulation, validation logic, error handling What to Study:** How to clean and validate messy API data Level 3: Automation Master (Advanced) 📂 Open Node: 🏆 Final Boss: Automation Master Focus:** Complex workflow processing Key Skills:** Task orchestration, priority sorting, error handling What to Study:** How to build robust automation systems 💡 Learning Tips 🔍 Active Exploration: Don't just run it** - open every single node! Read all comments** - they contain key insights Compare approaches** - see how complexity increases Try modifications** - change values and see what happens 📝 Study Techniques: Take notes** on patterns you see Copy interesting code** snippets for reference Try to explain** each function to yourself Test your understanding** by modifying the code 🧪 Experimentation: Change filter conditions** in Level 1 Modify validation rules** in Level 2 Adjust workflow logic** in Level 3 Break something** and fix it - great for learning! 
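For orientation before you open the Level 1 node, the filter() + findIndex() deduplication pattern it teaches boils down to something like the following generic sketch (the sample data and the email key are made up for illustration; the node's own code may differ).

```javascript
// Keep only the first occurrence of each record, judged by a key field.
const records = [
  { name: 'Ada', email: 'ada@example.com' },
  { name: 'Ada L.', email: 'ada@example.com' }, // duplicate by email
  { name: 'Grace', email: 'grace@example.com' },
];

const unique = records.filter(
  (item, index, all) =>
    index === all.findIndex((other) => other.email === item.email)
);

console.log(unique); // Ada and Grace remain; the duplicate email is dropped
```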
⚠️ Important Notes 🎮 "Game" Reality Check: This is NOT an interactive game where you make choices It's a code tutorial with game-like progression themes The "game" runs automatically when executed Real learning happens when you manually open and study each node** 📚 Educational Value: Primary learning:** Understanding JavaScript implementations Secondary learning:** n8n workflow patterns Bonus learning:** Problem-solving approaches 🔧 Technical Requirements: Working n8n instance Basic JavaScript knowledge helpful but not required Willingness to explore and experiment 🎯 Success Metrics You'll know you're learning when you can: ✅ Explain how each deduplication algorithm works ✅ Identify the validation patterns used ✅ Understand the workflow orchestration logic ✅ Modify the code to handle different scenarios ✅ Apply these patterns to your own projects 🤔 Next Steps After completing this tutorial: Apply the patterns to your own workflows Experiment with variations Build something using these techniques Share your learnings with the community Remember: The magic happens when you open each node and study the code! 🔍
by Arunava
This n8n workflow automates replying to Google Play Store reviews using AI. It analyzes each review’s sentiment and tone and posts a human-like response — saving time for indie devs, founders, and PMs managing multiple apps. 💡 Use Cases Respond to reviews at scale without sounding robotic Prioritize negative sentiment feedback Maintain consistent tone and support messaging Free up time for teams to focus on product instead of ops 🧠 How it works Uses the Play Store API to fetch new app reviews Filters out reviews that have already been replied to Analyzes sentiment using OpenAI GPT-4o Passes sentiment and review context to an AI Agent node that crafts a reply Replies are posted to Play Store via Google API (Optional) Logs the reply to Slack for visibility 🛠️ Setup Instructions (Sticky notes included in the workflow) 1. HTTPS Node Replace the package name with your app’s package ID Add Google Service Account credentials → Create from Google Cloud Console with access to Play Console → Add to n8n Credential Manager 2. OpenAI Node Add your OpenAI API key → GPT-4o or GPT-4o mini supported → Customize model or instructions if needed 3. AI Agent Node Modify prompt to reflect your app name, tone, and feature set → E.g. polite, witty, casual, support-friendly, etc. → You can add reply conditions or logic for different types of reviews 4. Slack Node (Optional) Configure Slack Webhook or OAuth credentials if you want reply logs → Otherwise, delete the node to simplify the workflow ⚡ Requirements Google Play Developer Console access Google Cloud Project with service account OpenAI account (GPT-4o or mini) (Optional) Slack workspace & app for logging 🙌 Don’t want to set this up yourself? I’ll do it for you. Just drop me an email: imarunavadas@gmail.com Let’s automate the boring stuff so you can focus on growth. 🚀
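To illustrate the step that filters out reviews which already have a reply, a Code node sketch might look like this. The field names follow the Google Play Developer (Android Publisher) API's reviews.list response shape, but treat them as assumptions and verify them against your actual payload.

```javascript
// n8n Code node sketch: drop reviews that already contain a developer reply.
// Assumes the previous HTTP node returned { reviews: [{ reviewId, comments: [...] }] }.
const reviews = $json.reviews || [];

const unanswered = reviews.filter(
  (review) => !(review.comments || []).some((c) => c.developerComment)
);

// One item per review still waiting for a response.
return unanswered.map((review) => ({ json: review }));
```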
by Belgacem Dhiflaoui
What Problem Does This Solve? This workflow automates the end-to-end process of capturing company information from Google Drive, storing it semantically in Pinecone, and interacting with users via an intelligent AI chatbot. It eliminates the need for manual customer service, lead tracking, and company information retrieval—offering a fully automated, intelligent engagement system. Perfect for teams that need to: Maintain accurate, AI-readable company knowledge bases Answer customer inquiries 24/7 using AI Automatically collect and log lead information Embed a chatbot into their website to assist potential customers Target Audience: Sales teams, business owners, marketing departments, customer support reps, startup founders, or anyone looking to automate AI-powered lead generation and customer engagement. What Does It Do? Part One – Knowledge Ingestion Monitors** a Google Drive folder for new .txt or PDF document uploads. Downloads** the document and splits the content into manageable chunks using a recursive character splitter. Generates** embeddings via OpenAI. Stores** the embeddings in a Pinecone vector database under the Q&A namespace. Purpose:** This knowledge base is later used to answer business-related questions through AI. Part Two – AI Chatbot Engagement Listens** for incoming chat messages using n8n’s chatTrigger node. Activates an AI agent** (powered by GPT-4o) to respond to inquiries regarding business hours, services, products, or general company info. Retrieves knowledge** using a vector search tool connected to Pinecone (newCompany_q). Captures leads:** If a user shows interest, the AI collects and stores: Name Email Phone number Specific interest into a connected Google Sheet automatically. Key Features 🔄 Google Drive integration for real-time file processing 🧠 OpenAI embedding + Pinecone vector store for semantic memory 🤖 LangChain agent with tool-based reasoning 🗃️ Google Sheets integration for dynamic lead storage 💬 GPT-4o model for accurate, human-like conversation ⚙️ Modular design to expand into CRM, Notion, or email workflows 🌐 Website-ready chatbot endpoint 🧰 Setup Instructions Prerequisites: n8n instance (cloud or self-hosted) Google Drive account (for uploading company data) Pinecone account (for vector storage) OpenAI API key Google Sheets access with OAuth2 credentials 📦 Installation Steps 1. Import the Workflow Upload the JSON files into your n8n instance. 2. Configure Credentials In n8n > Credentials, connect: Google Drive OpenAI Pinecone Google Sheets **3. Set Pinecone Index & Namespace Example:** Index: companyName Namespace: Q&A 4. Test the Flow Upload a sample .txt or PDF file to the monitored Drive folder. Send a message to the chatbot (e.g., "What are your opening hours?"). Check the Google Sheet for collected user info. How It Works (Behind the Scenes) Part 1 – Data Preparation: Company files are uploaded to Google Drive. File is detected, downloaded, and chunked. Embeddings are created using OpenAI. Data is stored in Pinecone for semantic retrieval. Part 2 – Chat Interaction: A chat message triggers the workflow via webhook. The AI agent interprets the intent and accesses company data via newCompany_q. If lead data is gathered, it is appended to a Google Sheet using the AI-parsed values. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
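For intuition about the chunking step, fixed-size chunking with overlap can be sketched as below. The workflow itself uses the Recursive Character Text Splitter node, which prefers natural separators (paragraphs, sentences) before falling back to raw size, so this is only a simplified stand-in, not the node's actual behavior.

```javascript
// Simplified illustration: split text into overlapping chunks by size alone.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // overlap keeps context across chunk borders
  }
  return chunks;
}

// Example: a 2,500-character document yields 4 chunks of up to 1,000 characters.
```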
by Kumar Shivam
Complete AI Product Description Generator Transforms product images into high-converting copy with GPT-4o Vision + Claude 3.5 The Shopify AI Product Description Factory is a production-grade n8n workflow that converts product images and metadata into refined, SEO-aware descriptions—fully automated and region-agnostic. It blends GPT-4o vision for visible attribute extraction, Claude 3.5 Sonnet for premium copy, Perplexity research for verified brand context, Google Sheets for orchestration and audit trails, plus automated daily sales analytics enrichment. Link-header pagination and structured output enforcement ensure reliable scale. To refine according to your usecase connect via my profile @connect Key Advantages Vision-first copywriting Uses gpt-4o to identify only visible physical attributes (closure, heel, materials, sole) from product images—no guesses. Premium copy generation anthropic/claude-3.5-sonnet crafts concise, benefit-led descriptions with consistent tone, length control, and clean formatting. Research-assisted accuracy perplexityTool verifies vendor/brand context from official sources to avoid speculation or fabricated claims. Pagination you can trust Automates Shopify REST pagination via Link headers and persists page_info for resumable runs. Google Sheets orchestration Centralized staging, status tracking, and QA in Products, with ProcessingState for batch/page markers, and Error_log for diagnostics. Bulletproof error feedback errorTrigger + AI diagnosis logs clear, non-technical and technical explanations to Error_log for fast recovery. Automated sales analytics Daily sales tracking automatically captures and enriches total sales data for comprehensive business intelligence and performance monitoring. How It Works Intake and filtering httpRequest fetches /admin/api/2024-04/products.json?limit=200&{page_info} code filters only items with: Image present Empty body_html The currSeas:SS2025 tag Extracts tag metadata such as x-styleCode, country_of_origin, and gender when available Pagination controller code parses Link headers for rel="next" and extracts page_info googleSheets updates ProcessingState with page_info_next and increments the batch number for resumable polling Generation pipeline googleSheets pulls rows with Status = Ready for AI Description; limit throttles batch size openAi Analyze image (model gpt-4o) returns strictly visible features lmChatOpenRouter (Claude 3.5) composes the SEO description, optionally blending verified vendor context from perplexityTool outputParserStructured guarantees strict JSON: product_id, product_title (normalized), generated_description, status googleSheets writes results back to Products for review/publish Sales analytics enrichment Schedule Trigger** runs daily at 2:01 PM to capture previous day's sales httpRequest fetches paid orders from Shopify REST API with date range filtering splitOut and summarize nodes calculate total daily sales Automatic Google Sheets logging with date stamps and totals Zero-sale days are properly recorded for complete analytics continuity Reliability and insight errorTrigger routes failures to an AI agent that explains the root cause and appends a concise note to Error_log. 
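The pagination controller described above hinges on parsing Shopify's Link response header. A hedged Code node sketch follows; it assumes the HTTP Request node is configured to return the full response so headers are available at $json.headers, which you may need to adapt to your node's actual output shape.

```javascript
// n8n Code node sketch: find the rel="next" entry in the Link header and
// extract its page_info cursor for the next /products.json request.
const linkHeader = $json.headers?.link || '';

const nextPart = linkHeader
  .split(',')
  .map((part) => part.trim())
  .find((part) => part.endsWith('rel="next"'));

let pageInfoNext = null;
if (nextPart) {
  const url = nextPart.match(/<([^>]+)>/)?.[1];
  pageInfoNext = url ? new URL(url).searchParams.get('page_info') : null;
}

// Persist this value (e.g., to the ProcessingState sheet) to resume later.
return [{ json: { page_info_next: pageInfoNext } }];
```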
What's Inside (Node Map) Data + API httpRequest (Shopify REST 2024-04 for products and orders) googleSheets (multiple sheet operations) googleSheetsTool (error logging) AI models openAi (gpt-4o vision analysis) lmChatOpenRouter (anthropic/claude-3.5-sonnet for content generation) AI Agent** (intelligent error diagnosis) Analytics & Processing splitOut (order data processing) summarize (sales totals calculation) set nodes (data field mapping) Tools and guards perplexityTool (brand research) outputParserStructured (JSON validation) memoryBufferWindow (conversation context) Control & Scheduling scheduleTrigger (multiple time-based triggers) cron (periodic execution) limit (batch size control) if (conditional logic) code (custom filtering and pagination logic) Observability errorTrigger + AI diagnosis to Error_log Processing state tracking Sales analytics logging Content & Compliance Rules Locale-agnostic copy**; brand voice is configurable per store Only image-verifiable attributes** (no guesses); clean HTML suitable for Shopify themes Optional normalization rules (e.g., color/branding cleanup, title sanitization) Style code inclusion supported when x-styleCode is present Gender-aware content generation when gender tag is present Strict JSON output** and schema consistency for safe downstream publishing Setup Steps Core integrations Shopify Access Token** — Products read + Orders read (REST 2024-04) OpenAI API** — gpt-4o vision OpenRouter API** — Claude Sonnet (3.5) Perplexity API** — vendor/market verification via perplexityTool Google Sheets OAuth** — Products, ProcessingState, Error_log, Sales analytics Configure sheets ProcessingState** with fields: batch number page_info_next Products** with: Product ID Product Title Product Type Vendor Image url Status country of origin x_style_code gender Generated Description Error_log** with: timestamp Reason of Error Sales Analytics Sheet** with: Date Total Sales Workflow Capabilities Discovery and staging Auto-paginate Shopify; stage eligible products in Sheets with reasons and timestamps. Vision-grounded copywriting Descriptions reflect only visible attributes plus verified brand context; concise, mobile-friendly structure with gender-aware tone. Metadata awareness Auto-injects x-styleCode, country_of_origin, and gender when present; natural SEO for brand and product type. Sales intelligence Automated daily sales tracking with Melbourne timezone support, handles zero-sale days, and maintains complete historical records. Error analytics Layman + technical diagnosis logged to Error_log to shorten MTTR. Safe output Structured JSON via outputParserStructured for predictable row updates. 
Credentials Required Shopify Access Token** (Products + Orders read permissions) OpenAI API Key** (GPT-4o vision) OpenRouter API Key** (Claude Sonnet) Perplexity API Key** Google Sheets OAuth** Ideal For E-commerce teams** scaling compliant, on-brand product copy with comprehensive sales insights Agencies and SEO specialists** standardizing image-grounded descriptions with performance tracking and analytics Stores** needing resumable pagination, auditable content operations, and automated daily sales reporting in Sheets Advanced Features Dual-workflow architecture**: Content generation + Sales analytics in one system Link-header pagination with page_info persistence in ProcessingState Title/content normalization (e.g., color removal) configurable per brand Gender-aware copywriting** based on product tags Memory windows (memoryBufferWindow) to keep multi-step prompts consistent Melbourne timezone support** for accurate daily sales cutoffs Zero-sales handling** ensures complete analytics continuity Structured Output enforcement for downstream safety AI-powered error diagnosis** with technical and layman explanations Time & Scheduling (Universal) The workflow includes two independent schedules: Content Generation**: Every 5 minutes (configurable) for product processing Sales Analytics**: Daily at 2:01 PM Melbourne time for previous day's sales For globally distributed teams, schedule triggers and timestamps can be standardized on UTC to avoid regional drift. Pro Tip Start with small batches (limit set to 10 or fewer) to validate both copy generation and sales tracking flows. The workflow handles dual operations independently - content generation failures won't affect sales analytics and vice versa. Monitor the Error_log sheet for any issues and use the ProcessingState sheet to track pagination progress.
by Rizqi Pratama Ramadhani
Automated Financial Tracker: Telegram Invoices to Notion with AI Summaries & Reports Tired of manually logging every expense? Streamline your financial tracking with this powerful n8n workflow! Snap a photo of your invoice in Telegram, and let AI (powered by Google Gemini) automatically extract the details, record them in your Notion database, and even send you a quick summary. Plus, get scheduled weekly reports with charts to visualize your spending. Automate your finances, save time, and gain better insights with this easy-to-use template! Transform your expense tracking from a chore into an automated breeze. Try it out! Overview: This workflow revolutionizes how you track your finances by automating the entire process from invoice capture to reporting. Simply send a photo of an invoice or receipt to a designated Telegram chat, and this workflow will: Extract Data with AI: Utilize Google Gemini's capabilities to perform OCR on the image, understand the content, and extract key details like item name, quantity, price, total, date, and even attempt to categorize the expense. Store in Notion: Automatically log each extracted transaction into a structured Notion database. Instant Feedback: Send a summary of the processed transaction back to your Telegram chat. Scheduled Reporting: Generate and send a visual summary of your expenses (e.g., weekly spending by category) as a chart to your preferred Telegram chat or group. This workflow is perfect for individuals, freelancers, or small teams looking to effortlessly manage their expenses without manual data entry. Key Features & Benefits: Effortless Expense Logging:** Just send a picture – no more typing! AI-Powered Data Extraction:** Leverages Google Gemini for intelligent invoice processing. Centralized Data in Notion:** Keep all your financial records neatly organized in a Notion database. Automated Categorization:** AI helps in categorizing your expenses (e.g., Food & Beverage, Transportation). Instant Summaries:** Get immediate confirmation and a summary of what was recorded. Visual Reporting:** Receive scheduled charts (e.g., bar charts of spending by category) directly in Telegram. Customizable:** Easily adapt the workflow to your specific needs, categories, and reporting preferences. Time-Saving:** Drastically reduces the time spent on manual financial administration. How It Works (Workflow Breakdown): The workflow is divided into two main parts: Part 1: Real-time Invoice Processing & Logging (## Auto Notes Transaction with Telegram and Notion database) Telegram Trigger (Telegram Trigger | When recive photo): Activates when a new photo is sent to the configured Telegram chat. Get Photo Info (Get Info Photo from telegram chat): Retrieves the details of the received photo. Get Image Info (Get Image Info): Prepares the image data. AI Data Extraction (Google Gemini Chat Model & Basic LLM Chain): The image data is sent to the Google Gemini Chat Model. A specific prompt instructs the AI to extract details (date, ID, name, quantity, price, total, category, tax) in a JSON array format and provide a summary message. The categories include Food & Beverage, Transportation, Utilities, Shopping, Healthcare, Entertainment, Housing, and Education. Parse AI Output (Parse To your object | Table): Structures the AI's JSON output for easier handling. Split Transactions (Split Out | data transaction): If an invoice contains multiple items, this node splits them into individual records. 
Record to Notion (Record To Notion Database): Each transaction item is added as a new page/entry in your specified Notion database, mapping fields like Name, Quantity, Price, Total, Category, Date, and Tax. Send Telegram Summary (Sendback to chat and give summarize text): The summary message generated by the AI is sent back to the original Telegram chat. Part 2: Scheduled Financial Reporting (## Schedule report to send on chanel or private message) Schedule Trigger (Schedule Trigger | for send chart report): Runs at a predefined interval (e.g., every week) to generate reports. Get Recent Data from Notion (Get Recent Data from Notions): Fetches transaction data from the Notion database for a specific period (e.g., the past week). Summarize Data (Summarize Transaction Data): Aggregates the data, for example, by summing up the 'total' amount for each 'category'. Prepare Chart Data (Convert Data to JSON chart payload): Transforms the summarized data into a JSON format suitable for generating a chart (e.g., labels for categories, data for spending amounts). Generate Chart (Generate Chart): Uses the QuickChart node to create a visual chart (e.g., a bar chart) from the prepared data. Send Chart to Telegram (Send Chart Image to Group or Private Chat): Sends the generated chart image to a specified Telegram chat ID or group. Nodes Used (Key Nodes): Telegram Trigger & Telegram Node:** For receiving images and sending messages/images. Google Gemini Chat Model (Langchain):** For AI-powered OCR and data extraction from invoices. Basic LLM Chain (Langchain):** To interact with the language model using specific prompts. Output Parser Structured (Langchain):** To structure the output from the language model. Notion Node:** For reading from and writing to your Notion databases. Schedule Trigger:** To automate the reporting process. Summarize Node:** To aggregate data for reports. Code Node:** Used here to format data for the chart. QuickChart Node:** For generating charts. SplitOut Node:** To process multiple items from a single invoice. Setup Instructions: Credentials: Telegram: Create a Telegram bot and get its API token. You'll also need the Chat ID where you'll send invoices and where reports should be sent. Google Gemini (PaLM) API: You'll need an API key for Google Gemini. Notion: Create a Notion integration and get the API key. Create a Notion database with properties corresponding to the data you want to save (e.g., Name (Title), Quantity (Number), Price (Number), Total (Number), Category (Select), Date (Text or Date), Tax (Number)). Share this database with your Notion integration. Configure Telegram Trigger: Add your Telegram Bot API token. When you first activate the workflow or test the trigger, send /start to your bot in the chat you want to use for sending invoices. n8n will then capture the Chat ID. Configure Google Gemini Node (Google Gemini Chat Model): Select or add your Google Gemini API credentials. Review the prompt in the Basic LLM Chain node and adjust if necessary (e.g., date format, categories). Configure Notion Nodes: Record To Notion Database: Select or add your Notion API credentials. Select your target Notion Database ID. Map the properties from the workflow (e.g., ={{ $json.name }}) to your Notion database columns. Get Recent Data from Notions: Select or add your Notion API credentials. Select your target Notion Database ID. Adjust the filter if needed (default is "past_week"). 
Configure Telegram Node for Reports (Send Chart Image to Group or Private Chat): Select or add your Telegram Bot API token. Enter the Chat ID for the group or private chat where you want to receive the reports. Configure Schedule Trigger (Schedule Trigger | for send chart report): Set your desired schedule (e.g., every Monday at 9 AM). Test: Send an image of an invoice to your Telegram bot and check if the data appears in Notion and if you receive a summary message. Wait for the scheduled report or manually trigger it to test the reporting functionality. Sticky Note Text for Your n8n Template: (These are suggestions. You would place these directly into the sticky notes within your n8n workflow editor.) Existing High-Level Sticky Notes: ## Auto Notes Transaction with Telegram and Notion database ## Schedule report to send on chanel or private message Specific Sticky Notes to Add: On Telegram Trigger | When recive photo:** 📸 INVOICE INPUT 📸 Bot listens here for photos of your receipts/invoices. Ensure your Telegram Bot API token is set in credentials. Near Google Gemini Chat Model & Basic LLM Chain:** 🤖 AI MAGIC HAPPENS HERE 🧠 Image is sent to Google Gemini for data extraction. Check 'Basic LLM Chain' to customize the AI prompt (e.g., categories, output format). Requires Google Gemini API credentials. On Parse To your object | Table:** ✨ STRUCTURING AI DATA ✨ Converts the AI's text output into a usable JSON object. Check the schema if you modify the AI prompt significantly. On Record To Notion Database:** 📝 SAVING TO NOTION 📝 Extracted transaction data is saved here. Configure with your Notion API key & Database ID. Map fields correctly to your database columns! On Sendback to chat and give summarize text:** 💬 TRANSACTION SUMMARY 💬 Sends a confirmation message back to the user in Telegram with a summary of the recorded expense. On Schedule Trigger | for send chart report:** 🗓️ REPORTING SCHEDULE 🗓️ Set how often you want to receive your spending report (e.g., weekly, monthly). On Get Recent Data from Notions:** 📊 FETCHING DATA FOR REPORT 📊 Retrieves transactions from Notion for the report period. Default: "Past Week". Adjust filter as needed. Requires Notion API credentials & Database ID. On Summarize Transaction Data:** ➕ SUMMARIZING SPENDING ➕ Aggregates your expenses, usually by category, to prepare for the chart. On Convert Data to JSON chart payload (Code Node):** 🎨 PREPARING CHART DATA 🎨 This Code node formats the summarized data into the JSON structure needed by QuickChart. On Generate Chart (QuickChart Node):** 📈 GENERATING VISUAL REPORT 📈 Creates the actual chart image based on your spending data. You can customize chart type (bar, pie, etc.) here. On Send Chart Image to Group or Private Chat:** 📤 SENDING REPORT TO TELEGRAM 📤 Delivers the generated chart to your chosen Telegram chat/group. Set the correct Chat ID and Bot API token. General Sticky Note (Place where relevant):** 🔑 CREDENTIALS NEEDED 🔑 Remember to set up API keys/tokens for: Telegram Google Gemini Notion General Sticky Note (Place where relevant):** 💡 CUSTOMIZE ME! 💡 Adjust AI prompts for better accuracy. Change Notion database structure. Modify report frequency and content. `
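To clarify the "Convert Data to JSON chart payload" step, the Code node essentially reshapes the summarized totals into a Chart.js-style configuration that the QuickChart node can render. The sketch below assumes input rows shaped like { category, sum_total } coming from the Summarize node; match the field names to your own output.

```javascript
// n8n Code node sketch: build a bar-chart config from summarized spending rows.
const rows = $input.all().map((item) => item.json);

const chart = {
  type: 'bar', // QuickChart also supports pie, line, etc.
  data: {
    labels: rows.map((r) => r.category),
    datasets: [
      {
        label: 'Spending by category (past week)',
        data: rows.map((r) => r.sum_total),
      },
    ],
  },
};

return [{ json: { chart } }];
```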
by simonscrapes
Use Case Research search engine rankings for SEO analysis: You need to track keyword rankings for your website You want to analyze competitor positions in search results You need data for SEO competition analysis You want to monitor SERP changes over time What this Workflow Does The workflow uses ScrapingRobot API to fetch Google search results: Retrieves SERP data for your target keywords Captures URL rankings and page titles Processes up to 5000 searches with free account Organizes results for SEO analysis Setup Create a ScrapingRobot account and get your API key Add your ScrapingRobot API key to the HTTP Request node's GET SERP token parameter Either connect your keyword database (column name "Keyword") or use the "Set Keywords" node Configure your preferred output database connection How to Adjust it to Your Needs Modify keyword source to pull from different databases Adjust the number of SERP results to capture Customize output format for your reporting needs More templates and n8n workflows >>> @simonscrapes
by Incrementors
🛒 Lead Workflow: Yelp & Trustpilot Scraping + OpenAI Analysis via BrightData > Description: Automated lead generation workflow that scrapes business data from Yelp and Trustpilot based on location and category, analyzes credibility, and sends personalized outreach emails using AI. > ⚠️ Important: This template requires a self-hosted n8n instance to run. 📋 Overview This workflow provides an automated lead generation solution that identifies high-quality prospects from Yelp and Trustpilot, analyzes their credibility through reviews, and sends personalized outreach emails. Perfect for digital marketing agencies, sales teams, and business development professionals. ✨ Key Features 🎯 Smart Location Analysis** AI breaks down cities into sub-locations for comprehensive coverage 🛍 Yelp Integration** Scrapes business details using BrightData's Yelp dataset ⭐ Trustpilot Verification** Validates business credibility through review analysis 📊 Data Storage** Automatically saves results to Google Sheets 🤖 AI-Powered Outreach** Generates personalized emails using Claude AI 📧 Automated Sending** Sends emails directly through Gmail integration 🔄 How It Works User Input: Submit location, country, and business category through a form AI Location Analysis: Gemini AI identifies sub-locations within the specified area Yelp Scraping: BrightData extracts business information from multiple locations Data Processing: Cleans and stores business details in Google Sheets Trustpilot Verification: Scrapes reviews and company details for credibility check Email Generation: Claude AI creates personalized outreach messages Automated Outreach: Sends emails to qualified prospects via Gmail 📊 Data Output

| Field | Description | Example |
|---------------|----------------------------------|----------------------------------|
| Company Name | Business name from Yelp/Trustpilot | Best Local Restaurant |
| Website | Company website URL | https://example-restaurant.com |
| Phone Number | Business contact number | (555) 123-4567 |
| Email | Business email address | demo@example.com |
| Address | Physical business location | 123 Main St, City, State |
| Rating | Overall business rating | 4.5/5 |
| Categories | Business categories/tags | Restaurant, Italian, Fine Dining |

🚀 Setup Instructions ⏱️ Estimated Setup Time: 10–15 minutes Prerequisites n8n instance (self-hosted or cloud) Google account with Sheets access BrightData account with Yelp and Trustpilot datasets Google Gemini API access Anthropic API key for Claude Gmail account for sending emails Step 1: Import the Workflow Copy the JSON workflow code In n8n: Workflows → + Add workflow → Import from JSON Paste JSON and click Import Step 2: Configure Google Sheets Integration Create two Google Sheets: Yelp data: Name, Categories, Website, Address, Phone, URL, Rating Trustpilot data: Company Name, Email, Phone Number, Address, Rating, Company About Copy Sheet IDs from URLs In n8n: Credentials → + Add credential → Google Sheets OAuth2 API Complete OAuth setup and test connection Update all Google Sheets nodes with your Sheet IDs Step 3: Configure BrightData Set up BrightData credentials in n8n Replace API token with: BRIGHT_DATA_API_KEY Verify dataset access: Yelp dataset: gd_lgugwl0519h1p14rwk Trustpilot dataset: gd_lm5zmhwd2sni130p Test connections Step 4: Configure AI Models Google Gemini (Location Analysis)** Add Google Gemini API credentials Configure model: models/gemini-1.5-flash Claude AI (Email Generation)** Add Anthropic API credentials Configure model:
claude-sonnet-4-20250514 Step 5: Configure Gmail Integration Set up Gmail OAuth2 credentials in n8n Update "Send Outreach Email" node Test email sending Step 6: Test & Activate Activate the workflow Test with sample data: Country: United States Location: Dallas Category: Restaurants Verify data appears in Google Sheets Check that emails are generated and sent 📖 Usage Guide Starting a Lead Generation Campaign Access the form trigger URL Enter your target criteria: Country: Target country Location: City or region Category: Business type (e.g., restaurants) Submit the form to start the process Monitoring Results Yelp Data Sheet:** View scraped business information Trustpilot Sheet:** Review credibility data Gmail Sent Items:** Track outreach emails sent 🔧 Customization Options Modifying Email Templates Edit the "AI Generate Email Content" node to customize: Email tone and style Services mentioned Call-to-action messages Branding elements Adjusting Data Filters Modify rating thresholds Set minimum review counts Add geographic restrictions Filter by business size Scaling the Workflow Increase batch sizes Add delays between requests Use parallel processing Add error handling 🚨 Troubleshooting Common Issues & Solutions 1. BrightData Connection Failed Cause: Invalid API credentials or dataset access Solution: Verify credentials and dataset permissions 2. No Data Extracted Cause: Invalid location or changed page structure Solution: Verify location names and test other categories 3. Gmail Authentication Issues Cause: Expired OAuth tokens Solution: Re-authenticate and check permissions 4. AI Model Errors Cause: API quota exceeded or invalid keys Solution: Check usage limits and API key Performance Optimization Rate Limiting:** Add delays Error Handling:** Retry failed requests Data Validation:** Check for malformed data Memory Management:** Process in smaller batches 📈 Use Cases & Examples 1. Digital Marketing Agency Lead Generation Goal:** Find businesses needing marketing Target:** Restaurants, retail stores Approach:** Focus on good-rated but low-online-presence businesses 2. B2B Sales Prospecting Goal:** Find software solution clients Target:** Growing businesses Approach:** Focus on recent positive reviews 3. Partnership Development Goal:** Find complementary businesses Target:** Established businesses Approach:** Focus on reputation and satisfaction scores ⚡ Performance & Limits Expected Performance Processing Time:** 5–10 minutes/location Data Accuracy:** 90%+ Success Rate:** 85%+ Daily Capacity:** 100–500 leads Resource Usage API Calls:** ~10–20 per business Storage:** Minimal (Google Sheets) Execution Time:** 3–8 minutes/10 businesses Network Usage:** ~5–10MB/business 🤝 Support & Community Getting Help n8n Community Forum:** community.n8n.io Docs:** docs.n8n.io BrightData Support:** Via dashboard Contributing Share improvements Report issues and suggestions Create industry-specific variations Document best practices > 🔒 Privacy & Compliance: Ensure GDPR/CCPA compliance. Always respect robots.txt and terms of service of scraped sites. 🎯 Ready to Generate Leads! This workflow provides a complete solution for automated lead generation and outreach. Customize it to fit your needs and start building your pipeline today! For any questions or support, please contact: 📧 info@incrementors.com or fill out this form: Contact Us
by Ranjan Dailata
Who this is for? The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate. This workflow is tailored for: Marketing Professionals**: Seeking to generate compelling company narratives for campaigns. Sales Teams**: Aiming to understand potential clients through summarized company insights. Content Creators**: Looking to craft stories or articles based on company data. Recruiters**: Interested in obtaining concise overviews of companies for talent acquisition strategies. What problem is this workflow solving? Manually gathering and summarizing company information from LinkedIn can be time-consuming and inconsistent. This workflow automates the process, ensuring: Efficiency**: Quick extraction and summarization of company data. Consistency**: Standardized summaries for uniformity across use cases. Scalability**: Ability to process multiple companies without additional manual effort. What this workflow does The workflow performs the following steps: Input Acquisition**: Receives a company's name or LinkedIn URL as input. Data Extraction**: Utilizes Bright Data to scrape the company's LinkedIn profile. Information Parsing**: Processes the extracted HTML content to retrieve relevant company details. Summarization**: Employs Google Gemini AI to generate a concise company story. Output Delivery**: Sends the summarized content to a specified webhook or email address. Setup Sign up at Bright Data. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy). Update the LinkedIn URL by navigating to the Set LinkedIn URL node. Update the Webhook HTTP Request node with the Webhook endpoint of your choice. How to customize this workflow to your needs Input Variations: Modify the **Set LinkedIn URL node to accept a different company LinkedIn URL. Data Points**: Adjust the HTML Data Extractor Node to retrieve additional details like employee count, industry, or headquarters location. Summarization Style**: Customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points). Output Destinations**: Configure the output node to send summaries to various platforms, such as Slack, CRM systems, or databases.