by Incrementors
# Wikipedia to LinkedIn AI Content Poster with Image via Bright Data

## Overview
**Workflow Description:** Automatically scrapes Wikipedia articles, generates AI-powered LinkedIn summaries with custom images, and posts professional content to LinkedIn using Bright Data extraction and intelligent content optimization.

## How It Works
The workflow follows these simple steps:
1. **Article Input:** User submits a Wikipedia article name through a simple form interface.
2. **Data Extraction:** Bright Data scrapes the Wikipedia article content, including the title and full text.
3. **AI Summarization:** Advanced AI models (OpenAI GPT-4 or Claude) create professional, LinkedIn-optimized summaries under 2,000 characters.
4. **Image Generation:** Ideogram AI creates relevant visual content based on the article summary.
5. **LinkedIn Publishing:** Automatically posts the summary with the generated image to your LinkedIn profile.
6. **URL Generation:** Provides a shareable LinkedIn post URL for easy access and sharing.

## Setup Requirements
**Estimated Setup Time:** 10-15 minutes

### Prerequisites
- n8n instance (self-hosted or cloud)
- Bright Data account with Wikipedia dataset access
- OpenAI API account (for GPT-4 access)
- Anthropic API account (for Claude access; optional)
- Ideogram AI account (for image generation)
- LinkedIn account with API access

## Configuration Steps

### Step 1: Import Workflow
1. Copy the provided JSON workflow file.
2. In n8n, navigate to Workflows → + Add workflow → Import from JSON.
3. Paste the JSON content and click Import.
4. Save the workflow with a descriptive name.

### Step 2: Configure API Credentials

**Bright Data Setup**
1. Go to Credentials → + Add credential → Bright Data API.
2. Enter your Bright Data API token.
3. Replace `BRIGHT_DATA_API_KEY` in all HTTP Request nodes.
4. Test the connection to ensure access.

**OpenAI Setup**
1. Configure OpenAI credentials in n8n.
2. Ensure GPT-4 model access.
3. Link credentials to the "OpenAI Chat Model" node.
4. Test API connectivity.

**Ideogram AI Setup**
1. Obtain an Ideogram AI API key.
2. Replace `IDEOGRAM_API_KEY` in the "Image Generate" node.
3. Configure image generation parameters.
4. Test image generation functionality.

**LinkedIn Setup**
1. Set up LinkedIn OAuth2 credentials in n8n.
2. Replace `LINKEDIN_PROFILE_ID` with your profile ID.
3. Configure posting permissions.
4. Test posting functionality.

### Step 3: Configure Workflow Parameters
Update node settings:
- **Form Trigger:** Customize the form title and field labels as needed.
- **AI Agent:** Adjust the system message for different content styles.
- **Image Generate:** Modify image resolution and rendering speed settings.
- **LinkedIn Post:** Configure additional fields such as hashtags or mentions.

### Step 4: Test the Workflow
Testing recommendations:
1. Start with a simple Wikipedia article (e.g., "Artificial Intelligence").
2. Monitor each node execution for errors.
3. Verify the generated summary quality.
4. Check image generation and LinkedIn posting.
5. Confirm the final LinkedIn URL generation.

## Usage Instructions

### Running the Workflow
1. **Access the Form:** Use the generated webhook URL to access the submission form.
2. **Enter Article Name:** Type the exact Wikipedia article title you want to process.
3. **Submit Request:** Click submit to start the automated process.
4. **Monitor Progress:** Check the n8n execution log for real-time progress.
5. **View Results:** The workflow returns a LinkedIn post URL upon completion.

### Expected Output
**Content Summary**
- Professional, LinkedIn-optimized text
- Under 2,000 characters
- Engaging and informative tone
- Bullet points for readability

**Generated Image**
- High-quality AI-generated visual
- 1280x704 resolution
- Relevant to article content
- Professional appearance

**LinkedIn Post**
- Published to your LinkedIn profile
- Includes both text and image
- Shareable public URL
- Professional formatting

## Customization Options

### Content Personalization
- **AI Prompts:** Modify the system message in the AI Agent node to change the writing style.
- **Character Limits:** Adjust summary length requirements.
- **Tone Settings:** Change from professional to casual or technical.
- **Hashtag Integration:** Add relevant hashtags to LinkedIn posts.

### Visual Customization
- **Image Style:** Modify Ideogram prompts for different visual styles.
- **Resolution:** Change image dimensions based on LinkedIn requirements.
- **Rendering Speed:** Balance between speed and quality.
- **Brand Elements:** Include company logos or brand colors.

## Troubleshooting

### Common Issues & Solutions
**Bright Data Connection Issues**
- Verify the API key is correctly configured.
- Check dataset access permissions.
- Ensure sufficient API credits.
- Validate that the Wikipedia article exists.

**AI Processing Errors**
- Check OpenAI API quotas and limits.
- Verify model access permissions.
- Review input text length and format.
- Test with simpler article content.

**Image Generation Failures**
- Validate the Ideogram API key.
- Check image prompt content.
- Verify API usage limits.
- Test with shorter prompts.

**LinkedIn Posting Issues**
- Re-authenticate LinkedIn OAuth.
- Check posting permissions.
- Verify profile ID configuration.
- Test with shorter content.

## Performance & Limitations

### Expected Processing Times
- **Wikipedia Scraping:** 30-60 seconds
- **AI Summarization:** 15-30 seconds
- **Image Generation:** 45-90 seconds
- **LinkedIn Posting:** 10-15 seconds
- **Total Workflow:** 2-4 minutes per article

### Usage Recommendations
Best practices:
- Use well-known Wikipedia articles for better results.
- Monitor API usage across all services.
- Test content quality before bulk processing.
- Respect LinkedIn posting frequency limits.
- Keep a backup of successful configurations.

## Use Cases
- **Educational Content:** Create engaging educational posts from Wikipedia articles on science, history, or technology topics.
- **Thought Leadership:** Transform complex topics into accessible LinkedIn content to establish industry expertise.
- **Content Marketing:** Generate regular, informative posts to maintain an active LinkedIn presence with minimal effort.
- **Research Sharing:** Quickly summarize and share research findings or scientific discoveries with your network.

## Conclusion
This workflow provides a powerful, automated solution for creating professional LinkedIn content from Wikipedia articles. By combining web scraping, AI summarization, image generation, and social media posting, you can maintain an active and engaging LinkedIn presence with minimal manual effort.

The workflow is designed to be flexible and customizable, allowing you to adapt the content style, visual elements, and posting frequency to match your professional brand and audience preferences.

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
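The 2,000-character limit mentioned above can be enforced defensively before the LinkedIn Post node runs. This is a minimal sketch in n8n Code-node-style JavaScript, assuming the limit value and the sentence-boundary heuristic; it is not part of the original workflow JSON.

```javascript
// Minimal sketch (assumption): trim an AI summary to LinkedIn's character
// limit, preferring to end on a sentence boundary so the post never stops
// mid-sentence. The 2000 limit mirrors the summary requirement above.
const LINKEDIN_LIMIT = 2000;

function fitToLinkedIn(summary, limit = LINKEDIN_LIMIT) {
  if (summary.length <= limit) return summary;
  const cut = summary.slice(0, limit);
  // Prefer a sentence boundary; fall back to a word boundary with an ellipsis.
  const lastSentence = Math.max(
    cut.lastIndexOf(". "),
    cut.lastIndexOf("! "),
    cut.lastIndexOf("? ")
  );
  if (lastSentence > limit * 0.5) return cut.slice(0, lastSentence + 1);
  return cut.slice(0, cut.lastIndexOf(" ")) + "…";
}

console.log(fitToLinkedIn("A short post.").length);
console.log(fitToLinkedIn("word ".repeat(600)).length); // always <= 2000
```

In the workflow this would sit between the AI summarization and LinkedIn Post nodes as a safety net, since models occasionally overshoot a requested length.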
by Muhammad Ali
## Who's it for
Perfect for marketing agencies that manage multiple Facebook ad accounts and want to automate their weekly reporting. It eliminates manual data collection, analysis, and client updates by delivering a ready-to-share PDF report.

## How it works
Every Monday, the workflow:
1. Fetches the previous week's campaign metrics from the Facebook Graph API.
2. Formats and summarizes each campaign's performance using OpenAI.
3. Merges all summaries into one comprehensive report with insights and next-week suggestions.
4. Converts the report into a polished PDF using any PDF generation API.
5. Sends the final PDF report automatically to the client via Gmail.

## How to set up
1. Connect your Facebook, OpenAI, and Gmail accounts in n8n.
2. Add credentials for your preferred PDF generator (e.g., PDFCrowd, Placid, etc.).
3. Open the "Set" node to customize the recipient email, date range, or report text.

## Requirements
- Facebook Graph API access token
- OpenAI API key
- Gmail credentials
- API key for your PDF generation service

## How to customize
You can modify the trigger day, personalize the report design, or include additional analytics such as ROAS, CPC, or conversion data for deeper insights.
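The "previous week" date range that the Monday run passes to the Graph API can be computed in a small Code node. A sketch, assuming a Monday-through-Sunday reporting week and the `since`/`until` shape that the Graph API insights `time_range` parameter uses:

```javascript
// Sketch (assumption): compute last week's Monday-Sunday range in UTC for
// use as a Graph API insights time_range. The week definition is a choice —
// adjust if your reports run on a different boundary.
function previousWeekRange(runDate) {
  const d = new Date(runDate + "T00:00:00Z");
  const dow = d.getUTCDay() === 0 ? 7 : d.getUTCDay(); // Monday=1 … Sunday=7
  const lastSunday = new Date(d);
  lastSunday.setUTCDate(d.getUTCDate() - dow);         // Sunday before this week
  const lastMonday = new Date(lastSunday);
  lastMonday.setUTCDate(lastSunday.getUTCDate() - 6);
  const iso = (x) => x.toISOString().slice(0, 10);
  return { since: iso(lastMonday), until: iso(lastSunday) };
}

// A Monday run on 2024-06-10 reports on the week of June 3-9.
console.log(previousWeekRange("2024-06-10"));
```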
by Cheng Siong Chin
## Introduction
Automates stock market analysis using multiple AI models to predict trends, analyze sentiment, and generate consensus-based investment insights. Built for traders and analysts seeking data-driven forecasts: it eliminates manual research and combines multiple AI perspectives for more accurate predictions.

## How It Works
Daily trigger fetches stock data, news, ratings, and sentiment → AI models analyze each source → OpenAI generates a report → Three AI validators (OpenAI, Anthropic, Gemini) cross-verify → Consensus evaluation → Telegram alert with insights.

## Workflow Template
Schedule → Fetch Stock Data → Fetch News → Fetch Ratings → Fetch Sentiment → AI Analysis → Combine → Generate Report (GPT) → Validate (3 AIs) → Evaluate Consensus → Send Telegram

## Workflow Steps
1. **Data Collection:** Scheduled trigger fetches prices, news, analyst ratings, and social trends.
2. **AI Analysis:** Separate models analyze stocks, news sentiment, ratings, and social discussions.
3. **Report Generation:** OpenAI GPT combines the analyses into a comprehensive market report.
4. **Multi-AI Validation:** Three AI models independently validate predictions for accuracy.
5. **Consensus Building:** Evaluates AI agreement to determine confidence levels.
6. **Alert Delivery:** Sends Telegram alerts with buy/sell/hold recommendations.

## Setup Instructions
1. **Schedule:** Configure the daily trigger time.
2. **Data Sources:** Add API keys for stock data, news APIs, and social platforms.
3. **AI Models:** Configure OpenAI, Anthropic, and Google Gemini credentials.
4. **Telegram:** Create a bot and add its token.
5. **Thresholds:** Define consensus requirements for recommendations.

## Prerequisites
- Stock data API (Alpha Vantage, Yahoo Finance)
- News API key
- Social media API
- OpenAI API key
- Anthropic API key
- Google Gemini API key
- Telegram bot token

## Use Cases
- **Day Trading:** Real-time analysis of volatile stocks with multiple AI perspectives.
- **Portfolio Management:** Daily consensus reports for rebalancing.

## Customization
Add technical indicators (RSI, MACD). Include crypto analysis. Integrate portfolio tracking. Add email/Slack notifications. Configure sector-specific analysis.

## Benefits
Eliminates hours of daily research. Reduces AI hallucination through multi-model validation. Provides 24/7 monitoring. Combines multiple data sources.
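The "Evaluate Consensus" step can be expressed as a small function. A sketch under stated assumptions — a two-thirds agreement threshold and `buy`/`sell`/`hold` labels, both of which you would tune in the Thresholds setup step:

```javascript
// Sketch (assumption): require a minimum fraction of validators to agree
// before emitting a directional recommendation; otherwise stay neutral.
function evaluateConsensus(verdicts, threshold = 2 / 3) {
  const counts = {};
  for (const v of verdicts) counts[v] = (counts[v] || 0) + 1;
  // Most common verdict and its share of the vote.
  const [top, n] = Object.entries(counts).sort((a, b) => b[1] - a[1])[0];
  const agreement = n / verdicts.length;
  return agreement >= threshold
    ? { recommendation: top, confidence: agreement }
    : { recommendation: "hold", confidence: agreement }; // no consensus → neutral
}

console.log(evaluateConsensus(["buy", "buy", "sell"])); // 2 of 3 agree on "buy"
console.log(evaluateConsensus(["buy", "sell", "hold"])); // split → "hold"
```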
by Abdullah Alshiekh
## Automated Customer Rewards Platform: Jotform Integration
This blueprint details a highly efficient, AI-powered workflow designed to automate customer reward fulfillment. Leveraging the accessible interface of Jotform, this system delivers superior reliability and exceptional processing speed.

### Reliability, Productivity, and Performance
This workflow is engineered to maximize operational efficiency and maintain data integrity:
- **Instant Fulfillment:** Automation handles receipt scanning (OCR), AI calculation, logging, and notification in seconds, eliminating manual delays.
- **Seamless Data Capture:** Leverages the user-friendly Jotform interface for fast, reliable customer submission and file uploads.

### Quick Configuration Guide
1. **Jotform Webhook:** In your Jotform settings, paste the n8n **Jotform Trigger URL** into the Webhook Integration. Done.
2. **API Access:** Generate a **"Full Access"** Jotform API key and insert it into the required n8n nodes (Jotform Trigger and Fetch All Receipts).
3. **Credential Setup:** Plug in your necessary API keys (Gemini, OCR.Space) and update the Notion Database ID and internal email recipient.

### How It Works (Practical Flow)
1. **Submission:** Customer submits their request via **Jotform**.
2. **Processing:** The system extracts text from the receipt (OCR), the AI calculates the reward, and the If node verifies the total.
3. **Fulfillment:** The transaction is logged, and confirmation emails are sent to both the customer and the internal team.

If you need any help, Get in Touch.
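The OCR-to-verification step can be sketched as two small helpers. Everything here is an illustrative assumption — the regex, the 5% reward rate, and the minimum spend are not from the original workflow, which delegates the calculation to the AI:

```javascript
// Sketch (assumptions): pull a total out of OCR'd receipt text, then compute
// a reward with a simple rate and a minimum-spend gate (the If-node check).
function parseReceiptTotal(ocrText) {
  // Take the last "Total ... 123.45"-style amount as the receipt total.
  const matches = [...ocrText.matchAll(/total[^0-9]*([0-9]+(?:\.[0-9]{2})?)/gi)];
  return matches.length ? parseFloat(matches[matches.length - 1][1]) : null;
}

function rewardFor(total, rate = 0.05, minimumSpend = 10) {
  if (total === null || total < minimumSpend) return 0; // If-node "false" branch
  return Math.round(total * rate * 100) / 100;
}

const total = parseReceiptTotal("Items...\nSubtotal: $47.10\nTotal: $50.00");
console.log(total, rewardFor(total));
```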
by Rahul Joshi
## Description
Automatically qualify and route new leads from a Google Sheet into your CRM with AI-powered scoring and instant sales notifications. Turn raw form submissions into prioritized opportunities, effortlessly.

## What This Template Does
- Monitors a Google Sheet for new form submissions.
- Uses Azure OpenAI (GPT-4o-mini) to analyze lead details (value, stage, company) and generate action items.
- Parses the AI response into clean JSON for structured processing.
- Saves qualified lead data and AI-generated action items into a Lead Status sheet for tracking.
- Categorizes leads into Hot, Warm, or Cold based on AI scoring.
- Creates/updates the contact in HighLevel CRM.
- Sends an email notification to the assigned sales rep with lead details and priority.

## Key Benefits
- Save time with automated lead qualification instead of manual checks.
- Ensure consistent Hot/Warm/Cold scoring across all leads.
- Centralize lead data in both Google Sheets and the CRM for tracking.
- Keep sales teams aligned with instant notifications.
- Fully no-code, configurable, and customizable for your business logic.

## Features
- Google Sheets Trigger for new form rows.
- AI Agent with Azure OpenAI (GPT-4o-mini) for lead scoring.
- JSON parsing node to clean AI output.
- Lead logging to the "Lead Status" sheet.
- Function node to categorize leads by score.
- CRM sync with HighLevel to update/create contact records.
- SMTP email notification to sales reps.

## Requirements
- n8n instance (cloud or self-hosted).
- Google Sheet with headers: Lead Name, Lead Email, Lead Contact No., Company Name, Opportunity Value, Stage of Lead; shared with the n8n Google account.
- Azure OpenAI access with a GPT-4o-mini deployment.
- HighLevel CRM account connected via OAuth.
- SMTP email account configured in n8n.

## Target Audience
- Sales teams handling inbound leads.
- Agencies managing multiple client pipelines.
- Founders/startups wanting quick qualification and CRM sync.
- Ops teams needing reliable reporting of lead qualification.

## Step-by-Step Setup Instructions (Concise)
1. Create a Google Sheet with the required headers; share it with the n8n account.
2. Configure the Google Sheets Trigger with the sheet's Document ID.
3. Connect your Azure OpenAI credentials and link them to the AI Agent node.
4. Assign your HighLevel CRM account credentials.
5. Set up SMTP credentials for the email send node.
6. Import the workflow, update node configs, and run a test submission.

## Security Best Practices
- Share Google Sheets only with the n8n Google account (Editor).
- Keep API keys and credentials encrypted in n8n, not hardcoded.
- Validate AI outputs before saving to the CRM (via the parse node).
- Regularly back up your Lead Status sheet and CRM data.
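The "Function node to categorize leads by score" can be sketched in a few lines. The 0-100 scale and the 75/40 thresholds are assumptions — tune them to your own qualification logic:

```javascript
// Sketch (assumptions): map an AI lead score (assumed 0-100) to the
// Hot/Warm/Cold categories used by this template. Non-numeric scores
// fail safe to Cold so a bad AI response never becomes a "Hot" lead.
function categorizeLead(score) {
  if (typeof score !== "number" || Number.isNaN(score)) return "Cold";
  if (score >= 75) return "Hot";
  if (score >= 40) return "Warm";
  return "Cold";
}

// In an n8n Code node this would run per item, e.g.:
// return items.map(i => ({ json: { ...i.json, category: categorizeLead(i.json.score) } }));
console.log(categorizeLead(82), categorizeLead(55), categorizeLead(10));
```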
by Bhuvanesh R
The competitive edge, delivered. This Customer Intelligence Engine simultaneously analyzes the web, Reddit, and X/Twitter to generate a professional, actionable executive briefing.

## Problem Statement
Traditional market research for Customer Intelligence (CI) is manual, slow, and often relies on surface-level social media scraping or expensive external reports. Service companies, like HVAC providers, struggle to efficiently synthesize vast volumes of online feedback (Reddit discussions, real-time tweets, web articles) to accurately diagnose systemic service gaps (e.g., scheduling friction, poor automated systems). This inefficiency leads to delayed strategic responses and missed opportunities to invest in high-impact solutions like AI voice agents.

## Solution
This workflow deploys a sophisticated multisource intelligence pipeline that runs on a scheduled or ad-hoc basis. It uses parallel processing to ingest data from three distinct source types (SERP API, Reddit, and X/Twitter), employs a zero-cost hybrid categorization method to semantically identify operational bottlenecks, and uses the Anthropic LLM to synthesize the findings into a clear, executive-ready strategic brief. The data is logged for historical analysis while the brief is dispatched for immediate action.

## How It Works (Multi-Step Execution)

### 1. Ingestion and Parallel Processing (The Data Fabric)
- **Trigger:** The workflow is initiated either on an ad-hoc basis via an n8n Form Trigger or on a schedule (Time Trigger).
- **Parallel Ingestion:** The workflow immediately splits into three parallel branches to fetch data simultaneously:
  - SERP API: Captures authoritative content and industry commentary (strategic context).
  - Reddit (looping structure): Fetches posts from multiple subreddits via an Aggregate node workaround to get authentic user experiences (qualitative signal).
  - X/Twitter (HTTP Request): Bypasses standard rate limits to capture real-time social complaints (sentiment signal).

### 2. Analysis and Fusion (The Intelligence Layer)
- **Cleanup and Labeling (Function nodes):** Each branch uses dedicated Function nodes to filter noise (e.g., low-score posts) and normalize the data by adding a source tag (e.g., 'Reddit').
- **Merge:** A Merge node (Append mode) fuses all three parallel streams into a single, unified dataset.
- **Hybrid Categorization (Function node):** A single Function node applies the hybrid categorization logic. This cost-free step semantically assigns a `pain_point` category (e.g., 'Call Hold/Availability') and a `sentiment_score` to every item, transforming raw text into labeled metrics.

### 3. Dispatch and Reporting (The Executive Output)
- **Aggregation and Split (Function node):** The final Function node calculates the total counts, deduplicates the final results, and generates the comprehensive `summaryString`.
- **Data Logging:** The aggregated counts and metrics are appended to **Google Sheets** for historical logging.
- **LLM Input Retrieval (Function node):** A final Function node retrieves the summary data using the `$items()` helper (the serial-route workaround).
- **AI Briefing:** The **Message a model** (Anthropic) node receives the `summaryString` and uses a strict HTML system prompt to synthesize the strategic brief, identifying the top pain points and suggesting AI features.
- **Delivery:** The **Gmail** node sends the final, professional HTML brief to the executive team.

## Setup Steps

### Credentials
- **Anthropic:** Configure credentials for the language model (Claude) used in the Message a model node.
- **SERP API, Reddit, and X/Twitter:** Configure API keys/credentials for the data ingestion nodes.
- **Google Services:** Set up OAuth2 credentials for Google Sheets (for logging data) and Gmail (for email dispatch).

### Configuration
- **Form Configuration:** If using the Form Trigger, ensure the Target Keywords and Target Subreddits are mapped correctly to the ingestion nodes.
- **Data Integrity:** Due to the serial route, ensure the Function (Get LLM Summary) node correctly retrieves the `LLM_SUMMARY_HOLDER` field from the preceding node's output memory.

## Benefits
- **Proactive CI & Strategy:** Shifts market research from manual, reactive browsing to a proactive, scheduled data diagnostic.
- **Cost Efficiency:** Uses a zero-cost hybrid categorization method (Function node) for intent analysis, avoiding expensive per-item LLM token costs.
- **Actionable Output:** Delivers a fully synthesized, HTML-formatted executive brief, ready for immediate presentation and strategic sales positioning.
- **High Reliability:** Employs parallel ingestion, API workarounds, and serial routing to ensure the complex workflow runs consistently and without failure.
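A hybrid categorization Function node of the kind described above can be sketched as keyword rules plus a tiny sentiment lexicon. The categories, keywords, and lexicon below are illustrative assumptions, not the template's actual rule set:

```javascript
// Sketch (assumptions): zero-cost categorization — keyword rules assign a
// pain_point, a small lexicon produces a rough sentiment_score. No LLM calls.
const CATEGORIES = {
  "Call Hold/Availability": ["on hold", "no answer", "voicemail", "can't reach"],
  "Scheduling Friction": ["reschedule", "no-show", "appointment", "window"],
  "Pricing Concerns": ["overcharged", "quote", "expensive", "hidden fee"],
};
const NEGATIVE = ["terrible", "worst", "rude", "never", "waste"];
const POSITIVE = ["great", "fast", "friendly", "recommend", "excellent"];

function categorize(item) {
  const text = item.text.toLowerCase();
  let pain_point = "Uncategorized";
  for (const [cat, words] of Object.entries(CATEGORIES)) {
    if (words.some((w) => text.includes(w))) { pain_point = cat; break; }
  }
  const score =
    POSITIVE.filter((w) => text.includes(w)).length -
    NEGATIVE.filter((w) => text.includes(w)).length;
  return { ...item, pain_point, sentiment_score: score };
}

const out = categorize({
  source: "Reddit",
  text: "Terrible - left on hold for an hour, never called back.",
});
console.log(out.pain_point, out.sentiment_score);
```

This is the trade-off the "zero-cost" claim rests on: rule-based labeling is free and deterministic, at the price of missing phrasings the keyword lists don't anticipate.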
by Cheng Siong Chin
## Introduction
Automate price monitoring for e-commerce competitors, ideal for retailers, analysts, and pricing teams. Scrapes competitor sites, extracts pricing/stock data via AI, detects changes, and sends instant alerts for dynamic pricing strategies.

## How It Works
Scrapes competitor URLs via Firecrawl and Apify, extracts data with AI, detects price/stock changes, logs to Google Sheets, and sends Telegram alerts.

## Workflow Template
Trigger → Scrape URL → AI Extract → Parse → Merge Historical → Detect Changes → Update Sheets + Send Telegram Alert

## Workflow Steps
1. **Trigger & Scrape:** Manual/scheduled trigger → Firecrawl + Apify fetch competitor data.
2. **AI Processing:** Claude extracts product details → parses and structures the data.
3. **Change Detection:** Reads historical prices → merges with current data → identifies updates.
4. **Output:** Logs alerts to Sheets → updates historical data → sends a Telegram notification.

## Setup Instructions
1. **Firecrawl API:** Get a key from the dashboard → add to n8n.
2. **Apify API:** Get a key from the console → add to n8n → configure actors.
3. **AI Model (Claude/OpenAI):** Get an API key → add to n8n.
4. **Google Sheets OAuth2:** Create OAuth2 credentials in Google Cloud Console → authorize in n8n → enable the API.
5. **Telegram Bot:** Create via BotFather → get the token & chat ID → add to n8n.
6. **Spreadsheet Setup:** Create a Sheet with the required columns → copy its ID → paste into the workflow.

## Prerequisites
Self-hosted n8n, Firecrawl account, Apify account, Claude/OpenAI API key, Google account (Sheets OAuth2), Telegram bot

## Customization
Add more URLs, adjust scraping intervals, change detection thresholds, switch to Slack/email alerts, integrate databases.

## Benefits
Saves 2+ hours daily, real-time tracking, automated alerts, historical analysis, multi-source scraping.
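The "Detect Changes" step above boils down to diffing the fresh scrape against the last logged snapshot. A sketch, where the `sku`/`price`/`inStock` field names and the 1% threshold are assumptions you would align with your sheet columns:

```javascript
// Sketch (assumptions): compare current vs. historical products, flagging
// price moves above a percentage threshold and stock availability flips.
function detectChanges(current, historical, minPctChange = 1) {
  const prev = new Map(historical.map((p) => [p.sku, p]));
  const alerts = [];
  for (const item of current) {
    const old = prev.get(item.sku);
    if (!old) continue; // first sighting — nothing to compare against yet
    const pct = ((item.price - old.price) / old.price) * 100;
    if (Math.abs(pct) >= minPctChange) {
      alerts.push({
        sku: item.sku, type: "price",
        from: old.price, to: item.price,
        pct: Math.round(pct * 10) / 10,
      });
    }
    if (item.inStock !== old.inStock) {
      alerts.push({ sku: item.sku, type: "stock", inStock: item.inStock });
    }
  }
  return alerts;
}

const alerts = detectChanges(
  [{ sku: "A1", price: 95, inStock: false }],
  [{ sku: "A1", price: 100, inStock: true }]
);
console.log(alerts); // one price alert (-5%) and one stock alert
```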
by Gracewell
## Who Is This For?
This workflow is designed for educators, universities, examination departments, and EdTech institutions that need a faster, smarter, and standardized way to prepare exam question papers.

## What Problem Does This Solve?
Creating balanced, outcome-based question papers can take hours or even days of manual effort. Faculty often struggle to:
- Ensure syllabus coverage across units
- Maintain Bloom's Taxonomy alignment
- Keep a consistent difficulty balance
- Format papers in institution-specific templates

## How it works
This workflow automatically generates an exam question paper based on syllabus topics submitted via a form and sends it to the entered email address. Here's the flow in simple steps:
1. **Form Submission:** A student or faculty member fills out a form with the subject code, syllabus topics, and their email.
2. **AI Question Generation:** The workflow passes the syllabus to AI agents (Part A with 2 marks, Part B with 13 marks, and Part C with 14 marks) to create question sets. The marks and the number of questions generated can be customized as needed.
3. **Merging Questions:** All AI-generated questions are combined into a single structured document.
4. **Format into HTML:** The questions are formatted into a clean HTML exam paper (this can also be extended to PDF).
5. **Send by Email:** The formatted exam paper is sent to the user's email (with the option to CC/BCC).

## Set up steps
1. **Connect Accounts**
   - Connect your OpenAI (or LLM) credentials for AI-powered question generation.
   - Connect your Gmail (or preferred email service) to send emails.
2. **Prepare Form**
   - Create an n8n form trigger with the required fields: Subject with Code; Syllabus for Units 1, 2, 3…; Email to receive the paper.
3. **Customize Question Generation**
   - Modify the AI prompts for Parts A, B, and C to fit your syllabus style (e.g., 2-mark, 13-mark, 14-mark).
4. **Format the Exam Paper**
   - Adjust the HTML template to match your institution's exam paper layout.
5. **Test & Deploy**
   - Submit a test form entry.
   - Check the received email to ensure the formatting looks good.
   - Deploy the workflow to production for real usage.

Need help customizing? Contact Me | LinkedIn
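The merge-and-format steps can be sketched as one function that takes the three AI-generated question sets and emits the exam-paper HTML. The section labels and document structure below are illustrative assumptions — the real template would match your institution's layout:

```javascript
// Sketch (assumptions): merge Part A/B/C question sets into a single HTML
// exam paper. Question text is escaped so AI output can't break the markup.
function buildExamPaperHtml(subject, parts) {
  const esc = (s) =>
    s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  const sections = parts
    .map(
      (p) =>
        `<h2>Part ${esc(p.name)} (${p.marks} marks each)</h2>\n<ol>` +
        p.questions.map((q) => `<li>${esc(q)}</li>`).join("") +
        "</ol>"
    )
    .join("\n");
  return `<html><body><h1>${esc(subject)}</h1>\n${sections}</body></html>`;
}

const html = buildExamPaperHtml("CS101 - Data Structures", [
  { name: "A", marks: 2, questions: ["Define a stack.", "What is a queue?"] },
  { name: "B", marks: 13, questions: ["Explain AVL tree rotations with examples."] },
]);
console.log(html.includes("Part A (2 marks each)"));
```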
by Meelioo
## How it works
This beginner-friendly workflow demonstrates the core building blocks of n8n. It guides you through:
- **Triggers:** Start workflows manually, on a schedule, via webhooks, or through chat.
- **Data processing:** Use Set and Code nodes to create, transform, and enrich data.
- **Logic and branching:** Apply conditions with IF nodes and merge different branches back together.
- **API integrations:** Fetch external data (e.g., users from an API), split arrays into individual items, and extract useful fields.
- **AI-powered steps:** Connect to OpenAI for generating fun facts or build interactive assistants with chat triggers, memory, and tools.
- **Responses:** Return structured results via webhooks or summary nodes.

By the end, it demonstrates a full flow: creating data → transforming it → making decisions → calling APIs → using AI → responding with outputs.

## Set up steps
**Time required:** 5-10 minutes.

**What you need:**
- An n8n instance (cloud or self-hosted).
- Optional: API credentials (e.g., OpenAI) if you want to test AI features.

**Setup flow:**
1. Import this workflow.
2. Add your API keys where needed (OpenAI, etc.).
3. Trigger the workflow manually or test with webhooks.

> Detailed node explanations and examples are already included as sticky notes inside the workflow itself, so you can learn step by step as you explore.
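The "split arrays into individual items, and extract useful fields" pattern mentioned above looks like this in plain JavaScript, shaped the way an n8n Code node expects its output. The API response and field names are made-up examples:

```javascript
// Sketch (assumed data): turn one API response containing an array of users
// into one n8n item per user, keeping only the fields downstream nodes need.
const apiResponse = {
  users: [
    { id: 1, name: "Ada", email: "ada@example.com", internalFlags: { beta: true } },
    { id: 2, name: "Grace", email: "grace@example.com", internalFlags: { beta: false } },
  ],
};

// In an n8n Code node you would `return` this array of { json } objects.
const items = apiResponse.users.map((u) => ({
  json: { id: u.id, name: u.name, email: u.email }, // drop internal noise
}));

console.log(items.length, items[0].json.name);
```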
by Lucas Peyrin
## How it works
This workflow creates a sophisticated, self-improving customer support system that automatically handles incoming emails. It's designed to answer common questions using an AI-powered knowledge base and, crucially, to learn from human experts when new or complex questions arise, continuously expanding its capabilities.

Think of it like having an AI assistant with a smart memory and a human mentor. Here's the step-by-step process:
1. **New Email Received:** The workflow is triggered whenever a new email arrives in your designated support inbox (via Gmail).
2. **Classify Request:** An AI model (Google Gemini 2.5 Flash Lite) first classifies the incoming email to ensure it's a genuine support request, filtering out irrelevant messages.
3. **Retrieve Knowledge Base:** The workflow fetches all existing question-and-answer pairs from your dedicated Google Sheet knowledge base.
4. **AI Answer Attempt:** A powerful AI model (Google Gemini 2.5 Pro) analyzes the customer's email against the entire knowledge base. It attempts to find a highly relevant answer and drafts a complete HTML email response if successful.
5. **Decision Point:** An IF node checks if the AI found a confident answer.
   - **If Answer Found:** The AI-generated HTML response is immediately sent back to the customer via Gmail.
   - **If No Answer Found (Human-in-the-Loop):**
     - **Escalate to Human:** The customer's summarized question and original email are forwarded to a human expert (you or your team) via Gmail, requesting their assistance.
     - **Human Reply & AI Learning:** The workflow waits for the human expert's reply. Once received, another AI model (Google Gemini 2.5 Flash) processes both the original customer question and the expert's reply to distill them into a new, generic, and reusable question/answer pair.
     - **Update Knowledge Base:** This newly created Q&A pair is then automatically added as a new row to your Google Sheet knowledge base, ensuring the system can answer similar questions automatically in the future.

## Set up steps
**Setup time:** ~10-15 minutes

This workflow requires connecting your Gmail and Google Sheets accounts, and obtaining a Google AI API key. Follow these steps carefully:

1. **Connect Your Gmail Account:**
   - Select the **On New Email Received** node.
   - Click the Credential dropdown and select **+ Create New Credential** to connect your Gmail account. Grant the necessary permissions.
   - Repeat this for the **Send AI Answer** and **Ask Human for Help** nodes, selecting the credential you just created.
2. **Connect Your Google Sheets Account:**
   - Select the **Get Knowledge Base** node.
   - Click the Credential dropdown and select **+ Create New Credential** to connect your Google account. Grant the necessary permissions.
   - Repeat this for the **Add to Knowledge Base** node, selecting the credential you just created.
3. **Set up Your Google Sheet Knowledge Base:**
   - Create a new Google Sheet in your Google Drive.
   - Rename the first sheet (tab) to `QA Database`.
   - In the first row of `QA Database`, add two column headers: `Question` (in cell A1) and `Answer` (in cell B1).
   - Go back to the **Get Knowledge Base** node in n8n. In the Document ID field, select your newly created Google Sheet. Do the same for the **Add to Knowledge Base** node.
4. **Get Your Google AI API Key (for Gemini models):**
   - Visit Google AI Studio at aistudio.google.com/app/apikey.
   - Click "Create API key in new project" and copy the key.
   - In the workflow, go to the **Google Gemini 2.5 Pro** node, click the Credential dropdown, and select **+ Create New Credential**. Paste your key into the API Key field and Save.
   - Repeat this for the **Google Gemini 2.5 Flash Lite** and **Google Gemini 2.5 Flash** nodes, selecting the credential you just created.
5. **Configure Human Expert Email:**
   - Select the **Ask Human for Help** node.
   - In the Send To field, replace the placeholder email address with the actual email address of your human expert (e.g., your own email or a team support email).
6. **Activate the Workflow:** Once all credentials and configurations are set, activate the workflow using the toggle switch at the top right of your n8n canvas.
7. **Start Learning!** Send a test email to the Gmail account connected to the **On New Email Received** node. Observe how the AI responds, or how it escalates to your expert email and then learns from the reply. Check your Google Sheet to see new Q&A pairs being added!
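The IF-node decision point is easiest to reason about if the answering model is asked to return structured JSON. This sketch assumes a `{ found, answer }` schema — an illustrative convention, not the template's actual prompt contract:

```javascript
// Sketch (assumed schema): parse the model's JSON output and decide the
// branch. Unparseable output escalates to the human, never to the customer.
function routeAiAnswer(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { branch: "escalate", reason: "unparseable model output" };
  }
  return parsed.found && parsed.answer
    ? { branch: "send", answerHtml: parsed.answer }
    : { branch: "escalate", reason: "no confident answer" };
}

console.log(routeAiAnswer('{"found": true, "answer": "<p>Reset it here.</p>"}').branch);
console.log(routeAiAnswer('{"found": false}').branch);
```

Failing closed (escalating on anything malformed) is the design choice that keeps the human-in-the-loop guarantee intact even when the model misbehaves.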
by Anshul Chauhan
# Automate Your Life: The Ultimate AI Assistant in Telegram (Powered by Google Gemini)

Transform your Telegram messenger into a powerful, multi-modal personal or team assistant. This n8n workflow creates an intelligent agent that can understand text, voice, images, and documents, and take action by connecting to your favorite tools like Google Calendar, Gmail, Todoist, and more. At its core, a powerful Manager Agent, driven by Google Gemini, interprets your requests, orchestrates a team of specialized sub-agents, and delivers a coherent, final response, all while maintaining a persistent memory of your conversations.

## Key Features
- **Intelligent Automation:** Uses Google Gemini as a central "Manager Agent" to understand complex requests and delegate tasks to the appropriate tool.
- **Multi-Modal Input:** Interact naturally by sending text, voice notes, photos, or documents directly into your Telegram chat.
- **Integrated Toolset:** Comes pre-configured with agents to manage your memory, tasks, emails, calendar, research, and project sheets.
- **Persistent Memory:** Leverages Airtable as a knowledge base, allowing the assistant to save and recall personal details, company information, or past conversations for context-rich interactions.
- **Smart Routing:** Automatically detects the type of message you send and routes it through the correct processing pipeline (e.g., voice is transcribed, images are analyzed).
- **Conversational Context:** Utilizes a window buffer to maintain short-term memory, ensuring follow-up questions and commands are understood within the current conversation.

## How It Works
1. The Telegram Trigger node acts as the entry point, receiving all incoming messages (text, voice, photo, document).
2. A Switch node intelligently routes the message based on its type:
   - **Voice:** The audio file is downloaded and transcribed into text using a voice-to-text service.
   - **Photo:** The image is downloaded, converted to a base64 string, and prepared for visual analysis.
   - **Document:** The file is routed to a document handler that extracts its text content for processing.
   - **Text:** The message is used as-is.
3. A Merge node gathers the processed input into a unified prompt.
4. The Manager Agent receives this prompt. It analyzes the user's intent and orchestrates one or more specialized agents/tools:
   - `memory_base` (Airtable): For saving and retrieving information from your long-term knowledge base.
   - `todo_and_task_manager` (Todoist): To create, assign, or check tasks.
   - `email_agent` (Gmail): To compose, search, or send emails.
   - `calendar_agent` (Google Calendar): To schedule events or check your agenda.
   - `research_agent` (Wikipedia/Web Search): To look up information.
   - `project_management` (Google Sheets): To provide updates on project trackers.
5. After executing the required tasks, the Manager Agent formulates a final response and sends it back to you via the Telegram node.

## Setup Instructions
Follow these steps to get your AI assistant up and running.
1. **Telegram Bot:**
   - Create a new bot using the BotFather in Telegram to get your Bot Token.
   - In the n8n workflow, configure the Telegram Trigger node's webhook.
   - Add your Bot Token to the credentials in all Telegram nodes.
   - For proactive messages, replace the `chatId` placeholders with your personal Telegram Chat ID.
2. **Google Gemini AI:** In the Google Gemini nodes, add your credentials by providing your Google Gemini API key.
3. **Airtable Knowledge Base:**
   - Set up an Airtable base to act as your assistant's long-term memory.
   - In the `memory_base` nodes (Airtable nodes), configure the credentials and provide the Base ID and Table ID.
4. **Google Workspace APIs:**
   - Connect your Google account credentials for Gmail, Google Calendar, and Google Sheets.
   - In the relevant nodes, specify the Document/Sheet IDs you want the assistant to manage.
5. **Connect Other Tools:** Add your credentials for Todoist and any other integrated tool APIs.
6. **Configure Conversational Memory:** This workflow is designed for multi-user support. Verify that the Session Key in the "Window Buffer Memory" nodes is correctly set to a unique user identifier from Telegram (e.g., `{{ $json.chat.id }}`). This ensures conversations from different users are kept separate.
7. **Review Schedule Triggers:** Check any nodes designed to run on a schedule (e.g., "At a regular time"). Adjust their cron expressions, times, and timezone to fit your needs (e.g., for daily summaries).
8. **Test the Workflow:** Activate the workflow. Send a text message to your bot (e.g., "Hello!").

## Estimated Setup Time
- **30-60 minutes:** If you already have your API keys, account credentials, and service IDs (like Sheet IDs) ready.
- **2-3 hours:** For a complete, first-time setup, which includes creating API keys, setting up new spreadsheets or Airtable bases, and configuring detailed permissions.
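The Switch-node routing described in "How It Works" reduces to inspecting which field is present on the incoming Telegram message. A sketch — the `voice`/`photo`/`document`/`text` fields follow the Telegram Bot API message object, while the branch names are assumptions:

```javascript
// Sketch: pick the processing branch from the Telegram message payload.
// Order matters only for malformed payloads; a real message carries one type.
function routeMessage(message) {
  if (message.voice) return "voice";       // → download + transcribe
  if (message.photo) return "photo";       // → download + base64 for vision
  if (message.document) return "document"; // → extract text content
  if (message.text) return "text";         // → use as-is
  return "unsupported";                    // e.g., stickers, location pins
}

console.log(routeMessage({ text: "Hello!" }));
console.log(routeMessage({ voice: { file_id: "abc", duration: 3 } }));
```

The explicit `unsupported` fallback is worth keeping in the real Switch node too, so message types you haven't wired up fail loudly instead of silently reaching the Manager Agent.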
by Will Carlson
## What it does
Collects cybersecurity news from trusted RSS feeds and uses OpenAI's Retrieval-Augmented Generation (RAG) capabilities with Pinecone to filter for content that is directly relevant to your organization's tech stack.

"Relevant" means the AI looks for news items that mention your specific tools, vendors, frameworks, cloud platforms, programming languages, operating systems, or security solutions, as described in your .txt scope documents. By training on these documents, the system understands the environment you operate in and can prioritize news that could affect your security posture, compliance, or operational stability. Once filtered, summaries of the most important items are sent to your work email every day.

## How it works
- **Pulls in news from multiple cybersecurity-focused RSS feeds:** The workflow automatically collects articles from trusted, high-signal security news sources. These feeds cover threat intelligence, vulnerability disclosures, vendor advisories, and industry updates.
- **Filters articles for recency and direct connection to your documented tech stack:** Using the publish date, it removes stale or outdated content. Then, leveraging your .txt scope documents stored in Pinecone, it checks each article for references to your technologies, vendors, platforms, or security tools.
- **Uses OpenAI to generate and review concise summaries:** For each relevant article, OpenAI creates a short, clear summary of the key points. The AI also evaluates whether the article provides actionable or critical information before passing it through.
- **Trains on your scope using the Pinecone Vector Store (free) for context-aware filtering:** Your scope documents are embedded into a vector store so the AI can "remember" your environment. This context ensures the filtering process understands indirect or non-obvious connections to your tech stack.
- **Aggregates and sends only the most critical items to your work email:** The system compiles the highest-priority news items into one daily digest, so you can review key developments without wading through irrelevant stories.

## What you need to do
1. Set up your OpenAI and Pinecone credentials in the workflow.
2. Create and configure a Pinecone index (dimension 1536 recommended). Pinecone is free to set up; a single free index is enough. Use a namespace like `scope`, and make sure the embedding model is the same for all of your Pinecone references.
3. Submit .txt scope documents listing your technologies, vendors, platforms, frameworks, and security products. The .txt does not need to be structured; add as much detail as possible.
4. Update the AI prompts to accurately describe your company's environment and priorities.
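The recency filter described above is the cheap first gate before any vector-store lookups run. A sketch, assuming RSS items carry a `pubDate` field and a 48-hour cutoff (both tunable):

```javascript
// Sketch (assumptions): drop RSS items older than maxAgeHours before the
// more expensive Pinecone relevance check. pubDate is a standard RSS field.
function filterRecent(articles, now, maxAgeHours = 48) {
  const cutoff = new Date(now).getTime() - maxAgeHours * 60 * 60 * 1000;
  return articles.filter((a) => new Date(a.pubDate).getTime() >= cutoff);
}

const fresh = filterRecent(
  [
    { title: "New CVE in popular library", pubDate: "2024-06-10T09:00:00Z" },
    { title: "Last month's recap", pubDate: "2024-05-01T00:00:00Z" },
  ],
  "2024-06-11T00:00:00Z"
);
console.log(fresh.map((a) => a.title)); // only the fresh CVE item survives
```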