by Harry Siggins
This n8n template automatically processes your industry newsletters and creates AI-powered intelligence briefs that filter signal from noise. Perfect for busy professionals who need to stay informed without information overload, it delivers structured insights directly to Slack while optionally saving content questions to Notion.

Who's it for

Busy executives, product managers, and content creators at growing companies who subscribe to multiple industry newsletters but lack time to read them all. Ideal for teams that need to spot trends, generate content ideas, and share curated insights without drowning in information.

How it works

The workflow runs daily to fetch labeled emails from Gmail, combines all newsletter content, and sends it to an AI agent for intelligent analysis. The AI filters developments through your specific business lens, identifies only operationally relevant insights, and generates thought-provoking questions for content creation. Results are formatted as rich Slack messages using Block Kit, with optional Notion integration for tracking content ideas.

Requirements

- Gmail account with a newsletter labeling system
- OpenRouter API key for AI analysis (costs approximately $0.01-0.05 per run), or an API key for a specific LLM
- Slack workspace with bot permissions for message posting
- Notion account with database setup (optional, for content question tracking)
- Perplexity API key (optional, for additional AI research capabilities)

How to set up

1. Connect your Gmail, OpenRouter, and Slack credentials through n8n's secure credential system.
2. Create a Gmail label for the newsletters you want analyzed and set it in the "Get Labeled Newsletters" node.
3. Update the Slack channel ID in the "Send to Slack" node.

The template comes pre-configured with sample settings for tech companies, so you can run it immediately after credential setup.
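The step that combines all labeled newsletter emails into a single text blob for the AI agent could look like the following Code-node sketch. This is an illustration only: the `subject` and `textPlain` field names are assumptions about the Gmail node's output and may differ in your setup.

```javascript
// Combine fetched newsletter emails into one blob for the AI agent.
// Each n8n item carries its payload under item.json.
function combineNewsletters(items) {
  return items
    .map((item, i) => {
      const { subject = '(no subject)', textPlain = '' } = item.json;
      return `--- Newsletter ${i + 1}: ${subject} ---\n${textPlain.trim()}`;
    })
    .join('\n\n');
}

// Example: two fetched emails become one combined document
const items = [
  { json: { subject: 'AI Weekly', textPlain: 'Big model news...' } },
  { json: { subject: 'Dev Digest', textPlain: 'New framework release.' } },
];
const combined = combineNewsletters(items);
```

Delimiting each email with a header line like this helps the AI attribute insights back to the right source newsletter.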
How to customize the workflow

- Edit the "Configuration" node to match your industry and audience: change the 13 pre-defined fields, including target audience, business context, relevance filters, and content pillars.
- Adjust the cron expression in the trigger node for your timezone.
- Modify the Slack formatting code to change the output appearance, or add additional destination nodes for email, Teams, or Discord.
- Remove the Notion nodes if you only need Slack output.

The AI analysis framework is fully customizable through the Configuration node, allowing you to adapt from the default tech company focus to any industry, including healthcare, finance, marketing, or consulting.
by Rahul Joshi
Description

Automates daily EOD summaries from Jira issues into an Excel sheet, then compiles a weekly summary using Azure OpenAI (GPT-4o-mini) and delivers it to stakeholders via email. Gain consistent reporting, clear insights, and hands-free delivery. ✨📧

What This Template Does

- Fetches Jira issues and extracts key fields. 🧩
- Generates End-of-Day summaries and stores them in Excel daily. 📄
- Aggregates the week's EOD data from Excel. 📚
- Creates a weekly summary using Azure OpenAI (GPT-4o-mini). 🤖
- Delivers the weekly report to stakeholders via email. 📬

Key Benefits

- Saves time with fully automated daily and weekly reporting. ⏱️
- Ensures consistent, structured summaries every time. 📏
- Improves clarity for stakeholders with readable insights. 🪄
- Produces mobile-friendly email summaries for quick consumption. 📱
- No-code customization inside n8n. 🛠

Features

- Jira issue ingestion and transformation.
- Daily EOD summary generation and Excel storage.
- Weekly AI summarization with Azure OpenAI (GPT-4o-mini).
- Styled HTML email output to stakeholders.
- Scheduling for hands-free execution.

Requirements

- An n8n instance (cloud or self-hosted).
- Jira access to read issues.
- Azure OpenAI (GPT-4o-mini) for weekly AI summarization.
- Email service (Gmail/SMTP) configured in n8n credentials.
- Excel/Sheet storage set up to append and read daily EOD entries.

Target Audience

- Engineering and product teams needing routine summaries.
- Project managers tracking daily progress.
- Operations teams consolidating weekly reporting.
- Stakeholders who prefer clean email digests.

Step-by-Step Setup Instructions

1. Jira: Connect your Jira credentials and confirm issue read access.
2. Azure OpenAI: Deploy GPT-4o-mini and add Azure OpenAI credentials in n8n.
3. Gmail/SMTP: Connect your email account in n8n Credentials and authorize sending.
4. Excel/Sheet: Configure the sheet used to store daily EOD summaries.
5. Import the workflow, assign credentials to nodes, replace placeholders, then run and schedule.
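The weekly aggregation step, which turns the week's daily EOD rows into a single prompt for the Azure OpenAI summarization call, could be sketched as below. The `date` and `summary` column names are assumptions about how the daily entries are stored, not the template's actual schema.

```javascript
// Build one weekly-summary prompt from the week's daily EOD rows
// (as read back from the Excel sheet). Column names are hypothetical.
function buildWeeklyPrompt(rows) {
  const lines = rows.map(r => `${r.date}: ${r.summary}`);
  return [
    'Summarize the following daily end-of-day reports into one weekly summary',
    'with key accomplishments, blockers, and next steps:',
    '',
    ...lines,
  ].join('\n');
}

const rows = [
  { date: '2024-06-03', summary: 'Closed 4 Jira issues; deployed hotfix.' },
  { date: '2024-06-04', summary: 'Sprint planning; 2 issues in review.' },
];
const prompt = buildWeeklyPrompt(rows);
```

Keeping the dates in the prompt lets the model produce a chronological narrative rather than an unordered list of events.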
Security Best Practices

- Use scoped API tokens for Jira with read-only permissions. 🔐
- Store Azure OpenAI and email credentials in n8n's encrypted credentials manager. 🧯
- Limit email recipients to approved stakeholder lists. 🚦
- Review logs regularly and rotate credentials on a schedule. ♻
by Rahul Joshi
Description

Automatically generate polished, n8n-ready template descriptions from your saved JSON workflows in Google Drive. This AI-powered automation processes workflow files, drafts compliant descriptions, and delivers Markdown and HTML outputs directly to your inbox. 🚀💌📊💬

What This Template Does

1. Manually triggers the workflow to start processing.
2. Searches a specified Google Drive folder for JSON workflow files.
3. Iterates through each JSON file found in that folder.
4. Downloads each file and prepares it for data extraction.
5. Parses workflow data from the downloaded JSON content.
6. Uses Azure OpenAI GPT-4 to generate concise titles and detailed descriptions.
7. Converts the AI output into structured Markdown for n8n template publishing.
8. Creates an HTML version of the description for email delivery.
9. Logs generated details into a Google Sheet for record-keeping.
10. Sends an email containing the Markdown and HTML descriptions to the target recipient.

Key Benefits

✅ Fully automates n8n template description creation.
✅ Ensures consistency with official n8n publishing guidelines.
✅ Saves time while eliminating human writing errors.
✅ Provides dual Markdown + HTML outputs for flexibility.
✅ Centralizes workflow metadata in Google Sheets.
✅ Simplifies collaboration and version tracking via email delivery.

Features

- Manual workflow trigger for controlled execution.
- Integration with Google Drive for locating and downloading JSON files.
- Intelligent parsing of workflow data from the JSON structure.
- GPT-4-powered AI for title and description generation.
- Automatic Markdown + HTML formatting for n8n publishing.
- Google Sheets integration for persistent record-keeping.
- Automated Gmail delivery of generated documentation.

Requirements

- n8n instance (cloud or self-hosted).
- Google Drive OAuth2 credentials with file read permissions.
- Google Sheets OAuth2 credentials with edit permissions.
- Azure OpenAI GPT-4 API key for AI text generation.
- Gmail OAuth2 credentials for email sending.
Target Audience

- n8n content creators documenting workflows. 👩💼
- Automation teams handling multiple template deployments. 🔄
- Agencies and freelancers managing workflow documentation. 🏢
- Developers leveraging AI for faster template creation. 🌐
- Technical writers ensuring polished, standardized outputs. 📊

Step-by-Step Setup Instructions

1. Connect your Google Drive account and specify the folder containing JSON workflows. 🔑
2. Authorize Google Sheets and confirm access to the tracking spreadsheet. ⚙️
3. Add Azure OpenAI GPT-4 API credentials for AI-powered text generation. 🧠
4. Connect Gmail credentials for automated email delivery. 📧
5. Run the workflow manually using a test JSON file to validate all nodes. ✅
6. Enable the workflow to automatically generate and send descriptions as needed. 🚀
by Felix Kemeth
Overview

Staying up to date with fast-moving topics like AI, machine learning, or your specific industry can be overwhelming. You either drown in daily noise or miss important developments between weekly digests. This AI News Agent workflow delivers a curated newsletter only when there's genuinely relevant news. I use it myself for AI and n8n topics.

Key features:

- **AI-driven send decision**: An AI agent evaluates whether today's news is worth sending.
- **Deduplication**: Compares candidate articles against past newsletters to avoid repetition.
- **Real-time news**: Uses SerpAPI's DuckDuckGo News engine for fresh results.
- **Frequency guardrails**: Configure minimum and maximum days between newsletters.

In this post, I'll walk you through the complete workflow, explain each component, and show you how to set it up yourself.

What this workflow does

At a high level, the AI News Agent:

1. Fetches fresh news twice daily via SerpAPI's DuckDuckGo News engine.
2. Stores articles in a persistent data table with automatic deduplication.
3. Filters for freshness: only considers articles newer than your last newsletter.
4. Applies frequency guardrails: respects your min/max sending preferences.
5. Makes an editorial decision: the AI evaluates if the news is worth sending.
6. Enriches selected articles: uses Tavily web search for fact-checking and depth.
7. Delivers via Telegram: sends a clean, formatted newsletter.
8. Remembers what it sent: stores each edition to prevent future repetition.

This way, you get newsletters only when there's genuinely relevant news, in contrast to a fixed schedule.

Requirements

To run this workflow, you need:

- **SerpAPI key**: Create an account at serpapi.com and generate an API key. They offer 250 free searches/month.
- **Tavily API key**: Sign up at app.tavily.com and create an API key. A generous free tier is available.
- **OpenAI API key**: Get one from OpenAI; required for the AI agent calls.
- **Telegram bot + chat ID**: A free Telegram bot (via BotFather) and the chat/channel ID where you want the newsletter. See Telegram's bot tutorial for setup.

How it works

The workflow is organized into five logical stages.

Stage 1: Schedule & Configuration

**Schedule Trigger**: Runs the workflow on a cron schedule. Default: `0 0 9,17 * * *` (twice daily at 9:00 and 17:00). These frequent checks let the AI send a newsletter whenever it observes genuinely relevant news, not just once a week. I picked 09:00 and 17:00 as natural check-in points at the start and end of a typical workday, so you see updates when you're most likely to read them without being interrupted in the middle of deep work. With SerpAPI's 250 free searches/month, running twice per day with a small set of topics (e.g., 2-3) keeps you comfortably below the limit; if you add more topics or increase the schedule frequency, either tighten the cron window or move to a paid SerpAPI plan to avoid hitting the cap.

**Set topics and language**: A Set node that defines your configuration:

- `topics`: comma-separated list (e.g., AI, n8n)
- `language`: output language (e.g., English)
- `minDaysBetween`: minimum days to wait (0 = no minimum)
- `maxDaysBetween`: maximum days without sending (triggers a "must-send" fallback)

Stage 2: Fetch & Store News

**Build topic queries**: Splits your comma-separated topics into individual search queries. In DuckDuckGo News via SerpAPI, a query like `AI,n8n` looks for news where both "AI" and "n8n" appear. For a niche tool like n8n, this is often almost identical to just searching for n8n (docs). It's therefore better to split the topics, search for each of them separately, and let the AI later decide which articles to select.
```javascript
return $input.first().json.topics.split(',').map(topic => ({
  json: { topic: topic.trim() }
}));
```

**Fetch news from SerpAPI (DuckDuckGo News)**: An HTTP Request node calling SerpAPI with:

- `engine`: duckduckgo_news
- `q`: your topic
- `df`: d (last day)

Auth is handled via httpQueryAuth credentials with your SerpAPI key. SerpAPI also offers other news engines such as the Google News API (see here). DuckDuckGo News is used here because, unlike Google News, it returns an excerpt/snippet in addition to the title, source, and URL (see here), giving the AI more context to work with.

_Another option is NewsAPI, but its free tier delays articles by 24 hours, so you miss the freshness window that makes these twice-daily checks valuable. DuckDuckGo News through SerpAPI keeps the workflow real-time without that lag._

n8n has official SerpAPI nodes, but as of writing there is no dedicated node for the DuckDuckGo News API. That's why this workflow uses a custom HTTP Request node instead, which works the same under the hood while giving you full control over the DuckDuckGo News parameters.

**Split SerpAPI results into articles**: Expands the results array so each article becomes its own item.

**Upsert articles into News table**: Stores each article in an n8n data table with fields: title, source, url, excerpt, date. Uses upsert on title + URL to avoid duplicates. Date is normalized to ISO UTC:

```javascript
DateTime.fromSeconds(Number($json.date), { zone: 'utc' }).toISO()
```

Stage 3: Filtering & Frequency Guardrails

This is where the workflow gets smart about what to consider and when to send.

**Get previous newsletters → Sort → Get most recent**: Pulls all editions from the Newsletters table and isolates the latest one with its createdAt timestamp.

**Combine articles with last newsletter metadata**: Attaches the last newsletter timestamp to each candidate article.

**Filter articles newer than last newsletter**: Keeps only articles published after the last edition.
Uses a safe default date (2024-01-01) if no previous newsletter exists:

```javascript
$json.date_2 > ($json.createdAt_1 || DateTime.fromISO('2024-01-01T00:00:00.000Z'))
```

**Stop if last newsletter is too recent**: Compares createdAt against your minDaysBetween setting. If you're still in the "too soon to send" window, the workflow short-circuits here.

Stage 4: AI Editorial Decision

This is the core intelligence of the workflow: an AI that decides whether to send and what to include. This stage is also the truly agentic part of the workflow, where the system makes its own decisions instead of just following a fixed schedule.

**Aggregate candidate articles for AI**: Bundles today's filtered articles into a compact list with title, excerpt, source, and url.

**Limit previous newsletters to last 5 → Aggregate**: Prepares the last 5 newsletter contents for the AI to check against for repetition.

**Combine candidate articles with past newsletters**: Merges both lists so the AI sees "today's candidates" and "recent history" side by side.

**AI: decide send + select articles**: The heart of the workflow. A GPT-5.1 call with a comprehensive editorial prompt:

You are an AI Newsletter Editor. Your job is to decide whether today's newsletter edition should be sent, and to select the best articles. You will receive a list of articles with: 'title', 'excerpt', source, url. You will also receive the content of previously sent newsletters (markdown).

Your Tasks

1. Decide whether to send the newsletter

Output "YES" only if all of the following are satisfied OR the fallback rule applies:

Base Criteria

There are at least 3 meaningful articles. Meaningful = not trivial, not purely promotional, not clickbait, contains actual informational value.
Articles must be non-duplicate and non-overlapping:
- Not the same topic/headline rephrased
- Not reporting identical events with minor variations
- Not the same news covered by multiple sources without distinct insights

Articles must be relevant to the user's topics: {{ $('Set topics and language').item.json.topics }}

Articles must be novel relative to the topics in previous newsletters:
- Compare against all previous newsletters below
- Exclude articles that discuss topics already substantially covered

Articles must offer clear value:
- New information
- Impact that matters to the user
- Insight, analysis, or meaningful expansion

Fallback rule: Newsletter frequency requirement

If at least 1 relevant article exists and the last newsletter was sent more than {{ $('Set topics and language').item.json.maxDaysBetween }} days ago, then you MUST return "YES" as a decision even if the other criteria are not completely met.

Last newsletter was sent {{ $('Get most recent newsletter').item.json.createdAt ? Math.floor($now.diff(DateTime.fromISO($('Get most recent newsletter').item.json.createdAt), 'days').days) : 999 }} days ago.

Otherwise → "NO"

2. If "YES": Select Articles

Select the top 3–5 articles that best fulfill the criteria above. For each selected article, output:

- **title** (rewrite for clarity, conciseness, and impact)
- **summary** (1–2 sentences; written in the output language)
- **source**
- **url**

All summaries must be written in: {{ $('Set topics and language').item.json.language }}

Output Format (JSON)

```json
{
  "decision": "YES or NO",
  "articles": [
    { "title": "...", "summary": "...", "source": "...", "url": "..." }
  ]
}
```

When "decision": "NO", return an empty array for "articles".
Article Input

Use these articles:

{{ $json.results.map(article => `Title: ${article.title_2}
Excerpt: ${article.excerpt_2}
Source: ${article.source_2}
URL: ${article.url_2}`).join('\n---\n') }}

You must also consider the topics already covered in previous newsletters to avoid repetition:

{{ $json.newsletters.map(x => `Newsletter: ${x.content}`).join('\n---\n') }}

The AI outputs structured JSON:

```json
{
  "decision": "YES",
  "articles": [
    { "title": "...", "summary": "...", "source": "...", "url": "..." }
  ]
}
```

**If AI decided to send newsletter**: Routes based on `decision === "YES"`. If the decision is NO, the workflow ends gracefully.

Stage 5: Content Enrichment & Delivery

**Split selected articles for enrichment**: Each selected article becomes its own item for individual processing.

**AI: enrich & write article**: An AI Agent node with GPT-5.1 plus a Tavily web search tool. For each article:

You are a research writer that updates short news summaries into concise, factual articles.

Input:
Title: {{ $json["title"] }}
Summary: {{ $json["summary"] }}
Source: {{ $json["source"] }}
Original URL: {{ $json["url"] }}
Language: {{ $('Set topics and language').item.json.language }}

Instructions:
1. Use Tavily Search to gather 2–3 reliable, recent, and relevant sources on this topic.
2. Update the title if a more accurate or engaging one exists.
3. Write 1–2 sentences summarizing the topic, combining the original summary and information from the new sources.
4. Return the original source name and url as well.

Output (JSON):

```json
{
  "title": "final article title",
  "content": "concise 1-2 sentence article content",
  "source": "the name of the original source",
  "url": "the url of the original source"
}
```

Rules:
- Ensure the topic is relevant, informative, and timely.
- Translate the article if necessary to comply with the desired language {{ $('Set topics and language').item.json.language }}.

The Output Parser enforces the JSON schema with title, content, source, and url fields.
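Before routing on the AI's decision, it can help to validate the model's structured output against the expected shape. The following is a hypothetical sketch of such a check (the workflow itself relies on n8n's routing and output parsing; the field names match the JSON format in the editorial prompt):

```javascript
// Validate and normalize the editorial AI's JSON reply before the
// "If AI decided to send newsletter" branch. Field names follow the
// prompt's output format: { decision, articles }.
function parseEditorialDecision(raw) {
  const parsed = JSON.parse(raw);
  if (parsed.decision !== 'YES' && parsed.decision !== 'NO') {
    throw new Error(`Unexpected decision: ${parsed.decision}`);
  }
  // A "NO" decision must carry an empty article list.
  const articles = Array.isArray(parsed.articles) ? parsed.articles : [];
  return { send: parsed.decision === 'YES', articles };
}

const result = parseEditorialDecision(
  '{"decision":"YES","articles":[{"title":"t","summary":"s","source":"x","url":"u"}]}'
);
```

A guard like this fails loudly on malformed model output instead of silently sending an empty newsletter.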
**Aggregate enriched articles**: Collects all enriched articles back into a single array.

**Insert newsletter content into Newsletters table**: Stores the final markdown content for future deduplication:

```javascript
$json.output.map(article => {
  const title = JSON.stringify(article.title).slice(1, -1);
  const content = JSON.stringify(article.content).slice(1, -1);
  const source = JSON.stringify(article.source).slice(1, -1);
  const url = JSON.stringify(article.url).slice(1, -1);
  return `${title}\n${content}\nSource: ${source}`;
}).join('\n\n')
```

**Send newsletter to Telegram**: Sends the formatted newsletter to your Telegram chat/channel.

Why this workflow is powerful

- **Intelligent send decisions**: The AI evaluates news quality before sending, leading to a less noisy and more relevant news digest.
- **Memory across editions**: By persisting newsletters and comparing against history, the workflow avoids repetition.
- **Frequency guardrails with flexibility**: Set boundaries (e.g., "at least 1 day between sends" and "must send within 5 days"), but let the AI decide the optimal moment within those bounds.
- **Source-level deduplication**: The news table with upsert prevents the same article from being considered multiple times across runs.
- **Grounded in facts**: SerpAPI provides real news sources; Tavily enriches with additional verification. The newsletter stays factual.
- **Configurable and extensible**: Change topics, language, and frequency in one Set node. In addition, the workflow is modular, allowing you to add new news sources or delivery channels without touching the core logic.
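The `JSON.stringify(...).slice(1, -1)` trick used when storing newsletter content deserves a quick standalone illustration: stringify escapes embedded newlines and quotes (and adds surrounding quotes), and the slice strips those added quotes back off.

```javascript
// Escape a field so embedded newlines/quotes can't break the stored
// newsletter's line structure. Same trick as the workflow's insert step.
function escapeField(value) {
  // JSON.stringify adds outer quotes and escapes \n, ", etc.;
  // slice(1, -1) removes the outer quotes again.
  return JSON.stringify(value).slice(1, -1);
}

const escaped = escapeField('Line one\nLine two "quoted"');
```

The result contains a literal backslash-n instead of a real newline, so each article still occupies a predictable number of lines in the stored markdown.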
Configuration guide

To customize this workflow for your needs:

Topics and language: Open Set topics and language and modify:
- `topics`: your interests (e.g., machine learning, startups, TypeScript)
- `language`: your preferred output language

Frequency settings:
- `minDaysBetween`: minimum days between newsletters (0 = no limit)
- `maxDaysBetween`: maximum gap before forcing a send

For very high-volume topics (such as "AI"), expect the workflow to send almost every time once minDaysBetween has passed, because the content-quality criteria are usually met.

Schedule: Modify the Schedule Trigger cron expression. The default runs twice daily at 9:00 am and 5:00 pm; adjust to your preference.

Telegram: Update the chatId in the Telegram node to your chat/channel.

Credentials: Set up credentials for SerpAPI (httpQueryAuth), Tavily, OpenAI, and Telegram.

Next steps and improvements

Here are concrete directions to take this workflow further:

- **Multi-agent architecture**: Split the current AI calls into specialized agents (signal detection, relevance scoring, editorial decision, content enhancement, and formatting), each with a single responsibility.
- **1:1 personalization**: Move from static topics to weighted preferences. Learn from click behavior and feedback.
- **Telegram feedback buttons**: Add inline buttons (👍 Useful / 👎 Not relevant / 🔎 More like this) and feed the signals back into ranking.
- **Email with HTML template**: For more flexibility, send the newsletter via email.
- **Other news APIs or RSS feeds**: Add more sources such as other news APIs and RSS feeds from blogs, newsletters, or communities.
- **arXiv paper search and research news**: Swap SerpAPI for arXiv search or other academic sources to get a personal research digest newsletter.
- **Images and thumbnails**: Fetch representative images for each article and include them in the newsletter.
- **Web archive**: Auto-publish each edition as a web page with permalinks.
- **Retry logic and error handling**: Add exponential backoff for external APIs and route failures to an error workflow.
- **Prompt versioning**: Move prompts to a data table with versioning for A/B testing and rollback.
- **Audio and video news**: Use audio or video models for richer news delivery.

Wrap-up

This AI News Agent workflow represents a significant evolution from simple scheduled newsletters. By adding intelligent send decisions, historical deduplication, and frequency guardrails, you get a newsletter that respects the quality of the available news. I use this workflow myself to stay informed on AI and automation topics without the overload of daily news or the delayed delivery of a fixed newsletter schedule.

Need help with your automations? Contact me here.
by Ruth Olatunji
Eliminate 90% of the manual work in procurement by automating quote requests, response tracking, price extraction, and supplier follow-ups. This complete automation handles everything from sending personalized emails to extracting pricing data with AI and sending WhatsApp reminders, so you can focus on decision-making, not data entry.

This all-in-one workflow transforms a 5-hour manual process into a 10-minute review task, saving 15-20 hours per month while improving supplier response rates by 30%.

How it works

This workflow contains 4 independent automation modules running on separate schedules:

1. Quote Request Sender (Manual trigger)
   - Reads the supplier list from Google Sheets
   - Sends personalized emails via Gmail with category and deadline
   - Logs all requests with timestamps to the tracking sheet

2. Response Monitor (Hourly schedule)
   - Automatically checks Gmail for supplier replies with attachments
   - Updates the tracking sheet status to "Quote Received"
   - Zero manual email monitoring required

3. AI Price Extraction (Manual trigger)
   - Downloads PDF/Excel attachments from emails
   - Extracts text using n8n's built-in parser
   - Sends it to OpenAI GPT-4o-mini to identify products, prices, quantities, and currencies
   - Saves structured data to the Price Comparison sheet

4. WhatsApp Follow-ups (Daily at 9 AM)
   - Checks for non-responsive suppliers
   - Sends smart reminders at Day 3, 5, and 7 with escalating urgency
   - Falls back to email if no phone number is available
   - Logs all follow-up history

Each module shares data through Google Sheets while running independently.
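The AI price-extraction module can be sketched in two pieces: the prompt sent to GPT-4o-mini and a small normalizer that turns its JSON reply into Price Comparison rows. This is an illustrative sketch, not the template's actual code; the JSON shape and sheet column names mirror the description above but are otherwise assumptions.

```javascript
// Build the extraction prompt for GPT-4o-mini from parsed quote text.
function buildExtractionPrompt(quoteText) {
  return (
    'Extract every line item from this supplier quote as JSON: ' +
    '[{"product_name":"","price":0,"currency":"","quantity":0}]\n\n' +
    quoteText
  );
}

// Normalize the model's JSON reply into rows for the Price Comparison
// sheet (column names match the sheet described in the setup steps).
function toSheetRows(aiJson, supplier) {
  return JSON.parse(aiJson).map(item => ({
    supplier_name: supplier,
    product_name: item.product_name,
    price: item.price,
    currency: item.currency,
    quantity: item.quantity,
  }));
}

const rows = toSheetRows(
  '[{"product_name":"Widget A","price":12.5,"currency":"USD","quantity":100}]',
  'Acme Supplies'
);
```

Asking the model for a strict JSON array keeps the downstream sheet append deterministic even when quotes vary wildly in layout.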
Set up steps

Time to set up: 20-30 minutes

1. Create two Google Sheets: "Quote Tracking" (columns: supplier_name, supplier_email, category, request_date, status, quote_received, phone_number, last_follow_up, follow_up_count) and "Price Comparison" (columns: supplier_name, supplier_email, product_name, price, currency, quantity, extracted_date, source_file)
2. Connect credentials: Gmail OAuth, Google Sheets OAuth (same account), OpenAI API key, Twilio Account SID + Auth Token
3. Update all Google Sheet IDs in every Google Sheets node (8 nodes total across all modules)
4. Configure the Twilio WhatsApp sandbox: go to Twilio Console → Messaging → WhatsApp → send the join code from your phone → update the "From" number in the Send WhatsApp node
5. Add 2-3 test suppliers to the tracking sheet, using your own email addresses with the + trick (yourname+supplier1@gmail.com) and phone numbers in international format
6. Test each module: execute Quote Sender → reply to the test email with a PDF → execute AI Extraction → set a supplier's request date to 3 days ago → test Follow-ups
7. Activate the schedules for Response Monitor (hourly) and Follow-ups (daily at 9 AM)

Detailed node configurations and troubleshooting tips are included in sticky notes throughout the workflow canvas.

Requirements

- Gmail account with API access
- Google Sheets (2 sheets)
- OpenAI API account (~$5-15/month)
- Twilio account with WhatsApp (~$10-20/month)
- n8n (any version supporting the HTTP Request node)

Who is this for

- Procurement teams managing multiple supplier quotes
- Small businesses comparing vendor prices
- Operations managers handling RFQs
- Purchasing departments drowning in email attachments
- Anyone collecting and tracking supplier pricing at scale

Results:
- Time savings: from 5 hours to 10 minutes per quote cycle (a 90% reduction)
- Response rate improvement: 50% → 80% with automated follow-ups
- Accuracy: 95%+ AI extraction accuracy, versus the 5-10% error rate typical of manual data entry
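The Day 3/5/7 escalation described in the follow-up module can be sketched as a pair of helpers. The urgency labels here are assumptions for illustration; only the 3/5/7-day thresholds come from the template.

```javascript
// Map days-since-request to a follow-up stage (thresholds from the
// template; the stage labels are hypothetical).
function followUpStage(daysSinceRequest) {
  if (daysSinceRequest >= 7) return 'final notice';
  if (daysSinceRequest >= 5) return 'urgent reminder';
  if (daysSinceRequest >= 3) return 'gentle reminder';
  return null; // too early to follow up
}

// Whole days elapsed since the request_date stored in the tracking sheet.
function daysSince(requestDate, today = new Date()) {
  return Math.floor((today - new Date(requestDate)) / 86400000);
}
```

The daily 9 AM run would compute `daysSince(row.request_date)` for each non-responsive supplier and skip anyone whose stage is `null` or unchanged since the last follow-up.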
by Avkash Kakdiya
How it works

This workflow automatically generates an AI-powered revenue forecast whenever a new deal is created in HubSpot. It collects all active deals, standardizes key sales data, and sends it to an AI model for forecasting and risk analysis. The AI produces best-, likely-, and worst-case revenue scenarios along with actionable insights. Results are shared with stakeholders via Slack and email and stored in Google Sheets for tracking.

Step-by-step

Step 1: Collect & prepare HubSpot deals

- HubSpot Trigger: Starts the workflow when a new deal is created in HubSpot.
- Get many deals: Fetches all active deals from the sales pipeline.
- Format HubSpot Data: Cleans and standardizes deal fields like amount, stage, probability, and region.
- Loop Over Items: Iterates through the formatted deals to prepare them for AI analysis.

Step 2: Generate & distribute the AI forecast

- AI Revenue Forecast & Risk Analysis: Sends pipeline data to the AI model to generate revenue forecasts and insights.
- Groq Chat Model: Powers the AI analysis and produces structured forecasting output.
- Format AI response: Extracts key metrics, risks, and recommendations from the AI response.
- Send a message (Gmail): Emails the revenue forecast report to stakeholders.
- Send a message (Slack): Posts the forecast summary to a selected Slack channel.
- Append row in sheet: Logs forecast data and insights into Google Sheets.
- Wait: Adds a controlled pause before looping or completing the workflow.

Why use this?

- Get real-time revenue forecasts triggered directly by CRM activity.
- Reduce manual pipeline analysis and reporting effort.
- Identify high-risk deals early with AI-driven insights.
- Keep leadership aligned through automated Slack and email updates.
- Maintain a historical forecast log for audits and performance tracking.
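The "Format HubSpot Data" step above could look like this Code-node sketch. The HubSpot property names (`amount`, `dealstage`, `hs_deal_stage_probability`, `region`) are assumptions based on common HubSpot deal properties and may differ per portal.

```javascript
// Standardize one raw HubSpot deal into the flat shape the AI
// forecasting prompt expects. Property names are hypothetical.
function formatDeal(deal) {
  const p = deal.properties || {};
  return {
    amount: Number(p.amount) || 0,
    stage: p.dealstage || 'unknown',
    probability: Number(p.hs_deal_stage_probability) || 0,
    region: p.region || 'unspecified',
  };
}

const formatted = formatDeal({
  properties: {
    amount: '15000',
    dealstage: 'negotiation',
    hs_deal_stage_probability: '0.6',
  },
});
```

Coercing amounts and probabilities to numbers up front keeps the AI's best/likely/worst-case arithmetic from silently operating on strings.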
by Incrementors
Wikipedia to LinkedIn AI Content Poster with Image via Bright Data

📋 Overview

Workflow Description: Automatically scrapes Wikipedia articles, generates AI-powered LinkedIn summaries with custom images, and posts professional content to LinkedIn using Bright Data extraction and intelligent content optimization.

🚀 How It Works

The workflow follows these simple steps:

1. Article Input: The user submits a Wikipedia article name through a simple form interface
2. Data Extraction: Bright Data scrapes the Wikipedia article content, including the title and full text
3. AI Summarization: Advanced AI models (OpenAI GPT-4 or Claude) create professional LinkedIn-optimized summaries under 2,000 characters
4. Image Generation: Ideogram AI creates relevant visual content based on the article summary
5. LinkedIn Publishing: Automatically posts the summary with the generated image to your LinkedIn profile
6. URL Generation: Provides a shareable LinkedIn post URL for easy access and sharing

⚡ Setup Requirements

Estimated Setup Time: 10-15 minutes

Prerequisites

- n8n instance (self-hosted or cloud)
- Bright Data account with Wikipedia dataset access
- OpenAI API account (for GPT-4 access)
- Anthropic API account (for Claude access; optional)
- Ideogram AI account (for image generation)
- LinkedIn account with API access

🔧 Configuration Steps

Step 1: Import Workflow

1. Copy the provided JSON workflow file
2. In n8n: Navigate to Workflows → + Add workflow → Import from JSON
3. Paste the JSON content and click Import
4. Save the workflow with a descriptive name

Step 2: Configure API Credentials

🌐 Bright Data Setup

1. Go to Credentials → + Add credential → Bright Data API
2. Enter your Bright Data API token
3. Replace BRIGHT_DATA_API_KEY in all HTTP request nodes
4. Test the connection to ensure access

🤖 OpenAI Setup

1. Configure OpenAI credentials in n8n
2. Ensure GPT-4 model access
3. Link credentials to the "OpenAI Chat Model" node
4. Test API connectivity

🎨 Ideogram AI Setup

1. Obtain an Ideogram AI API key
2. Replace IDEOGRAM_API_KEY in the "Image Generate" node
Configure image generation parameters Test image generation functionality 💼 LinkedIn Setup Set up LinkedIn OAuth2 credentials in n8n Replace LINKEDIN_PROFILE_ID with your profile ID Configure posting permissions Test posting functionality Step 3: Configure Workflow Parameters Update Node Settings: Form Trigger:** Customize the form title and field labels as needed AI Agent:** Adjust the system message for different content styles Image Generate:** Modify image resolution and rendering speed settings LinkedIn Post:** Configure additional fields like hashtags or mentions Step 4: Test the Workflow Testing Recommendations: Start with a simple Wikipedia article (e.g., "Artificial Intelligence") Monitor each node execution for errors Verify the generated summary quality Check image generation and LinkedIn posting Confirm the final LinkedIn URL generation 🎯 Usage Instructions Running the Workflow Access the Form: Use the generated webhook URL to access the submission form Enter Article Name: Type the exact Wikipedia article title you want to process Submit Request: Click submit to start the automated process Monitor Progress: Check the n8n execution log for real-time progress View Results: The workflow will return a LinkedIn post URL upon completion Expected Output 📝 Content Summary Professional LinkedIn-optimized text Under 2000 characters Engaging and informative tone Bullet points for readability 🖼️ Generated Image High-quality AI-generated visual 1280x704 resolution Relevant to article content Professional appearance 🔗 LinkedIn Post Published to your LinkedIn profile Includes both text and image Shareable public URL Professional formatting 🛠️ Customization Options Content Personalization AI Prompts:** Modify the system message in the AI Agent node to change writing style Character Limits:** Adjust summary length requirements Tone Settings:** Change from professional to casual or technical Hashtag Integration:** Add relevant hashtags to LinkedIn posts Visual 
Customization Image Style: Modify Ideogram prompts for different visual styles Resolution: Change image dimensions based on LinkedIn requirements Rendering Speed: Balance between speed and quality Brand Elements: Include company logos or brand colors 🔍 Troubleshooting Common Issues & Solutions ⚠️ Bright Data Connection Issues Verify API key is correctly configured Check dataset access permissions Ensure sufficient API credits Validate Wikipedia article exists 🤖 AI Processing Errors Check OpenAI API quotas and limits Verify model access permissions Review input text length and format Test with simpler article content 🖼️ Image Generation Failures Validate Ideogram API key Check image prompt content Verify API usage limits Test with shorter prompts 💼 LinkedIn Posting Issues Re-authenticate LinkedIn OAuth Check posting permissions Verify profile ID configuration Test with shorter content ⚡ Performance & Limitations Expected Processing Times Wikipedia Scraping: 30-60 seconds AI Summarization: 15-30 seconds Image Generation: 45-90 seconds LinkedIn Posting: 10-15 seconds Total Workflow: 2-4 minutes per article Usage Recommendations Best Practices: Use well-known Wikipedia articles for better results Monitor API usage across all services Test content quality before bulk processing Respect LinkedIn posting frequency limits Keep backup of successful configurations 📊 Use Cases 📚 Educational Content Create engaging educational posts from Wikipedia articles on science, history, or technology topics. 🏢 Thought Leadership Transform complex topics into accessible LinkedIn content to establish industry expertise. 📰 Content Marketing Generate regular, informative posts to maintain active LinkedIn presence with minimal effort. 🔬 Research Sharing Quickly summarize and share research findings or scientific discoveries with your network.
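The "under 2000 characters" constraint on summaries can also be enforced with a small guard before posting. This is a sketch, not part of the template: the function name and the sentence-boundary fallback are illustrative choices you could drop into a Code node ahead of the LinkedIn step.

```javascript
// Clamp a summary to LinkedIn's character limit, preferring to cut at
// the last full sentence so the post doesn't end mid-thought.
// NOTE: illustrative safety net; the template's AI prompt is what
// normally keeps summaries under the limit.
function clampForLinkedIn(text, limit = 2000) {
  if (text.length <= limit) return text;
  const cut = text.slice(0, limit);
  const lastStop = cut.lastIndexOf(". ");
  // Fall back to a hard cut if no sentence boundary is found.
  return lastStop > 0 ? cut.slice(0, lastStop + 1) : cut;
}

const short = clampForLinkedIn("A short summary.");
const long = clampForLinkedIn("x".repeat(2500));
```

Texts already under the limit pass through unchanged; oversized texts are trimmed to at most 2,000 characters.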
🎉 Conclusion This workflow provides a powerful, automated solution for creating professional LinkedIn content from Wikipedia articles. By combining web scraping, AI summarization, image generation, and social media posting, you can maintain an active and engaging LinkedIn presence with minimal manual effort. The workflow is designed to be flexible and customizable, allowing you to adapt the content style, visual elements, and posting frequency to match your professional brand and audience preferences. For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Kev
Generate ready-to-publish short-form videos from text prompts using AI Click on the image to see the example output in Google Drive Transform simple text concepts into professional short-form videos complete with AI-generated visuals, narrator voice, background music, and dynamic text overlays - all automatically generated and ready for Instagram, TikTok, or YouTube Shorts. This workflow demonstrates a cost-effective approach to video automation by combining AI-generated images with audio composition instead of expensive AI video generation. Processing takes 1-2 minutes and outputs professional 9:16 vertical videos optimized for social platforms. The template serves as both a showcase and building block for larger automation systems, with sticky notes providing clear guidance for customization and extension. Who's it for Content creators, social media managers, and marketers who need consistent, high-quality video content without manual production work. Perfect for motivational content, storytelling videos, educational snippets, and brand campaigns. How it works The workflow uses a form trigger to collect video theme, setting, and style preferences. ChatGPT generates cohesive scripts and image prompts, while Google Gemini creates themed background images and OpenAI TTS produces narrator audio. Background music is sourced from Openverse for CC-licensed tracks. All assets are uploaded to JsonCut API which composes the final video with synchronized overlays, transitions, and professional audio mixing. Results are stored in NocoDB for management. How to set up JsonCut API: Sign up at jsoncut.com and create an API key at app.jsoncut.com.
Configure HTTP Header Auth credential in n8n with header name x-api-key OpenAI API: Set up credentials for script generation and text-to-speech Google Gemini API: Configure access for Imagen 4.0 image generation NocoDB (Optional): Set up instance for video storage and configure database credentials Requirements JsonCut free account with API key OpenAI API access for GPT and TTS Google Gemini API for image generation NocoDB (optional) for result storage How to customize the workflow This template is designed as a foundation for larger automation systems. The modular structure allows easy modification of AI prompts for different content niches (business, wellness, education), replacement of the form trigger with RSS feeds or database triggers for automated content generation, integration with social media APIs for direct publishing, and customization of visual branding through JsonCut configuration. The workflow can be extended for bulk processing, A/B testing multiple variations, or integration with existing content management systems. Sticky notes throughout the workflow provide detailed guidance for common customizations and scaling options.
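The JsonCut credential wiring above boils down to sending your key in an x-api-key header on every request. The sketch below shows the shape of such a request; only the header name comes from the setup steps — the endpoint path and body fields are placeholder assumptions, so check app.jsoncut.com for the real routes.

```javascript
// Build a JsonCut-style request object. The x-api-key header matches
// the n8n HTTP Header Auth credential described above; the URL and
// body below are placeholder assumptions, not JsonCut's documented API.
function buildJsonCutRequest(apiKey, jobConfig) {
  return {
    method: "POST",
    url: "https://api.jsoncut.com/v1/jobs", // placeholder endpoint
    headers: {
      "x-api-key": apiKey,
      "content-type": "application/json",
    },
    body: JSON.stringify(jobConfig),
  };
}

const req = buildJsonCutRequest("YOUR_API_KEY", { output: "video.mp4" });
```

In n8n you would normally let the HTTP Request node plus the Header Auth credential do this for you; the sketch just makes the wire format explicit.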
by John Alejandro Silva
Rizz AI: The Multimodal Dating Assistant 💘 Rizz AI is not just a chatbot; it's a full-featured, AI-powered CRM for your dating life. Built entirely in n8n, this workflow turns Telegram into a powerful "Wingman" that helps you craft the perfect reply, remember details about your matches, and optimize your dating strategy using Google Gemini 1.5 Pro. 🔥 Key Features 👁️ Multimodal Vision: Send a screenshot of a Tinder/Hinge profile or a WhatsApp chat, and the AI will analyze the text, subtext, and vibe to give you tactical advice. 🗣️ Audio Analysis: Forward voice notes, and the AI will transcribe and analyze the tone to tell you if they are interested. 🧠 Long-Term Memory: Remembers details about specific matches (e.g., "Sofia likes sushi") so you don't ask the same thing twice. 📊 Lead Management (CRM): Automatically tracks matching stage, interest level, and next steps in Google Sheets. 🎨 Personalized Style: Adapts advice to your specific "Rizz Style" (e.g., Mystery, Direct, Funny) defined in your profile. 🛠️ How It Works Ingest: You send text, audio, or images to your private Telegram Bot. Process: The workflow routes the input through Gemini Vision (for images) or Whisper/Gemini (for audio). Retrieve: It queries your Google Sheet to see if this person is a new lead or an existing match. Reason: The AI Agent (with tools) decides the best move: suggesting a reply, logging a red flag, or scheduling a date. Respond: You receive 3 draft options to copy-paste directly into your dating app. 📋 Setup Instructions 1. Google Sheets (Database) Make a copy of the Rizz AI Database Template. Share/Connect your Google Drive credential in n8n. Update the Sheet ID in the Get Rizzler Profile and other Sheet nodes. 2. Telegram Bot Talk to @BotFather on Telegram to create a new bot. Copy the API Token into the Telegram Trigger and Send Message nodes. 3. Google Gemini Get a free API Key from Google AI Studio. Connect it to the Google Gemini Chat Model node. 💡 Need Assistance?
If you’d like help customizing or extending this workflow, feel free to reach out: 📧 Email: johnsilva11031@gmail.com 🔗 LinkedIn: John Alejandro Silva Rodríguez
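The ingest-and-route step described in "How It Works" can be sketched as a simple dispatcher on the incoming Telegram payload. The photo/voice/audio field names follow Telegram's Bot API message shape; the route labels are illustrative, not the template's exact node names.

```javascript
// Decide which pipeline an incoming Telegram message should take.
// Field names (photo, voice, audio) follow the Telegram Bot API
// message object; route labels are illustrative.
function routeUpdate(message) {
  if (message.photo) return "gemini-vision";                         // profile/chat screenshots
  if (message.voice || message.audio) return "audio-transcription";  // voice notes
  return "text";                                                     // plain chat input
}

const route = routeUpdate({ voice: { file_id: "abc123" } });
```

In the actual workflow this branching is typically done with a Switch node checking which fields exist on the Telegram Trigger output.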
by Robin Geuens
Overview Get a weekly report on website traffic driven by large language models (LLMs) such as ChatGPT, Perplexity, and Gemini. This workflow helps you track how these tools bring visitors to your site. A weekly snapshot can guide better content and marketing decisions. How it works The trigger runs every Monday. Pull the number of sessions on your website by source/medium from Google Analytics. The code node uses the following regex to filter referral traffic from AI providers like ChatGPT, Perplexity, and Gemini: /^.*openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*neeva.*|.*writesonic.*|.*nimble.*|.*outrider.*|.*perplexity.*|.*google.bard.*|.*bard.google.*|.*bard.*|.*edgeservices.*|.*astastic.*|.*copy.ai.*|.*bnngpt.*|.*gemini.google.*$/i; Combine the filtered sessions into one list so they can be processed by an LLM. Generate a short report using the filtered data. Email the report to yourself. Setup Get or connect your OpenAI API key and set up your OpenAI credentials in n8n. Enable Google Analytics and Gmail API access in the Google Cloud Console. (Read more here). Set up your Google Analytics and Gmail credentials in n8n. If you're using the cloud version of n8n, you can log in with your Google account to connect them easily. In the Google Analytics node, add your credentials and select the property for the website you’re working with. Alternatively, you can use your property ID, which can be found in the Google Analytics admin panel under Property > Property Details. The property ID is shown in the top-right corner. Add this to the property field. Under Metrics, select the metric you want to measure. This workflow is configured to use sessions, but you can choose others. Leave the dimension as-is, since we need the source/medium dimension to filter LLMs. (Optional) To expand the list of LLMs being filtered, adjust the regex in the code node. You can do this by copying and pasting one of the existing patterns and modifying it. Example: |.*example.*| The LLM node creates a basic report.
If you’d like a more detailed version, adjust the system prompt to specify the details or formatting you want. Add your email address to the Gmail node so the report is delivered to your inbox. Requirements OpenAI API key for report generation Google Analytics API enabled in Google Cloud Console Gmail API enabled in Google Cloud Console Customizing this workflow The regex used to filter LLM referral traffic can be expanded to include specific websites. The system prompt in the AI node can be customized to create a more detailed or styled report.
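The filtering step can be sketched as Code-node JavaScript. The pattern below is a condensed version of the template's full regex, and the row shape is an example of what the Google Analytics node might emit; adjust both to your data.

```javascript
// Condensed AI-referrer pattern (illustrative subset of the
// template's full regex).
const LLM_REFERRER_PATTERN =
  /openai|copilot|chatgpt|gemini|gpt|perplexity|bard|writesonic/i;

// Example source/medium rows with session counts.
const rows = [
  { sourceMedium: "chatgpt.com / referral", sessions: 42 },
  { sourceMedium: "direct / none", sessions: 910 },
  { sourceMedium: "perplexity.ai / referral", sessions: 17 },
];

// Keep only rows whose source/medium matches an AI provider.
const llmTraffic = rows.filter((row) =>
  LLM_REFERRER_PATTERN.test(row.sourceMedium)
);
```

With the sample rows above, only the chatgpt.com and perplexity.ai entries survive the filter; everything else is dropped before the report is generated.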
by Jan Zaiser
Your inbox is overflowing with daily newsletters: Public Affairs, ESG, Legal, Finance, you name it. You want to stay informed, but reading 10 emails every morning? Impossible. What if you could get one single digest summarizing everything that matters, automatically? ❌ No more copy-pasting text into ChatGPT ❌ No more scrolling through endless email threads ✅ Just one smart, structured daily briefing in your inbox Who Is This For Public Affairs Teams: Stay ahead of political and regulatory updates—without drowning in emails. Executives & Analysts: Get daily summaries of key insights from multiple newsletters. Marketing, Legal, or ESG Departments: Repurpose this workflow for your own content sources. How It Works Gmail collects all newsletters from the day (based on sender or label). HTML noise and formatting are stripped automatically. Long texts are split into chunks and logged in Google Sheets. An AI Agent (Gemini or OpenAI) summarizes all content into one clean daily digest. The workflow structures the summary into an HTML email and sends it to your chosen recipients. Setup Guide • You’ll need Gmail and Google Sheets credentials. • Add your own AI Model (e.g., Gemini or OpenAI) with an API key. • Adjust the prompt inside the “Public Affairs Consultant” node to fit your topic (e.g., Legal, Finance, ESG, Marketing). • Customize the email subject and design inside the “Structure HTML-Mail” node. • Optional: Use Memory3 to let the AI learn your preferred tone and style over time. Cost & Runtime Runs once per day. Typical cost: ~$0.10–0.30 per run (depending on model and input length). Average runtime: <2 minutes.
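The "strip HTML noise, then split long texts into chunks" steps can be sketched as below. The chunk size is an arbitrary example, not the template's actual setting.

```javascript
// Remove tags/scripts/styles and collapse whitespace from a
// newsletter body.
function stripHtml(html) {
  return html
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

// Split long text into fixed-size chunks for logging and summarization.
function chunkText(text, size = 4000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

const clean = stripHtml("<p>Policy <b>update</b></p><div>new rules</div>");
const chunks = chunkText(clean, 10);
```

Each chunk can then be logged as a row in Google Sheets before the AI Agent summarizes the combined content.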
by Meelioo
How it works This beginner-friendly workflow demonstrates the core building blocks of n8n. It guides you through: Triggers – Start workflows manually, on a schedule, via webhooks, or through chat. Data processing – Use Set and Code nodes to create, transform, and enrich data. Logic and branching – Apply conditions with IF nodes and merge different branches back together. API integrations – Fetch external data (e.g., users from an API), split arrays into individual items, and extract useful fields. AI-powered steps – Connect to OpenAI for generating fun facts or build interactive assistants with chat triggers, memory, and tools. Responses – Return structured results via webhooks or summary nodes. By the end, it demonstrates a full flow: creating data → transforming it → making decisions → calling APIs → using AI → responding with outputs. Set up steps Time required: 5–10 minutes. What you need: An n8n instance (cloud or self-hosted). Optional: API credentials (e.g., OpenAI) if you want to test AI features. Setup flow: Import this workflow. Add your API keys where needed (OpenAI, etc.). Trigger the workflow manually or test with webhooks. > 👉 Detailed node explanations and examples are already included as sticky notes inside the workflow itself, so you can learn step by step as you explore.
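The "split arrays into individual items, and extract useful fields" step can be sketched as it would look inside a Code node. The field names here are examples, not the template's exact ones; the { json: ... } wrapper is the item shape n8n Code nodes return.

```javascript
// Example API payload with an array of users (field names are
// illustrative, not the template's exact data).
const apiResponse = {
  users: [
    { id: 1, name: "Ada", email: "ada@example.com", signupSource: "ad" },
    { id: 2, name: "Grace", email: "grace@example.com", signupSource: "blog" },
  ],
};

// One n8n item per user, keeping only the fields later nodes need.
const items = apiResponse.users.map((user) => ({
  json: { id: user.id, name: user.name, email: user.email },
}));
```

Returning `items` from a Code node hands downstream nodes one item per user, which is the same effect the Split Out / Item Lists approach achieves without code.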