by Automate With Marc
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🧠 AI-Powered Blog Post Generator

Category: Content Automation / AI Writing / Marketing

Description: This automated workflow generates fresh, SEO-optimized blog posts daily using AI tools—perfect for solo creators, marketers, and content teams who want to stay on top of the latest AI trends without manual research or writing.

For more builds like this and step-by-step tutorial guides, check out: https://www.youtube.com/@Automatewithmarc

Here's how it works:

- Schedule Trigger kicks off the workflow daily (or at your preferred interval).
- Perplexity AI Node researches the most interesting recent AI news, tailored for a non-technical audience.
- AI Agent (Claude via Anthropic) turns that news into a full-length blog post based on a structured prompt that includes a title, intro, 3+ section headers, takeaway, and meta description—designed for clarity, engagement, and SEO.
- Optional Memory & Perplexity Tool Nodes enhance the agent's responses by allowing it to clarify facts or fetch more context.
- Google Docs Node automatically saves the final blog post to your selected document—ready for review, scheduling, or publishing.

Key Features:

- Combines Perplexity AI + Claude AI (Anthropic) for research + writing
- Built-in memory and retrieval logic for deeper contextual accuracy
- Non-technical, friendly writing style ideal for general audiences
- Output saved directly to Google Docs
- Fully no-code, customizable, and extendable

Use Cases:

- Automate weekly blog content for your newsletter or site
- Repurpose content into social posts or scripts
- Keep your brand relevant in the fast-moving AI landscape

Setup Requirements:

- Perplexity API Key
- Anthropic API Key
- Google Docs (OAuth2 connected)
by Joseph
This workflow automates invoice generation from form submissions, ensuring unique order IDs, creating PDF invoices, storing files, emailing customers, and logging invoice data — all seamlessly integrated.

🔹 Workflow Overview

1. Trigger (Webhook): Starts when an order form is submitted, capturing customer and order details.
2. Generate Random Order ID: A Function node creates a unique alphanumeric invoice ID (e.g., INV-X92B7D).
3. Check for Duplicate Order ID: Google Sheets looks up the generated order ID in your invoice log sheet to prevent duplicates.
4. Conditional Check (IF Node): If the ID already exists, the workflow loops back and regenerates a new ID; if it is unique, the workflow proceeds to invoice creation.
5. Prepare Invoice Data: A Set node formats customer info, date, order items, and the unique order ID to fit your invoice template.
6. Convert HTML to PDF: An HTTP Request node sends your invoice HTML to the RapidAPI HTML-to-PDF service and receives the PDF file.
7. Upload PDF to Cloud Storage: Saves the PDF in Google Drive or Dropbox with a clear file name like Invoice-INV-X92B7D.pdf.
8. Send Invoice Email to Customer: An Email node attaches the PDF and includes the order ID in the email subject/body.
9. Log Invoice Details: Appends invoice data (customer info, order ID, total, PDF link) to your Google Sheet for tracking.

⚙️ Node Details & Setup

1. Webhook Trigger: Configure it to receive form submissions (order details like name, email, items, total).
2. Function: Generate Random Order ID: Sample JS code generates unique IDs prefixed with INV- (see the sketch at the end of this description).
3. Google Sheets: Lookup Row: Connect to your invoice log sheet and search for the generated order ID to avoid duplicates.
4. IF Node: Check Order ID Existence: If the order ID is found, loop back to regenerate; otherwise continue the workflow.
5. Set Node: Prepare Invoice HTML: Define variables like customer name, date, items, and order ID. This data populates your HTML invoice template.
6. HTTP Request: Convert HTML to PDF: Get your API key from RapidAPI, send the invoice HTML in the request body, and receive the PDF file blob or a download URL.
7. Google Drive (or Dropbox) Upload: Upload the PDF file using the file name format Invoice-{{$json["order_id"]}}.pdf.
8. Email Node: Send to the customer email from the form data, attach the generated PDF, and include the order ID in the email subject or body for reference.
9. Google Sheets: Append Row: Log invoice metadata to keep records updated.

📁 Google Sheets Template

You can make a copy of the invoice log template here. This sheet includes columns for order_id, customer name, email, total, and invoice PDF link. Customize it as needed.

📌 Additional Notes

- Customize the invoice HTML template inside the Set node to match your branding.
- Ensure API credentials for RapidAPI, Google Drive/Dropbox, and email are properly set up in your n8n credentials.
- You can expand this workflow by adding payment processing or SMS notifications.

Need help or want a custom workflow? Reach out via email at joseph@uppfy.com.
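The description references sample JS for the order-ID Function node without including it. Below is a minimal sketch of what such an n8n Code node could look like; the 6-character length and character set are assumptions, and only the INV- prefix comes from the example above.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items") that appends a
// unique order ID to each incoming item. Prefix from the description;
// ID length and character set are assumptions.
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';

function randomId(length) {
  let id = '';
  for (let i = 0; i < length; i++) {
    id += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return id;
}

return items.map((item) => ({
  json: { ...item.json, order_id: `INV-${randomId(6)}` },
}));
```

Because random IDs can collide, the Google Sheets lookup and IF loop in steps 3-4 remain necessary regardless of the generator used.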
by Don Jayamaha Jr
📉 Detect key candlestick reversal patterns and volume divergence on Tesla (TSLA) using GPT-4.1 and real-time OHLCV data. This AI agent evaluates 1-hour and 1-day candles and is an essential part of the Tesla Financial Market Data Analyst Tool. It identifies signals like Doji, Engulfing, Hammer, and volume anomalies to support trade entry and exit logic.

⚠️ Not a standalone template — must be triggered by the Tesla Financial Market Data Analyst Tool

🔐 Requires:
- Alpha Vantage Premium API Key
- OpenAI GPT-4.1 access

🔍 What This Agent Does

Calls Alpha Vantage to fetch:
- 🕐 1-hour OHLCV data
- 📅 1-day OHLCV data

GPT-4.1 evaluates:
- 📊 Candlestick patterns like Doji, Engulfing, Shooting Star
- 🔄 Volume divergence (price/volume inconsistency)

Returns a structured JSON output like:

```json
{
  "summary": "Bearish signs detected on 1-day chart. A shooting star formed on high volume while RSI is elevated. Volume divergence seen on 1h chart as price rises but volume weakens.",
  "candlestickPatterns": { "1h": "None", "1d": "Shooting Star" },
  "volumeDivergence": { "1h": "Bearish", "1d": "None" },
  "ohlcv": {
    "1h": { "close": 174.1, "volume": 1430000, "high": 175.0, "low": 173.8 },
    "1d": { "close": 188.3, "volume": 21234000, "high": 189.9, "low": 183.7 }
  }
}
```

🛠️ Setup Instructions

1. Import the Workflow and name it: Tesla_1hour_and_1day_Klines_Tool
2. Install Dependencies: ✅ Tesla Financial Market Data Analyst Tool (this is the trigger parent)
3. Add Required Credentials: Alpha Vantage Premium (via HTTP Query Auth) and OpenAI GPT-4.1 (via OpenAI credentials)
4. Verify Web Access. This tool fetches data live from Alpha Vantage (see the sketch at the end of this description):
   - /query?function=TIME_SERIES_INTRADAY&interval=60min
   - /query?function=TIME_SERIES_DAILY
5. Run via Execute Workflow Trigger. This tool activates only when called by the Financial Analyst Agent. Inputs: message (optional), sessionId (used for memory continuity)

🧠 Agent Architecture

| Component | Description |
| --- | --- |
| Candlestick Data Hour | Fetches 60min TSLA candles via Alpha Vantage |
| Candlestick Data Day | Fetches daily TSLA candles via Alpha Vantage |
| OpenAI Chat Model | GPT-4.1 reasoning engine for pattern detection |
| Simple Memory | Maintains short-term logic context |
| Tesla Klines Agent | LangChain AI agent analyzing both candle and volume |

📌 Sticky Notes Overview
- 📘 Workflow Purpose
- 🧠 Short-Term Memory Notes
- 🔍 1h/1d Data Fetch Logic
- 📉 Candlestick Pattern Types Detected
- 📊 Volume Divergence Definitions
- 🤖 GPT-4.1 Prompt Configuration

🔐 Licensing & Support

© 2025 Treasurium Capital Limited Company. Logic, pattern reasoning, and prompt structure are proprietary IP.
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Automate technical edge: detect TSLA candle reversals and volume anomalies with precision using GPT-4.1 and Alpha Vantage. Required by the Tesla Financial Market Data Analyst Tool.
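For reference, a minimal sketch of the 1-hour fetch this tool performs against Alpha Vantage's documented TIME_SERIES_INTRADAY endpoint. In the workflow itself this is an HTTP Request node with Query Auth; the env-var key handling here is an illustration, not the template's configuration.

```javascript
// Sketch of the 1-hour OHLCV fetch (Alpha Vantage intraday endpoint).
// Assumption: API key supplied via environment variable.
const API_KEY = process.env.ALPHAVANTAGE_API_KEY;

async function fetchHourlyCandles(symbol = 'TSLA') {
  const url = `https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY` +
    `&symbol=${symbol}&interval=60min&apikey=${API_KEY}`;
  const res = await fetch(url);
  const data = await res.json();
  // Alpha Vantage keys intraday candles under "Time Series (60min)"
  return data['Time Series (60min)'];
}

fetchHourlyCandles().then((candles) =>
  console.log(Object.keys(candles).length, 'candles fetched'));
```

The daily variant swaps in function=TIME_SERIES_DAILY and reads the "Time Series (Daily)" key of the response.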
by Don Jayamaha Jr
⏱️ Analyze Tesla (TSLA) short-term market structure and momentum using 6 technical indicators on the 15-minute timeframe. This AI agent tool is part of the Tesla Quant Trading AI Agent system. It is designed to detect intraday shifts in volatility, trend strength, and potential reversal signals.

⚠️ Not standalone. This agent is triggered via Execute Workflow by the Tesla Financial Market Data Analyst Tool.

🔌 Requires:
- Tesla Quant Technical Indicators Webhooks Tool
- Alpha Vantage Premium API Key

📊 What It Does

This workflow pulls the latest 20 data points for 6 key technical indicators from a webhook-powered source, then uses GPT-4.1 to interpret market momentum and structure.

Connected Indicators:
- **RSI** (Relative Strength Index)
- **MACD** (Moving Average Convergence Divergence)
- **BBANDS** (Bollinger Bands)
- **SMA** (Simple Moving Average)
- **EMA** (Exponential Moving Average)
- **ADX** (Average Directional Index)

The output is a structured JSON with:
- Market summary
- Timeframe (15m)
- Indicator values

📋 Sample Output

```json
{
  "summary": "TSLA shows fading momentum. RSI dropped below 60, MACD is flattening, and BBANDS are tightening. Expect short-term consolidation.",
  "timeframe": "15m",
  "indicators": {
    "RSI": 58.3,
    "MACD": { "macd": -0.020, "signal": -0.018, "histogram": -0.002 },
    "BBANDS": { "upper": 183.10, "lower": 176.70, "middle": 179.90, "close": 177.60 },
    "SMA": 178.20,
    "EMA": 177.70,
    "ADX": 19.6
  }
}
```

🧠 Agent Components

| Module | Role |
| --- | --- |
| Webhook Data Node | Calls /15minData endpoint for Alpha Vantage indicators |
| LangChain Agent | Parses indicator payloads and generates reasoning |
| OpenAI GPT-4.1 | Powers the AI logic to interpret technical structure |
| Memory Module | Maintains session consistency for multi-agent calls |

🛠️ Setup Instructions

1. Import the workflow into n8n and name it: Tesla_15min_Indicators_Tool
2. Configure the webhook source: install and publish the Tesla_Quant_Technical_Indicators_Webhooks_Tool, and ensure /15minData is publicly reachable (or tunnel-enabled)
3. Add credentials: Alpha Vantage API Key (HTTP Query Auth) and OpenAI GPT-4.1 (OpenAI Chat Model)
4. Link as a sub-agent: this workflow is not triggered manually. It is executed via Execute Workflow by the Tesla_Financial_Market_Data_Analyst_Tool. Pass in: message (optional) and sessionId (for short-term memory linkage)

📌 Sticky Notes Summary
- 🟢 Trigger Integration – Receives sessionId and message from parent
- 🟡 Webhook Fetcher – Pulls Alpha Vantage data from /15minData
- 🧠 GPT-4.1 Reasoning – Produces structured JSON insight
- 🔵 Session Memory – Maintains evaluation flow across tools
- 📘 Tool Description – Explains indicator use and AI output format

🔒 Licensing & Author

© 2025 Treasurium Capital Limited Company. All logic, formatting, and agent design are protected under copyright. No resale or public re-use without permission.
Created by: Don Jayamaha
Creator Profile: https://n8n.io/creators/don-the-gem-dealer/

🚀 Build faster intraday Tesla trading models using clean 15-minute indicator insights—processed by AI. Required by the Tesla Financial Market Data Analyst Tool.
by Hunyao
What it does

Pulls up to 700 Amazon reviews per product (recent and top-rated) and writes them straight into a Google Sheet tab you choose.

Perfect for
• Brand and product managers tracking sentiment
• Marketplace sellers analysing competitor feedback
• Agencies building product-review dashboards

Apps used

RapidAPI Real-Time Amazon Data, Google Sheets, n8n Form Trigger

How it works
1. Form Trigger collects brand, product and sheet info.
2. Code node extracts the ASIN and builds 70 API requests (10 pages × star ratings); see the sketch at the end of this description.
3. Split-in-batches loops through the request list, throttled by two Wait nodes.
4. HTTP Request fetches reviews from RapidAPI.
5. IF node drops empty or error responses.
6. Split Out breaks arrays into single reviews.
7. Google Sheets appends every review to the target tab.
8. Loop continues until all pages finish.

Setup
1. Fill in Brand name, Product / Model Name, Amazon Product URL, and Tab URL to insert reviews in the form.
2. Grab your X-RapidAPI-Key from RapidAPI and add it as an httpHeaderAuth credential.
3. Connect Google Sheets OAuth2 and set the spreadsheet to "Anyone with the link can edit".
4. Open Workflow Settings and set the timezone if you plan to schedule runs.
5. Hit Execute workflow or share the form link.

Credentials
• Real-Time Amazon Data (RapidAPI HTTP Header Auth)
• Google Sheets OAuth2

Limits and notes
• ~100 RapidAPI calls on the free plan. Plan quota accordingly.
• Assumes Amazon returns 10 pages per star rating; fewer pages skip silently.
• Large sheets may hit Google API write quotas.

If you have any questions about running the workflow, feel free to reach out via my YouTube channel: https://www.youtube.com/@lifeofhunyao
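A hedged sketch of the Code node in step 2. The seven star-rating filter values and the form field name are assumptions (the exact values accepted by the RapidAPI endpoint may differ); the ASIN regex covers the common /dp/ and /gp/product/ URL shapes. 10 pages × 7 filters gives the 70 requests described above.

```javascript
// Hypothetical n8n Code node: extract ASIN, build the request list.
// Field name `amazon_product_url` and star-rating values are assumptions.
const url = $json.amazon_product_url;
const asin = (url.match(/\/(?:dp|gp\/product)\/([A-Z0-9]{10})/) || [])[1];
if (!asin) throw new Error('Could not extract ASIN from product URL');

const starRatings = ['ALL', '5_STARS', '4_STARS', '3_STARS', '2_STARS', '1_STARS', 'CRITICAL'];
const requests = [];
for (const star of starRatings) {
  for (let page = 1; page <= 10; page++) {
    requests.push({ json: { asin, page, star_rating: star } });
  }
}
return requests; // 70 items, consumed one-by-one by Split In Batches
```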
by Stephan Koning
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Alternatively, you can delete the community node and use the HTTP node instead.

Most email agent templates are fundamentally broken. They're stateless—they have no long-term memory. An agent that can't remember past conversations is just a glorified auto-responder, not an intelligent system.

This workflow is Part 1 of building a truly agentic system: creating the brain. Before you can have an agent that replies intelligently, you need a knowledge base for it to draw from. This system uses a sophisticated parser to automatically read, analyze, and structure every incoming email. It then logs that intelligence into a persistent, long-term memory powered by mem0.

The Problem This Solves

Your inbox is a goldmine of client data, but it's unstructured, and manually monitoring it is a full-time job. This constant, reactive work prevents you from scaling. This workflow solves that "system problem" by creating an "always-on" engine that automatically processes, analyzes, and structures every incoming email, turning raw communication into a single source of truth for growth.

How It Works

This is an autonomous, multi-stage intelligence engine. It runs in the background, turning every new email into a valuable data asset.

1. Real-Time Ingest & Prep: The system is kicked off by the Gmail Trigger, which constantly watches your inbox. The moment a new email arrives, the workflow fires. That email is immediately passed to the Set Target Email node, which strips it down to the essentials: the sender's address, the subject, and the core text of the message (I prefer using the plain text or HTML-as-text for reliability). While this step is optional, it's good practice for keeping the data clean and orderly for the AI.

2. AI Analysis (The Brain): The prepared text is fed to the core of the system: the AI Agent. This agent, powered by the LLM of your choice (e.g., GPT-4), reads and understands the email's content. It's not just reading; it's performing analysis to:
- Extract the core message.
- Determine the sentiment (Positive, Negative, Neutral).
- Identify potential red flags.
- Pull out key topics and keywords.
The agent uses Window Buffer Memory to recall the last 10 messages within the same conversation thread, giving it the context to provide a much smarter analysis.

3. Quality Control (The Parser): We don't trust the AI's first draft blindly. The analysis is sent to an Auto-fixing Output Parser. If the initial output isn't in a perfect JSON format, a second Parsing LLM (e.g., Mistral) automatically corrects it. This is our "twist" that guarantees your data is always perfectly structured and reliable.

4. Create a Permanent Client Record: This is the most critical step. The clean, structured data is sent to mem0, where the analysis is logged against the sender's email address. This moves beyond just tracking conversations; it builds a complete, historical intelligence file on every person you communicate with, creating an invaluable, long-term asset. A sketch of what that call can look like appears at the end of this description.

Optional Use: For back-filling historical data, you can disable the Gmail Trigger and temporarily connect a Gmail "Get Many" node to the Set Target Email node to process your backlog in batches.

Setup Requirements

To deploy this system, you'll need the following:
- An active n8n instance.
- Gmail API credentials.
- An API key for your primary LLM (e.g., OpenAI).
- An API key for your parsing LLM (e.g., Mistral AI).
- An account with mem0.ai for the memory layer.
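To make the mem0 step concrete, here is a hedged sketch of logging the parsed analysis via mem0's REST API, relevant whether you use the community node or swap in an HTTP node as suggested above. The endpoint shape and Token auth header follow mem0's public docs, but verify field names against the current API reference before relying on them.

```javascript
// Hedged sketch: log the parsed email analysis to mem0 as a memory keyed
// by the sender's address. Verify endpoint/fields against mem0's docs.
async function logToMem0(senderEmail, analysis) {
  const res = await fetch('https://api.mem0.ai/v1/memories/', {
    method: 'POST',
    headers: {
      'Authorization': `Token ${process.env.MEM0_API_KEY}`, // assumption: key via env var
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      user_id: senderEmail, // one long-term memory file per correspondent
      messages: [{ role: 'user', content: JSON.stringify(analysis) }],
    }),
  });
  return res.json();
}
```

Keying memories by sender address is what turns the log into a per-person intelligence file rather than a flat conversation history.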
by Vitorio Magalhães
Auto-publish NASA APOD to LinkedIn with AI translation and hashtags

Transform NASA's daily astronomical wonders into engaging LinkedIn content automatically. This workflow fetches NASA's Astronomy Picture of the Day, translates it to Brazilian Portuguese using AI, generates strategic hashtags, and publishes everything to your LinkedIn profile with the stunning space image attached.

Who's it for

Content creators, astronomy enthusiasts, science communicators, and anyone wanting to share high-quality educational content consistently on LinkedIn. Perfect for Portuguese-speaking professionals who want to engage their network with fascinating space discoveries while building their personal brand as a science advocate.

How it works

The workflow runs on a daily schedule and handles the complete content pipeline automatically. It fetches the latest NASA APOD through the official API, including both the image and detailed explanation (a sketch of this call appears at the end of this description). The English description gets professionally translated to your selected language using Google Gemini 2.5 Flash, while maintaining scientific accuracy and terminology. Smart hashtag generation combines fixed branding tags with content-specific ones, mixing Portuguese and English for maximum reach. The final post includes the NASA image, translated description, and strategic hashtags, then gets published to your LinkedIn profile automatically.

How to set up

You'll need accounts for Google AI Studio (free), LinkedIn Developer (free), and a Telegram bot for notifications. The setup takes about 15 minutes and uses only free services and APIs.

First, create your Google AI Studio account and get an API key for the AI translation services. Then set up a LinkedIn OAuth2 application to enable posting permissions. Create a Telegram bot through BotFather and get your chat ID for notifications. Configure the Settings node with your Telegram chat ID and preferred language.

The workflow comes with all prompts and configurations ready to use. Test each component individually before activating the daily automation.

Requirements

- LinkedIn account with posting permissions
- Google AI Studio API key (free tier available)
- Telegram bot token and your chat ID
- Basic understanding of OAuth2 setup for LinkedIn
- NASA API key (optional - demo key included)

All services used have generous free tiers, making this workflow completely free to operate indefinitely.

How to customize the workflow

The centralized Settings node makes customization simple. Change the target language from Brazilian Portuguese to any other language by updating the translate_to_language variable. Modify the posting schedule in the CRON trigger to match your preferred timing. Customize the post template in the "Create Final Post Text" node to match your personal brand voice. Adjust the hashtag strategy by editing the AI prompt in the "Generate Hashtags" node.

Add additional social platforms by duplicating the LinkedIn publisher with different credentials. The AI prompts can be fine-tuned for different writing styles or specific astronomical topics. You can also extend the workflow to include additional content processing, image enhancements, or cross-posting to multiple platforms while maintaining the core NASA APOD automation.
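For reference, the APOD fetch at the heart of the pipeline boils down to a single documented API call. A minimal sketch follows; DEMO_KEY is NASA's shared demo key mentioned in the requirements, and the field selection mirrors what the pipeline consumes downstream.

```javascript
// Minimal sketch of the NASA APOD fetch at the start of the pipeline.
// DEMO_KEY works for light testing; swap in your own key for daily runs.
async function fetchApod() {
  const res = await fetch('https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY');
  const apod = await res.json();
  // Fields used downstream: title, explanation (English, fed to Gemini
  // for translation), url (image), media_type (to skip video days)
  return {
    title: apod.title,
    explanation: apod.explanation,
    imageUrl: apod.url,
    mediaType: apod.media_type,
  };
}

fetchApod().then((apod) => console.log(apod.title));
```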
by Jay Emp0
MCP Tool — Replicate (Flux) Image Generator → WordPress/Twitter

Generates images via Replicate Flux models and uploads them to WordPress (and optionally Twitter/X). Built to act as an MCP module that other agents/workflows call for on-demand image creation.

Models configured in this workflow: black-forest-labs/flux-schnell, black-forest-labs/flux-dev, black-forest-labs/flux-1.1-pro

Switch rationale:
- Lower cost 💰, broader model choice 🎯, full control of parameters ⚙️
- Leonardo API credits cannot be used in the web UI 🙅‍♂️; separate spend for API vs UI

Links:
- 📜 Prior Leonardo-based workflow: https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/
- 📰 Blog automation consuming these images: https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/

📥 Inputs

| Field | Type | Description |
| --- | --- | --- |
| prompt | string | Text description for the image |
| slug | string | Filename slug for WP media |
| model | string | One of the configured Flux models |

Example:

```json
{
  "prompt": "Joker watching a Batman movie on his laptop",
  "slug": "joker-watching-batman",
  "model": "black-forest-labs/flux-dev"
}
```

📤 Output

```json
{
  "public_image_url": "https://your-wp.com/wp-content/uploads/2025/08/img-joker-watching-batman.webp",
  "wordpress": {...},
  "twitter": {...}
}
```

🔄 Flow

1. Trigger with prompt, slug, model
2. Build model payload (quality/steps/ratio/output format)
3. Call Replicate: POST /v1/models/{model}/predictions with Prefer: wait (see the sketch at the end of this description)
4. Download the generated image URL
5. Upload to WordPress (returns public URL)
6. Optional: upload to Twitter/X
7. Return URL + metadata

🤖 MCP Use at Scale (emp0.com)

Operational pattern: I currently use this setup for my blog, where I generate 300 posts/month, each with 4 images (banner + 2 to 3 inline images) → 1,000 images/month produced by this MCP.

💡 Hybrid Cost-Optimized Setup:
- High-priority images (banners, main visuals): generated with Flux Dev on Leonardo for slightly better prompt adherence.
- Low-priority images (inline blog visuals): generated with Flux Schnell on Replicate for maximum cost efficiency.

💰 Pricing Comparison (per image)

Leonardo per-image cost uses API Basic math: $9 / 3,500 credits = $0.0025714 per credit.
- Flux Schnell (Leonardo) = 7 credits
- Flux Dev (Leonardo) = 7 credits
- Flux 1.1 Pro equivalent in Leonardo = Leonardo Phoenix (based on my experience) = 10 credits

| Flux Model | Replicate | Leonardo API* |
| --- | --- | --- |
| flux-schnell | $0.0030 (=$3/1,000) | $0.0180 (7 × $0.0025714) |
| flux-dev | $0.0250 | $0.0180 (7 × $0.0025714) |
| flux-1.1-pro / Phoenix | $0.0400 | $0.0257 (10 × $0.0025714) |

Replicate pricing: https://replicate.com/pricing
Leonardo pricing: https://leonardo.ai/pricing/
Leonardo API usage: https://docs.leonardo.ai/docs/commonly-used-api-values

📊 Monthly Cost Example (1,000 images/month)

Mix: 300 × flux-dev on Leonardo, 700 × flux-schnell on Replicate.

| Platform/Model | Images | Price per Image | Total |
| --- | --- | --- | --- |
| Leonardo flux-dev | 300 | $0.0180 | $5.40 |
| Replicate flux-schnell | 700 | $0.0030 | $2.10 |
| Total Monthly Spend | 1000 | — | $7.50 |

💵 If using Leonardo for both: 300 × $0.0180 = $5.40, plus 700 × $0.0180 = $12.60, for a total of $18.00. Savings: $10.50/month (≈58% lower) with the hybrid setup.

📌 Notes
- More Replicate models can be added in the Code1 node.
- Parameters are tuned for aspect ratio, inference steps, quality, and guidance.
- The Leonardo credit model is API-only; credits are not spendable in Leonardo's web UI.
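Step 3 of the flow maps to a single Replicate API call. A hedged sketch follows; the aspect_ratio and output_format inputs are typical Flux options rather than this workflow's exact payload, which the Code1 node builds per model.

```javascript
// Hedged sketch of the Replicate prediction call (flow step 3). Input
// fields vary per Flux model; the ones shown here are common options.
async function generateImage(model, prompt) {
  const res = await fetch(`https://api.replicate.com/v1/models/${model}/predictions`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.REPLICATE_API_TOKEN}`,
      'Content-Type': 'application/json',
      'Prefer': 'wait', // block until the prediction finishes (or times out)
    },
    body: JSON.stringify({
      input: { prompt, aspect_ratio: '16:9', output_format: 'webp' },
    }),
  });
  const prediction = await res.json();
  return prediction.output; // URL(s) of the generated image, downloaded in step 4
}

generateImage('black-forest-labs/flux-schnell',
  'Joker watching a Batman movie on his laptop').then(console.log);
```

The Prefer: wait header is what lets the workflow stay linear instead of polling the prediction status endpoint.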
by Ranjan Dailata
Who this is for?

Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:
- Recruiters and HR teams looking to assess companies quickly during talent sourcing.
- Job seekers researching potential employers and needing summarized company insights.
- Market researchers and analysts monitoring competitor or industry players.

What problem is this workflow solving?

Searching and evaluating company profiles on Indeed manually is time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights.

This workflow solves this by:
- Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
- Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
- Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

What this workflow does

This automated pipeline does the following:
1. Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data's Web Unlocker (a hedged sketch of this call appears at the end of this description).
2. Transforms the scraped content into structured JSON using n8n's built-in tools.
3. Summarizes and extracts meaningful insights using Google Gemini's large language model.
4. Forwards the summarized, formatted response to a specified webhook or app for real-time access, storage, or analysis.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible for recruiters, market researchers, entrepreneurs, and data analysts alike. Here's how you can adapt it to fit your specific use case:
- Changing the data source: replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
- Refining the LLM prompt: tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
- Routing the output to different destinations: send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
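A hedged sketch of the Web Unlocker request the workflow sends through an n8n HTTP Request node. The endpoint and body fields follow Bright Data's public docs, but the zone name shown is an assumption; match it to the zone you create in setup step 2.

```javascript
// Hedged sketch of the Bright Data Web Unlocker call (pipeline step 1).
// Verify the zone name and auth against your Bright Data account.
async function fetchIndeedPage(companyUrl) {
  const res = await fetch('https://api.brightdata.com/request', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.BRIGHTDATA_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      zone: 'web_unlocker1', // assumption: your Web Unlocker zone name
      url: companyUrl,
      format: 'raw',         // return the page body as-is
    }),
  });
  return res.text(); // raw HTML, then structured and handed to Gemini
}
```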
by Custom Workflows AI
Introduction

This workflow offers a streamlined solution for uploading multiple files to a GitHub repository simultaneously using GitHub's REST API. It addresses a significant limitation of n8n's native GitHub node, which only supports single-file uploads. By leveraging GitHub's Git Data API, this workflow creates a new Git tree containing multiple files, commits this tree, and updates the target branch—all in a single automated process.

The workflow is particularly valuable for automation scenarios that require batch file operations, such as deploying website updates, publishing documentation, or maintaining configuration files across repositories. It eliminates the need for multiple separate API calls when working with multiple files, making your automation more efficient and less prone to partial update issues. By abstracting the complexities of GitHub's Git Data API into a reusable workflow, it provides a practical solution for developers, content managers, and DevOps professionals who need to programmatically manage repository content at scale.

Who is this for?

This workflow is designed for:
- Developers and DevOps engineers who need to automate file updates in GitHub repositories
- Content managers who regularly publish multiple files to GitHub-hosted websites or documentation
- Automation specialists looking to integrate GitHub operations into larger workflows
- Teams using n8n for CI/CD processes who need to push code or configuration changes

Users should have basic familiarity with GitHub concepts (repositories, branches, commits) and should be comfortable obtaining and using GitHub Personal Access Tokens. While the workflow handles the API complexity, users should understand the fundamentals of version control to effectively utilize and customize it.

What problem is this workflow solving?

This workflow addresses several key challenges:
- Limited batch operations: n8n's native GitHub node only supports uploading one file at a time, making multi-file operations cumbersome and inefficient.
- API complexity: GitHub's Git Data API requires multiple sequential calls with interdependent data to create commits with multiple files, which is complex to implement manually.
- Automation bottlenecks: Without this workflow, automating multi-file updates would require either multiple separate API calls (risking partial updates) or custom scripting outside of n8n.
- Consistency issues: When files need to be updated together (e.g., code and corresponding documentation), this workflow ensures they're committed in a single atomic operation.

By solving these issues, the workflow enables reliable, atomic updates of multiple files, maintaining repository consistency and simplifying automation processes.

What this workflow does

Overview

This workflow uses GitHub's REST API to push multiple files to a repository in a single operation. It follows Git's internal model by:
1. Retrieving the current state of the repository
2. Creating a new tree with the files to be added or updated
3. Creating a new commit with this tree
4. Updating the branch reference to point to the new commit

Process

The steps below map one-to-one onto REST calls; a condensed sketch of the full sequence appears at the end of this description.

1. Initialization: The workflow starts with a manual trigger and sets up GitHub credentials and repository information.
2. File Content Definition: Two "Set" nodes define the content for the files to be uploaded.
3. Repository State Retrieval: The workflow fetches the latest commit SHA for the specified branch, then retrieves the base tree SHA from that commit.
4. Tree Creation: A new Git tree is created that includes both files (file1.txt and file2.txt), specifying their paths and content.
5. Commit Creation: A new commit is created with the specified commit message, referencing the new tree and the parent commit.
6. Branch Update: Finally, the branch reference is updated to point to the new commit, making the changes visible in the repository.

Setup

To use this workflow:
1. Import the workflow: Download the workflow JSON and import it into your n8n instance.
2. Create a GitHub Personal Access Token: Go to GitHub Settings → Developer Settings → Personal Access Tokens → Fine-grained tokens, and create a new token with "Contents" permission (Read and write) for your target repository.
3. Configure the workflow: Update the "Set Github Info" node with your GitHub Personal Access Token, your GitHub username, your repository name, the target branch (default is "main"), and a commit message.
4. Define file content: Modify the "File 1" and "File 2" nodes with the content you want to upload.
5. Adjust file paths if needed: In the "Create new tree" node, update the file paths if you want to change where the files are stored in the repository.
6. Save and run the workflow: Click "Test workflow" to execute the process.

How to customize this workflow to your needs

This workflow can be adapted in several ways:
- Add more files: Create additional "Set" nodes for more file content, and add more tree entries in the "Create new tree" node following the same pattern (path, mode, type, content).
- Change file locations: Modify the "path" parameters in the "Create new tree" node to place files in different directories.
- Dynamic file content: Replace the static content in the "File" nodes with data from other sources, or use previous nodes and HTTP requests to generate file content dynamically.
- Conditional file updates: Add IF nodes to determine which files should be updated based on certain conditions, and create separate branches in your workflow for different update scenarios.
- Scheduled updates: Replace the manual trigger with a Schedule node to run the workflow at specific intervals, or combine it with other triggers like Webhook or database events to push files when certain events occur.
- Error handling: Add Error Trigger nodes to handle potential API failures, and implement notification nodes to alert you of successful pushes or failures.
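The Process section above maps to five REST calls against GitHub's documented Git Data endpoints. A condensed JavaScript sketch of that sequence follows; OWNER, REPO, and the token are placeholders for the values you set in the "Set Github Info" node.

```javascript
// Condensed sketch of the five Git Data API calls the workflow chains.
const GH = 'https://api.github.com/repos/OWNER/REPO';
const HEADERS = {
  'Authorization': `Bearer ${process.env.GITHUB_TOKEN}`,
  'Accept': 'application/vnd.github+json',
  'Content-Type': 'application/json',
};
const gh = async (path, init = {}) =>
  (await fetch(`${GH}${path}`, { headers: HEADERS, ...init })).json();

async function pushFiles(branch, message, files) {
  // 1. Latest commit SHA on the branch
  const ref = await gh(`/git/ref/heads/${branch}`);
  const parentSha = ref.object.sha;
  // 2. Base tree SHA from that commit
  const parent = await gh(`/git/commits/${parentSha}`);
  // 3. New tree containing all files in one shot
  const tree = await gh('/git/trees', {
    method: 'POST',
    body: JSON.stringify({
      base_tree: parent.tree.sha,
      tree: files.map(({ path, content }) => ({
        path, content, mode: '100644', type: 'blob',
      })),
    }),
  });
  // 4. New commit pointing at the tree
  const commit = await gh('/git/commits', {
    method: 'POST',
    body: JSON.stringify({ message, tree: tree.sha, parents: [parentSha] }),
  });
  // 5. Move the branch ref — the atomic "publish" step
  return gh(`/git/refs/heads/${branch}`, {
    method: 'PATCH',
    body: JSON.stringify({ sha: commit.sha }),
  });
}

pushFiles('main', 'Add two files in one commit', [
  { path: 'file1.txt', content: 'Hello from file 1' },
  { path: 'file2.txt', content: 'Hello from file 2' },
]).then((r) => console.log('branch now at', r.object.sha));
```

Because only step 5 mutates the branch, a failure anywhere earlier leaves the repository untouched, which is what makes the multi-file update atomic.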
by Basil Irfan
🚀 LinkedIn Lead-Gen Flywheel – Apify → GPT-4o → Google Sheets → Phantombuster

What this workflow does
1. Collect audience specs – a simple web form asks for your ideal company profile.
2. Generate a laser-targeted Apollo search URL with GPT-4o (no manual filtering).
3. Scrape the matching leads via an Apify actor, which returns clean JSON (see the sketch at the end of this description).
4. Craft hyper-personalized icebreakers for each lead using GPT-4o (ultra-short, human-sounding).
5. Log everything to Google Sheets – name, LinkedIn URL, company site, summary, and the icebreaker.
6. (Optional) Auto-launch Phantombuster to fire off those connection requests at scale.

Why it matters
- Zero grunt work: audience research, scraping, copywriting, and outreach all run hands-free.
- Punchy personalization: micro-icebreakers outperform canned intros, boosting accept rates.
- Scales with you: flip a switch to go from 10 to 1,000+ connections/day.

Node rundown

| Step | Node | Key Inputs | Key Outputs |
|------|------|-----------|-------------|
| 1 | Form Trigger | Audience description | description_of_company |
| 2 | OpenAI (GPT-4o) | Audience text | SearchUrl |
| 3 | HTTP Request – Apify | SearchUrl, APIFY_TOKEN | Lead JSON |
| 4 | OpenAI (GPT-4o) | Lead JSON | Icebreaker |
| 5 | Google Sheets | Lead + Icebreaker | Row append/update |
| 6 | Aggregate | Sheet rows | Batched output |
| 7 | HTTP Request – Phantombuster | PHANTOM_KEY, AGENT_ID | Launch status |

Prerequisites
- OpenAI API key (GPT-4o access recommended)
- Apify API token with access to the actor
- Google Service Account creds shared with your target sheet
- Phantombuster API key and Agent ID for your LinkedIn connector
- Active Apollo account to open the generated search URL (only required for debugging)

Setup (5-minute sprint)
1. Import the workflow into n8n.
2. Add the required credentials in Credentials → OpenAI, Apify, Google Sheets, Phantombuster.
3. Paste your Phantombuster Agent ID into the HTTP Request node URL.
4. Publish the Form Trigger URL—this is where you (or your SDRs) describe the target audience.
5. Hit Execute Workflow once to verify data flows end-to-end.

Customization tips
- Titles & keywords: tweak the prompt in the first GPT-4o node to lock in different roles or industries.
- Icebreaker style: adjust the second GPT-4o prompt to match your brand voice.
- Data columns: map extra fields from Apify into Google Sheets as needed.
- Skip outreach: disable the Phantombuster node if you only want the leads + icebreakers.
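Step 3's Apify call can be sketched as below. ACTOR_ID is a placeholder since the template does not name the actor publicly, and the input field name depends on that actor's schema; the synchronous run-and-get-items endpoint is Apify's documented shortcut for getting scraped results back in one request.

```javascript
// Hedged sketch of step 3: run the Apify actor synchronously and receive
// the scraped leads as JSON. ACTOR_ID and the input field are assumptions.
async function scrapeLeads(searchUrl) {
  const endpoint = `https://api.apify.com/v2/acts/ACTOR_ID/run-sync-get-dataset-items` +
    `?token=${process.env.APIFY_TOKEN}`;
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: searchUrl }), // input schema depends on the actor
  });
  return res.json(); // array of lead objects, one per prospect
}
```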
by Yaron Been
Create your own intelligent Telegram bot that summarizes articles and processes commands automatically. This powerful workflow turns Telegram into your personal AI assistant, handling /help, /summary <URL>, and /img <prompt> commands with intelligent responses - perfect for teams, content creators, and anyone wanting smart automation in their messaging.

🚀 What It Does
- Smart Command Processing: Automatically recognizes and routes /help, /summary, and /img commands to appropriate AI-powered responses (a sketch of the routing logic appears at the end of this description).
- Article Summarization: Fetches any URL, extracts content, and generates professional 10-12 bullet point summaries using OpenAI.
- Image Generation: Processes image prompts and integrates with AI image generation services.
- Help System: Provides users with clear command instructions and usage examples.

🎯 Key Benefits
✅ Personal AI Assistant: Get instant article summaries in Telegram
✅ Team Productivity: Share quick content summaries with colleagues
✅ Content Research: Rapidly digest articles and web content
✅ 24/7 Availability: Bot works around the clock without maintenance
✅ Easy Commands: Simple /summary <link> format anyone can use
✅ Scalable: Handles multiple users and requests simultaneously

🏢 Perfect For

Content Teams & Researchers
- Journalists quickly summarizing news articles
- Marketing teams researching competitor content
- Students processing academic papers and articles
- Analysts digesting industry reports

Business Applications
- Team Communication: Share article insights in group chats
- Research Assistance: Quick content analysis for decision making
- Content Curation: Summarize articles for newsletters or reports
- Knowledge Sharing: Help teams stay informed efficiently

⚙️ What's Included
- Complete Bot Workflow: Ready-to-deploy Telegram bot with all commands
- AI Integration: OpenAI-powered content summarization and processing
- Smart Routing: Intelligent command recognition and response system
- Error Handling: Robust system handles invalid commands gracefully
- Extensible Design: Easy to add new commands and features

🔧 Quick Setup Requirements
- n8n Platform: Cloud or self-hosted instance
- Telegram Bot Token: Create via @BotFather (free, 5 minutes)
- OpenAI API: For content summarization (pay-per-use)
- Basic Configuration: Follow the 15-minute setup guide

📱 User Experience

Simple Commands:
- /help - Show available commands
- /summary https://example.com - Get article summary
- /img sunset over mountains - Generate image (with supported APIs)

Sample Summary Output:

📄 Article Summary:
• Company reports 40% revenue growth in Q3 2024
• New AI features driving customer acquisition
• Expansion into European markets planned for 2025
• Investment in R&D increased by 25% this quarter
• Customer satisfaction scores improved to 94%
• Three new product launches scheduled for next year
• Remote work policy made permanent post-pandemic
• Sustainability goals on track to meet 2025 targets
• Partnership with major tech firm announced
• Stock price up 15% following earnings report

🎨 Customization Options
- Command Extensions: Add custom commands for specific workflows
- Response Formatting: Customize summary style and length
- Multi-Language: Support different languages for international teams
- Integration APIs: Connect additional AI services and tools
- User Permissions: Control who can use specific commands
- Analytics: Track usage patterns and popular content

🏷️ Tags & Categories

#telegram-bot #ai-automation #content-summarization #article-processing #team-productivity #openai-integration #smart-assistant #workflow-automation #messaging-bot #content-research #ai-agent #n8n-workflow #business-automation #telegram-integration #ai-powered-bot

💡 Use Case Examples
- News Team: Quickly summarize breaking news articles for editorial meetings
- Marketing Agency: Research competitor content and industry trends efficiently
- Sales Team: Digest industry reports and share insights with prospects
- Remote Team: Keep everyone informed with summarized company updates

📈 Expected Results
- 80% faster content research and analysis
- 50% more articles processed per day vs manual reading
- 100% team accessibility through familiar Telegram interface
- 24/7 availability for global teams across time zones

🛠️ Setup & Support
- Quick Start: Deploy your bot in 15 minutes with the included guide
- Video Tutorial: Complete walkthrough available
- Template Commands: Pre-built responses and formatting
- Expert Support: Direct help from the workflow creator

📞 Get Help & Resources
- YouTube: https://www.youtube.com/@YaronBeen/videos
- 💼 Professional Support: LinkedIn: https://www.linkedin.com/in/yaronbeen/
- 📧 Direct Help: Email Yaron@nofluff.online - Response within 24 hours

Ready to build your intelligent Telegram assistant? Get this AI Telegram Bot Agent and transform your messaging app into a powerful content processing tool. Perfect for teams, researchers, and anyone who wants AI-powered assistance directly in Telegram.

Stop manually reading long articles. Start getting instant, intelligent summaries with simple commands.
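The smart command routing described above reduces to a simple prefix check. Here is a sketch in plain JavaScript of the logic the workflow's Switch/IF nodes implement; it is an illustration of the routing behavior, not the template's literal node configuration.

```javascript
// Sketch of the command-routing logic: map an incoming Telegram message
// to one of the bot's handlers. Unknown commands are handled gracefully.
function routeCommand(text) {
  if (text.startsWith('/help')) {
    return { command: 'help' };
  }
  if (text.startsWith('/summary ')) {
    return { command: 'summary', url: text.slice('/summary '.length).trim() };
  }
  if (text.startsWith('/img ')) {
    return { command: 'img', prompt: text.slice('/img '.length).trim() };
  }
  return { command: 'unknown' }; // reply with a hint to try /help
}

console.log(routeCommand('/summary https://example.com'));
// → { command: 'summary', url: 'https://example.com' }
```

Adding a new command is then just another prefix branch plus a handler, which is what makes the design easy to extend.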