by Abdul Mir
Overview

Automate your entire LinkedIn content machine — from research and image generation to scheduling and posting — with this AI-powered workflow. It pulls in past content ideas, researches new ones using Perplexity, generates a new post (with image) in your brand's voice and style, saves the output to Google Sheets, and auto-posts twice a week to LinkedIn. It's perfect for founders, creators, and marketers who want to stay consistent on LinkedIn without manually writing or designing every post.

Who's it for

- Solo founders or marketers building a LinkedIn presence
- Content creators growing their audience
- Agencies managing client content calendars
- Anyone who wants to post consistently without spending hours on content

How it works

1. Pulls old ideas from a Google Sheet
2. Schedules content creation using n8n's cron node
3. Uses Perplexity to research current topics and trends
4. Feeds the data into an AI agent (such as Claude or GPT) to generate post copy
5. Creates a branded image using a reference style and OpenAI's image model
6. Saves post content and image URL into Google Sheets
7. Twice a week, selects one ready post, downloads the image, and publishes it to LinkedIn (see the selection sketch below)

How to set up

1. Add your Google Sheet ID and column names for posts
2. Connect your OpenAI (or Claude) and Perplexity API keys
3. Upload a brand-style reference image to Google Drive
4. Configure your LinkedIn account and connect the node
5. Adjust the cron schedule for both post creation and auto-posting
6. (Optional) Edit the AI prompt to match your personal voice or niche

Requirements

- Google Drive & Sheets access
- OpenAI or Claude API key
- Perplexity API key
- LinkedIn credentials (via n8n's LinkedIn integration)

How to customize

- Change the prompt for the AI to fit your voice or audience
- Swap out Perplexity for another research method
- Adjust how often you want posts scheduled or published
- Swap LinkedIn for Twitter, Slack, or another platform
- Add Notion or Airtable as your CMS backend
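The "select one ready post" step is typically a few lines in an n8n Code node. Here is a minimal sketch, assuming hypothetical column names (`Status`, `CreatedAt`) that you would rename to match your own sheet:

```javascript
// n8n Code node sketch: pick the oldest post marked "ready" from the Google Sheets rows.
// Column names "Status" and "CreatedAt" are assumptions; rename to match your sheet.
const rows = $input.all().map(item => item.json);

const ready = rows
  .filter(row => (row.Status || '').toLowerCase() === 'ready')
  .sort((a, b) => new Date(a.CreatedAt) - new Date(b.CreatedAt));

if (ready.length === 0) {
  // Nothing to publish this run; end the branch gracefully.
  return [];
}

// Return a single item so downstream nodes post exactly one entry.
return [{ json: ready[0] }];
```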
by Arthur Dimeglio
What this workflow does

Automatically: fetches fresh news, filters out aggregators/PR wires and duplicates, writes a human-sounding LinkedIn post with GPT, downloads the article image to verify it's usable, publishes to LinkedIn (with or without media), and logs the posted titles in Firestore to avoid re-posting.

Runs on a daily schedule (cron) and supports two post variants:
• Case 1: article has a description → richer post
• Case 2: no description → short, still human and casual

⸻

How it works (high-level flow)

• Schedule Trigger (0 10,12,19,21 * * *): runs at 10:00, 12:00, 19:00, and 21:00 (server timezone).
• Firestore (Get Previous News Titles): loads previously posted titles (document asma/x20) to de-dupe.
• HTTP Request (API NEWS): calls newsapi.org with a query such as "AI Startup", a 24–48h window, searchIn=title, sorted by publishedAt.
• Code: Select Articles (see the sketch at the end of this template):
  • excludes Biztoc and common aggregators/PR wires (Techmeme, TheFly, PRNewswire, GlobeNewswire, MarketWatch press releases, Medium, Substack, Yahoo consent pages, etc.),
  • requires a valid URL + image,
  • groups by topic (normalized title + domain) and picks the best representative,
  • sorts by recency and returns up to 10 unique articles.
• IF (URL & De-dupe checks): ensures a link is present and not already posted (compares against Firestore titles).
• IF (Description Checker): branches on the presence of a description.
• LLM Agents (2 prompts): generate a casual, human LinkedIn post in English (no emojis/links/markdown, 2–3 hashtags).
• Post setup: cleans the text and passes the image URL forward.
• HTTP Request (Image Downloader): retrieves the image as a file to confirm the link works.
• LinkedIn Publisher:
  • If the image is OK → posts with media.
  • Otherwise → posts text-only.
• Time Checkers + Firestore Upserts: after a successful publish, writes the article's title to Firestore (asma/x20 fields title10, title12, title19, title21) so it won't be posted again at other times.

⸻

Credentials & prerequisites

• NewsAPI.org: API key (the free tier works to start; mind rate limits).
• LinkedIn OAuth2: connected account with permission to create posts on your profile (uses the "Person" target in the node).
• Google Firebase (Firestore): Service Account with read/write access to the asma collection. The workflow uses document ID x20.

⸻

Setup (5 minutes)

1. Import the workflow and open it in n8n.
2. In API NEWS, set your NewsAPI key in the apiKey query param.
3. In Get Previous News Titles and Firebase Article Saver [1–8], attach your Google Service Account and confirm projectId and collection=asma.
4. In LinkedIn Publisher [1–4], attach your LinkedIn OAuth credential and ensure the Person is your profile URN.
5. (Optional) Adjust the cron in Hourly trigger (server timezone).
6. (Optional) Change the search query (q=AI startup), language, or time window in API NEWS.
7. Enable the workflow.

⸻

Customization tips

• Search scope: edit q, language, from/to in API NEWS to cover your niche.
• Aggregator policy: tweak the aggregatorDomains set in the Select Articles code node.
• Post voice: modify the LLM prompt (it keeps the "human, slightly messy" tone).
• Hashtags: the prompt ends with 2–3 simple tags (#AI #Startups #Innovation) — change as you like.
• Posting times: change the cron or the downstream time-checker logic to map specific titles → time slots.
• No-image fallback: the text-only path is already handled; replace it with a placeholder image if you prefer always-with-media.

⸻

Notes & constraints

• Timezone: the Schedule Trigger uses the n8n server timezone; adjust if your LinkedIn audience is elsewhere.
• De-dupe: this template stores the last posted titles in one Firestore doc (asma/x20) under title10, title12, title19, title21. You can change the schema or keep a rolling history.
• Filtering: items missing a URL or image are skipped by design. Yahoo consent pages are also skipped.
• LLM costs: posts are short, so usage is modest, but keep an eye on your OpenAI billing.
• NewsAPI limits: free plans throttle requests; consider caching or widening the time window if you hit limits.

⸻

Troubleshooting

• Nothing posts: check the NewsAPI quota/response, then inspect the URL checker and Description Checker branches.
• Image errors: some sites block hotlinking; the image download step will fall back to text-only posting.
• Duplicates appeared: verify the Firestore upserts executed after posting and that your comparison uses the right fields.
• Wrong hours: confirm your n8n instance timezone and the cron expression.

⸻

Why this template

You get a robust "news → LinkedIn" autoposter that feels authentically human (no corporate vibes), avoids low-quality aggregators, prevents duplicates, and gracefully handles media — all with clean, modular nodes that are easy to tweak.
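For reference, the Select Articles step could look roughly like this in an n8n Code node. This is a minimal sketch under assumed NewsAPI field names (`articles` items with `title`, `url`, `urlToImage`, `publishedAt`), with a trimmed-down aggregator list:

```javascript
// n8n Code node sketch: filter, de-dupe by topic, and keep the 10 freshest articles.
// Field names follow NewsAPI's response shape; adjust if your source differs.
const aggregatorDomains = new Set([
  'biztoc.com', 'techmeme.com', 'thefly.com',
  'prnewswire.com', 'globenewswire.com', 'medium.com', 'substack.com',
]);

const articles = $input.first().json.articles || [];

const byTopic = new Map();
for (const a of articles) {
  if (!a.url || !a.urlToImage) continue;               // must have link + image
  let domain;
  try { domain = new URL(a.url).hostname.replace(/^www\./, ''); }
  catch { continue; }                                   // skip malformed URLs
  if (aggregatorDomains.has(domain)) continue;          // skip aggregators/PR wires
  const topic = a.title.toLowerCase().replace(/[^a-z0-9 ]/g, '').trim() + '|' + domain;
  const prev = byTopic.get(topic);
  if (!prev || new Date(a.publishedAt) > new Date(prev.publishedAt)) {
    byTopic.set(topic, a);                              // keep the best representative
  }
}

return [...byTopic.values()]
  .sort((a, b) => new Date(b.publishedAt) - new Date(a.publishedAt))
  .slice(0, 10)
  .map(a => ({ json: a }));
```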
by Muhammad Farooq Iqbal
Google NanoBanana Model Image Editor for Consistent AI Influencer Creation with Kie AI Image Generation & Enhancement Workflow

This n8n template demonstrates how to use Kie.ai's powerful image generation models to create and enhance images using AI, with automated story creation, image upscaling, and organized file management through Google Drive and Sheets.

Use cases include: AI-powered content creation for social media, automated story visualization with consistent characters, marketing material generation, and high-quality image enhancement workflows.

Good to know

- The workflow uses Kie.ai's google/nano-banana-edit model for image generation and nano-banana-upscale for 4x image enhancement
- Images are automatically organized in Google Drive with timestamped folders
- Progress is tracked in Google Sheets with status updates throughout the process
- The workflow includes face enhancement during upscaling for better portrait results
- All generated content is automatically saved and organized for easy access

How it works

1. Project Setup: creates a timestamped folder structure in Google Drive and initializes a Google Sheet for tracking
2. Story Generation: uses OpenAI GPT-4 to create detailed prompts for image generation based on predefined templates
3. Image Creation: sends the AI-generated prompt along with 5 reference images to Kie.ai's nano-banana-edit model
4. Status Monitoring: polls the Kie.ai API to monitor task completion, with automatic retry logic (see the polling sketch below)
5. Image Enhancement: upscales the generated image 4x using nano-banana-upscale with face enhancement
6. File Management: downloads, uploads, and organizes all generated content in the appropriate Google Drive folders
7. Progress Tracking: updates Google Sheets with status information and image URLs throughout the entire process

Key Features

- **Automated Story Creation**: AI-powered prompt generation for consistent, cinematic image creation
- **Multi-Stage Processing**: image generation followed by intelligent upscaling
- **Smart Organization**: automatic folder creation with timestamps and file management
- **Progress Tracking**: real-time status updates in Google Sheets
- **Error Handling**: built-in retry logic and failure-state management
- **Face Enhancement**: specialized enhancement for portrait images during upscaling

How to use

1. Manual Trigger: the workflow starts with a manual trigger (easily replaceable with webhooks, forms, or scheduled triggers)
2. Automatic Processing: once triggered, the entire pipeline runs automatically
3. Monitor Progress: check the Google Sheet for real-time status updates
4. Access Results: find your generated and enhanced images in the organized Google Drive folders

Requirements

- **Kie.ai account**: for AI image generation and upscaling services
- **OpenAI API**: for intelligent prompt generation (GPT-4 mini)
- **Google Drive**: for file storage and organization
- **Google Sheets**: for progress tracking and status monitoring

Customizing this workflow

This workflow is highly adaptable for various use cases:

- **Content Creation**: modify prompts for different styles (fashion, product photography, architectural visualization)
- **Batch Processing**: add loops to process multiple prompts or reference images
- **Social Media**: integrate with social platforms for automatic posting
- **E-commerce**: adapt for product visualization and marketing materials
- **Storytelling**: create sequential images for visual narratives or storyboards

The modular design makes it easy to add additional processing steps, change AI models, or integrate with other services as needed.
Workflow Components

- **Folder Management**: dynamic folder creation with timestamp naming
- **AI Integration**: OpenAI for prompts, Kie.ai for image processing
- **File Processing**: binary handling, URL management, and format conversion
- **Status Tracking**: multi-stage progress monitoring with Google Sheets
- **Error Handling**: comprehensive retry and failure management systems
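The status-monitoring step is a standard poll-until-done loop. In n8n it is usually built with a Wait node plus an IF node, but the logic is easier to see as one sketch. Note that the endpoint URL and the `state` field below are placeholders, not Kie.ai's documented API; take the real values from Kie.ai's docs:

```javascript
// Polling sketch: check a task until it completes or fails.
// KIE_STATUS_URL and the `state` field are assumptions; consult Kie.ai's
// documentation for the real endpoint and response schema.
const KIE_STATUS_URL = 'https://api.kie.ai/...'; // placeholder

async function waitForTask(taskId, apiKey, { retries = 20, delayMs = 15000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const res = await fetch(`${KIE_STATUS_URL}?taskId=${taskId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const body = await res.json();
    if (body.state === 'success') return body;        // task done; result URLs in body
    if (body.state === 'fail') throw new Error('Kie.ai task failed');
    await new Promise(r => setTimeout(r, delayMs));   // back off before retrying
  }
  throw new Error('Task did not finish within the retry budget');
}
```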
by Kirill Khatkevich
This workflow transforms raw Meta Ads data into actionable, expert-level insights. It acts as a virtual performance marketer, analyzing each creative's performance, comparing it against your historical benchmarks, and delivering clear recommendations on whether to scale, optimize, or stop the ad. By running parallel analyses with both OpenAI and Gemini, it provides a unique, dual-perspective evaluation.

This template is the perfect sequel to our "Automation of Creative Testing" workflow, but it also works powerfully on its own.

Use Case

Manually sifting through ads manager reports is tedious, and identifying true winners from early data is challenging. This workflow solves these problems by automating the entire analysis pipeline. It's designed for performance marketing teams who need to:

- Make faster, data-driven decisions on which creatives to scale.
- Get objective, AI-powered second opinions on ad performance.
- Systematically evaluate creatives against consistent, pre-defined benchmarks.
- Maintain a central log in Google Sheets with both raw metrics and qualitative AI analysis.
- Save hours spent on manual data crunching and report generation.

How it Works

The workflow is structured into three logical stages:

1. Configuration & Data Ingestion: a central ⚙️ Set parameters node holds all key variables: the data source (Meta or Sheets), campaign_id, and, most importantly, your historical performance benchmarks as a simple text block. An IF node directs the workflow to fetch data either directly from a Meta Ads campaign or from a specified Google Sheet (ideal for analyzing a curated list of ads).

2. Data Processing & AI Analysis (parallel execution): after fetching raw performance data (spend, impressions, clicks, actions), the workflow splits into three parallel branches for maximum resilience:
   - Branch 1 (Data Logging): immediately writes or updates a row in Google Sheets with the raw metrics for the creative. This ensures no data is lost, even if the AI analysis fails.
   - Branch 2 (OpenAI Analysis): prepares a CSV string of the creative's data (a sketch follows at the end of this template), sends it along with the benchmarks to an OpenAI model (e.g., GPT-4), and instructs it to return a structured JSON analysis.
   - Branch 3 (Gemini Analysis): performs the exact same process using Google's Gemini model via a LangChain agent, providing a second, independent evaluation.

3. Results Aggregation: the results from both AI models are received as structured JSON. Two final Google Sheets nodes take these results and update the original row (matching by AdID), adding the evaluation, significance, summary, and recommendation in separate columns. The final sheet contains a complete picture: raw data side by side with analyses from two different AIs.

Setup Instructions

1. Credentials:
   - Connect your Meta Ads account.
   - Connect your Google account (for Sheets).
   - Connect your OpenAI account.
   - Connect your Google Gemini (PaLM) account.

2. The ⚙️ Set parameters node: this is the central control panel. Open this first Set node and customize it:
   - source: set to "Meta" to pull from a campaign or "sheets" to read from a Google Sheet.
   - campaign_id: if source is "Meta", enter your Meta Campaign ID here.
   - benchmarks_data: this is critical. Paste your own historical performance data here as a CSV-formatted text block. The template includes an example. For best results, use an export from Ads Manager of your top-performing creatives, including key metrics.

3. Google Sheets nodes: there are three Google Sheets nodes that write data.
You need to configure all of them to point to the same spreadsheet and sheet:

   - Ad metrics (raw metrics): select your spreadsheet and sheet. Ensure "Operation" is set to Append or Update.
   - Ad data from OpenAI (OpenAI results): select the same spreadsheet/sheet. Set "Operation" to Update.
   - Ad data from Gemini (Gemini results): select the same spreadsheet/sheet. Set "Operation" to Update.

   Make sure your sheet has columns for all the data fields, e.g., AdID, FileName, spend, impressions, evaluation, summary, recommendation, evaluation G, summary G, etc.

4. Activate the workflow: set your desired frequency in the Schedule Trigger node, then save and activate the workflow.

Further Ideas & Customization

This powerful analysis engine can be extended even further:

- Add a "Decision" node: after the AI analyses are logged, add a final step that compares their recommendations. If both AIs say "scale", automatically increase the ad's budget via the Meta Ads API.
- Create summary reports: add a branch that, after all ads are processed, calculates an overall summary (e.g., "3 creatives recommended for scaling, 5 for stopping") and sends it to a Slack channel.
- Dynamic benchmarks: instead of pasting benchmarks into the Set node, create a step that reads them from a dedicated "Benchmarks" tab in your Google Sheet, making them even easier to update.
- Experiment with prompts and benchmarks: the quality of the AI analysis is highly dependent on the quality of your input. Don't be afraid to:
  - Refine the prompts in the AI Agent and Message a model nodes to better match your specific business context and KPIs.
  - Curate your benchmarks_data. Test different sets of benchmark data (e.g., "last 30 days top performers" vs. "all-time best") to see how it influences the AI's recommendations.

Finding the right combination of prompt and data is key to unlocking the most effective insights.
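The "prepare a CSV string" step in Branch 2 can be a few lines in a Code node. A minimal sketch, assuming the raw-metrics item carries fields named `adId`, `spend`, `impressions`, and `clicks` (rename them to match your Meta Ads insights output):

```javascript
// n8n Code node sketch: serialize one creative's metrics as a CSV block
// that gets embedded in the AI prompt alongside benchmarks_data.
// Field names are assumptions; align them with your Meta Ads output.
const m = $input.first().json;

const header = 'AdID,spend,impressions,clicks,ctr,cpc';
const ctr = m.impressions ? (m.clicks / m.impressions * 100).toFixed(2) : '0';
const cpc = m.clicks ? (m.spend / m.clicks).toFixed(2) : '0';
const row = [m.adId, m.spend, m.impressions, m.clicks, ctr, cpc].join(',');

return [{ json: { creative_csv: `${header}\n${row}` } }];
```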
by Trung Tran
📄 Auto Extract Contacts from Business Cards to Sheet With GPT-4o

> This smart workflow extracts names, phone numbers, emails, and more from uploaded name-card photos using AI, then logs them neatly into your Google Sheet. No typing. No mess. Just upload and go.

👤 Who's it for

- Sales & business development teams
- Recruiters & talent acquisition specialists
- Event teams collecting business cards
- Admins who manage contact databases manually

⚙️ How it works / What it does

This workflow automates the extraction of contact details from uploaded name-card (business card) images and stores them in a structured Google Sheet for easy tracking and follow-up.

Workflow steps:

1. A user uploads one or more name-card images through a web form.
2. The uploaded files are saved to a Google Drive folder for archiving.
3. A smart AI agent (with OCR and GPT capabilities) scans each image and extracts the relevant contact data into structured JSON.
4. The data is transformed, cleaned (e.g., removing + from phone numbers), and filtered (see the cleaning sketch below).
5. Valid contacts are appended to a Google Sheet for central tracking and future use.

🛠 How to set up

1. Create a form
   - Allow file uploads (JPG/PNG format).
   - Label it "Name Card Uploader" with a clear description.
2. Upload to Google Drive
   - Use the Google Drive node to store uploaded images.
3. Configure the smart agent
   - Use GPT-4o or a similar model with OCR capability.
   - Apply a structured output parser to extract contact fields such as name, phone, email, and company.
4. Transform the data
   - Use the Code node to clean and structure contact info.
   - Strip unwanted characters from phone numbers (e.g., +).
5. Filter invalid records
   - Remove entries with no meaningful contact data.
6. Append to Google Sheets
   - Use the Google Sheets node with "Append Sheet Row".
   - Map fields to columns such as Name, Phone, Email, etc.

✅ Requirements

- n8n workflow environment
- Google Drive integration (for file storage)
- Google Sheets integration (for storing contacts)
- GPT-4o or any image-capable LLM
- Clear name-card images (PNG/JPG, readable text)
- (Optional) Slack/email integration for notifications

🧩 How to customize the workflow

- **CRM Sync**: connect to platforms like HubSpot, Salesforce, or Zoho.
- **Validation Logic**: ensure records contain key fields like name or email before writing.
- **Uploader Info**: attach submitter metadata to each contact record.
- **Language Adaptation**: adjust extracted field labels/output to your preferred language.
- **Batch Upload**: handle multiple cards in a single image or multiple uploads in one go.
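Steps 4 and 5 (transform and filter) boil down to a small Code node. A minimal sketch, assuming the output parser emits `name`, `phone`, `email`, and `company` fields:

```javascript
// n8n Code node sketch: normalize phone numbers and drop empty records.
// Field names (name/phone/email/company) are assumptions; match your parser schema.
const contacts = $input.all().map(item => {
  const c = item.json;
  return {
    name: (c.name || '').trim(),
    email: (c.email || '').trim().toLowerCase(),
    // Keep digits only, dropping "+", spaces, dashes, and parentheses.
    phone: (c.phone || '').replace(/\D/g, ''),
    company: (c.company || '').trim(),
  };
});

// A record is useful only if it has a name plus at least one way to reach the person.
const valid = contacts.filter(c => c.name && (c.phone || c.email));

return valid.map(c => ({ json: c }));
```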
by Trung Tran
Automated SSL/TLS Certificate Expiry Report for AWS

> Automatically generates a weekly report of all AWS ACM certificates, including status, expiry dates, and renewal eligibility. The workflow formats the data into both Markdown (for PDF export to Slack) and HTML (for an email summary), helping teams stay on top of certificate compliance and expiration risks.

Who's it for

This workflow is designed for DevOps engineers, cloud administrators, and compliance teams who manage AWS infrastructure and need automated weekly visibility into the status of their SSL/TLS certificates in AWS Certificate Manager (ACM). It's ideal for teams that want to reduce the risk of expired certs, track renewal eligibility, and maintain reporting for audit or operational purposes.

How it works / What it does

This n8n workflow performs the following actions on a weekly schedule:

1. Trigger: runs once a week using the Weekly schedule trigger.
2. Fetch certificates: uses the Get many certificates action from AWS Certificate Manager to retrieve all certificate records.
3. Parse data: processes and reformats certificate data (dates, booleans, SANs, etc.) into a clean JSON object (see the parsing sketch below).
4. Generate reports:
   - 📄 Markdown report: the Certificate Summary Markdown Agent (OpenAI) generates a Markdown report for PDF export.
   - 🌐 HTML report: the Certificate Summary HTML Agent generates a styled HTML report for email.
5. Deliver reports:
   - Converts the Markdown to PDF and sends it to Slack as a file.
   - Sends the HTML content as a formatted email.

How to set up

1. Configure AWS credentials in n8n to allow access to AWS ACM.
2. Create a new workflow and use the following nodes in sequence:
   - Schedule Trigger: weekly (e.g., every Monday at 08:00 UTC)
   - AWS ACM → Get many certificates
   - Function node → Parse ACM Data: converts and summarizes certificate metadata
   - OpenAI Chat node (Markdown Agent) with a system/user prompt to generate Markdown
   - Configure Metadata → define the file name and MIME type (.md)
   - Create document file → converts Markdown to a document stream
   - Convert to PDF
   - Slack node → upload the PDF to a channel
   - (Optional) Add a second OpenAI Chat node for generating HTML and sending it via email
3. Connect the output:
   - Markdown report → Slack file upload
   - HTML report → email node with embedded HTML

Requirements

- 🟩 n8n instance (self-hosted or cloud)
- 🟦 AWS account with access to ACM
- 🟨 OpenAI API key (for the ChatGPT agent)
- 🟥 Slack webhook or OAuth credentials (for file upload)
- 📧 Email integration (e.g., SMTP or SendGrid)
- 📝 Permissions to write documents (Google Drive / file node)

How to customize the workflow

- **Change report frequency**: adjust the Weekly schedule trigger to daily or monthly as needed.
- **Filter certificates**: modify the function node to only include EXPIRED, IN_USE, or INELIGIBLE certs; add tags or domains to include/exclude.
- **Add visuals**: enhance the HTML version with colored rows, icons, or company branding.
- **Change delivery channels**: replace Slack with Microsoft Teams, Discord, or Telegram; send the Markdown as an email attachment instead of a PDF.
- **Integrate ticketing**: create a JIRA/GitHub issue for each certificate that is EXPIRED or INELIGIBLE.
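The Parse ACM Data step mainly normalizes ACM's raw fields. A minimal sketch, assuming each input item carries a certificate object with standard ACM DescribeCertificate fields (NotAfter, Status, RenewalEligibility, SubjectAlternativeNames); the exact shape the n8n AWS node emits may differ, so adjust the property paths:

```javascript
// n8n Function/Code node sketch: flatten ACM certificate records for reporting.
// Property paths assume DescribeCertificate field names at the top level of item.json.
const now = Date.now();
const MS_PER_DAY = 24 * 60 * 60 * 1000;

return $input.all().map(item => {
  const cert = item.json;
  const notAfter = cert.NotAfter ? new Date(cert.NotAfter) : null;
  return {
    json: {
      domain: cert.DomainName,
      status: cert.Status,                                    // e.g. ISSUED, EXPIRED
      renewalEligible: cert.RenewalEligibility === 'ELIGIBLE',
      sans: (cert.SubjectAlternativeNames || []).join(', '),
      expiresOn: notAfter ? notAfter.toISOString().slice(0, 10) : 'n/a',
      daysUntilExpiry: notAfter ? Math.floor((notAfter - now) / MS_PER_DAY) : null,
    },
  };
});
```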
by Incrementors
Wikipedia to LinkedIn AI Content Poster with Image via Bright Data

📋 Overview

Automatically scrapes Wikipedia articles, generates AI-powered LinkedIn summaries with custom images, and posts professional content to LinkedIn using Bright Data extraction and intelligent content optimization.

🚀 How It Works

The workflow follows these simple steps:

1. Article input: the user submits a Wikipedia article name through a simple form interface
2. Data extraction: Bright Data scrapes the Wikipedia article content, including the title and full text
3. AI summarization: advanced AI models (OpenAI GPT-4 or Claude) create professional LinkedIn-optimized summaries under 2,000 characters
4. Image generation: Ideogram AI creates relevant visual content based on the article summary
5. LinkedIn publishing: automatically posts the summary with the generated image to your LinkedIn profile
6. URL generation: provides a shareable LinkedIn post URL for easy access and sharing

⚡ Setup Requirements

Estimated setup time: 10-15 minutes

Prerequisites:

- n8n instance (self-hosted or cloud)
- Bright Data account with Wikipedia dataset access
- OpenAI API account (for GPT-4 access)
- Anthropic API account (for Claude access; optional)
- Ideogram AI account (for image generation)
- LinkedIn account with API access

🔧 Configuration Steps

Step 1: Import the workflow

1. Copy the provided JSON workflow file
2. In n8n: navigate to Workflows → + Add workflow → Import from JSON
3. Paste the JSON content and click Import
4. Save the workflow with a descriptive name

Step 2: Configure API credentials

🌐 Bright Data setup

1. Go to Credentials → + Add credential → Bright Data API
2. Enter your Bright Data API token
3. Replace BRIGHT_DATA_API_KEY in all HTTP request nodes
4. Test the connection to ensure access

🤖 OpenAI setup

1. Configure OpenAI credentials in n8n
2. Ensure GPT-4 model access
3. Link the credentials to the "OpenAI Chat Model" node
4. Test API connectivity

🎨 Ideogram AI setup

1. Obtain an Ideogram AI API key
2. Replace IDEOGRAM_API_KEY in the "Image Generate" node
3. Configure the image generation parameters
4. Test image generation functionality

💼 LinkedIn setup

1. Set up LinkedIn OAuth2 credentials in n8n
2. Replace LINKEDIN_PROFILE_ID with your profile ID
3. Configure posting permissions
4. Test posting functionality

Step 3: Configure workflow parameters

Update node settings:

- **Form Trigger:** customize the form title and field labels as needed
- **AI Agent:** adjust the system message for different content styles
- **Image Generate:** modify the image resolution and rendering speed settings
- **LinkedIn Post:** configure additional fields such as hashtags or mentions

Step 4: Test the workflow

Testing recommendations:

1. Start with a simple Wikipedia article (e.g., "Artificial Intelligence")
2. Monitor each node execution for errors
3. Verify the generated summary quality
4. Check image generation and LinkedIn posting
5. Confirm the final LinkedIn URL generation

🎯 Usage Instructions

Running the workflow:

1. Access the form: use the generated webhook URL to open the submission form
2. Enter the article name: type the exact Wikipedia article title you want to process
3. Submit the request: click submit to start the automated process
4. Monitor progress: check the n8n execution log for real-time progress
5. View results: the workflow returns a LinkedIn post URL upon completion

Expected output:

📝 Content summary

- Professional LinkedIn-optimized text
- Under 2,000 characters
- Engaging and informative tone
- Bullet points for readability

🖼️ Generated image

- High-quality AI-generated visual
- 1280x704 resolution
- Relevant to the article content
- Professional appearance

🔗 LinkedIn post
- Published to your LinkedIn profile
- Includes both text and image
- Shareable public URL
- Professional formatting

🛠️ Customization Options

Content personalization:

- **AI Prompts:** modify the system message in the AI Agent node to change the writing style
- **Character Limits:** adjust the summary length requirements (a trimming sketch follows at the end of this template)
- **Tone Settings:** change from professional to casual or technical
- **Hashtag Integration:** add relevant hashtags to LinkedIn posts

Visual customization:

- **Image Style:** modify the Ideogram prompts for different visual styles
- **Resolution:** change image dimensions based on LinkedIn requirements
- **Rendering Speed:** balance between speed and quality
- **Brand Elements:** include company logos or brand colors

🔍 Troubleshooting

Common issues & solutions:

⚠️ Bright Data connection issues

- Verify the API key is correctly configured
- Check dataset access permissions
- Ensure sufficient API credits
- Validate that the Wikipedia article exists

🤖 AI processing errors

- Check OpenAI API quotas and limits
- Verify model access permissions
- Review input text length and format
- Test with simpler article content

🖼️ Image generation failures

- Validate the Ideogram API key
- Check the image prompt content
- Verify API usage limits
- Test with shorter prompts

💼 LinkedIn posting issues

- Re-authenticate LinkedIn OAuth
- Check posting permissions
- Verify the profile ID configuration
- Test with shorter content

⚡ Performance & Limitations

Expected processing times:

- **Wikipedia scraping:** 30-60 seconds
- **AI summarization:** 15-30 seconds
- **Image generation:** 45-90 seconds
- **LinkedIn posting:** 10-15 seconds
- **Total workflow:** 2-4 minutes per article

Usage recommendations (best practices):

- Use well-known Wikipedia articles for better results
- Monitor API usage across all services
- Test content quality before bulk processing
- Respect LinkedIn posting frequency limits
- Keep a backup of successful configurations

📊 Use Cases

- 📚 Educational content: create engaging educational posts from Wikipedia articles on science, history, or technology topics.
- 🏢 Thought leadership: transform complex topics into accessible LinkedIn content to establish industry expertise.
- 📰 Content marketing: generate regular, informative posts to maintain an active LinkedIn presence with minimal effort.
- 🔬 Research sharing: quickly summarize and share research findings or scientific discoveries with your network.

🎉 Conclusion

This workflow provides a powerful, automated solution for creating professional LinkedIn content from Wikipedia articles. By combining web scraping, AI summarization, image generation, and social media posting, you can maintain an active and engaging LinkedIn presence with minimal manual effort.

The workflow is designed to be flexible and customizable, allowing you to adapt the content style, visual elements, and posting frequency to match your professional brand and audience preferences.

For any questions or support, please contact info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
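LinkedIn rejects over-long posts, so it is worth enforcing the 2,000-character budget in code rather than trusting the model. A minimal sketch of such a guard, trimming on a sentence boundary; the `summary` field name is an assumption:

```javascript
// n8n Code node sketch: hard-enforce the LinkedIn character budget.
// Assumes the AI output arrives in a `summary` field; rename to match your workflow.
const MAX_CHARS = 2000;
let text = $input.first().json.summary || '';

if (text.length > MAX_CHARS) {
  // Cut at the last sentence end before the limit so the post doesn't stop mid-thought.
  const slice = text.slice(0, MAX_CHARS);
  const lastStop = Math.max(
    slice.lastIndexOf('. '),
    slice.lastIndexOf('! '),
    slice.lastIndexOf('? ')
  );
  text = lastStop > 0 ? slice.slice(0, lastStop + 1) : slice;
}

return [{ json: { summary: text, length: text.length } }];
```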
by Robin Geuens
Overview

Get a weekly report on website traffic driven by large language models (LLMs) such as ChatGPT, Perplexity, and Gemini. This workflow helps you track how these tools bring visitors to your site. A weekly snapshot can guide better content and marketing decisions.

How it works

1. The trigger runs every Monday.
2. Pull the number of sessions on your website by source/medium from Google Analytics.
3. The code node uses the following regex to filter referral traffic from AI providers like ChatGPT, Perplexity, and Gemini (see the filter sketch below):

   `/^.*openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*neeva.*|.*writesonic.*|.*nimble.*|.*outrider.*|.*perplexity.*|.*google.bard.*|.*bard.google.*|.*bard.*|.*edgeservices.*|.*astastic.*|.*copy.ai.*|.*bnngpt.*|.*gemini.google.*$/i`

4. Combine the filtered sessions into one list so they can be processed by an LLM.
5. Generate a short report using the filtered data.
6. Email the report to yourself.

Setup

1. Get or connect your OpenAI API key and set up your OpenAI credentials in n8n.
2. Enable Google Analytics and Gmail API access in the Google Cloud Console. (Read more here.)
3. Set up your Google Analytics and Gmail credentials in n8n. If you're using the cloud version of n8n, you can log in with your Google account to connect them easily.
4. In the Google Analytics node, add your credentials and select the property for the website you're working with. Alternatively, use your property ID, which can be found in the Google Analytics admin panel under Property > Property Details (the property ID is shown in the top-right corner). Add this to the property field.
5. Under Metrics, select the metric you want to measure. This workflow is configured to use sessions, but you can choose others.
6. Leave the dimension as-is, since the source/medium dimension is needed to filter LLMs.
7. (Optional) To expand the list of LLMs being filtered, adjust the regex in the code node by copying one of the existing patterns and modifying it. Example: `|.*example.*|`
8. The LLM node creates a basic report. If you'd like a more detailed version, adjust the system prompt to specify the details or formatting you want.
9. Add your email address to the Gmail node so the report is delivered to your inbox.

Requirements

- OpenAI API key for report generation
- Google Analytics API enabled in Google Cloud Console
- Gmail API enabled in Google Cloud Console

Customizing this workflow

- The regex used to filter LLM referral traffic can be expanded to include specific websites.
- The system prompt in the AI node can be customized to create a more detailed or styled report.
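In context, the code node applies that regex to the rows returned by the Google Analytics node. A minimal sketch using a simplified equivalent of the template's pattern (substring matching makes the `.*` wrappers unnecessary), assuming each row exposes `sourceMedium` and `sessions` fields; GA4 output shapes vary, so check your node's actual output first:

```javascript
// n8n Code node sketch: keep only sessions whose source/medium matches an LLM referrer.
// Property names are assumptions; inspect your Google Analytics node output first.
const llmPattern = /openai|copilot|chatgpt|gemini|gpt|neeva|writesonic|nimble|outrider|perplexity|bard|edgeservices|astastic|copy\.ai|bnngpt/i;

const rows = $input.all().map(item => item.json);

const llmTraffic = rows.filter(row => llmPattern.test(row.sourceMedium || ''));

// Pass one combined list downstream so the LLM can summarize it in a single prompt.
return [{
  json: {
    llmTraffic,
    totalSessions: llmTraffic.reduce((sum, r) => sum + Number(r.sessions || 0), 0),
  },
}];
```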
by Punit
WordPress AI Content Creator

Overview

Transform a few keywords into professionally written, SEO-optimized WordPress blog posts with custom featured images. This workflow leverages AI to research topics, structure content, write engaging articles, and publish them directly to your WordPress site as drafts ready for review.

What This Workflow Does

Core features:

- **Keyword-to-Article Generation**: converts simple keywords into comprehensive, well-structured articles
- **Intelligent Content Planning**: uses AI to create logical chapter structures and content flow
- **Wikipedia Integration**: researches factual information to ensure content accuracy and depth
- **Multi-Chapter Writing**: generates coherent, contextually aware content across multiple sections
- **Custom Image Creation**: generates relevant featured images using DALL-E based on the article content
- **SEO Optimization**: creates titles, subtitles, and content optimized for search engines
- **WordPress Integration**: automatically publishes articles as drafts with proper formatting and featured images

Business value:

- **Content Scale**: produce high-quality blog posts in minutes instead of hours
- **Research Efficiency**: automatically incorporates factual information from reliable sources
- **Consistency**: maintains a professional tone and structure across all generated content
- **SEO Benefits**: creates search-engine-friendly content with proper HTML formatting
- **Cost Savings**: reduces the need for external content creation services

Prerequisites

Required accounts & credentials:

- WordPress site with the REST API enabled
- OpenAI API access (GPT-4 and DALL-E models)
- WordPress application password or JWT authentication
- Public-facing n8n instance for form access (or n8n Cloud)

Technical requirements:

- WordPress REST API v2 enabled (standard on most WordPress sites)
- WordPress user account with publishing permissions
- n8n instance with the LangChain nodes package installed

Setup Instructions

Step 1: WordPress configuration

1. Enable the REST API (usually enabled by default):
   - Check that yoursite.com/wp-json/wp/v2/ returns JSON data.
   - If not, contact your hosting provider or install a REST API plugin.
2. Create an application password:
   - In WordPress Admin: Users > Profile
   - Scroll to "Application Passwords"
   - Add a new password named "n8n Integration"
   - Copy the generated password (save it securely)
3. Get your WordPress site URL:
   - Note your full WordPress site URL (e.g., https://yourdomain.com)

Step 2: OpenAI configuration

1. Obtain an OpenAI API key:
   - Visit the OpenAI Platform
   - Create an API key with access to GPT-4 models (for content generation) and DALL-E (for image creation)
2. Add OpenAI credentials in n8n:
   - Navigate to Settings > Credentials
   - Add an "OpenAI API" credential
   - Enter your API key

Step 3: WordPress credentials in n8n

Add WordPress API credentials:

- In n8n: Settings > Credentials > "WordPress API"
- URL: your WordPress site URL
- Username: your WordPress username
- Password: the application password from Step 1

Step 4: Update workflow settings

1. Configure the Settings node:
   - Open the "Settings" node
   - Replace the wordpress_url value with your actual WordPress URL
   - Keep other settings as default or customize as needed
2. Update credential references:
   - Ensure all WordPress nodes reference your WordPress credentials
   - Verify OpenAI nodes use your OpenAI credentials

Step 5: Deploy the form (production use)

1. Activate the workflow:
   - Toggle the workflow to "Active" status
   - Note the webhook URL from the Form Trigger node
2. Test form access:
   - Copy the form URL
   - Test a form submission with sample data
   - Verify the workflow execution completes successfully

Configuration Details

Form customization

The form accepts three key inputs:
- **Keywords**: comma-separated topics for article generation
- **Number of Chapters**: 1-10 chapters for the content structure
- **Max Word Count**: total article length control

You can modify the form fields by editing the "Form" trigger node:

- Add additional input fields (category, author, publish date)
- Change field types (dropdown, checkboxes, file upload)
- Modify validation rules and requirements

AI content parameters

Article structure generation: the "Create post title and structure" node uses these parameters:

- **Model**: GPT-4-1106-preview for enhanced reasoning
- **Max Tokens**: 2048 for comprehensive structure planning
- **JSON Output**: structured data for subsequent processing

Chapter writing: the "Create chapters text" node configuration:

- **Model**: GPT-4-0125-preview for consistent writing quality
- **Context Awareness**: each chapter knows about the preceding/following content
- **Word Count Distribution**: automatically calculates per-chapter length
- **Coherence Checking**: ensures smooth transitions between sections

Image generation settings: DALL-E parameters in "Generate featured image":

- **Size**: 1792x1024 (optimized for WordPress featured images)
- **Style**: natural (photographic look)
- **Quality**: HD (higher-quality output)
- **Prompt Enhancement**: adds photography keywords for better results

Usage Instructions

Basic workflow:

1. Access the form:
   - Navigate to the form URL provided by the Form Trigger
   - Enter your desired keywords (e.g., "artificial intelligence, machine learning, automation")
   - Select the number of chapters (3-5 recommended for most topics)
   - Set the word count (1000-2000 words is typical)
2. Submit and wait:
   - Click submit to trigger the workflow
   - Processing takes 2-5 minutes depending on article length
   - Monitor the n8n execution log for progress
3. Review the generated content:
   - Check the WordPress admin for the new draft post
   - Review the article structure and content quality
   - Verify the featured image is properly attached
   - Edit as needed before publishing

Advanced usage

Custom prompts: modify the AI prompts to change:

- **Writing Style**: formal, casual, technical, conversational
- **Target Audience**: beginners, experts, general public
- **Content Focus**: how-to guides, opinion pieces, news analysis
- **SEO Strategy**: keyword density, meta descriptions, heading structure

Bulk content creation: for multiple articles:

- Create separate form submissions for each topic
- Schedule workflow executions with different keywords
- Use CSV upload to process multiple keyword sets
- Implement a queue system for high-volume processing

Expected Outputs

Article structure: generated articles include:

- **SEO-Optimized Title**: compelling, keyword-rich headline
- **Descriptive Subtitle**: supporting context for the main title
- **Introduction**: ~60 words introducing the topic
- **Chapter Sections**: logical flow with HTML formatting
- **Conclusions**: ~60 words summarizing key points
- **Featured Image**: custom DALL-E-generated visual

Content quality features:

- **Factual Accuracy**: Wikipedia integration ensures reliable information
- **Proper HTML Formatting**: bold, italic, and list elements for readability
- **Logical Flow**: chapters build upon each other coherently
- **SEO Elements**: optimized for search engine visibility
- **Professional Tone**: consistent, engaging writing style

WordPress integration (see the REST sketch at the end of this template):

- **Draft Status**: articles saved as drafts for review
- **Featured Image**: automatically uploaded and assigned
- **Proper Formatting**: HTML preserved in the WordPress editor
- **Metadata**: title and content properly structured

Troubleshooting

Common issues:

"No Article Structure Generated"

Cause: the AI couldn't create a valid structure from the keywords. Solutions:
- Use more specific, descriptive keywords
- Reduce the number of chapters requested
- Check OpenAI API quotas and usage
- Verify the keywords are in English (the default language)

"Chapter Content Missing"

Cause: individual chapter generation failed. Solutions:

- Increase max tokens in the chapter generation node
- Simplify the chapter prompts
- Check for API rate limiting
- Verify internet connectivity for the Wikipedia tool

"WordPress Publication Failed"

Cause: authentication or permission issues. Solutions:

- Verify the WordPress credentials are correct
- Check that the WordPress user has publishing permissions
- Ensure the WordPress REST API is accessible
- Test WordPress URL accessibility

"Featured Image Not Attached"

Cause: image generation or upload failure. Solutions:

- Check DALL-E API access and quotas
- Verify image upload permissions in WordPress
- Review image file size and format compatibility
- Test a manual image upload to WordPress

Performance Optimization

Large articles (2000+ words):

- Increase timeout values in HTTP request nodes
- Consider splitting very long articles into multiple posts
- Implement progress tracking for user feedback
- Add retry mechanisms for failed API calls

High-volume usage:

- Implement a queue system for multiple simultaneous requests
- Add rate limiting to respect OpenAI API limits
- Consider batch processing for efficiency
- Monitor and optimize token usage

Customization Examples

Different content types:

- Product reviews: modify prompts to include pros-and-cons sections, feature comparisons, rating systems, and purchase recommendations.
- Technical tutorials: adjust the structure for step-by-step instructions, code examples, prerequisites sections, and troubleshooting guides.
- News articles: configure for a who/what/when/where/why structure, quote integration, fact-checking emphasis, and timeline organization.

Alternative platforms: replace WordPress with another CMS:

- **Ghost**: use the Ghost API for publishing
- **Webflow**: integrate with the Webflow CMS
- **Strapi**: connect to the headless CMS
- **Medium**: publish to the Medium platform

Different AI models:

- **Claude**: replace OpenAI with Anthropic's Claude
- **Gemini**: use Google's Gemini for content generation
- **Local Models**: integrate with self-hosted AI models
- **Multiple Models**: use different models for different tasks

Enhanced features

SEO optimization: add nodes for:

- **Meta Description Generation**: AI-created descriptions
- **Tag Suggestions**: relevant WordPress tags
- **Internal Linking**: suggest related content links
- **Schema Markup**: add structured data

Content enhancement: include additional processing:

- **Plagiarism Checking**: verify content originality
- **Readability Analysis**: assess content accessibility
- **Fact Verification**: multiple-source confirmation
- **Image Optimization**: compress and optimize images

Security Considerations

API security:

- Store all credentials securely in the n8n credential system
- Use environment variables for sensitive configuration
- Regularly rotate API keys and passwords
- Monitor API usage for unusual activity

Content moderation:

- Review generated content before publishing
- Implement content filtering for inappropriate material
- Consider the legal implications of auto-generated content
- Maintain editorial oversight and fact-checking

WordPress security:

- Use application passwords instead of the main account password
- Limit WordPress user permissions to the minimum required
- Keep WordPress and plugins updated
- Monitor for unauthorized access attempts

Legal and Ethical Considerations

Content ownership:

- Understand OpenAI's terms regarding generated content
- Consider copyright implications for Wikipedia-sourced information
- Implement proper attribution where required
- Review content licensing requirements

Disclosure requirements:

- Consider disclosing AI-generated content to readers
- Follow platform-specific guidelines for automated content
- Ensure compliance with advertising and content standards
- Respect intellectual property rights

Support and Maintenance

Regular maintenance:

- Monitor OpenAI API usage and costs
- Update AI prompts based on output quality
- Review and update Wikipedia search strategies
- Optimize workflow performance based on usage patterns

Quality assurance:

- Regularly review generated content quality
- Implement feedback loops for improvement
- Test the workflow with diverse keyword sets
- Monitor the WordPress site performance impact

Updates and improvements:

- Stay updated with OpenAI model improvements
- Monitor n8n platform updates for new features
- Engage with the community for workflow enhancements
- Document custom modifications for future reference

Cost Optimization

OpenAI usage:

- Monitor token consumption patterns
- Optimize prompts for efficiency
- Consider using different models for different tasks
- Implement usage limits and budgets

Alternative approaches:

- Use local AI models for cost reduction
- Implement caching for repeated topics
- Batch similar requests for efficiency
- Consider hybrid human-AI content creation

License and Attribution

This workflow template is provided under the MIT license. Attribution to the original creator is appreciated when sharing or modifying. Generated content is subject to OpenAI's usage policies and terms of service.
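For readers wiring this up outside the built-in WordPress node, creating the draft is a single REST call. A minimal sketch using the standard WordPress REST API with an application password; the site URL, username, and password values are placeholders:

```javascript
// Sketch: create a WordPress draft via the REST API using Basic auth
// with an application password. Replace the placeholder site/user values.
const site = 'https://yourdomain.com';                 // placeholder
const auth = Buffer.from('your-username:xxxx xxxx xxxx xxxx').toString('base64');

async function createDraft(title, htmlContent, featuredMediaId) {
  const res = await fetch(`${site}/wp-json/wp/v2/posts`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${auth}`,
    },
    body: JSON.stringify({
      title,
      content: htmlContent,
      status: 'draft',                  // saved for review, not published
      featured_media: featuredMediaId,  // ID returned by the media upload endpoint
    }),
  });
  if (!res.ok) throw new Error(`WordPress returned ${res.status}`);
  return res.json();                    // includes the new post's id and link
}
```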
by Kev
Generate ready-to-publish short-form videos from text prompts using AI

An example output is available in Google Drive (linked from the original template page).

Transform simple text concepts into professional short-form videos complete with AI-generated visuals, narrator voice, background music, and dynamic text overlays - all automatically generated and ready for Instagram, TikTok, or YouTube Shorts.

This workflow demonstrates a cost-effective approach to video automation by combining AI-generated images with audio composition instead of expensive AI video generation. Processing takes 1-2 minutes and outputs professional 9:16 vertical videos optimized for social platforms. The template serves as both a showcase and a building block for larger automation systems, with sticky notes providing clear guidance for customization and extension.

Who's it for

Content creators, social media managers, and marketers who need consistent, high-quality video content without manual production work. Perfect for motivational content, storytelling videos, educational snippets, and brand campaigns.

How it works

The workflow uses a form trigger to collect the video theme, setting, and style preferences. ChatGPT generates cohesive scripts and image prompts, while Google Gemini creates themed background images and OpenAI TTS produces the narrator audio. Background music is sourced from Openverse for CC-licensed tracks. All assets are uploaded to the JsonCut API, which composes the final video with synchronized overlays, transitions, and professional audio mixing. Results are stored in NocoDB for management.

How to set up

1. JsonCut API: sign up at jsoncut.com and create an API key at app.jsoncut.com. Configure an HTTP Header Auth credential in n8n with the header name x-api-key (see the sketch below).
2. OpenAI API: set up credentials for script generation and text-to-speech.
3. Google Gemini API: configure access for Imagen 4.0 image generation.
4. NocoDB (optional): set up an instance for video storage and configure the database credentials.

Requirements

- JsonCut free account with an API key
- OpenAI API access for GPT and TTS
- Google Gemini API for image generation
- NocoDB (optional) for result storage

How to customize the workflow

This template is designed as a foundation for larger automation systems. The modular structure allows easy modification of the AI prompts for different content niches (business, wellness, education), replacement of the form trigger with RSS feeds or database triggers for automated content generation, integration with social media APIs for direct publishing, and customization of visual branding through the JsonCut configuration. The workflow can be extended for bulk processing, A/B testing multiple variations, or integration with existing content management systems. Sticky notes throughout the workflow provide detailed guidance for common customizations and scaling options.
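The JsonCut authentication pattern is plain header auth. A minimal sketch of what the n8n HTTP Request node does under the hood; the endpoint path and body shape here are placeholders, not JsonCut's documented API, so take the real route from the jsoncut.com docs:

```javascript
// Sketch: calling an API authenticated with an x-api-key header,
// matching the n8n HTTP Header Auth credential in the setup steps.
// The URL path and body shape are placeholders; see JsonCut's documentation.
const API_KEY = process.env.JSONCUT_API_KEY;

async function submitJob(jobConfig) {
  const res = await fetch('https://api.jsoncut.com/...', { // placeholder endpoint
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': API_KEY,              // header name given in the template setup
    },
    body: JSON.stringify(jobConfig),
  });
  if (!res.ok) throw new Error(`JsonCut returned ${res.status}`);
  return res.json();
}
```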
by Jan Zaiser
Your inbox is overflowing with daily newsletters: Public Affairs, ESG, Legal, Finance, you name it. You want to stay informed, but reading 10 emails every morning? Impossible. What if you could get one single digest summarizing everything that matters, automatically?

❌ No more copy-pasting text into ChatGPT
❌ No more scrolling through endless email threads
✅ Just one smart, structured daily briefing in your inbox

Who Is This For

- Public affairs teams: stay ahead of political and regulatory updates without drowning in emails.
- Executives & analysts: get daily summaries of key insights from multiple newsletters.
- Marketing, legal, or ESG departments: repurpose this workflow for your own content sources.

How It Works

1. Gmail collects all newsletters from the day (based on sender or label).
2. HTML noise and formatting are stripped automatically (see the sketch below).
3. Long texts are split into chunks and logged in Google Sheets.
4. An AI agent (Gemini or OpenAI) summarizes all content into one clean daily digest.
5. The workflow structures the summary into an HTML email and sends it to your chosen recipients.

Setup Guide

• You'll need Gmail and Google Sheets credentials.
• Add your own AI model (e.g., Gemini or OpenAI) with an API key.
• Adjust the prompt inside the "Public Affairs Consultant" node to fit your topic (e.g., Legal, Finance, ESG, Marketing).
• Customize the email subject and design inside the "Structure HTML-Mail" node.
• Optional: use Memory3 to let the AI learn your preferred tone and style over time.

Cost & Runtime

- Runs once per day.
- Typical cost: ~$0.10-0.30 per run (depending on model and input length).
- Average runtime: <2 minutes.
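Steps 2 and 3 boil down to a few lines of JavaScript in a Code node. A minimal sketch, assuming each Gmail item exposes the message body as an `html` field (adjust the name to your Gmail node's output):

```javascript
// n8n Code node sketch: strip HTML noise from newsletter bodies and
// split long text into chunks for logging and summarization.
// The `html` field name is an assumption; check your Gmail node output.
const CHUNK_SIZE = 4000; // characters per chunk; tune to your model's context window

const chunks = [];
for (const item of $input.all()) {
  const text = (item.json.html || '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop embedded CSS
    .replace(/<script[\s\S]*?<\/script>/gi, '') // drop scripts
    .replace(/<[^>]+>/g, ' ')                   // remove remaining tags
    .replace(/&nbsp;/g, ' ')
    .replace(/\s+/g, ' ')                       // collapse whitespace
    .trim();

  for (let i = 0; i < text.length; i += CHUNK_SIZE) {
    chunks.push({ json: { chunk: text.slice(i, i + CHUNK_SIZE) } });
  }
}

return chunks;
```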
by Atta
This workflow automates brand monitoring on X by analyzing both the text and the images in posts. It uses multi-modal AI to score brand relevance, filters out noise, logs important mentions in Airtable, and sends real-time alerts to a Telegram group for high-priority posts.

What it does

Traditional brand monitoring tools often miss the most authentic user content because they only track text. They can't "see" your logo in a photo or your product featured in a video without a direct keyword mention. This workflow acts as an AI agent that overcomes this blind spot. It finds mentions of your brand on X and then uses Google Gemini's multi-modal capabilities to perform a comprehensive analysis of both the text and any attached images. This allows it to understand the full context of a mention, score its relevance to your brand, and take the appropriate action, creating a powerful "visual intelligence" system.

How it works

The workflow runs on a schedule to find, analyze, and triage brand mentions.

1. Get new tweets: the workflow begins by using an Apify actor to scrape X for recent posts based on a defined set of search terms (e.g., Tesla OR $TSLA). It then filters these results to find unique mentions not already processed.
2. Check for duplicates: it cross-references each found tweet with an Airtable base to ensure it hasn't been analyzed before, preventing duplicate work.
3. Analyze post content: for each new, unique post, the workflow performs two parallel analyses using Google Gemini:
   - Analyze the photos: the AI examines the images in the post to describe the scene, identify logos or products, and determine the visual mood.
   - Analyze the text: a separate AI call analyzes the text of the post to understand its context and sentiment.
4. Final relevance check: a "Head Strategist" AI node receives the outputs from both the visual and text analyses. It synthesizes this information to assign a final brand relevance score from 1 to 10.
5. Triage and action: based on this score, the workflow automatically triages the post (see the triage sketch at the end of this template):
   - High relevance (score > 7): the post is logged in the Airtable base, and an instant, detailed alert is sent to a Telegram monitoring group.
   - Medium relevance (score 4-7): the post is quietly logged in Airtable for later strategic review.
   - Low relevance (score < 4): the post is ignored, effectively filtering out noise.

Setup Instructions

To get this workflow running, you will need to configure your Airtable base and provide credentials for Apify, Google, and Telegram.

Required credentials:

- Apify: an Apify API token to run the X scraper.
- Airtable: Airtable API credentials to connect to your base.
- Google AI: credentials for the Google AI APIs to use the Gemini models.
- Telegram: a Bot Token and the Chat ID for the channel where you want to receive high-relevance alerts.

Step-by-Step Configuration

1. Set up your Airtable base: before configuring the workflow, create a new table in your Airtable base. For the workflow to function correctly, this table must contain fields to store the analysis results.
   Create fields with the following names: postId, postURL, postText, postDateCreated, authorUsername, authorName, sentiment, relevanceScore, relevanceReasoning, mediaPhotosAnalysis, and status. Once the table is created, have your Base ID and Table ID ready to use in the Config node.

2. Edit the Config node: the majority of the setup is handled in the first Config node. Click on it and edit the following parameters in the "Expressions" tab:
   - searchTerms: replace the example with the keywords, hashtags, and accounts you want to monitor. The field supports advanced search operators for complex queries. For a full list of available parameters, see the Twitter Advanced Search documentation.
   - airtableBaseId: paste your Airtable Base ID here.
   - airtableTableId: paste your Airtable Table ID here.
   - lang: set the two-letter language code for the posts you want to find (e.g., "en" for English).
   - min_faves: set the minimum number of "favorites" a post should have to be considered.
   - tweetsToScrape: define the maximum number of posts the scraper should find in each run.
   - actorId: this is the specific Apify actor for scraping X. You can leave this as is unless you intend to use a different one.

3. Configure the Telegram node: in the final node, "Send High Relevance Posts to Monitoring Group", manually set the destination for the alerts by entering the Chat ID for your Telegram group or channel.

How to Adapt the Template

This workflow is a powerful framework that can be adapted for various monitoring needs.

- **Change the source:** replace the Apify node with a different trigger or data source. You could monitor Reddit, specific RSS feeds, or a news API for mentions.
- **Customize the AI logic:** the core of this workflow is in the AI prompts. Edit the prompts in the Google Gemini nodes to change the analysis criteria. For example, you could instruct the AI to check for specific competitor logos, analyze the sentiment of comments, or identify whether the post is from an influential account.
- **Modify the scoring:** adjust the logic in the Switch node to change the thresholds for what constitutes a high-, medium-, or low-relevance post to better fit your brand's needs.
- **Change the actions:** replace the Telegram node with a different action. Instead of sending an alert, you could:
  - Create a ticket in a customer support system like Zendesk or Jira.
  - Send a summary email to your marketing team.
  - Add the post to a content curation tool or a social media management platform.
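The triage logic maps to a Switch node in the workflow, but as plain code it is a simple threshold check. A minimal sketch, assuming the Head Strategist node outputs a numeric `relevanceScore` field:

```javascript
// Sketch of the triage thresholds behind the Switch node.
// `relevanceScore` is assumed to be the Head Strategist's 1-10 output field.
function triage(post) {
  const score = Number(post.relevanceScore);
  if (score > 7) {
    return 'alert';   // log to Airtable AND send a Telegram alert
  }
  if (score >= 4) {
    return 'log';     // quietly log to Airtable for strategic review
  }
  return 'ignore';    // below 4: treat as noise and drop
}

// Example: triage({ relevanceScore: 8 }) === 'alert'
```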