by Julian Ivanov
## How it works
This workflow automates the transformation of standard product images into professional product photography featuring human models. It uses AI to analyze product images, create tailored photography prompts, and generate high-quality enhanced versions.

## Set up steps
- You'll need an OpenAI API key and access to gpt-image-1 (verify your organization)
- Set up a Google Sheets spreadsheet with columns: Image-URL, Prompt, Output
- Create a Google Drive folder to store the generated images

## Requirements
- OpenAI API access (for image generation and analysis)
- Google Sheets and Google Drive accounts
- Basic product images (URLs) as input
- The spreadsheet must contain a column named "Image-URL" with links to the product images

This workflow automatically:
1. Reads product image URLs from your Google Sheet
2. Downloads the images for processing
3. Analyzes each image to understand what product it contains
4. Creates specialized photography prompts ensuring each product is shown with a human model
5. Generates professional product photography using OpenAI's image generation capabilities (sketched below)
6. Uploads results to Google Drive and updates your spreadsheet with links

## Extra
You can also use the included simple image generation workflow to directly create images via prompt, without a product image as input. This option lets you quickly generate images through the OpenAI API using just text prompts.
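For orientation, here is a minimal sketch of the generation step against the OpenAI Images API (Node 18+). The prompt shown is a hypothetical example; in the workflow it is produced by the image-analysis step.

```javascript
// Minimal sketch of a gpt-image-1 call; the prompt is a made-up example,
// the real one comes from the analysis step of the workflow.
const response = await fetch("https://api.openai.com/v1/images/generations", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-image-1",
    prompt: "Professional studio photo of this handbag carried by a smiling model, soft daylight",
    size: "1024x1024",
  }),
});

const { data } = await response.json();
// gpt-image-1 returns base64-encoded image data, ready to upload to Google Drive.
const imageBase64 = data[0].b64_json;
```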
by n8n Team
This workflow digests mentions of n8n on Reddit and sends them as a single email or Slack summary each week. We use OpenAI to classify whether a specific Reddit post is really about n8n or not, and then summarise it into a one-sentence bullet point.

## How it works
1. Get posts from Reddit that might be about n8n
2. Filter for the most relevant posts (posted in the last 7 days, more than 5 upvotes, and original content)
3. Check if the post is actually about n8n
4. If it is, categorise it with OpenAI

## Bear in mind
The workflow only considers the first 500 characters of each Reddit post, so if n8n is mentioned after that point, the post won't register as being about n8n.io (see the sketch below).

## Next steps
- Improve the OpenAI Summary node prompt to return cleaner summaries
- Extend to more platforms/sources: e.g. it would be really cool to monitor larger Slack communities in this way
- Do some classification on the type of user to highlight users likely to be in our ICP
- Separate out a list of data sources (Reddit, Twitter, Slack, Discord etc.), extract messages from them and have them go to a sub-workflow for classification and summarisation
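As a rough illustration of that cut-off, the check might look like this inside an n8n Code node (`selftext` is Reddit's body field; adjust to your Reddit node's actual output):

```javascript
// Sketch of the 500-character cut-off inside an n8n Code node.
// $json holds the incoming Reddit post; "selftext" is Reddit's post-body field.
const text = $json.selftext ?? "";
const snippet = text.slice(0, 500); // anything beyond this never reaches the classifier
return [{ json: { ...$json, snippet } }];
```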
by M Shehroz Sajjad
## What problem does it solve?
Manual candidate screening is time-consuming and inconsistent. This workflow automates initial interviews, providing 24/7 availability, consistent questioning, and objective assessments for every candidate.

## Who is it for?
- HR teams handling high-volume recruiting
- Small businesses without dedicated recruiters
- Companies scaling their hiring process
- Remote-first organizations needing asynchronous screening

## What this workflow does
Creates AI interviewers from job descriptions that conduct natural conversations with candidates via BeyondPresence Agents. Automatically analyzes interviews and saves structured assessments to Google Sheets.

## Setup
1. Copy the template sheet: BeyondPresence HR Interview System Template
2. Add credentials: BeyondPresence API Key, OpenAI API, Google Sheets
3. Configure the webhook in the BeyondPresence dashboard: https://[your-n8n-instance]/webhook/beyondpresence-hr-interviews
4. Paste the job description and run setup
5. Share the generated link with candidates

## How it works
1. Agent Creation: Converts the job description into a conversational AI interviewer
2. Interview Conduct: Candidates chat naturally with the AI via the shared link
3. Webhook Trigger: Completed interviews are sent to n8n
4. AI Analysis: OpenAI evaluates responses against the job requirements (a sketch follows below)
5. Results Storage: Assessments are saved to Google Sheets with scores and recommendations

## Resources
- Google Sheets Template
- BeyondPresence Documentation
- Webhook Setup Guide

## Example Use Case
A tech startup screens 200 applicants for an engineering role. It creates an AI interviewer in 2 minutes and sends the link to all candidates. Structured assessments arrive within 24 hours, identifying the top 20 candidates for human interviews. Initial screening time drops from 2 weeks to 2 days.
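The AI-analysis step could be sketched like this, assuming the standard OpenAI Chat Completions API. The webhook field names and the scoring rubric are illustrative assumptions, not BeyondPresence's actual payload:

```javascript
// Hedged sketch of the AI-analysis step. Field names on $json (the webhook
// payload) and the rubric are assumptions for illustration only.
const transcript = $json.transcript;          // assumed field: interview transcript
const jobDescription = $json.jobDescription;  // assumed field: original job description

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          'Score this interview against the job requirements. Reply as JSON: {"score": 0-100, "strengths": [], "concerns": [], "recommendation": "advance" | "reject"}',
      },
      { role: "user", content: `Job requirements:\n${jobDescription}\n\nTranscript:\n${transcript}` },
    ],
  }),
});
const assessment = JSON.parse((await res.json()).choices[0].message.content);
return [{ json: assessment }]; // row data for the Google Sheets node
```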
by Zacharia Kimotho
## What it does
This workflow scrapes the top 10 pages on the SERP and conducts an in-depth analysis of the keyword intent for each ranking page, saving the information to a Google Sheet for further analysis.

## How does this workflow work?
1. Add the keywords and country codes you want to monitor and research to a Google Sheet
2. Run the system
3. Scrape the top 10 pages
4. Analyze the intents of the top 10 results and update the Google Sheet

## Technical Setup
1. Make a copy of this G sheet
2. Add your desired keywords to the Google Sheet
3. Map the keyword and country code
4. Update the Zone name to match your zone on Bright Data
5. Run the scraper

Upon successful scraping, an intent classifier determines the intent of each ranking page and updates the G sheet.

## Setting up the SERP Scraper in Bright Data
1. On Bright Data, go to the Proxies & Scraping tab
2. Under SERP API, create a new zone
3. Give it a suitable name and description (the default is serp_api)
4. Add this to your account
5. Add your credentials as a header credential

A request through the new zone might look like the sketch below.
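This is a hedged sketch assuming Bright Data's unified request API; verify the endpoint and parameters against your account's documentation. The search URL is a hypothetical example built from a keyword and country code in the sheet.

```javascript
// Sketch of one SERP request through a Bright Data SERP API zone.
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BRIGHTDATA_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "serp_api", // must match the zone name you created above
    url: "https://www.google.com/search?q=best+crm+software&gl=us", // keyword + country from the sheet
    format: "raw",
  }),
});
const serpHtml = await res.text(); // raw SERP markup, handed to the intent classifier
```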
by Custom Workflows AI
## Introduction
The Content SEO Audit Workflow is a powerful automated solution that generates comprehensive SEO audit reports for websites. By combining the crawling capabilities of DataForSEO with the search performance metrics from Google Search Console, this workflow delivers actionable insights into content quality, technical SEO issues, and performance optimization opportunities.

The workflow crawls up to 1,000 pages of a website, analyzes various SEO factors including metadata, content quality, internal linking, and search performance, and then generates a professional, branded HTML report that can be shared directly with clients. The entire process is automated, transforming what would typically be hours of manual analysis into a streamlined workflow that produces consistent, thorough results. This workflow bridges the gap between technical SEO auditing and practical, client-ready deliverables, making it an invaluable tool for SEO professionals and digital marketing agencies.

## Who is this for?
This workflow is designed for SEO consultants, digital marketing agencies, and content strategists who need to perform comprehensive content audits for clients or their own websites. It's particularly valuable for professionals who:
- Regularly conduct SEO audits as part of their service offerings
- Need to provide branded, professional reports to clients
- Want to automate the time-consuming process of content analysis
- Require data-driven insights to inform content strategy decisions

Users should have basic familiarity with SEO concepts and metrics, as well as a basic understanding of how to set up API credentials in n8n. While no coding knowledge is required to run the workflow, users should be comfortable with configuring workflow parameters and following setup instructions.

## What problem is this workflow solving?
Content audits are essential for SEO strategy but are traditionally labor-intensive and time-consuming. This workflow addresses several key challenges:
1. Manual Data Collection: Gathering data from multiple sources (crawlers, Google Search Console, etc.) typically requires hours of work. This workflow automates the entire data collection process.
2. Inconsistent Analysis: Manual audits can suffer from inconsistency in methodology. This workflow applies the same comprehensive analysis criteria to every page, ensuring thorough and consistent results.
3. Report Generation: Creating professional, client-ready reports often requires additional design work after the analysis is complete. This workflow generates a fully branded HTML report automatically.
4. Data Integration: Correlating technical SEO issues with actual search performance metrics is difficult when working with separate tools. This workflow seamlessly integrates crawl data with Google Search Console metrics.
5. Scale Limitations: Manual audits become increasingly difficult with larger websites. This workflow can efficiently process up to 1,000 pages without additional effort.

## What this workflow does
### Overview
The Content SEO Audit Workflow crawls a specified website, analyzes its content for various SEO issues, retrieves performance data from Google Search Console, and generates a comprehensive HTML report. The workflow identifies issues in five key categories: status issues (404 errors, redirects), content quality (thin content, readability), metadata SEO (title/description issues), internal linking (orphan pages, excessive click depth), and performance (underperforming content).
The final report includes executive summaries, detailed issue breakdowns, and actionable recommendations, all branded with your company's colors and logo.

### Process
1. Initial Configuration: The workflow begins by setting parameters including the target domain, crawl limits, company information, and branding colors.
2. Website Crawling: The workflow creates a crawl task in DataForSEO and periodically checks its status until completion.
3. Data Collection: Once crawling is complete, the workflow:
   - Retrieves the raw audit data from DataForSEO
   - Extracts all URLs with status code 200 (successful pages)
   - Queries the Google Search Console API for each URL to get clicks and impressions data
   - Identifies 404 and 301 pages and retrieves their source links
4. Data Analysis: The workflow analyzes the collected data to identify issues including:
   - Technical issues: 404 errors, redirects, canonicalization problems
   - Content issues: thin content, outdated content, readability problems
   - SEO metadata issues: missing/duplicate titles and descriptions, H1 problems
   - Internal linking issues: orphan pages, excessive click depth, low internal links
   - Performance issues: underperforming pages based on GSC data
5. Report Generation: Finally, the workflow:
   - Calculates a health score based on the severity and quantity of issues
   - Generates prioritized recommendations
   - Creates a comprehensive HTML report with interactive tables and visualizations
   - Customizes the report with your company's branding
   - Provides the report as a downloadable HTML file

## Setup
To set up this workflow, follow these steps:
1. Import the workflow: Download the JSON file and import it into your n8n instance.
2. Configure DataForSEO credentials:
   - Create a DataForSEO account at https://app.dataforseo.com/api-access (they offer a free $1 credit for testing)
   - Add a new "Basic Auth" credential in n8n following the HTTP Request Authentication guide
   - Assign this credential to the "Create Task", "Check Task Status", "Get Raw Audit Data", and "Get Source URLs Data" nodes
3. Configure Google Search Console credentials:
   - Add a new "Google OAuth2 API" credential following the Google OAuth guide
   - Ensure your Google account has access to the Google Search Console property you want to analyze
   - Assign this credential to the "Query GSC API" node
4. Update the "Set Fields" node with:
   - dfs_domain: The website domain you want to audit
   - dfs_max_crawl_pages: Maximum number of pages to crawl (default: 1000)
   - dfs_enable_javascript: Whether to enable JavaScript rendering (default: false)
   - company_name: Your company name for the report branding
   - company_website: Your company website URL
   - company_logo_url: URL to your company logo
   - brand_primary_color: Your primary brand color (hex code)
   - brand_secondary_color: Your secondary brand color (hex code)
   - gsc_property_type: Set to "domain" or "url" depending on your Google Search Console property type
5. Run the workflow: Click "Start" and wait for it to complete (approximately 20 minutes for 500 pages).
6. Download the report: Once complete, download the HTML file from the "Download Report" node.

## How to customize this workflow to your needs
This workflow can be adapted in several ways to better suit your specific requirements:
1. Adjust crawl parameters: Modify the "Set Fields" node to change:
   - The maximum number of pages to crawl (dfs_max_crawl_pages). This workflow supports up to 1000 pages.
   - Whether to enable JavaScript rendering for JavaScript-heavy sites (dfs_enable_javascript)
2. Customize issue detection thresholds: In the "Build Report Structure" code node (see the sketch below for the shape of these checks), you can modify:
   - Word count thresholds for thin content detection (currently 1500 words)
   - Click depth thresholds (currently flags pages deeper than 4 clicks)
   - Title and description length parameters (currently 40-60 chars for titles, 70-155 for descriptions)
   - Readability score thresholds (currently flags Flesch-Kincaid scores below 55)
3. Modify the report design: In the "Generate HTML Report" code node, you can:
   - Adjust the HTML/CSS to change the report layout and styling
   - Add or remove sections from the report
   - Change the recommendations logic
   - Modify the health score calculation algorithm
4. Add additional data sources: You could extend the workflow by:
   - Adding PageSpeed Insights data for performance metrics
   - Incorporating backlink data from other APIs
   - Adding keyword ranking data from rank tracking APIs
5. Implement automated delivery: Add nodes after "Download Report" to:
   - Send the report directly to clients via email
   - Upload it to cloud storage
   - Create a PDF version of the report
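As a rough guide to the kind of checks the "Build Report Structure" node performs, here is a simplified sketch using the default thresholds quoted above. The field names on `page` are illustrative; the actual node's data structure differs.

```javascript
// Simplified sketch of per-page issue detection with the default thresholds.
function detectIssues(page) {
  const issues = [];
  if (page.wordCount < 1500) issues.push({ type: "thin_content", severity: "medium" });
  if (page.clickDepth > 4) issues.push({ type: "excessive_click_depth", severity: "low" });
  if (page.titleLength < 40 || page.titleLength > 60)
    issues.push({ type: "title_length", severity: "medium" });
  if (page.descriptionLength < 70 || page.descriptionLength > 155)
    issues.push({ type: "description_length", severity: "low" });
  if (page.fleschKincaid < 55) issues.push({ type: "low_readability", severity: "medium" });
  return issues;
}

// Example: a 900-word page at click depth 5 triggers two issues.
console.log(detectIssues({ wordCount: 900, clickDepth: 5, titleLength: 50, descriptionLength: 120, fleschKincaid: 60 }));
```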
by Ranjan Dailata
## Who this is for
The Brand Content Extract, Summarization & Sentiment Analysis workflow is designed for professionals and teams who need to monitor, understand, and act on public brand perception at scale. It is ideal for:
- Brand Managers looking to track how their brand is portrayed online
- Marketing Analysts seeking insights from competitor and industry content
- PR & Communications Teams evaluating media tone and potential reputation risks
- Data Scientists & AI Developers automating content intelligence pipelines
- Growth Hackers performing large-scale web listening for campaign optimization

## What problem is this workflow solving?
Manually tracking and interpreting how your brand is mentioned across blogs, news sites, or product reviews is labor-intensive and unscalable. Traditional scraping tools return raw data but lack insights like summarization and sentiment analysis. This workflow addresses:
- Scalable extraction of brand-related content using Bright Data's infrastructure
- Clean textual output for easy decision-making or alerting
- Automated summarization of verbose or multi-paragraph articles using Gemini
- Sentiment analysis of how a brand is being portrayed

## What this workflow does
1. Receives input: a brand URL for data extraction and analysis
2. Uses Bright Data's Web Unlocker to extract content from relevant sites
3. Cleans and preprocesses the scraped content for readability
4. Sends the content to Google Gemini for enriched results including cleaned content, a summary, and sentiment analysis (a sketch of this step follows below)
5. Sends the response to a target system via Webhook notification
6. Persists the response to disk

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Add a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node with the brand content URL and your Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
- **Update Source**: Update the workflow input to read from Google Sheets or Airtable for dynamically tracking multiple brands or topics.
- **AI Prompt Customization**: Tailor Gemini prompts for summary length (brief vs. detailed), detailed sentiment with a custom structured data format, or brand-specific tone detection (e.g., trust, excitement, dissatisfaction).
- **Output Destinations**: Configure the output node to send responses to various platforms, such as Slack, CRM systems, or databases.
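A minimal sketch of the Gemini enrichment step, assuming the public Generative Language API. The model name is an example (substitute whichever Gemini model you have access to), and the `cleaned` field name is an assumption:

```javascript
// Sketch of the summarize-and-classify call to Gemini.
const cleanedContent = $json.cleaned; // output of the preprocessing step (field name assumed)
const prompt =
  'Summarize the article below and classify the brand sentiment as positive, neutral, or negative. ' +
  'Reply as JSON: {"summary": "...", "sentiment": "..."}\n\n' + cleanedContent;

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${process.env.GEMINI_API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  }
);
const answer = (await res.json()).candidates[0].content.parts[0].text;
return [{ json: { enriched: answer } }]; // forwarded to the webhook and disk nodes
```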
by Mujahid Kabae
## How it works
This workflow scrapes the latest Artificial Intelligence articles from TechCrunch, then processes and classifies the content using OpenAI and LangChain nodes. The final result is saved to Google Sheets and sent as a summary to a Telegram group.

## Workflow Logic
1. Trigger: Runs daily at 6 AM Bangkok time.
2. Scraper: Extracts URLs and publish dates from TechCrunch's AI category.
3. Filter: Only continues if the article is from yesterday, to avoid duplication (see the sketch below).
4. Content Fetch: Downloads and extracts the article body text.
5. AI Agent:
   - Summarizes the article in Thai.
   - Scores it using strict journalism criteria (max 100).
   - Categorizes the news into one of 9 predefined categories.
6. Output:
   - Saves all structured data to Google Sheets.
   - Sends a summary to a Telegram group.

## Set up steps
🕒 Estimated setup time: 10–15 minutes
1. Connect your credentials: Google Sheets (OAuth2), Telegram, OpenAI account (via LangChain model).
2. Update the Telegram chatId and Google Sheets documentId/sheetName values.
3. Deploy and activate the workflow. It runs daily without manual intervention.
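The "published yesterday" filter could be expressed like this in an n8n Code node. The article's date field name is an assumption; substitute whatever your scraper node outputs.

```javascript
// Sketch of the yesterday-only filter in Bangkok time.
const tz = "Asia/Bangkok";
const toDateString = (d) => d.toLocaleDateString("en-CA", { timeZone: tz }); // YYYY-MM-DD

const yesterday = new Date(Date.now() - 24 * 60 * 60 * 1000);
const published = new Date($json.publishedAt); // field name assumed

// Keep the item only if its calendar date (Bangkok time) is yesterday's.
const isFromYesterday = toDateString(published) === toDateString(yesterday);
return isFromYesterday ? [{ json: $json }] : [];
```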
by Giannis Kotsakiachidis
🏦 GoCardless ⇄ Maybe Finance — Automatic Multi-Bank Sync & Weekly Overview 💸

## Who’s it for 🤔
Freelancers, founders, households, and side-hustlers who work with several bank accounts but want one, always-up-to-date budget inside Maybe Finance—no more CSV exports or copy-paste.

## How it works / What it does ⚙️
1. Schedule Trigger (cron) fires every Monday 📅 (switch to Manual Trigger while testing)
2. Get access token — fresh 24 h GoCardless token 🔑 (see the sketch below)
3. Fetch transactions for each account: Revolut Pro, Revolut Personal, ABN AMRO (add extra HTTP Request nodes for any other GoCardless-supported banks)
4. Extract booked — keep only settled items 🗂️
5. Set transactions … — map every record to Maybe Finance’s schema 📝
6. Merge all arrays into one payload 🔄
7. Create transactions to Maybe — POSTs each item via API 🚀
8. Resend Email — sends you a “Weekly transactions overview” 📧

All done in a single run — your Maybe dashboard is refreshed and you get an inbox alert.

## How to set up 🛠️
1. Import the template into n8n (cloud or self-hosted).
2. Create credentials: GoCardless secret_id & secret_key, Maybe Finance API key, and (optional) Resend API key for email notifications.
3. One-time GoCardless config (run the blocks on the left):
   - /token/new/ → obtain token
   - /institutions → find institution IDs
   - /agreements/enduser/ → create agreements
   - /requisitions/ → get the consent URL & finish bank login
   - /requisitions/{id} → copy the GoCardless account_ids
4. Create the same accounts in Maybe Finance, run the HTTP GET request in the purple frame, and copy their account_ids.
5. Open each Set transactions … node and paste the correct Maybe account_id.
6. Adjust the Schedule Trigger (e.g. daily, monthly).
7. Save & activate 🎉

## Requirements 📋
- n8n 1.33+
- GoCardless app (secret ID & key, live or sandbox)
- Maybe Finance account & API key
- (Optional) Resend account for email

## How to customize ✨
- **Include pending transactions**: change the Item Lists filter.
- **Add more banks**: duplicate the “Get … transactions” → “Extract booked” → “Set transactions” path and plug its output into the Merge node.
- **Different interval**: edit the cron rule in the Schedule Trigger.
- **Disable emails**: just remove or deactivate the Resend node.
- **Send alerts to Slack / Teams**: branch after the Merge node and add a chat node.

Happy budgeting! 💰
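The daily token step corresponds to the /token/new/ call listed above. A minimal sketch against the GoCardless Bank Account Data API (check the endpoint against current GoCardless docs):

```javascript
// Sketch of fetching a fresh 24h access token from GoCardless Bank Account Data.
const res = await fetch("https://bankaccountdata.gocardless.com/api/v2/token/new/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    secret_id: process.env.GC_SECRET_ID,
    secret_key: process.env.GC_SECRET_KEY,
  }),
});
const { access } = await res.json(); // bearer token used by the per-account transaction fetches
```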
by Leandro Melo
Keep your Hostinger VPS servers secure with automated backups! This n8n (self-hosted) workflow is designed to create daily snapshots and send server metrics effortlessly, ensuring you always have an up-to-date recovery copy.

## Key Features
- ✅ Automated Snapshots: Daily execution with zero manual intervention.
- ✅ Smart Replacement: Hostinger allows only 1 snapshot per VPS—the workflow automatically replaces the previous one.
- ✅ Notifications: Alerts via WhatsApp (Evolution API) or other configurable channels for execution confirmation.

## Quick Setup
Prerequisites:
1. Install the community nodes n8n-nodes-hostinger-api and n8n-nodes-evolution-api in your n8n instance.
2. Generate a Hostinger API key in their dashboard: hpanel.hostinger.com/profile/api.

Workflow configuration:
1. Add the Hostinger API credential in the first node and reuse it across the workflow.
2. Customize the schedule (e.g., daily at 2 AM) and the notification method (Evolution API for WhatsApp, email, etc.).

Important note: Hostinger overwrites the previous snapshot with each new execution, keeping only the latest version.

## VPS metrics available (sent in messages)
- 🔹 Status: snapshot status
- 🔹 Date: snapshot date and time
- 🔹 Server: server name
- 🔹 IP: external server IP

⚙️ Metrics:
- 🔹 Number of vCPUs
- 🔹 RAM usage / available
- 🔹 Hard disk usage / available
- 🔹 Operating system and version
- 🔹 Uptime (days, hours)

A notification message could be assembled from these fields as in the sketch below.
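This is an illustrative n8n Code-node sketch of composing the WhatsApp text; the field names on `$json` are assumptions about what the Hostinger API node returns, not its documented schema.

```javascript
// Sketch of building the notification text from snapshot and VPS metrics.
const m = $json; // merged snapshot + metrics item (field names assumed)
const message = [
  `✅ Snapshot ${m.snapshotStatus} — ${m.snapshotDate}`,
  `Server: ${m.serverName} (${m.externalIp})`,
  `⚙️ ${m.vcpus} vCPU | RAM ${m.ramUsed}/${m.ramTotal} | Disk ${m.diskUsed}/${m.diskTotal}`,
  `${m.os} ${m.osVersion} | Uptime: ${m.uptime}`,
].join("\n");
return [{ json: { message } }]; // passed to the Evolution API (WhatsApp) node
```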
by Solomon
Learn how to build an MCP Server and Client in n8n with official nodes.

> ⚠ Requires n8n version 1.88.0 or higher.

In this example, we use Google Calendar and custom functions as two separate MCP Servers, demonstrating how to integrate both native and custom tools.

## How it works
- The AI Agent connects to two MCP Servers.
- Each MCP Trigger (Server) generates a URL exposing its tools.
- This URL is used by an MCP Client linked to the AI Agent.

Whenever you make changes to the tools, there’s no need to modify the MCP Client. It automatically keeps the AI Agent informed on how to use each tool, even if you change them over time. That’s the power of MCP 🙌

## Who is this template for
Anyone looking to use MCP with their AI Agents.

## How to set up
Instructions are included within the workflow itself.

Check out my other templates 👉 https://n8n.io/creators/solomon/
by Nadia Privalikhina
This n8n template offers a free and automated way to convert images from a Google Drive folder into a single PDF document. It uses Google Slides as an intermediary, allowing you to control the final PDF's page size and orientation.

If you're looking for a no-cost solution to batch convert images to PDF and need flexibility over the output dimensions (like A4, landscape, or portrait), this template is for you! It's especially handy for creating photo albums, visual reports, or simple portfolios directly from your Google Drive.

## How it works
1. The workflow first copies a Google Slides template you specify. The page setup of this template (e.g., A4 Portrait) dictates your final PDF's dimensions.
2. It then retrieves all images from a designated Google Drive folder and sorts them by creation date.
3. Each image is added to a new slide in the copied presentation.
4. Finally, the entire Google Slides presentation is converted into a PDF and saved back to your Google Drive.

## How to use
1. Connect your Google Drive and Google Slides accounts in the relevant nodes.
2. In the "Set Pdf File Name" node, define the name for your output PDF.
3. In the "CopyPdfTemplate" node:
   - Select your Google Slides template file (this sets the PDF page size/orientation).
   - Choose the Google Drive folder containing your source images.
4. Ensure your images are in the specified folder. For best results, images should have an aspect ratio similar to your chosen Slides template.
5. Run the workflow to generate your PDF by clicking 'Test Workflow'.

## Requirements
- Google Drive account
- Google Slides account
- Google Slides template stored on your Google Drive

## Customising this workflow
- Adjust the "Filter: Only Images" node if you use image formats other than PNG (e.g., image/jpeg for JPGs); see the sketch below.
- Modify the image sorting logic in the "Sort by Created Date" node if needed.
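For reference, the filter and sort steps amount to something like this in an n8n Code node (`mimeType` and `createdTime` are standard Google Drive API fields):

```javascript
// Sketch of filtering the Drive file list to images and sorting by creation date.
const images = items
  .filter((item) => item.json.mimeType === "image/png") // extend with "image/jpeg" for JPGs
  .sort((a, b) => new Date(a.json.createdTime) - new Date(b.json.createdTime));
return images; // one slide will be created per remaining image
```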
by Tomas Lubertino
This template monitors a Google Drive folder, converts PDF documents into clean text chunks with Unstructured, generates OpenAI embeddings, and upserts vectors into Pinecone. It’s a practical, production-ready starting point for Retrieval-Augmented Generation (RAG) that you can plug into a chatbot, semantic search, or internal knowledge tools.

## How it works
1. A Google Drive Trigger detects new files in a selected folder and downloads them.
2. The files are sent to Unstructured, where they are split into smaller pieces (chunks).
3. The chunks are prepared and sent to OpenAI, where they are converted into vectors (embeddings).
4. The embeddings are recombined with their original data and the payload is prepared for upsert into the Pinecone index (see the sketch below).

## Set up steps
1. In Pinecone, create an index with 1536 dimensions and configure it for text-embedding-3-small.
2. Copy the host URL and paste it into the 'Pinecone Upsert' node. It should look something like this: https://{your-index-name}.pinecone.io/vectors/upsert.
3. Add Google Drive, OpenAI and Pinecone credentials in n8n.
4. Point the trigger to your ingest folder (you can use this article for a demo).
5. Click the 'Open chat' button and enter the following: Which Git provider do the authors use?
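Here is a hedged sketch of one embed-and-upsert round trip, matching the index settings above (1536 dimensions, text-embedding-3-small). The chunk text and vector id are placeholders; verify the Pinecone request shape against your index's API reference.

```javascript
// Sketch of embedding one chunk and upserting it into Pinecone.
const chunkText = "Example chunk of text produced by Unstructured."; // placeholder

const embedRes = await fetch("https://api.openai.com/v1/embeddings", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ model: "text-embedding-3-small", input: chunkText }),
});
const vector = (await embedRes.json()).data[0].embedding; // 1536 floats

// PINECONE_INDEX_HOST is the host URL from step 2, without the /vectors/upsert path.
await fetch(`https://${process.env.PINECONE_INDEX_HOST}/vectors/upsert`, {
  method: "POST",
  headers: { "Api-Key": process.env.PINECONE_API_KEY, "Content-Type": "application/json" },
  body: JSON.stringify({
    vectors: [{ id: "doc-1-chunk-0", values: vector, metadata: { text: chunkText } }],
  }),
});
```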