by ist00dent
This n8n template empowers you to instantly summarize long pieces of text by sending a simple webhook request. By integrating with ApyHub's summarization API, you can distil complex articles, reports, or messages into concise summaries, significantly boosting efficiency across various domains.

## How it works

- **Receive Content Webhook:** This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing:
  - content: the long text you want to summarize.
  - summary_length (optional): the desired length of the summary (e.g., 'short', 'medium', 'long'). Defaults to 'medium'.
  It also expects a header containing your apy-token for the ApyHub API.
- **Start Summarization Job:** This node sends a POST request to ApyHub's summarization endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize). It passes the content and summary_length from the webhook body, along with your apy-token from the headers. ApyHub processes the text asynchronously, and this node immediately returns a job_id.
- **Get Summarization Result:** Since ApyHub's summarization is an asynchronous process, this node is crucial. It polls ApyHub's job status endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize/job/status/{{job_id}}) using the job_id obtained from the previous step. It keeps checking the status until the summarization is finished, at which point it retrieves the final summarized text.
- **Respond with Summarized Content:** This node sends the final, distilled summary back to the service that initiated the webhook.

## Who is it for?

This workflow is extremely useful for:

- **Content Creators & Marketers:** Quickly summarize articles for social media snippets, email newsletters, or blog post intros.
- **Researchers & Students:** Efficiently get the gist of academic papers, reports, or long documents without reading every word.
- **Customer Support & Sales Teams:** Summarize customer inquiries, long email chains, or call transcripts to quickly understand key issues or discussion points.
- **News Aggregators & Media Monitoring:** Automatically generate summaries of news articles from various sources for quick consumption.
- **Business Professionals:** Condense lengthy reports, meeting minutes, or project updates into digestible summaries for busy stakeholders.
- **Legal & Compliance:** Summarize legal documents or regulatory texts to highlight critical clauses or changes.
- **Anyone Dealing with Information Overload:** Save time and extract key information from overwhelming amounts of text.

## Data Structure

When you trigger the webhook, send a POST request with a JSON body and an apy-token in the headers:

```json
{
  "content": "Your very long text goes here. This could be an article, a report, a transcript, or any other textual content you want to summarize. The longer the text, the more valuable summarization becomes!",
  "summary_length": "medium"
}
```

The summary_length field is optional: "short", "medium", or "long".

Headers:

- apy-token: YOUR_APYHUB_API_KEY

Note: You'll need to obtain an API key from ApyHub to use their API services. They typically offer a free tier for testing.

The workflow returns a JSON response similar to this (the summary content will vary based on input; result_file_id is a file ID ApyHub may provide for larger results):

```json
{
  "summary": "Max Verstappen believes the Las Vegas Grand Prix is '99% show and 1% sporting event', not looking forward to the razzmatazz. Other drivers, like Fernando Alonso, were more equivocal about the hype, acknowledging the investment and spectacle. Lewis Hamilton praised the city's energy but emphasized it's 'a business, ultimately', believing there will still be good racing.",
  "status": "finished",
  "result_file_id": "..."
}
```

## Setup Instructions

1. **Get an ApyHub API Key:** Go to https://apyhub.com/ and sign up to get your API key.
2. **Import Workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure Webhook Path:** Double-click the Receive Content Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /summarize-content).
4. **Activate Workflow:** Save and activate the workflow.

## Tips

This content summarizer is a powerful component. Here's how to supercharge it and make it an indispensable part of your automation arsenal:

- **Integrate with Document/File Storage:**
  - Google Drive/Dropbox/OneDrive: Automatically summarize documents uploaded to these services. Add a Watch New Files trigger (if available for your service) or a Cron node to regularly check for new files. Then read the file content, pass it to this summarizer, and save the summary back to a designated folder or as a comment on the original file.
  - CRM/CMS Systems: Pull long notes, customer interactions, or article drafts from your CRM/CMS, summarize them, and update the records with the concise version.
- **Email Processing & Triage:** Use an Email node to trigger the workflow when new emails arrive. Extract the email body, summarize it, and then:
  - Send a shortened summary as a notification to Slack or Telegram.
  - Add the summary to a task management tool (e.g., Trello, Asana) for quicker triaging.
  - Create a summary for an email digest.
- **Slack/Discord Bot Integration:** Create a Slack/Discord command (using a custom webhook or a dedicated Slack/Discord node) where users can paste long text. The bot then sends the summarized version back to the channel.
- **Dynamic Summary Length & Options:** Allow the user to specify summary_length (short, medium, long) in the webhook body, as already implemented. Explore ApyHub's documentation for more parameters (if any) and pass them dynamically.
- **Error Handling & User Feedback:** Add an IF node after Get Summarization Result to check for status: 'failed' or error messages. If an error occurs, send a helpful message back to the webhook caller or an internal alert. For very long texts that might exceed API limits, add a Function node to truncate the input content if it's too long, and notify the user.
- **Multi-language Support (if ApyHub offers it):** If ApyHub supports summarization in multiple languages, extend the webhook to accept a language parameter and pass it to the API.
- **Web Scraping & Article Summaries:** Combine this with an HTTP Request node to scrape content from a web page (e.g., a news article). Then pass the extracted article text to this summarizer to get quick insights.
- **Data Storage & Archiving:** Store the original content alongside its summary in a database (e.g., PostgreSQL, MongoDB) or a simple spreadsheet (Google Sheets, Airtable). This creates a searchable, summarized archive of your content.
- **Automated Report Generation:** If you receive daily/weekly reports, use this workflow to summarize key sections, then compile these summaries into a concise digest or dashboard using a Merge node and send it out automatically.
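To test the finished workflow end to end, a minimal client call could look like the sketch below. The host, port, and webhook path are assumptions that depend on your n8n instance and the path you configured; only the body fields and the apy-token header come from the template.

```javascript
// Minimal sketch of a client calling the summarizer webhook. The URL is an
// assumption -- replace host, port, and path with your n8n instance's values.
const response = await fetch('http://localhost:5678/webhook/summarize-content', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'apy-token': process.env.APYHUB_API_KEY, // your ApyHub API key
  },
  body: JSON.stringify({
    content: 'Your very long text goes here...',
    summary_length: 'short', // optional: 'short', 'medium', or 'long'
  }),
});

const { summary } = await response.json();
console.log(summary);
```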
by Jimleuk
This n8n template demonstrates how to use AI to compose or "stitch" separate images together, generating a new image that retains the source assets and a consistent style. Use cases are many: try producing storyboard scenes with consistent characters, marketing material with existing product assets, or virtual try-on of different articles of fashion!

## Good to know

- At time of writing, each image generated costs $0.039 USD. See Gemini Pricing for updated info.
- The model used in this workflow is geo-restricted! If you get a "model not found" error, it may not be available in your country or region.

## How it works

- We import the required assets from our cloud storage using the HTTP Request node.
- The images are converted to base64 strings and aggregated so they can be passed to the AI model.
- Gemini's image generation model takes all 3 images plus a prompt that we define. The prompt instructs the model on how to compose the final image.
- Gemini generates a new image built from the original 3 assets. Consistency with the source images is very high, with little sign of hallucination!
- Gemini's output is base64, so a "Convert to File" node converts the data to binary.
- The final binary image is uploaded to Google Drive to complete the demonstration.

## How to use

- The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook or even a form.
- Technically, you should be able to compose even more images, but of course the generation will take longer and cost more.

## Requirements

- Gemini account for LLM and image generation
- Google Drive for upload

## Customising this workflow

AI image editing suits many use cases. Try a popular one such as virtual try-on for fashion, or applying branding to existing image assets.
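For reference, here is a sketch of the kind of multi-image request the workflow assembles for Gemini. The model name, endpoint version, and response-modalities flag are assumptions based on Google's published image-output examples; verify them against the current Gemini docs, and note the geo-restriction above. The imageOneBase64 (etc.) variables stand in for the base64 strings produced earlier in the workflow.

```javascript
// Sketch of a multi-image generateContent request; model name is an assumption.
const model = 'gemini-2.0-flash-preview-image-generation';
const url = `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${process.env.GEMINI_API_KEY}`;

const body = {
  contents: [{
    parts: [
      { text: 'Compose one scene that combines the character, product, and background below, keeping each asset recognizable.' },
      { inlineData: { mimeType: 'image/png', data: imageOneBase64 } },   // asset 1
      { inlineData: { mimeType: 'image/png', data: imageTwoBase64 } },   // asset 2
      { inlineData: { mimeType: 'image/png', data: imageThreeBase64 } }, // asset 3
    ],
  }],
  generationConfig: { responseModalities: ['TEXT', 'IMAGE'] },
};

const res = await fetch(url, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
});
const data = await res.json();
// The generated image comes back base64-encoded inside
// data.candidates[0].content.parts -- hence the "Convert to File" node.
```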
by Ranjan Dailata
## Who this is for?

Google SERP Tracker + Trends and Recommendations is an AI-powered n8n workflow that extracts Google search results via Bright Data, parses them into structured JSON using Google Gemini, and generates actionable recommendations and search trends. It outputs CSV reports and sends real-time webhook notifications.

This workflow is ideal for:

- **SEO Agencies** needing automated rank & trend tracking
- **Growth Marketers** seeking daily/weekly search-based insights
- **Product Teams** monitoring brand or competitor visibility
- **Market Researchers** performing search behavior analysis
- **No-code Builders** automating search intelligence workflows

## What problem is this workflow solving?

Traditional tracking of search engine rankings and search trends is often fragmented and manual. Analyzing SERP changes and trends requires:

- Manual extraction or unstable scrapers
- Unstructured, cluttered HTML data
- Lack of actionable insights or recommendations

This workflow solves the problem by:

- Automating real-time Google SERP data extraction using Bright Data
- Structuring unstructured search data using the Google Gemini LLM
- Generating actionable recommendations and trends
- Exporting both CSV reports automatically to disk for downstream use
- Notifying external systems via webhook

## What this workflow does

1. Accepts a search input, zone name, and webhook notification URL
2. Uses Bright Data to extract Google search results (a request sketch appears at the end of this description)
3. Uses the Google Gemini LLM to parse the SERP data into structured JSON
4. Loops over the structured results to extract recommendations and trends
5. Saves both as .csv files, for example:
   - Google_SERP_Recommendations_Response_2025-06-10T23-01-50-650Z.csv
   - Google_SERP_Trends_Response_2025-06-10T23-01-38-915Z.csv
6. Sends a webhook notification with the summary or file reference

## LLM Usage

Google Gemini handles:

- Parsing Google Search HTML into structured JSON
- Summarizing recommendation data
- Deriving trends from the extracted SERP metadata

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set input fields with the search criteria, Bright Data zone name, and webhook notification URL.

## How to customize this workflow to your needs

- **Input Customization:** Set your target keyword/phrase in the search field, and add your webhook_notification_url for external triggers or notifications.
- **SERP Source:** You can extend the Bright Data search logic to include other engines like Bing or DuckDuckGo.
- **Output Format:** Edit the .csv structure in the Convert to File nodes if you want to include/exclude specific columns.
- **LLM Prompt Tuning:** The Gemini prompts inside the Recommendation and Trends extractor nodes can be fine-tuned for domain-specific insight (e.g., SEO vs. eCommerce focus).
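Here is a minimal sketch of the Bright Data Web Unlocker call behind the SERP extraction step, assuming the standard /request endpoint; the zone name and search URL are placeholders to replace with your own.

```javascript
// Sketch of a Web Unlocker request; zone and query are placeholders.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`, // Web Unlocker token
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone',                        // your zone name
    url: 'https://www.google.com/search?q=n8n+automation', // the SERP to fetch
    format: 'raw',                                         // raw HTML for Gemini to parse
  }),
});
const html = await res.text(); // unstructured SERP HTML, handed to the LLM step
```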
by Amit Mehta
## How it Works

This workflow reads sheet details from a source Google Spreadsheet, creates a new spreadsheet, replicates the sheet structure, enriches the content by reading data, and writes it into the corresponding sheets in the new spreadsheet. The process loops over every sheet, providing an automated way to duplicate and transform structured data.

## Use Case

- Automate duplication and data enrichment for multi-sheet Google Spreadsheets
- Replicate templates across new documents with consistent formatting
- Data team workflows requiring repetitive structured Google Sheets setup

## Setup Instructions

1. Required Google Sheets: You must have a source spreadsheet with multiple sheets. The destination spreadsheet will be created automatically.
2. API Credentials:
   - **Google Sheets OAuth2** to connect to both the read and write spreadsheets.
   - **HTTP Request Auth** if external API headers are needed.
3. Configure Fields in Write Sheet: Ensure you define appropriate columns and mapping for the destination sheet.

## Workflow Logic

1. Manual Trigger: starts the flow on user demand.
2. Create New Spreadsheet: generates a blank spreadsheet.
3. HTTP Request: retrieves all sheet names from the source spreadsheet.
4. JavaScript Code: extracts titles and metadata from the HTTP response (a sketch of this step follows at the end of this description).
5. Loop Over Sheets: iterates through each sheet retrieved.
6. Delete Default Sheet: removes the placeholder 'Sheet1'.
7. Create Sheets: replicates each original sheet in the new document.
8. Read Spreadsheet1: pulls data from the matching original sheet.
9. Write Sheet: appends the data to the newly created sheets.

## Node Descriptions

| Node Name | Description |
|-----------|-------------|
| Manual Trigger | Starts the workflow manually on demand. |
| Create New Spreadsheet | Creates a new Google Spreadsheet for output. |
| HTTP Request | Fetches metadata from the source spreadsheet, including sheet names. |
| Code | Processes sheet metadata into a list for iteration. |
| Loop Over Items | Loops over each sheet to replicate and populate. |
| Google Sheets2 | Deletes the default 'Sheet1' from the new spreadsheet. |
| Create Sheets | Creates a new sheet matching each source sheet. |
| Read Spreadsheet1 | Reads data from the source sheet. |
| Write sheet | Writes the data into the corresponding new sheet. |

## Customization Tips

- Adjust the Google Sheet title to be dynamic or user-input driven
- Add filtering logic before writing data
- Append custom audit columns like 'Timestamp' or 'Processed By'
- Enable logging or Slack alerts after each sheet is created

## Required Files

| File Name | Purpose |
|-----------|---------|
| My_workflow_4.json | Main workflow JSON file for sheet duplication and enrichment |

## Testing Tips

- Test with a spreadsheet containing 2-3 simple sheets
- Validate that all sheets are duplicated
- Check whether columns and data structure remain intact
- Watch for authentication issues in the Google Sheets nodes

## Suggested Tags & Categories

#GoogleSheets #Automation #DataEnrichment #Workflow #Spreadsheet
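As a rough guide, the Code node's extraction step (step 4 above) might look like the sketch below, assuming the HTTP Request node calls the Sheets API spreadsheets.get endpoint (GET https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}), whose response carries a sheets array with a properties object per sheet.

```javascript
// n8n Code node sketch: turn the metadata response into one item per sheet
// so Loop Over Items can iterate them.
const response = $input.first().json;

return response.sheets.map((sheet) => ({
  json: {
    sheetId: sheet.properties.sheetId, // numeric ID, needed for delete/copy calls
    title: sheet.properties.title,     // used to name the replicated sheet
    index: sheet.properties.index,     // preserves original ordering
  },
}));
```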
by Eric
## Why use this

You need to delete (many) posts on a WordPress website and also delete the featured image associated with each post. This automation cuts hours of rote work down to a fraction.

## How it works

- Set your WordPress URL in the manual trigger node.
- Set your WP post search parameters (the WP API returns 10 posts by default; you could also set up pagination to scale this automation beyond 10 posts per execution).
- Decide on (and build) your filter/approval process.

## What you can expect

- This automation is set up to process the 10 oldest pending posts, oldest first.
- If you remove the Filter node from the workflow, each run will fetch another 10 posts from WP.

## Notes on Filter/Approval

This part is arbitrary and depends on your own use case. Maybe you have an editor who needs to approve the post deletion. You might want to get approval by email, Slack message, or a ticketing system. Or maybe you just want to monitor the process and spare specific posts from deletion.

I used the Filter node to grab only the first item (itemIndex < 1), which in this case was the oldest pending post.

This could also be expanded into two separate workflows:

- One triggered when a pending post is created, which sends an approval request
- A second triggered by the approval/rejection, which either publishes or deletes the post depending on the result

This would require another HTTP request, similar to the DELETE post request, that instead publishes the post.
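For orientation, here is a sketch of the two WordPress REST calls the workflow chains together, assuming application-password basic auth; the site URL and credentials are placeholders.

```javascript
// Sketch: fetch the 10 oldest pending posts, then delete each featured image
// and post via the standard WP REST API. Site URL and credentials are placeholders.
const base = 'https://example.com/wp-json/wp/v2';
const auth = 'Basic ' + Buffer.from('admin:app-password').toString('base64');

const posts = await (await fetch(
  `${base}/posts?status=pending&orderby=date&order=asc&per_page=10`,
  { headers: { Authorization: auth } },
)).json();

for (const post of posts) {
  // Delete the featured image first (featured_media is 0 when none is set).
  if (post.featured_media) {
    await fetch(`${base}/media/${post.featured_media}?force=true`, {
      method: 'DELETE',
      headers: { Authorization: auth },
    });
  }
  // Then delete the post itself; force=true bypasses the trash.
  await fetch(`${base}/posts/${post.id}?force=true`, {
    method: 'DELETE',
    headers: { Authorization: auth },
  });
}
```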
by Alex
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How It Works

This template orchestrates a multi-step workflow that constructs a comprehensive four-zone automation matrix (Green, Yellow, Red, and White) grounded in the Human Agency Scale (HAS). When a user sends a job title via Telegram, the workflow routes both text and voice messages appropriately: voice messages are transcribed via OpenAI's Whisper, while text inputs bypass transcription. Both streams merge into a single data flow.

The AI Agent node, powered by GPT-4, analyzes the user's profession and core tasks. It also leverages live context by calling the Tavily search tool, ensuring the analysis incorporates up-to-date information. After the evaluation, the workflow formats the completed matrix, with detailed task examples and rationales for each zone, and returns it to the user via Telegram.

## Setup Instructions

1. Create an OpenAI credential in n8n (model: GPT-4.1 mini).
2. Add a Tavily credential with your API key (a free plan is available).
3. Configure a Telegram Bot credential with your bot's API token.
4. Import this JSON as a new workflow in n8n and map credentials in each node.
5. Activate the workflow, test it by sending sample job titles, and adjust node timeouts and webhook settings as needed.

## Requirements

- n8n v1.0.0 or higher
- Active OpenAI API key (GPT-4.1 mini access)
- Tavily API key for web context search
- Telegram bot token with a correctly configured webhook
- Stable internet connectivity

## Audience & Problem

This template is designed for consultants, HR professionals, and analysts who need a scalable, standardized approach to evaluate which routine tasks in a given profession can be automated, which require human oversight, and which should remain manual to preserve strategic judgment, creativity, and expertise.
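A sketch of the text/voice routing decision is below, assuming the standard Telegram Trigger payload where a voice note arrives as message.voice and plain text as message.text. The template uses a Switch/IF node for this; the Code-node form just makes the rule explicit.

```javascript
// n8n Code node sketch of the routing rule applied to a Telegram update.
const message = $input.first().json.message;

if (message.voice) {
  // Voice note: hand the file_id to the Whisper transcription branch.
  return [{ json: { route: 'voice', fileId: message.voice.file_id } }];
}

// Plain text: skip transcription and go straight to the AI Agent.
return [{ json: { route: 'text', jobTitle: (message.text || '').trim() } }];
```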
by Ria
This is a demo workflow showcasing how to use Supabase to embed a document, retrieve information from the vector store via chat, and update the database.

## Setup steps

- Set your credentials for Supabase.
- Set your credentials for an AI model of your choice.
- Set credentials for any service you want to use to upload documents.
- Please follow the guidelines in the workflow itself (sticky notes).

## Feedback & Questions

If you have any questions or feedback about this workflow, feel free to get in touch at ria@n8n.io
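If you want a feel for the retrieval step, a minimal sketch is below. It assumes a pgvector setup following Supabase's standard example (a documents table plus a match_documents SQL function); the function name, parameters, and embedding variable are assumptions to adapt to the schema described in the workflow's sticky notes.

```javascript
// Sketch of vector retrieval against Supabase; names follow Supabase's
// common pgvector example and may differ in your database.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);

// `queryEmbedding` stands in for the embedding of the chat question,
// produced by whichever AI model credential you configured above.
const { data: matches, error } = await supabase.rpc('match_documents', {
  query_embedding: queryEmbedding, // vector for the user's question
  match_count: 5,                  // top-k chunks to feed back into the chat
});

if (error) throw error;
console.log(matches.map((m) => m.content)); // retrieved document chunks
```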
by Joseph LePage
Transform your local n8n instance into a powerful chat interface using any local, private Ollama model, with zero cloud dependencies. This workflow creates a structured chat experience that processes messages locally through a language model chain and returns formatted responses.

## How it works

- Chat messages trigger the workflow
- Messages are processed through Llama 3.2 via Ollama (or any other Ollama-compatible model)
- Responses are formatted as structured JSON
- Error handling ensures robust operation

## Set up steps

- Install n8n and Ollama
- Download the Llama 3.2 model (or another model)
- Configure Ollama API credentials
- Import and activate the workflow

This template provides a foundation for building AI-powered chat applications while maintaining full control over your data and infrastructure.
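Under the hood, a chat turn boils down to a request like the sketch below, assuming Ollama's default local endpoint and the llama3.2 model tag; the format: 'json' option is what yields the structured JSON responses described above.

```javascript
// Sketch of a local Ollama chat call; model tag and port are the defaults
// and may differ on your machine.
const res = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2',
    messages: [{ role: 'user', content: 'Summarize what n8n does as one JSON object.' }],
    format: 'json',   // constrain the reply to valid JSON
    stream: false,    // return one complete response instead of a token stream
  }),
});

const { message } = await res.json();
console.log(JSON.parse(message.content)); // the structured reply
```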
by Rully Saputra
## Who's it for

This workflow is ideal for marketing teams, growth analysts, and business owners who need regular Google Analytics insights without manually digging through data. It's also perfect for organizations that want positive performance updates to reach stakeholders quickly while negative trends get immediate attention from the internal team.

## How it works / What it does

The workflow runs weekly on a set schedule, pulls key performance metrics from Google Analytics, and aggregates the data into a clean summary. An AI Agent (powered by Google Gemini and connected to Simple Memory for historical context) analyzes the data, generates actionable insights, and classifies the sentiment as Positive, Negative, or Neutral.

- Positive sentiment → automatically emailed to stakeholders via Gmail.
- Negative sentiment → sent instantly to a designated Telegram group for a faster response.

This ensures wins are celebrated and issues are addressed promptly.

## How to set up

1. Configure the Schedule Trigger for your preferred reporting day/time.
2. Connect the Google Analytics node with your property ID and metrics/dimensions.
3. Set up the AI Agent with API credentials for Google Gemini (or another model).
4. Connect Gmail and Telegram accounts to their respective nodes.
5. Adjust the sentiment routing rules (a sketch follows at the end of this description).

## Requirements

- Google Analytics account with API access
- Google Gemini API key
- Gmail account with an OAuth connection
- Telegram bot token and group chat ID

## How to customize the workflow

- Modify the AI prompt to include custom KPIs or industry-specific recommendations.
- Change the schedule frequency (daily, monthly, or on-demand).
- Add Neutral sentiment handling (e.g., log to Google Sheets).
- Extend with Slack, Discord, or other notification channels.
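The sentiment routing rule amounts to the small function sketched below. The { sentiment, insights } output shape is an assumption about what the AI Agent returns; the email address and group ID are placeholders.

```javascript
// Sketch of the routing logic the Switch node implements.
function routeReport(report) {
  switch (report.sentiment) {
    case 'Positive':
      // Wins go straight to stakeholders via the Gmail node.
      return { channel: 'gmail', to: 'stakeholders@example.com', body: report.insights };
    case 'Negative':
      // Problems go to the internal Telegram group for fast response.
      return { channel: 'telegram', chatId: process.env.TELEGRAM_GROUP_ID, body: report.insights };
    default:
      // Neutral is unrouted by default -- a good place to log to Google Sheets.
      return { channel: 'none', body: report.insights };
  }
}
```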
by Joseph LePage
This n8n workflow integrates Tavily's search and extract APIs with AI summarization capabilities to process web content efficiently.

## Quick Setup

1. Get your Tavily API key from https://app.tavily.com/home
2. Replace tvly-YOUR_API_KEY in the "Tavily API Key" node
3. Connect your OpenAI credentials to the "OpenAI Chat Model" node
4. Deploy the workflow and start the chat trigger

## Core Features

Search & Extract

- Intelligent web searching with relevance filtering
- Automated content extraction from top results
- AI-powered content summarization in markdown format

User Interaction

- Chat-based search topic input
- Real-time processing pipeline
- Structured markdown output

The workflow demonstrates a practical implementation of Tavily's API endpoints, handling the complete process from search to summarization in a single automated pipeline.
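A condensed sketch of the two Tavily calls the workflow chains is below, assuming the public /search and /extract endpoints with bearer-token auth; the query and the 0.5 relevance threshold are placeholders/assumptions.

```javascript
// Sketch: search, filter by relevance score, then extract full page content.
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${process.env.TAVILY_API_KEY}`, // tvly-...
};

// 1) Search for the chat-supplied topic.
const search = await (await fetch('https://api.tavily.com/search', {
  method: 'POST',
  headers,
  body: JSON.stringify({ query: 'latest n8n release notes', max_results: 3 }),
})).json();

// 2) Keep only reasonably relevant results (threshold is an assumption).
const urls = search.results.filter((r) => r.score > 0.5).map((r) => r.url);

// 3) Extract full content from the top results for AI summarization.
const extract = await (await fetch('https://api.tavily.com/extract', {
  method: 'POST',
  headers,
  body: JSON.stringify({ urls }),
})).json();
```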
by Viktor
## Nightly Discord Channel Cleanup

This workflow runs every day at 9:00 p.m. and:

1. Retrieves all Discord channels using your provided credentials.
2. Pauses briefly to respect Discord API rate limits.
3. Loops through each channel and fetches messages.
4. Selects messages older than seven days.
5. Deletes those older messages, again pausing to stay within deletion rate limits.

By running this workflow on a schedule, you can automatically keep Discord channels tidy and compliant with retention policies.

## Setup

1. Add your Discord credentials.
2. Change the server in each Discord node to the correct one.
3. Click the Test Workflow button.
4. Activate the workflow to run on a schedule.
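The seven-day filter amounts to a date comparison like the sketch below, assuming each message item carries Discord's ISO-8601 timestamp field; items that pass continue on to the deletion step.

```javascript
// n8n Code node sketch: keep only messages older than seven days.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;
const cutoff = Date.now() - SEVEN_DAYS_MS;

return $input.all().filter(
  (item) => new Date(item.json.timestamp).getTime() < cutoff,
);
```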
by DanielV
This workflow translates SRT subtitle files from one language to another using Google Translate. It follows these main steps:

1. Accept an SRT file upload and target language selection
2. Extract and parse the SRT file content
3. Split the content into translatable segments
4. Translate each segment using Google Translate
5. Reassemble the translated content into proper SRT format
6. Return the translated file to the user

You'll need a Google Cloud Console account to access the Translate API.

## Who is this for?

This workflow is designed for content creators, video editors, translators, and anyone who needs to translate subtitle files (.srt) from one language to another. It's particularly useful for those working with international content, educational materials, or videos being prepared for global audiences.

## What problem does this workflow solve?

Translating subtitle files manually is time-consuming and error-prone. Professional translation services can be expensive, especially for multiple videos or long content. This workflow automates the translation process while preserving the proper SRT format, including timestamps and subtitle numbering.

## Setup

1. Set up Google Translate credentials:
   - Create a Google Cloud project and enable the Google Translate API
   - Create OAuth credentials and configure them in the Google Translate node
2. Customize language options:
   - The default workflow includes English (EN) and Japanese (JP) options
   - Add more language options by editing the dropdown field in the "Receive SRT File to Translate" node
   - Use standard language codes that Google Translate supports
3. Add more languages:
   - Edit the form trigger node to include additional language options in the dropdown
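To make the parse/split/reassemble steps concrete, here is a minimal sketch assuming standard SRT structure (blocks separated by blank lines: an index line, a timestamp line, then one or more text lines). The translate function and originalSrt variable are hypothetical stand-ins for the Google Translate node and the uploaded file content.

```javascript
// Parse an SRT file into segments; only text lines get translated, while
// indices and timestamp lines are preserved verbatim.
function parseSrt(srt) {
  return srt
    .replace(/\r\n/g, '\n')
    .trim()
    .split(/\n\n+/)
    .map((block) => {
      const [index, timing, ...textLines] = block.split('\n');
      return { index, timing, text: textLines.join('\n') };
    });
}

// Rebuild a valid SRT file from translated segments.
function buildSrt(segments) {
  return segments
    .map((s) => `${s.index}\n${s.timing}\n${s.text}`)
    .join('\n\n') + '\n';
}

// Usage: translate each segment's text, then reassemble.
// `translate` stands in for the Google Translate call; 'ja' is the target code.
const segments = parseSrt(originalSrt);
for (const s of segments) s.text = await translate(s.text, 'ja');
const translatedSrt = buildSrt(segments);
```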