by David Roberts
**Overview**

This workflow takes some French text and translates it into spoken audio. It then transcribes that audio back into text, translates it into English, and generates an audio file of the English text. To do so, it uses ElevenLabs (which has a free tier) and OpenAI.

**Setup**

These steps should only take a few minutes:
1. In ElevenLabs, add a voice to your voice lab and copy its ID. Add it to the 'Set voice ID' node.
2. Get your ElevenLabs API key (click your name in the bottom-left of ElevenLabs and choose 'profile').
3. In the 'Generate French audio' node, create a new header auth credential. Set the name to xi-api-key and the value to your API key.
4. In the 'credential' field of the 'Transcribe audio' node, create a new OpenAI credential with your OpenAI API key.
5. Run the workflow by clicking the orange button at the bottom of the canvas.
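If you want to check your ElevenLabs credentials outside n8n first, the 'Generate French audio' node is roughly equivalent to the request below. This is a minimal sketch assuming ElevenLabs' standard text-to-speech endpoint; the voice ID, API key, and text are placeholders, so verify the details against the current ElevenLabs docs.

```javascript
// Rough equivalent of the 'Generate French audio' HTTP request (illustrative only).
const VOICE_ID = "your-voice-id";   // copied from your ElevenLabs voice lab
const API_KEY = "your-xi-api-key";  // the value of the header auth credential

const response = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
  method: "POST",
  headers: {
    "xi-api-key": API_KEY,          // same header name used in the n8n credential
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ text: "Bonjour tout le monde" }),
});

// The response body is binary audio, which n8n stores as binary data on the item.
const audio = await response.arrayBuffer();
console.log(`Received ${audio.byteLength} bytes of audio`);
```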
by Dr. Firas
**Auto-Publish Social Videos to 9 Platforms via Google Sheets and Blotato**

**Who is this workflow for?**
This workflow is ideal for marketers, content creators, virtual assistants, and automation specialists managing multi-platform video content. It's especially useful for teams who want to centralize publishing via a spreadsheet and automate social distribution in one shot.

**What problem does this workflow solve?**
Manually posting videos to multiple social platforms is tedious and time-consuming. This workflow allows you to streamline video distribution using Blotato's API — no more switching between platforms or re-uploading the same video multiple times.

**What this workflow does**
This automation reads video metadata (URL, caption, title) from a Google Sheet, uploads the video to Blotato, and automatically publishes it to Instagram, YouTube, TikTok, Facebook, LinkedIn, Threads, Twitter (X), Pinterest, and Bluesky. It also updates the sheet to reflect the publishing status (STATUS = DONE), ensuring that your data remains clean and trackable.

**Setup**
1. Set up your Google Sheet with the required columns: PROMPT, DESCRIPTION, URL VIDEO, Titre, row_number, and STATUS.
2. Add your Blotato API key in the headers of the Upload Video and Post to X nodes.
3. Replace the platform-specific IDs in the Assign Social Media IDs node (Instagram ID, Facebook Page ID, etc.).
4. Set the schedule in the Schedule Trigger node to define when the publishing happens.

> ⚠️ Disclaimer: This workflow uses Community Nodes. These are only available on self-hosted n8n instances.

**How to customize this workflow**
- Add logic to skip rows already marked as DONE (see the sketch below).
- Expand to more platforms supported by Blotato.
- Use a webhook or Telegram trigger instead of the scheduler for more interactivity.
- Modify content per platform if needed (caption formatting, hashtags, etc.).

📄 Documentation: Notion Guide
🎥 Demo Video: Watch the full tutorial here: YouTube Demo
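The "skip rows already marked as DONE" customization could be handled with a small Code node placed right after the Google Sheets read. This is only a sketch: the column names match the Setup section above, but the node itself is not part of the original template.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items") that drops rows
// whose STATUS is already DONE or that have no video URL to publish.
const pending = $input.all().filter((item) => {
  const status = (item.json.STATUS || "").toString().trim().toUpperCase();
  return status !== "DONE" && Boolean(item.json["URL VIDEO"]);
});

return pending; // only these rows continue to the Blotato upload
```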
by Joseph LePage
Transform your local n8n instance into a powerful chat interface using any local & private Ollama model, with zero cloud dependencies ☁️. This workflow creates a structured chat experience that processes messages locally through a language model chain and returns formatted responses 💬.

**How it works 🔄**
- 💭 Chat messages trigger the workflow
- 🧠 Messages are processed through Llama 3.2 via Ollama (or any other Ollama-compatible model)
- 📊 Responses are formatted as structured JSON
- ⚡ Error handling ensures robust operation

**Set up steps 🛠️**
- 📥 Install n8n and Ollama
- ⚙️ Download the Llama 3.2 model (or another model)
- 🔑 Configure Ollama API credentials
- ✨ Import and activate the workflow

This template provides a foundation for building AI-powered chat applications while maintaining full control over your data and infrastructure 🚀.
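The template's exact output schema isn't spelled out here, so the snippet below is just one plausible way a Code node could wrap the model's reply into structured JSON before it is returned to the chat. The field names (response, model, timestamp) are assumptions for illustration.

```javascript
// Illustrative n8n Code node: wrap the LLM chain output in a structured JSON response.
const reply = $input.first().json.text ?? "";

return [
  {
    json: {
      response: reply,                       // the model's answer
      model: "llama3.2",                     // whichever Ollama model you configured
      timestamp: new Date().toISOString(),   // when the reply was produced
    },
  },
];
```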
by Yaron Been
This cutting-edge n8n automation is a powerful digital marketing tool designed to streamline the process of transforming Google Drive videos into Facebook advertising assets. By intelligently connecting cloud storage, video upload, and ad creation platforms, this workflow:

- **Discovers Marketing Content**: Automatically scans Google Drive, identifies video marketing materials, and eliminates manual content searching
- **Seamless Video Distribution**: Downloads selected video files, uploads them directly to Facebook, and prepares videos for advertising
- **Instant Ad Creative Generation**: Creates Facebook ad creatives, leverages the uploaded video content, and accelerates marketing campaign setup
- **Automated Platform Integration**: Connects Google Drive and Facebook, reduces manual intervention, and speeds up content deployment

**Key Benefits**
- 🤖 **Full Automation**: Zero-touch video marketing
- 💡 **Smart Content Management**: Effortless video distribution
- 📊 **Rapid Campaign Setup**: Quick ad creative generation
- 🌐 **Multi-Platform Synchronization**: Seamless content flow

**Workflow Architecture**

🔹 **Stage 1: Content Discovery**
- **Manual Trigger**: Workflow initiation
- **Google Drive Integration**: Video file scanning
- **Intelligent File Selection**: Identifies MP4 video files and prepares them for marketing use

🔹 **Stage 2: Video Preparation**
- **Automatic Download**
- **File Validation**
- **Marketing-Ready Formatting**

🔹 **Stage 3: Facebook Upload**
- **Direct Video Upload**
- **Ad Account Integration**
- **Seamless Platform Transfer**

🔹 **Stage 4: Ad Creative Generation**
- **Automated Creative Setup**
- **Video-Based Ad Creation**
- **Instant Marketing Asset Preparation**

**Potential Use Cases**
- **Digital Marketing Teams**: Rapid content deployment
- **Social Media Managers**: Streamlined ad creation
- **Content Creators**: Efficient video marketing
- **Small Business Owners**: Simplified advertising workflow
- **Marketing Agencies**: Scalable content distribution

**Setup Requirements**

**Google Drive**
- Connected Google account
- Configured video folder
- Appropriate sharing settings

**Facebook Ads**
- Ad account credentials
- Page ID configuration
- API access token

**n8n Installation**
- Cloud or self-hosted instance
- Workflow configuration
- API credential management

**Future Enhancement Suggestions**
- 🤖 AI-powered video selection
- 📊 Performance tracking integration
- 🔔 Campaign launch notifications
- 🌐 Multi-platform ad deployment
- 🧠 Intelligent content routing

**Technical Considerations**
- Implement robust error handling
- Use secure API authentication
- Maintain flexible file processing
- Ensure compliance with platform guidelines

**Ethical Guidelines**
- Respect copyright and usage rights
- Maintain transparent marketing practices
- Ensure appropriate content selection
- Provide clear advertising disclosures

**Hashtag Performance Boost 🚀**
#MarketingAutomation #VideoAdvertising #FacebookAds #DigitalMarketing #ContentMarketing #AIMarketing #WorkflowAutomation #SocialMediaStrategy #AdTech #MarketingInnovation

**Workflow Visualization**
[Manual Trigger]
⬇️
[List Drive Videos]
⬇️
[Download Video]
⬇️
[Upload to Facebook]
⬇️
[Create Ad Creative]

**Connect With Me**
Ready to revolutionize your digital marketing?
📧 Email: Yaron@nofluff.online
🎥 YouTube: @YaronBeen
💼 LinkedIn: Yaron Been

Transform your marketing workflow with intelligent, automated solutions!
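For the "Intelligent File Selection" step, a small filter like the one below could narrow the Google Drive listing down to MP4 videos before anything is downloaded. The mimeType and name fields follow the Google Drive node's usual output; treat this as an illustrative sketch rather than the template's exact code.

```javascript
// Hypothetical n8n Code node: keep only MP4 videos from the Drive file listing.
return $input.all().filter((item) => {
  const { mimeType = "", name = "" } = item.json;
  return mimeType === "video/mp4" || name.toLowerCase().endsWith(".mp4");
});
```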
by Mutasem
**Use Case**
Automatically archive emails in your Gmail inbox from the last day, unless they have been starred. I've been using this with my personal and work emails to stick to an Inbox Zero strategy without having to click or swipe a lot.

**Setup**
- Add your Gmail credentials

**How to adjust this template**
- Set your own schedule for when to run this. Otherwise, it should be good to go. 🤞🏽
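The "unless they have been starred" part boils down to checking each message's labels before archiving. The sketch below shows how that check could look in a Code node, assuming the Gmail node exposes the standard labelIds array; it is not the template's exact implementation.

```javascript
// Illustrative n8n Code node: drop starred messages so only the rest get archived
// (archiving in Gmail means removing the INBOX label in a later node).
return $input.all().filter((item) => {
  const labels = item.json.labelIds || [];
  return !labels.includes("STARRED");
});
```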
by Manu
In Grist, when I mark a row as confirmed (via a toggle), a webhook notifies n8n, and this workflow creates derived records in the destination table.

**Design decisions**

**Confirmation-based**
In the source table there is a boolean column "Confirmed" that triggers the transfer. This way there is a manual check involved and it's a conscious step to trigger the workflow.

**Runs once**
If the destination table already contains an entry, we will not re-create or update it (as it might already have been changed manually). A sketch of this guard follows below.

**Setup**
1. Create a boolean column Confirmed in the source table
2. Add a webhook in the Grist settings
3. Add Grist API credentials in n8n
4. Set the document ID & source table ID/name in the 'get existing' node
5. Set the docID and the destination table ID/name, plus the columns & values you want, in the Create Row node
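The "runs once" behaviour amounts to checking whether the 'get existing' lookup returned a matching row and stopping the branch if it did. Below is a minimal sketch of that guard as a Code node; the linking field (sourceId) is a placeholder, so use whatever key actually connects your source and destination tables.

```javascript
// Hypothetical n8n Code node implementing the "runs once" guard.
const existing = $("get existing").all();   // rows already present in the destination table
const incoming = $input.first().json;       // the confirmed Grist row delivered by the webhook

const alreadyThere = existing.some(
  (row) => row.json.sourceId === incoming.id   // sourceId is a placeholder linking column
);

// Returning an empty array stops this branch, so the Create Row node never runs.
return alreadyThere ? [] : [{ json: incoming }];
```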
by Victor Gonzalez
**Who is this for?**
This template is designed for businesses and organizations that use Mautic for email marketing and want to automate the process of removing contacts from specific segments when they receive an unsubscribe request via email.

**What problem is this workflow solving? / Use case**
Many email recipients, especially those who are less tech-savvy, may not follow the standard unsubscribe link provided in emails. Instead, for example in Gmail, they click the "Unsubscribe" button in the Gmail web interface, which in turn sends an email with a consistent format. These emails contain the word "unsubscribe" in the 'To' field, using the following structure: hello+unsubscribe_6629823aa976f053068426@example.com

This workflow automates the process of identifying such unsubscribe emails and removing the contact from the relevant Mautic segments, ensuring compliance with unsubscribe requests and maintaining a clean mailing list.

**What this workflow does**
1. Monitors a Gmail account for incoming emails.
2. Identifies unsubscribe emails based on specific patterns in the "To" field (e.g., containing the word "unsubscribe"). See the sketch below for an example of this check.
3. Retrieves the contact's ID from Mautic based on the email address.
4. Removes the contact from the specified "newsletter" segment in Mautic.
5. Adds the contact to the "unsubscribed" segment in Mautic.
6. Sends a confirmation email to the contact, acknowledging their unsubscribe request.

**Setup**
1. Configure your email address and unsubscribe message in the "Edit Fields" node.
2. Set your credentials in the Gmail trigger and in the Mautic nodes.
3. Set the segments for "newsletter" and "unsubscribed" in the Mautic nodes.
4. Make sure your n8n installation has a public endpoint for your Gmail trigger to work correctly.
5. Deploy the workflow.

**How to customize this workflow to your needs**
- Adjust the conditions for identifying unsubscribe emails based on your specific requirements.
- Modify the segments or actions taken in Mautic according to your desired behavior.
- Customize the confirmation email message and sender details.

Note: This workflow assumes a consistent structure for unsubscribe emails, where the "To" field contains the word "unsubscribe" after a "+" sign. If your email provider follows a different convention, adjust the conditions in the "Is automated unsubscribe?" node accordingly.
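To illustrate the pattern match described above, the "Is automated unsubscribe?" condition could be expressed as a small Code node like this. The incoming field name (to) and the regular expression are assumptions; adapt them to the actual output of your Gmail trigger.

```javascript
// Illustrative check for the plus-addressed unsubscribe pattern,
// e.g. hello+unsubscribe_6629823aa976f053068426@example.com
const to = ($input.first().json.to || "").toLowerCase();
const match = to.match(/\+unsubscribe_([a-z0-9]+)@/);

return [
  {
    json: {
      isUnsubscribe: Boolean(match),
      token: match ? match[1] : null,   // identifier embedded in the address, if any
    },
  },
];
```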
by Alfred Nutile
This guide will show you how to use a workflow as a reusable tool in n8n, such as integrating an AI Agent or other specialized processes into your workflows. By the end of this example, you'll have a simple, reusable workflow that can be easily plugged into larger projects, making your automations more efficient and scalable.

With this approach, you can create reusable workflows like "Scrape a Page," "Search Brave," or "Generate an Image," which you can then call whenever needed. While n8n makes it easy to build these workflows from scratch, setting them up as reusable components saves time as your automations grow in complexity.

**Setup**
1. Add the "Execute Workflow Trigger" node
2. Add the node(s) to perform the desired tasks in the workflow
3. Add a final "Set" or "Edit Fields" node at the end to ensure all external workflows return a consistent output format

**Details**
In this example, the "Execute Workflow Trigger" expects input in the following JSON format:

```json
[
  {
    "query": {
      "url": "https://en.wikipedia.org/wiki/some_info"
    }
  }
]
```

Once your external workflow is ready, you can instruct the AI Agent to use this tool by connecting it to the external workflow. Set up the schema type to "Generate from JSON Example" using this structure:

```json
{
  "url": "URL_TO_GET"
}
```

Finally, ensure your external workflow includes a "Set" or "Edit Fields" node at the end to define the response format. This helps keep the outputs of your reusable workflows consistent and predictable.
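As an alternative to a "Set" / "Edit Fields" node, the consistent output format could also come from a small Code node at the end of the sub-workflow. The field names below (success, url, content) are assumptions chosen for this example, not part of the original template.

```javascript
// Illustrative n8n Code node: normalize the sub-workflow's result into one stable shape
// so every reusable workflow returns the same kind of object to its caller.
const result = $input.first().json;

return [
  {
    json: {
      success: true,
      url: result.url ?? null,        // echoes back the requested URL, if present
      content: result.content ?? "",  // the scraped page, search result, or other payload
    },
  },
];
```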
by ist00dent
This n8n template empowers you to instantly summarize long pieces of text by sending a simple webhook request. By integrating with ApyHub's summarization API, you can distil complex articles, reports, or messages into concise summaries, significantly boosting efficiency across various domains.

**🔧 How it works**
- **Receive Content Webhook:** This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing:
  - content: the long text you want to summarize.
  - summary_length (optional): the desired length of the summary (e.g., 'short', 'medium', 'long'). Defaults to 'medium'.
  - and a header containing your apy-token for the ApyHub API.
- **Start Summarization Job:** This node sends a POST request to ApyHub's summarization endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize). It passes the content and summary_length from the webhook body, along with your apy-token from the headers. ApyHub processes the text asynchronously, and this node immediately returns a job_id.
- **Get Summarization Result:** Since ApyHub's summarization is an asynchronous process, this node is crucial. It polls ApyHub's job status endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize/job/status/{{job_id}}) using the job_id obtained from the previous step. It continues to check the status until the summarization is finished, at which point it retrieves the final summarized text.
- **Respond with Summarized Content:** This node sends the final, distilled summarized text back to the service that initiated the webhook.

**👤 Who is it for?**
This workflow is extremely useful for:
- **Content Creators & Marketers:** Quickly summarize articles for social media snippets, email newsletters, or blog post intros.
- **Researchers & Students:** Efficiently get the gist of academic papers, reports, or long documents without reading every word.
- **Customer Support & Sales Teams:** Summarize customer inquiries, long email chains, or call transcripts to quickly understand key issues or discussion points.
- **News Aggregators & Media Monitoring:** Automatically generate summaries of news articles from various sources for quick consumption.
- **Business Professionals:** Condense lengthy reports, meeting minutes, or project updates into digestible summaries for busy stakeholders.
- **Legal & Compliance:** Summarize legal documents or regulatory texts to highlight critical clauses or changes.
- **Anyone Dealing with Information Overload:** Save time and extract key information from overwhelming amounts of text.

**📑 Data Structure**
When you trigger the webhook, send a **POST** request with a **JSON body** and an apy-token in the headers:

```json
{
  "content": "Your very long text goes here. This could be an article, a report, a transcript, or any other textual content you want to summarize. The longer the text, the more valuable summarization becomes!",
  "summary_length": "medium"
}
```

(summary_length is optional: "short", "medium", or "long".)

Headers:
- apy-token: YOUR_APYHUB_API_KEY

Note: You'll need to obtain an API key from ApyHub to use their API services. They typically offer a free tier for testing.

The workflow will return a JSON response similar to this (the summary content will vary based on input):

```json
{
  "summary": "Max Verstappen believes the Las Vegas Grand Prix is '99% show and 1% sporting event', not looking forward to the razzmatazz. Other drivers, like Fernando Alonso, were more equivocal about the hype, acknowledging the investment and spectacle. Lewis Hamilton praised the city's energy but emphasized it's 'a business, ultimately', believing there will still be good racing.",
  "status": "finished",
  "result_file_id": "..."
}
```

(ApyHub might provide a file ID for larger results.)

**⚙️ Setup Instructions**
1. **Get an ApyHub API Key:** Go to https://apyhub.com/ and sign up to get your API key.
2. **Import Workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure Webhook Path:** Double-click the Receive Content Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /summarize-content).
4. **Activate Workflow:** Save and activate the workflow.

**📝 Tips**
This content summarizer is a powerful component. Here's how to supercharge it and make it an indispensable part of your automation arsenal:
- **Integrate with Document/File Storage:**
  - Google Drive/Dropbox/OneDrive: Automatically summarize documents uploaded to these services. Add a Watch New Files trigger (if available for your service) or a Cron node to regularly check for new files. Then read the file content, pass it to this summarizer, and save the summary back to a designated folder or as a comment on the original file.
  - CRM/CMS Systems: Pull long notes, customer interactions, or article drafts from your CRM/CMS, summarize them, and update the records with the concise version.
- **Email Processing & Triage:** Use an Email trigger node to start the workflow when new emails arrive. Extract the email body, summarize it, and then:
  - send a shortened summary as a notification to Slack or Telegram,
  - add the summary to a task management tool (e.g., Trello, Asana) for quicker triaging, or
  - include the summary in an email digest.
- **Slack/Discord Bot Integration:** Create a Slack/Discord command (using a custom webhook or a dedicated Slack/Discord node) where users can paste long text. The bot then sends the summarized version back to the channel.
- **Dynamic Summary Length & Options:** Allow the user to specify summary_length (short, medium, long) in the webhook body, as already implemented. Explore ApyHub's documentation for more parameters (if any) and pass them dynamically.
- **Error Handling & User Feedback:** Add an IF node after Get Summarization Result to check for status: 'failed' or error messages. If an error occurs, send a helpful message back to the webhook caller or an internal alert. For very long texts that might exceed API limits, add a Function node to truncate the input content if it's too long, and notify the user.
- **Multi-language Support (if ApyHub offers it):** If ApyHub supports summarization in multiple languages, extend the webhook to accept a language parameter and pass it to the API.
- **Web Scraping & Article Summaries:** Combine this with an HTTP Request node to scrape content from a web page (e.g., a news article). Then pass the extracted article text to this summarizer for quick insights.
- **Data Storage & Archiving:** Store the original content alongside its summary in a database (e.g., PostgreSQL, MongoDB) or a simple spreadsheet (Google Sheets, Airtable). This creates a searchable, summarized archive of your content.
- **Automated Report Generation:** If you receive daily/weekly reports, use this workflow to summarize key sections, then compile these summaries into a concise digest or dashboard using a Merge node and send it out automatically.
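Once the workflow is active, calling it from any client is a plain HTTP request. The sketch below assumes the /summarize-content path suggested in the setup instructions and a placeholder n8n base URL; swap in your own values.

```javascript
// Example client call to the summarization webhook (placeholders throughout).
const N8N_BASE_URL = "https://your-n8n-instance.example.com";

const res = await fetch(`${N8N_BASE_URL}/webhook/summarize-content`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "apy-token": "YOUR_APYHUB_API_KEY",   // forwarded by the workflow to ApyHub
  },
  body: JSON.stringify({
    content: "Paste the long article, report, or transcript here...",
    summary_length: "short",              // optional: short | medium | long
  }),
});

const { summary } = await res.json();
console.log(summary);
```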
by n8n Team
This workflow reads the textual content of PDF attachments and sends the text to OpenAI. Attachments of interest will then be uploaded to a specified Google Drive folder. For example, you may wish to send invoices received by email to an inbox folder in Google Drive for later processing. This workflow has been designed so that the search term can easily be changed to match your needs. See the workflow for more details.

**Prerequisites**
- OpenAI credentials.
- Google credentials.

**How it works**
1. Triggers on the On email received node.
2. Iterates over the attachments in the email.
3. Uses the OpenAI node to filter out the attachments that do not match the search term set in the Configure node. You could match on various kinds of PDF files (e.g. invoice, receipt, or contract).
4. If the PDF attachment matches the search term, the workflow uses the Google Drive node to upload it to a specific Google Drive folder.
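The template doesn't show its exact matching logic, but the decision after the OpenAI node could look roughly like the snippet below, which keeps an attachment only when the model answered "yes" to the question of whether the PDF text matches the configured search term. The field name (text) is an assumption for illustration.

```javascript
// Illustrative post-OpenAI check: pass the attachment through only on a "yes" answer.
const answer = ($input.first().json.text || "").trim().toLowerCase();

return answer.startsWith("yes") ? $input.all() : [];
```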
by Jimleuk
This n8n template demonstrates how to use AI to compose or "stitch" separate images together to generate a new image that retains the source assets and a consistent style. The use cases are many: try producing storyboard scenes with consistent characters, marketing material with existing product assets, or trying on different articles of fashion!

**Good to know**
- At time of writing, each image generated will cost $0.039 USD. See Gemini Pricing for updated info.
- The model used in this workflow is geo-restricted! If it says model not found, it may not be available in your country or region.

**How it works**
- We import our required assets from cloud storage using the HTTP Request node.
- The images are then converted to base64 strings and aggregated so they can be sent to the AI model (see the sketch below).
- Gemini's image generation model is used, taking all 3 images and a prompt that we define. The prompt instructs the model on how to compose the final image.
- Gemini generates a new image but uses the original 3 assets to do so. Consistency with the source images is very high and shows little sign of hallucination!
- Gemini's output is base64, so a "Convert to File" node converts the data to binary.
- The final binary image is then uploaded to Google Drive to complete the demonstration.

**How to use**
- The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook or even a form.
- Technically, you should be able to compose even more images, but of course the generation will take longer and cost more.

**Requirements**
- Gemini account for LLM and image generation
- Google Drive for upload

**Customising this workflow**
- AI image editing can be used for many use cases. Try a popular one such as virtual try-on for fashion, or applying branding to existing image assets.
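The base64 conversion and aggregation step could be done in a single Code node along these lines. It assumes the three HTTP Request downloads put their binary data under the default "data" property, and the inlineData shape mirrors Gemini's generateContent request; check the current Gemini API docs and your node output before relying on it.

```javascript
// Hedged sketch: turn the downloaded images into base64 "parts" for the Gemini request.
const items = $input.all();
const parts = [];

for (let i = 0; i < items.length; i++) {
  // "data" is the default binary property written by the HTTP Request node.
  const buffer = await this.helpers.getBinaryDataBuffer(i, "data");
  parts.push({
    inlineData: {
      mimeType: items[i].binary.data.mimeType,
      data: buffer.toString("base64"),
    },
  });
}

// The composition prompt goes in as the final part.
parts.push({ text: "Compose these assets into a single scene with a consistent style." });

return [{ json: { parts } }];
```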
by Amit Mehta
**How it Works**

This workflow reads sheet details from a source Google Spreadsheet, creates a new spreadsheet, replicates the sheet structure, enriches the content by reading the data, and writes it into the corresponding sheets in the new spreadsheet. The process is looped for every sheet, providing an automated way to duplicate and transform structured data.

**🎯 Use Case**
- Automate duplication and data enrichment for multi-sheet Google Spreadsheets
- Replicate templates across new documents with consistent formatting
- Data team workflows requiring repetitive, structured Google Sheets setup

**Setup Instructions**

1. Required Google Sheets
- You must have a source spreadsheet with multiple sheets.
- The destination spreadsheet will be created automatically.

2. API Credentials
- **Google Sheets OAuth2** – connects to both the read and write spreadsheets.
- **HTTP Request Auth** – only if external API headers are needed.

3. Configure Fields in Write Sheet
- Ensure you define appropriate columns and mapping for the destination sheet.

**🔁 Workflow Logic**
1. Manual Trigger: Starts the flow on user demand.
2. Create New Spreadsheet: Generates a blank spreadsheet.
3. HTTP Request: Retrieves all sheet names from the source spreadsheet.
4. JavaScript Code: Extracts titles and metadata from the HTTP response (see the sketch at the end of this section).
5. Loop Over Sheets: Iterates through each sheet retrieved.
6. Delete Default Sheet: Removes the placeholder 'Sheet1'.
7. Create Sheets: Replicates each original sheet in the new document.
8. Read Spreadsheet1: Pulls data from the matching original sheet.
9. Write Sheet: Appends the data to the newly created sheets.

**🧩 Node Descriptions**

| Node Name | Description |
|-----------|-------------|
| Manual Trigger | Starts the workflow manually when run by the user. |
| Create New Spreadsheet | Creates a new Google Spreadsheet for output. |
| HTTP Request | Fetches metadata from the source spreadsheet, including sheet names. |
| Code | Processes sheet metadata into a list for iteration. |
| Loop Over Items | Loops over each sheet to replicate and populate. |
| Google Sheets2 | Deletes the default 'Sheet1' from the new spreadsheet. |
| Create Sheets | Creates a new sheet matching each source sheet. |
| Read Spreadsheet1 | Reads data from the source sheet. |
| Write sheet | Writes the data into the corresponding new sheet. |

**🛠️ Customization Tips**
- Make the Google Sheet title dynamic or user-input driven
- Add filtering logic before writing data
- Append custom audit columns like 'Timestamp' or 'Processed By'
- Enable logging or Slack alerts after each sheet is created

**📎 Required Files**

| File Name | Purpose |
|-----------|---------|
| My_workflow_4.json | Main workflow JSON file for sheet duplication and enrichment |

**🧪 Testing Tips**
- Test with a spreadsheet containing 2–3 simple sheets
- Validate whether all sheets are duplicated
- Check that columns and data structure remain intact
- Watch for authentication issues in the Google Sheets nodes

**🏷 Suggested Tags & Categories**
#GoogleSheets #Automation #DataEnrichment #Workflow #Spreadsheet
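The "Code" node in step 4 of the workflow logic could look something like this, assuming the HTTP Request node fetched the spreadsheet metadata from the Google Sheets API (which returns a `sheets` array with `properties.title` and `properties.sheetId`). Field names may differ in the actual template, so treat this as a sketch.

```javascript
// Minimal sketch: turn the Sheets metadata response into one item per sheet.
const { sheets = [] } = $input.first().json;

return sheets.map((sheet) => ({
  json: {
    sheetId: sheet.properties.sheetId,
    title: sheet.properties.title,   // used later to create and read the matching sheet
  },
}));
```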