by n8n Team
This workflow performs various Git operations. It starts with a manual trigger, sets the local repository path, decodes and updates a file's content, then adds, commits, and pushes the changes to a GitHub repository, and finally pulls changes.

The upper branch of the workflow retrieves a specific file ("README.md") from a GitHub repository ("git_push_article") owned by "teds-tech-talks". A Code node decodes the file's binary data into readable text, the decoded content is updated with a timestamp and additional data, and the modified file is pushed back to the repository using a GitHub node, completing the edit directly from the workflow.

The bottom branch of the workflow makes changes to a local Git repository. It updates the local "README.md" file with a timestamp and some content, then adds the modified files, commits them with a message, and pushes them to a remote GitHub repository owned by "teds-tech-talks". The workflow can also pull changes from the remote repository into the local one. The goal is to demonstrate how to perform the core Git operations (add, commit, push, and pull) using n8n nodes.
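A minimal sketch of the decode-and-update Code node described above, assuming the GitHub node returns the file's Base64-encoded content in a `content` field (the field name and appended text are illustrative, not the workflow's exact configuration):

```javascript
// Decode the Base64 file content from the GitHub node and append a timestamp.
// The "content" field name is an assumption about the incoming item.
const decoded = Buffer.from($json.content, 'base64').toString('utf8');
const updated = `${decoded}\nUpdated by n8n at ${new Date().toISOString()}`;
return [{ json: { content: updated } }];
```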
by Yaron Been
**Prunaai Hidream E1.1 Image Generator**

**Description**
Edit an image with a prompt. This is the hidream-e1.1 model accelerated with the Pruna optimisation engine.

**Overview**
This n8n workflow integrates with the Replicate API to use the prunaai/hidream-e1.1 model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required Parameters:
- **prompt** (string): Prompt

Optional Parameters:
- **seed** (integer, default: -1): Random seed (-1 for random)
- **image** (string, default: None): Input image to edit
- **speed_mode** (string, default: "Juiced (more speed)"): Speed optimization level
- **clip_cfg_norm** (boolean, default: True): Whether to use CLIP CFG normalization
- **output_format** (string, default: webp): Output format
- **guidance_scale** (number, default: 2.5): Guidance scale
- **output_quality** (integer, default: 100): Output quality (for jpg and webp)
- **refine_strength** (number, default: 0.3): Strength of refinement
- **num_inference_steps** (integer, default: 28): Number of inference steps
- **image_guidance_scale** (integer, default: 1): Image guidance scale

**How to Use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate image content.
4. Access the generated output from the final node.

**API Reference**
- Model: prunaai/hidream-e1.1
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
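For reference, the Replicate call behind this workflow can be sketched in plain JavaScript as follows. This is a minimal sketch, not the exact HTTP Request node configuration: the API token, the model version ID, and the input values are placeholders.

```javascript
// Create a prediction for prunaai/hidream-e1.1 on Replicate.
// REPLICATE_API_TOKEN and the version ID below are placeholders.
const res = await fetch('https://api.replicate.com/v1/predictions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    version: '<prunaai/hidream-e1.1 version id>', // placeholder
    input: {
      prompt: 'Turn the sky into a deep sunset orange',
      image: 'https://example.com/input.png',
      guidance_scale: 2.5,
      num_inference_steps: 28,
      output_format: 'webp',
    },
  }),
});
const prediction = await res.json(); // contains an id used for status polling
```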
by Yaron Been
**Google Veo 3 Fast Video Generator**

**Description**
A faster and cheaper version of Google's Veo 3 video model, with audio.

**Overview**
This n8n workflow integrates with the Replicate API to use the google/veo-3-fast model. This powerful AI model can generate high-quality video content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required Parameters:
- **prompt** (string): Text prompt for video generation

Optional Parameters:
- **seed** (integer, default: None): Random seed. Omit for random generations
- **resolution** (string, default: 720p): Resolution of the generated video
- **negative_prompt** (string, default: None): Description of what to discourage in the generated video

**How to Use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate video content.
4. Access the generated output from the final node.

**API Reference**
- Model: google/veo-3-fast
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of video generation parameters
by Yaron Been
**Bytedance Seedance 1 Lite Video Generator**

**Description**
A video generation model that offers text-to-video and image-to-video support for 5s or 10s videos, at 480p and 720p resolution.

**Overview**
This n8n workflow integrates with the Replicate API to use the bytedance/seedance-1-lite model. This powerful AI model can generate high-quality video content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required Parameters:
- **prompt** (string): Text prompt for video generation

Optional Parameters:
- **fps** (string, default: 24): Frame rate (frames per second)
- **seed** (integer, default: None): Random seed. Set for reproducible generation
- **image** (string, default: None): Input image for image-to-video generation
- **duration** (string, default: 5): Video duration in seconds
- **resolution** (string, default: 720p): Video resolution
- **aspect_ratio** (string, default: 16:9): Video aspect ratio. Ignored if an image is used
- **camera_fixed** (boolean, default: False): Whether to fix camera position
- **last_frame_image** (string, default: None): Input image for last-frame generation. This only works if an image start frame is given too

**How to Use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate video content.
4. Access the generated output from the final node.

**API Reference**
- Model: bytedance/seedance-1-lite
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of video generation parameters
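The "automated status checking and result retrieval" step common to these Replicate workflows can be sketched as a simple polling loop against the prediction's status. This is illustrative JavaScript, not the workflow's node configuration; `createSeedancePrediction` is a hypothetical helper standing in for the creation request shown earlier.

```javascript
// Poll the Replicate prediction until it finishes, then read the output URL.
// createSeedancePrediction() is a hypothetical stand-in for the creation step.
let prediction = await createSeedancePrediction();
while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
  await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5 seconds
  const res = await fetch(
    `https://api.replicate.com/v1/predictions/${prediction.id}`,
    { headers: { Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}` } },
  );
  prediction = await res.json();
}
if (prediction.status !== 'succeeded') {
  throw new Error(`Prediction ${prediction.status}: ${prediction.error}`);
}
console.log('Generated video:', prediction.output);
```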
by Yaron Been
**Zsxkib Canary Qwen 2.5b Text Generator**

**Description**
The best open-source speech-to-text model as of Jul 2025, transcribing audio with a record 5.63% WER and enabling AI tasks like summarization directly from speech.

**Overview**
This n8n workflow integrates with the Replicate API to use the zsxkib/canary-qwen-2.5b model. This powerful AI model can generate high-quality text content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required Parameters:
- **audio** (string): Audio file to transcribe

Optional Parameters:
- **llm_prompt** (string, default: None): Optional LLM analysis prompt
- **show_confidence** (boolean, default: False): Show AI reasoning in analysis
- **include_timestamps** (boolean, default: True): Include timestamps in transcript

**How to Use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate text content.
4. Access the generated output from the final node.

**API Reference**
- Model: zsxkib/canary-qwen-2.5b
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of text generation parameters
by Yaron Been
**Wan Video Wan 2.2 I2v A14b Video Generator**

**Description**
Image-to-video at 720p and 480p with Wan 2.2 A14B.

**Overview**
This n8n workflow integrates with the Replicate API to use the wan-video/wan-2.2-i2v-a14b model. This powerful AI model can generate high-quality video content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required Parameters:
- **prompt** (string): Prompt for video generation
- **image** (string): Input image to generate video from

Optional Parameters:
- **seed** (integer, default: None): Random seed. Leave blank for random
- **num_frames** (integer, default: 81): Number of video frames. 81 frames give the best results
- **resolution** (string, default: 480p): Resolution of video. 832x480px corresponds to a 16:9 aspect ratio, and 480x832px to 9:16
- **sample_shift** (number, default: 5): Sample shift factor
- **sample_steps** (integer, default: 30): Number of generation steps. Fewer steps means faster generation, at the expense of output quality. 30 steps is sufficient for most prompts
- **frames_per_second** (integer, default: 16): Frames per second. Note that the pricing of this model is based on the video duration at 16 fps

**How to Use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate video content.
4. Access the generated output from the final node.

**API Reference**
- Model: wan-video/wan-2.2-i2v-a14b
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of video generation parameters
by Yaron Been
**Lucataco Seed X Ppo Text Generator**

**Description**
Seed-X-PPO-7B by ByteDance-Seed, a powerful series of open-source multilingual translation language models.

**Overview**
This n8n workflow integrates with the Replicate API to use the lucataco/seed-x-ppo model. This powerful AI model can generate high-quality text content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required Parameters:
- **text** (string): Text to translate
- **target_language** (string): Target language (e.g., 'Chinese', 'French', 'Spanish')

Optional Parameters:
- **num_beams** (integer, default: 4): Number of beams for beam search
- **max_length** (integer, default: 512): Maximum length of generated text
- **source_language** (string, default: auto): Source language (use 'auto' for automatic detection)

**How to Use**
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate text content.
4. Access the generated output from the final node.

**API Reference**
- Model: lucataco/seed-x-ppo
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of text generation parameters
by Viktor Klepikovskyi
**Base64 Encode Multiple Binary Files with a Code Node**

This template demonstrates how to handle multiple binary files in n8n by using a Code node to convert them into Base64-encoded strings. It's particularly useful when an API requires file uploads in this format and the standard 'Extract From File' node is not sufficient for batch processing. The workflow starts by downloading a ZIP file, unzipping it to get multiple binary files, and then uses a Code node with custom JavaScript to encode each file individually.

**Instructions**
1. Download and import this template into your n8n instance.
2. Run the workflow once to see how it downloads, unzips, and then encodes multiple files.
3. Modify the 'HTTP Request' node to download your own binary file or a ZIP file containing multiple files.
4. Update the 'Code' node if you need to adjust the output format or file paths.
5. Use the output of the 'Code' node in a subsequent node, such as another 'HTTP Request', to send the Base64-encoded files to your desired API.

A link to the full blog post is available here.
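A minimal sketch of the kind of logic the Code node uses, assuming it runs once for all items and that each incoming item carries one or more binary properties (the output field names are illustrative, not the template's exact code):

```javascript
// Base64-encode every binary file attached to the incoming items.
const results = [];
for (let i = 0; i < items.length; i++) {
  for (const name of Object.keys(items[i].binary ?? {})) {
    // getBinaryDataBuffer loads the file's raw bytes as a Buffer
    const buffer = await this.helpers.getBinaryDataBuffer(i, name);
    results.push({
      json: {
        fileName: items[i].binary[name].fileName,
        base64: buffer.toString('base64'),
      },
    });
  }
}
return results;
```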
by Harshil Agrawal
This workflow demonstrates how to merge data from different executions. The Merge Data Function node takes the data from different executions of the RSS Feed Read node and merges it under a single object. Note: If you want to process the merged items further, you will have to convert the single item back into multiple items that n8n can understand.
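A minimal sketch of both sides of this pattern: a Function/Code node that collects the incoming items under a single object, and a later step that splits that object back into separate items (the `articles` field name is illustrative):

```javascript
// Merge all incoming items (e.g. from several RSS Feed Read executions)
// under a single object.
const merged = { articles: items.map((item) => item.json) };
return [{ json: merged }];

// In a later Code node, split the merged object back into individual items
// so n8n can process them one by one:
// return $json.articles.map((article) => ({ json: article }));
```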
by Julien DEL RIO
**Description**
This workflow serves a 1x1 transparent PNG image via a webhook, which can be embedded in an email to track when the email is opened. When the image is loaded by the recipient's email client, the webhook is triggered, optionally capturing a userId to identify who opened the email.

**Workflow Steps**
1. Webhook Trigger (Request img): Path /webhook/change-with-your-id. Triggered by an HTTP request (e.g. when the image is loaded in an email). Accepts a query parameter id to identify the recipient.
2. Set Base64 Data (Create data pix): Creates a variable data containing a Base64-encoded transparent PNG image (1x1 pixel).
3. Convert to Binary (Create img bin): Converts the Base64 data string into a binary file and sets the MIME type to image/png.
4. Respond to Webhook (Respond to Webhook): Sends the binary image file in the HTTP response.
5. Logging (Do anything to log): Placeholder node to log or process the id or other request metadata. You can access the query parameters using {{$json["query"]}}, and you can use any other parameter you want.

**How to Use in Emails**
Embed the image in the HTML body of your email with an `<img>` tag whose src points to the webhook URL, passing the recipient's identifier as the id query parameter (e.g. `https://<your-n8n-host>/webhook/change-with-your-id?id=<recipient-id>`). When the email is opened and the image is loaded, the workflow will be triggered.

**Notes**
- Some email clients block images by default; this may prevent tracking.
- You can enhance the workflow to store open events in a database and log the timestamp, IP, or user agent.
- Make sure to comply with data privacy and consent regulations (e.g. GDPR).
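The workflow uses separate Set and Convert-to-Binary nodes, but as a hedged sketch the same result could be produced in a single Code node. The Base64 string below is a placeholder for a real 1x1 transparent PNG, and the field names are illustrative:

```javascript
// Build the 1x1 tracking pixel as binary data and keep the "id" query
// parameter alongside it for logging.
const base64Png = '<Base64 of a 1x1 transparent PNG>'; // placeholder
return [
  {
    json: { trackedId: $json.query?.id ?? null },
    binary: {
      data: await this.helpers.prepareBinaryData(
        Buffer.from(base64Png, 'base64'),
        'pixel.png',
        'image/png',
      ),
    },
  },
];
```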
by Thomas
- Writes original, thought-provoking blog posts using AI
- Runs every 12 hours automatically
- Publishes directly to a Ghost blog with title, tags, and SEO meta

**Features**
- Scheduled every 12 hours
- OpenAI generates a multi-part blog post with metadata
- Markdown-compatible output (no HTML)
- Automatically published to Ghost CMS using an authenticated API (no hardcoded keys)
- Fully modular and general-purpose: edit the prompt for any blog theme!

**Nodes Overview**

| Step | Node Type | Purpose |
| --- | --- | --- |
| 1 | Schedule Trigger | Runs every 12 hours |
| 2 | OpenAI | Generates blog post + meta info |
| 3 | Code | Extracts content, title, meta, and tags |
| 4 | Code | Formats content as Ghost mobiledoc payload |
| 5 | HTTP Request | Publishes post to Ghost via Admin API |

**OpenAI Prompt (Generalized)**
Write a high-quality blog post on a creative or thought-provoking topic. The tone should be engaging and immersive. Length: 2–4 paragraphs. Then add a brief paragraph offering an alternative perspective or logical counterpoint. Finally, generate:
- Blog post title
- Meta description
- 5 tags

**Notes**
- No hardcoded API keys
- Ghost Admin API credentials must be set using the Credential Manager
- Prompt and Ghost URL are both easily customizable
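The mobiledoc-formatting step (node 4) can be sketched roughly as follows. The structure follows Ghost's documented mobiledoc format with a single Markdown card; the incoming field names (`content`, `title`, `metaDescription`, `tags`) are assumptions about the earlier Code node's output, not the template's exact names.

```javascript
// Wrap the generated Markdown in a Ghost mobiledoc payload and build the
// request body shape expected by the Ghost Admin API's posts endpoint.
const mobiledoc = JSON.stringify({
  version: '0.3.1',
  atoms: [],
  cards: [['markdown', { markdown: $json.content }]],
  markups: [],
  sections: [[10, 0]],
});

return [
  {
    json: {
      posts: [
        {
          title: $json.title,
          mobiledoc,
          meta_description: $json.metaDescription, // assumed field name
          tags: $json.tags,
          status: 'published',
        },
      ],
    },
  },
];
```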
by mahavishnu
This automation runs daily at 8:00 AM to automatically collect and organize business idea insights from IdeaBrowser.com into a structured Google Docs document. The workflow performs the following actions:

- Data Collection: Fetches the "idea of the day" content from ideabrowser.com/idea-of-the-day using authenticated HTTP requests.
- Content Processing: Extracts the base idea path and generates links to all related insight pages, including value ladder, market analysis, proof signals, execution plans, and community insights. The workflow also cleans the HTML content to extract readable text.
- Document Creation: Creates a new Google Docs document in a specified folder, with a timestamp and the idea name in the title.
- Content Aggregation: Systematically visits each insight page (main idea page, value ladder, why now, proof signals, market gap, execution plan, value equation, value matrix, ACP, community signals, and keywords) and collects their content.
- Document Population: Processes the collected content through markdown formatting and appends it to the Google Docs document, creating a comprehensive report of the daily business idea with all its associated insights.
- Automated Scheduling: Runs automatically every day at 8 AM, ensuring you have fresh business idea analysis delivered to your Google Drive without manual intervention.

This automation is perfect for entrepreneurs, business analysts, or anyone who wants to stay updated with curated business ideas and their detailed market analysis in an organized, searchable format.
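The HTML-cleaning part of the Content Processing step could look roughly like this in a Code node. This is a sketch under assumptions: the `data` field name and the exact cleanup rules are illustrative, not the workflow's actual implementation.

```javascript
// Strip scripts, styles, tags and extra whitespace so only readable text
// is appended to the Google Doc.
const html = $json.data; // assumed field holding the fetched page HTML
const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, ' ')
  .replace(/<style[\s\S]*?<\/style>/gi, ' ')
  .replace(/<[^>]+>/g, ' ')
  .replace(/&nbsp;/gi, ' ')
  .replace(/\s+/g, ' ')
  .trim();
return [{ json: { text } }];
```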