by Fahmi Oktafian
This workflow is designed for content creators and AI artists who want to automate posting unique AI-generated images to their Facebook Page several times a day. It uses Google Gemini (via LangChain) to generate imaginative image prompts and Pollinations AI to generate the images. Posts are published with hashtags and a clean caption.

**Who is it for**
- AI artists
- Facebook Page managers
- Digital marketers looking for automated creative content

**What it does**
- Triggers 3x daily at 7:00, 11:00, and 17:00 (local time)
- Generates random AI image prompts in a retro-futuristic, cinematic, or surreal style using Google Gemini
- Fetches images from Pollinations AI using the generated prompts
- Posts the images automatically to your Facebook Page with hashtags

**Requirements**
- n8n self-hosted or desktop (the workflow uses a Schedule Trigger)
- Pollinations API (no auth needed)
- Facebook Page with a Facebook Graph API token; required scopes: pages_manage_posts, pages_read_engagement, pages_show_list
- Google Gemini API key (used via the LangChain node)

**How to customize**
- Change the prompt style in the Basic LLM Chain node (promptType: define) to suit your theme.
- Adjust the Set The Generator Image node if you want different image sizes (width/height), different seed randomness, or other Pollinations models (&model=kontext).
- Add Telegram/Twitter nodes if you want cross-posting.
- Use a Set node to allow easy user-level configuration of models, hashtags, posting times, etc. A minimal sketch of the Pollinations request appears after this section.
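For orientation, here is a minimal sketch (plain JavaScript, e.g. inside an n8n Code node) of how the image URL for Pollinations might be assembled from a generated prompt. The width/height/seed/model parameters mirror those mentioned above, but treat the exact endpoint and query details as an assumption to verify against the actual node configuration.

```javascript
// Sketch: build a Pollinations image URL from a generated prompt.
// The public image endpoint and parameter names are assumptions -- verify
// against the Set The Generator Image node in the workflow.
const prompt = "retro-futuristic city at dusk, cinematic lighting";

const params = new URLSearchParams({
  width: "1024",
  height: "1024",
  seed: String(Math.floor(Math.random() * 1_000_000)), // randomize for unique images
  model: "kontext",                                     // optional, as noted above
});

const imageUrl = `https://image.pollinations.ai/prompt/${encodeURIComponent(prompt)}?${params}`;

// In n8n this URL would typically be passed to an HTTP Request node,
// or straight to the Facebook Graph API node as the photo URL.
console.log(imageUrl);
```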
by AlQaisi
Transforming Emails into Podcasts
Check out this channel for an example. The n8n workflow described here aims to change the way users engage with promotional emails by converting them into entertaining audio podcasts. It leverages automation through n8n to streamline tasks and enhance the user experience.

**Project benefit**
The primary goal of this project is to transform "CATEGORY_PROMOTIONS" emails into engaging audio content. By converting text into speech, users can enjoy promotional material hands-free, making it easier to consume information on the go or while relaxing. The workflow consists of several key steps orchestrated seamlessly to deliver a pleasant experience.

**How to use the workflow**
Gmail Trigger node: Initiates the workflow by fetching "CATEGORY_PROMOTIONS" emails at regular intervals. The Gmail Trigger node is set to poll for new emails every minute and filters for emails with the label "CATEGORY_PROMOTIONS" before triggering the workflow.

Steps to configure the filters inside the Gmail Trigger node:
1. Set "Poll Times" to "Every Minute" to check for new emails at regular intervals.
2. Enable the "Simple" toggle if you want to simplify the node's output.
3. Under "Filters", specify the label IDs you want to filter by; in this case, "CATEGORY_PROMOTIONS".
4. Adjust any additional options as needed.

```
// Gmail Trigger node configuration
pollTimes: { item: [ { mode: "everyMinute" } ] },
simple: false,
filters: { labelIds: [ "CATEGORY_PROMOTIONS" ] },
options: {}
```

Save your workflow and execute it to start monitoring your Gmail account for new emails that match the "CATEGORY_PROMOTIONS" label.

The remaining nodes:
- Get message content node: Extracts the email content for processing.
- Summarization Chain node: Generates concise summaries for better readability.
- Delete the unnecessary items node: Removes irrelevant details from the email content.
- Text to Free TTS node: Converts the summary text into speech using a free TTS service.
- Convert from base64 to File node: Transforms the audio data into a compatible file format.
- Merge Text with Audio node: Combines the text and audio components.
- Aggregate in same cell node: Gathers all processed data for finalization.
- Send Message to Telegram node: Dispatches the audio message with a caption to a designated Telegram chat ID.

By automating these tasks, the workflow delivers content in a more engaging format and fosters a positive user experience.

**Configuration instructions**
Configuration involves setting up the necessary nodes and establishing connections between them. Each node performs a specific function crucial to the overall operation of the workflow. Credentials must also be provided for the Gmail and OpenAI services to enable data processing and summarization.

**Utilizing the Text-to-Speech API**
In addition to the n8n automation, an external Text-to-Speech API generates the audio content from the text data. By sending a POST request with JSON data containing the text and voice preferences, users quickly receive an audio file of the converted content. The API offers a straightforward interface for text-to-speech conversion, making it well suited for creating audio clips efficiently.
To access the API, submit the desired text and a voice selection to receive the generated speech audio file. The endpoint is available at https://tiktok-tts.weilnet.workers.dev/api/generation, or through https://tiktokvoicegenerator.com/. A hedged sketch of this request follows below. In conclusion, this n8n workflow, coupled with a Text-to-Speech API, offers a practical way to transform promotional emails into engaging audio podcasts, enhancing user engagement and communication effectiveness. By embracing automation and these technologies, the project improves the user experience and streamlines content delivery.
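As an illustration, here is a minimal JavaScript sketch of the POST request described above (runnable in an n8n Code node or an ES module). The endpoint comes from the text; the body fields (`text`, `voice`), the example voice id, and the base64 `data` field in the response are assumptions to verify against the service's documentation.

```javascript
// Sketch: request speech audio for a text snippet from the free TTS endpoint above.
// The field names ("text", "voice") and the base64 "data" response field are
// assumptions -- check the service's docs before relying on them.
const response = await fetch("https://tiktok-tts.weilnet.workers.dev/api/generation", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    text: "Here is today's promotional email, summarized for you.",
    voice: "en_us_001", // example voice id (assumed)
  }),
});

const result = await response.json();
// result.data is expected to hold base64-encoded audio, which the workflow
// converts to a binary file before sending it to Telegram.
const audioBase64 = result.data;
console.log(audioBase64?.slice(0, 40), "...");
```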
by Marketing Canopy
UTM Link Creator & QR Code Generator with Scheduled Google Analytics Reports

This workflow enables marketers to generate UTM-tagged links, convert them into QR codes, and automate performance tracking in Google Analytics with scheduled reports every 7 days. It helps monitor traffic sources from different marketing channels and optimize campaign performance based on analytics data.

**Prerequisites**
Before implementing this workflow, ensure you have the following:
- Google Analytics 4 (GA4) account and access: a GA4 property set up and access to the GA4 Data API for scheduled performance tracking. Refer to the Google Analytics Data API Overview for more information.
- Airtable account and API key: an Airtable base to store UTM links, QR codes, and analytics data, plus an Airtable API key from your Account Settings. Detailed instructions are available in the Airtable API Authentication Guide.

**Step-by-step guide**
1. Generate UTM links. Create a form or interface to input the base URL (e.g., https://example.com), campaign name (utm_campaign), source (utm_source), medium (utm_medium), and optionally term (utm_term) and content (utm_content). Append the UTM parameters to generate a trackable URL (a sketch of steps 1 and 3 appears after this section).
2. Store UTM links and QR codes in Airtable. Set up an Airtable base with columns for UTM Link, QR Code, Campaign Name, Source, Medium, and Date Created; adjust as needed for your tracking requirements. For guidance on setting up your base and using the API, refer to the Airtable Web API documentation.
3. Convert UTM links to QR codes. Use a QR code generator API (e.g., goqr.me, qrserver.com) to generate QR codes for each UTM link and store them in Airtable.
4. Schedule Google Analytics performance reports (every 7 days). Use the Google Analytics Data API to pull weekly performance reports based on the UTM parameters and extract key metrics such as sessions, users, bounce rate, conversions, and revenue (if applicable). Store the data in Airtable for tracking and analysis, and adjust the timeframe as needed. For more details, consult the Google Analytics Data API Overview.

**Benefits of this workflow**
- Track marketing campaigns: easily monitor which channels drive traffic.
- Automate QR code creation: seamless integration of UTM links with QR codes.
- Scheduled Google Analytics reports: no manual reporting; everything runs automatically.
- Improve data-driven decisions: optimize ad spend and marketing strategies based on performance insights.
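Here is a hedged JavaScript sketch of steps 1 and 3: assembling a UTM-tagged link and turning it into a QR code image URL. The qrserver.com endpoint and its `data`/`size` parameters follow the public goqr.me API; the campaign values are placeholders.

```javascript
// Sketch: build a UTM-tagged link (step 1) and a QR code image URL for it (step 3).
// Campaign/source/medium values are placeholders; verify the QR API parameters
// against the goqr.me / api.qrserver.com documentation before production use.
const baseUrl = "https://example.com";

const utm = new URLSearchParams({
  utm_campaign: "spring_sale",
  utm_source: "newsletter",
  utm_medium: "email",
  // utm_term and utm_content are optional
});

const utmLink = `${baseUrl}?${utm}`;

// QR code image for the tagged link, ready to store in the Airtable "QR Code" column.
const qrCodeUrl =
  "https://api.qrserver.com/v1/create-qr-code/" +
  `?size=300x300&data=${encodeURIComponent(utmLink)}`;

console.log(utmLink);
console.log(qrCodeUrl);
```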
by David w/ SimpleGrow
1. Receive webhook notification: The workflow starts when a webhook receives a POST request from Whapi, notifying it that a new participant has joined a WhatsApp group.
2. Filter the event: The workflow checks two conditions: the event is for the correct WhatsApp group (matching the specific group ID), and the action type is "add" (meaning a user was added to the group). A sketch of this check appears after this description.
3. Send welcome message: If both conditions are met, the workflow sends a personalized welcome message to the new participant via Whapi. The message explains the group rules and how the user can earn points and participate in weekly raffles.
4. Create Airtable record: After sending the welcome message, the workflow creates a new record in the Airtable database for the new participant. The record includes the participant's WhatsApp ID, an initial engagement count of 100 points, and the date of the last interaction (set to today).

Result: Every new group member is automatically welcomed and registered in your engagement database with starter points, ready to participate in your group's activities and rewards. This workflow ensures new users are greeted, informed, and instantly included in your engagement tracking system.
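A small JavaScript sketch of the filter step is shown below. The payload shape (`group_id`, `action`, `participant`) is a hypothetical illustration only; Whapi's actual webhook fields may differ and should be checked against a real event.

```javascript
// Sketch of the filter step: keep only "add" events for the target group.
// The payload fields used here are hypothetical -- inspect a real Whapi
// webhook payload and adjust the property names accordingly.
const TARGET_GROUP_ID = "120363000000000000@g.us"; // placeholder group id

function shouldWelcome(body) {
  return body.group_id === TARGET_GROUP_ID && body.action === "add";
}

// Example payload as it might arrive from the webhook trigger:
const example = {
  group_id: "120363000000000000@g.us",
  action: "add",
  participant: "15551234567@s.whatsapp.net",
};
console.log(shouldWelcome(example)); // true -> continue to the welcome message
```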
by Zacharia Kimotho
This workflow helps marketers verify and update email data using the EffiBotics Email Verifier API. Copy this sheet and create a list with emails, as in this example: https://docs.google.com/spreadsheets/d/1rzuojNGTaBvaUEON6cakQRDva3ueGg5kNu9v12aaSP4/edit#gid=0

The trigger watches for changes in the number of rows in the sheet. Once a new cell is added, the new data is read, the email is checked for validity, and the result is written back to the sheet in real time (see the hedged sketch below). Happy emailing!
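Purely as an illustration of the verification step, here is a hypothetical JavaScript sketch. The endpoint URL, query parameter, and response field below are placeholders only and are not the real EffiBotics API; substitute the actual details from the EffiBotics documentation.

```javascript
// Hypothetical sketch of the verification step. The endpoint, its query
// parameter, and the response field are placeholders -- replace them with
// the real EffiBotics Email Verifier API details.
const email = "hello@mailsafi.com"; // value read from the new Google Sheets row

const res = await fetch(
  `https://api.example-effibotics.invalid/verify?email=${encodeURIComponent(email)}` // placeholder URL
);
const data = await res.json();

// Something like data.status (e.g. "valid" / "invalid") would then be written
// back to the verification column of the sheet via the Google Sheets node.
console.log(data.status);
```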
by Yaron Been
Replace manual task prioritization with intelligent AI reasoning that thinks like a Chief Operating Officer. This workflow automatically fetches your Asana tasks every morning, analyzes them using advanced AI models, and delivers the single most critical task with detailed reasoning, ensuring your team always focuses on what matters most.

**What this workflow does**
- Automated task collection: fetches all assigned Asana tasks daily at 9 AM
- AI-powered analysis: uses OpenAI GPT-4 to evaluate urgency, impact, and strategic importance
- Smart prioritization: identifies the #1 most critical task with detailed reasoning
- Contextual memory: leverages a vector database for historical context and pattern recognition
- Structured storage: saves prioritized tasks to PostgreSQL with a full audit trail
- Continuous learning: builds organizational knowledge over time for better decisions

**Key features**
- Daily automation with zero manual intervention
- Context-aware AI that learns from past prioritization decisions
- Strategic reasoning explaining why each task is prioritized
- Vector-powered memory using Pinecone for intelligent context retrieval
- Clean structured output with task names, priority levels, and detailed justifications
- Database integration for reporting and historical analysis

**Prerequisites**
- Asana account with API access
- OpenAI API key (GPT-4 recommended)
- PostgreSQL database
- Pinecone account (for vector storage and context)

**Perfect for**
- Operations teams managing multiple competing priorities
- Startups needing systematic task management
- Project managers juggling complex workflows
- Leadership teams requiring strategic focus
- Any organization wanting AI-driven operational intelligence

**How it works**
1. Morning automation: triggers every day at 9 AM
2. Data collection: pulls all relevant tasks from Asana
3. AI analysis: evaluates each task using COO-level strategic thinking
4. Context retrieval: searches the vector database for similar past tasks
5. Smart prioritization: identifies the single most important task
6. Structured output: delivers the priority level with detailed reasoning (see the sketch of the output record after this section)
7. Data storage: saves results for reporting and continuous improvement

**What you get**
- Complete n8n workflow with all AI components configured
- PostgreSQL database schema for task storage
- Vector database setup for contextual intelligence
- Comprehensive documentation and setup guide
- Sample task data and output examples

**Need help or want to learn more?**
Created by Yaron Been - Automation & AI Specialist
Support: Yaron@nofluff.online
YouTube tutorials: https://www.youtube.com/@YaronBeen/videos
LinkedIn: https://www.linkedin.com/in/yaronbeen/
Discover more advanced automation workflows and AI integration tutorials on my channels!

Tags: AI, OpenAI, Asana, Task Management, COO, Prioritization, Automation, Vector Database, Operations, GPT-4
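To make the "clean structured output" concrete, here is a hedged JavaScript sketch of a record the AI analysis step could produce and the workflow could persist. The field names are illustrative assumptions, not the template's actual PostgreSQL schema.

```javascript
// Illustrative shape of a prioritized-task record (field names are assumptions,
// not the template's actual database schema).
const prioritizedTask = {
  run_date: "2024-06-03",
  task_name: "Finalize Q3 vendor contract",
  asana_task_id: "1207000000000000",        // placeholder id
  priority_level: "critical",               // e.g. critical / high / medium
  reasoning:
    "Blocking two downstream projects and has an external deadline this week; " +
    "highest combined urgency and strategic impact of today's open tasks.",
};

// The workflow would insert a record like this into PostgreSQL and also embed
// the reasoning into the vector store so future runs can retrieve similar
// past decisions as context.
console.log(JSON.stringify(prioritizedTask, null, 2));
```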
by Zacharia Kimotho
How to scrape emails from websites

This workflow shows how to quickly build an email-scraping API using n8n. Email marketing is at the core of most marketing strategies, be it content marketing, sales, etc. Being able to find contacts for your business at scale is therefore key. There are tools on the market that can do this, but most are premium, so why not build a custom one with n8n?

**Usage**
The workflow fetches a website's content and extracts email addresses from the data found on the page (a sketch of the extraction step follows this section).
1. Copy the webhook URL into your browser.
2. Add a query parameter, e.g. ?Website=https://mailsafi.com. This should give you a URL like {{$n8nhostingurl}}/webhook/ea568868-5770-4b2a-8893-700b344c995e?Website=https://mailsafi.com
3. Open the URL and wait for the extracted email to be displayed. This returns the email address found on the website; if there is no email, the response is "workflow successfully executed."
4. Make sure to include http:// (or https://) in your domains, otherwise you may get an error.
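Here is a minimal JavaScript sketch of the extraction step under the assumptions above: fetch the page and pull out anything that looks like an email address. The regex is a pragmatic illustration, not an exhaustive RFC 5322 matcher, and the workflow's actual extraction node may differ.

```javascript
// Sketch of the extraction step: pull email addresses out of a page's HTML.
const website = "https://mailsafi.com"; // value of the ?Website= query parameter

const html = await (await fetch(website)).text();

const emailPattern = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
const emails = [...new Set(html.match(emailPattern) ?? [])]; // de-duplicate matches

// The webhook response would return the first match, or a generic success
// message when nothing is found.
console.log(emails[0] ?? "workflow successfully executed.");
```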
by Nskha
**Overview**
This workflow automates the process of notifying users about new emails via Telegram and temporarily hosting the email content on a secret HTML page. It is ideal for users who need immediate notifications and a secure, temporary web view of their email content.

**Use cases**
- Immediate notification of new emails in Telegram, with the ability to preview the email content in a secure, temporary HTML format.
- Automation for users who need to keep track of their emails without constantly checking their email client.
- Useful for teams or individuals who require instant updates on critical emails and prefer a quick preview through a web interface.

**My personal use case: secure subscription sharing**
From time to time I find myself wanting to share my paid subscriptions with friends, but handing out OTP codes manually or sharing my email isn't a good idea due to security concerns. I attempted to use the IMAP node together with a secret Telegram channel for this purpose but encountered numerous problems, such as difficulties scraping content from emails. Additionally, the Telegram API sometimes rejects certain special characters found in email contents. After facing these challenges, I found that rendering emails as HTML pages and sharing them directly is the most effective solution. This approach bypasses the issues with character limitations and content scraping, providing a seamless way to share subscription benefits securely.

**Services/APIs used**

| Service/API | Node Type | Description |
|---|---|---|
| IMAP Email Server | Email Trigger (IMAP) | Triggers the workflow on receiving a new email. |
| Telegram API | Telegram | Sends notifications and manages messages in Telegram. |
| GitHub Gist API | HTTP Request (GitHub Gist) | Temporarily hosts email content on GitHub Gist. |
| GitHub Gist API (deletion) | HTTP Request (GitHub Gist delete) | Deletes the temporary GitHub Gist after a specified time. |
| Wait | Wait | Delays the workflow for a specified period. |

**Configuration steps**
1. Email Trigger (IMAP): Configure your IMAP email credentials to enable the workflow to check for new emails.
2. Telegram nodes: Insert your Telegram bot's API credentials and your chat ID in both Telegram nodes to send and delete messages.
3. GitHub Gist nodes: Provide your GitHub API credentials to create and delete Gists (see the hedged sketch after this section). Ensure the GitHub token has the necessary permissions to create and delete Gists.
4. Wait node: Adjust the wait time according to how long the email content should be accessible via the HTML page.

**Screenshot**

**Additional notes**
- Ensure all credentials are securely stored and have the minimum necessary permissions for the workflow to function.
- Test the workflow with non-sensitive emails to ensure it operates as expected before using it with critical email accounts.
- Consider the privacy and security implications of temporarily hosting email content on GitHub Gist.
- For any questions or issues, refer to the respective API documentation for each service used, or consult the n8n community for support.
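As a rough guide to the Gist steps, here is a JavaScript sketch using the public GitHub REST API for gists (create, then delete by id). The token handling, file name, and email body are placeholders; the workflow's HTTP Request nodes may structure the call differently.

```javascript
// Sketch of the Gist steps: create a secret Gist holding the email HTML,
// then delete it by id after the Wait node elapses.
// GITHUB_TOKEN is a placeholder for your GitHub credential.
const headers = {
  Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  Accept: "application/vnd.github+json",
  "Content-Type": "application/json",
};

// "public: false" makes the Gist unlisted (reachable only via its URL).
const created = await (await fetch("https://api.github.com/gists", {
  method: "POST",
  headers,
  body: JSON.stringify({
    description: "Temporary email preview",
    public: false,
    files: { "email.html": { content: "<html><body>Email body here</body></html>" } },
  }),
})).json();

console.log(created.html_url); // the link sent to Telegram

// Later, after the configured wait time:
await fetch(`https://api.github.com/gists/${created.id}`, { method: "DELETE", headers });
```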
by Yaron Been
Fire Part Crafter Image Generator

**Description**
PartCrafter is a structured 3D mesh generation model that creates multiple parts and objects from a single RGB image.

**Overview**
This n8n workflow integrates with the Replicate API to use the fire/part-crafter model, generating high-quality 3D mesh output from your input image.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required:
- **image** (string): Input image for 3D mesh generation

Optional:
- **seed** (integer, default: 0): Random seed for reproducibility; use 0 for a random seed
- **num_parts** (integer, default: 16): Number of parts to generate
- **num_tokens** (string, default: 2048): Number of tokens for generation
- **guidance_scale** (number, default: 7): Guidance scale for generation
- **remove_background** (boolean, default: false): Remove the background from the input image
- **use_flash_decoder** (boolean, default: false): Use the flash decoder for faster inference (temperamental?)
- **num_inference_steps** (integer, default: 50): Number of inference steps

**How to use**
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate the output (see the hedged request sketch after this section)
4. Access the generated output from the final node

**API reference**
- Model: fire/part-crafter
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
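For reference, here is a hedged JavaScript sketch of the kind of call the workflow makes against the endpoint above. The `{ version, input }` body shape and the polling pattern follow Replicate's public predictions API; the version hash and the input image URL are placeholders, and the workflow's HTTP Request nodes may differ in detail.

```javascript
// Sketch of a Replicate prediction request. The version hash and image URL
// are placeholders -- look up the current fire/part-crafter version on Replicate.
const headers = {
  Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
  "Content-Type": "application/json",
};

const prediction = await (await fetch("https://api.replicate.com/v1/predictions", {
  method: "POST",
  headers,
  body: JSON.stringify({
    version: "PART_CRAFTER_VERSION_HASH", // placeholder model version id
    input: {
      image: "https://example.com/chair.png", // placeholder input image
      num_parts: 16,
      guidance_scale: 7,
      num_inference_steps: 50,
    },
  }),
})).json();

// Poll until the prediction succeeds or fails, then read its output.
let result = prediction;
while (result.status !== "succeeded" && result.status !== "failed") {
  await new Promise((r) => setTimeout(r, 3000));
  result = await (await fetch(
    `https://api.replicate.com/v1/predictions/${prediction.id}`, { headers }
  )).json();
}
console.log(result.output);
```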
by n8n Team
This workflow generates CSV files containing a list of 10 random users with specific characteristics using OpenAI's GPT-4 model. It then splits this data into batches, converts it to CSV format, and saves it to disk for further use. Execution of the workflow begins here when triggered manually.

- "OpenAI" node: Uses the OpenAI API to generate random user data. The input to the OpenAI API is a fixed string asking for a list of 10 random users with specific attributes: a name and surname starting with the same letter, a subscription status, and a subscription date (if they are subscribed). The prompt also contains a short example of the expected JSON object structure; this technique is called one-shot prompting (a sketch of such a prompt follows this section).
- "Split In Batches" node: Handles the OpenAI responses one by one.
- "Parse JSON" node: Converts the content of the message received from the OpenAI node (which is in string format) into a JSON object.
- "Make JSON Table" node: Converts the JSON data into a tabular format, which is easier to handle for further data processing.
- "Convert to CSV" node: Converts the table data received from the "Make JSON Table" node into CSV format and assigns a file name.
- "Save to Disk" node: Saves the CSV generated in the previous node to disk in the ".n8n" directory.

The workflow is designed in a circular manner: after saving the file to disk, it returns to the "Split In Batches" node to process the OpenAI output until all batches are processed.
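To illustrate one-shot prompting, here is a JavaScript sketch of a prompt in the spirit of the description above. The wording and field names are illustrative assumptions; the template's actual OpenAI node text may differ.

```javascript
// Illustrative one-shot prompt: the single JSON example at the end is the "one shot"
// that shows the model the exact structure to return.
const prompt = `
Generate a JSON array of 10 random users.
Each user has a name and surname starting with the same letter,
an "isSubscribed" boolean, and a "subscriptionDate" (null if not subscribed).

Example of one element:
{ "name": "Laura", "surname": "Larsen", "isSubscribed": true, "subscriptionDate": "2023-05-14" }

Return only the JSON array.
`;

console.log(prompt.trim());
```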
by Abbas Ali
This automation fetches the latest article from a WordPress blog, summarizes it using OpenAI, and sends the summary to a list of subscribers via email. Ideal for content creators and bloggers who want to distribute digestible content without manual effort.

**Use case**
Perfect for:
- Newsletter creators
- Content marketers
- Bloggers
- Knowledge managers

**Nodes used**
- Schedule Trigger
- HTTP Request
- Set
- OpenAI
- Google Sheets
- Email (Gmail/SMTP)
- IF
- SplitInBatches

**Workflow steps**
1. Trigger: Starts on a schedule (e.g., daily at 9:00 AM).
2. Fetch blog post: Retrieves the most recent post from a WordPress blog via an HTTP Request (see the sketch after this section).
3. Extract fields: A Set node extracts the title, link, and content.
4. Summarize article: OpenAI processes the article and returns a 3-point summary.
5. Fetch subscribers: Google Sheets reads email addresses from a subscriber list.
6. Loop emails: SplitInBatches and Send Email nodes loop through the subscribers.
7. Conditional logic: An IF node skips articles shorter than 300 words.

**Credentials required**
- OpenAI API key (for content summarization)
- Google Sheets OAuth2 (to read subscriber emails)
- Gmail or SMTP (for sending emails)

**Test instructions**
1. Replace the blog URL in the HTTP Request node.
2. Connect your OpenAI API key.
3. Link your Google Sheet with a column named Email.
4. Set up Gmail or SMTP credentials.
5. Run manually for testing, then activate the schedule.
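As a guide to the fetch step, here is a JavaScript sketch using the standard WordPress REST API. The blog URL is a placeholder; the `/wp-json/wp/v2/posts` route and the `title.rendered` / `content.rendered` fields are the stock WordPress response shape, though a given site may customize or disable them.

```javascript
// Sketch of the "Fetch blog post" step via the WordPress REST API.
const blogUrl = "https://example.com"; // placeholder: your WordPress site

const [latest] = await (await fetch(
  `${blogUrl}/wp-json/wp/v2/posts?per_page=1&orderby=date&order=desc`
)).json();

// Fields the Set node would extract before the summarization step:
const post = {
  title: latest.title.rendered,
  link: latest.link,
  content: latest.content.rendered, // HTML body; strip tags before counting words
};

console.log(post.title, post.link);
```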
by Yaron Been
Fire Flux Image Generator

**Description**
An image generation model tailored for local development and personal use.

**Overview**
This n8n workflow integrates with the Replicate API to use the fire/flux model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required:
- **prompt** (string): Prompt for the generated image

Optional:
- **seed** (integer, default: 0): Random seed; set for reproducible generation
- **go_fast** (boolean, default: true): Run faster predictions with a speed-optimized model (currently fp8 quantized); disable to run in the original bf16
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image
- **num_outputs** (integer, default: 1): Number of outputs to generate
- **aspect_ratio** (string, default: 2:1): Aspect ratio for the generated image
- **output_format** (string, default: png): Format of the output images
- **output_quality** (integer, default: 80): Quality when saving the output images, from 0 to 100 (100 is best, 0 is lowest); not relevant for .png outputs
- **num_inference_steps** (integer, default: 4): Number of denoising steps. 4 is recommended; fewer steps produce lower-quality outputs, faster
- **disable_safety_checker** (boolean, default: false): Disable the safety checker for generated images

**How to use**
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

**API reference**
- Model: fire/flux
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters