by Kevin Cole
## How It Works
This workflow sends an HTTP request to OpenAI's Text-to-Speech (TTS) model and returns an .mp3 audio recording of the provided text. This template is meant to be adapted for your individual use case, and requires a valid OpenAI credential.

## Gotchas
Per OpenAI's Usage Policies, if you use this workflow to provide audio output to users, you must clearly disclose that the TTS voice they are hearing is AI-generated and not a human voice.
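As a minimal sketch, the request the workflow sends looks roughly like this, assuming OpenAI's standard speech endpoint with the `tts-1` model and `alloy` voice (swap in the model, voice, and input text that fit your use case):

```python
import json

# Sketch of the HTTP request body the workflow sends to OpenAI's TTS
# endpoint. The model and voice chosen here are assumptions; the API key
# is a placeholder supplied by your n8n credential.
endpoint = "https://api.openai.com/v1/audio/speech"
payload = {
    "model": "tts-1",
    "input": "Hello! This voice is AI-generated, not a human.",
    "voice": "alloy",
    "response_format": "mp3",  # the workflow saves the response as .mp3
}
headers = {
    "Authorization": "Bearer YOUR_OPENAI_API_KEY",  # placeholder credential
    "Content-Type": "application/json",
}
body = json.dumps(payload)
```

The binary response is the .mp3 file itself, so in n8n the HTTP Request node should be set to return binary data rather than JSON.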
by Yaron Been
## CFO Forecasting Agent - Marketplace Listing

### Headlines (Choose Your Favorite)
- **Option 1 - Direct & Professional:** "AI-Powered CFO Forecasting Agent: Automated Revenue Predictions from Stripe Data"
- **Option 2 - Benefit-Focused:** "Automate Your Financial Forecasting: Daily Revenue Predictions with AI Intelligence"
- **Option 3 - Action-Oriented:** "Transform Stripe Sales Data into Intelligent 3-Month Revenue Forecasts Automatically"

### Marketplace Description

🚀 **AI-Powered Financial Forecasting on Autopilot**

Turn your Stripe sales data into intelligent revenue forecasts with this comprehensive CFO Forecasting Agent. The workflow automatically analyzes your transaction history, identifies trends, and generates professional 3-month revenue predictions using OpenAI's GPT-4.

✨ **What This Workflow Does:**
- 📊 **Automated Data Collection**: Fetches and processes all Stripe charges daily
- 🤖 **AI-Powered Analysis**: Uses OpenAI GPT-4 to analyze trends and predict future revenue
- 📈 **Structured Forecasting**: Generates monthly forecasts with confidence levels and insights
- 💾 **Multi-Platform Storage**: Saves results to both a Supabase database and Google Sheets
- 🕒 **Scheduled Execution**: Runs automatically every day to keep forecasts current
- 🧠 **Smart Context**: Optional Pinecone integration for historical context and improved accuracy

🔧 **Key Features:**
- **Daily automated execution** at 9 AM
- **Structured JSON output** with forecasts, trends, and confidence levels
- **Dual storage system** for data backup and easy reporting
- **RAG-enabled** for enhanced forecasting with historical context
- **Professional CFO-grade insights** and trend analysis

📋 **Prerequisites:**
- Stripe account with API access
- OpenAI API key (GPT-4 recommended)
- Google Sheets API credentials
- Supabase account (optional)
- Pinecone account (optional, for enhanced context)

🎯 **Perfect For:**
- SaaS companies tracking subscription revenue
- E-commerce businesses needing sales forecasts
- Startups requiring investor-ready financial projections
- Finance teams automating reporting workflows
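As a minimal sketch of the data-collection step, the daily Stripe charges can be rolled up into monthly totals before being handed to GPT-4. Field names follow Stripe's Charge object (`amount` in cents, `created` as a Unix timestamp); the sample data is invented:

```python
from collections import defaultdict
from datetime import datetime, timezone

# Invented sample of Stripe Charge objects as returned by the API
charges = [
    {"amount": 4900, "created": 1717200000, "status": "succeeded"},  # 2024-06-01
    {"amount": 9900, "created": 1719878400, "status": "succeeded"},  # 2024-07-02
    {"amount": 4900, "created": 1719964800, "status": "failed"},     # excluded
]

monthly_cents = defaultdict(int)
for charge in charges:
    if charge["status"] != "succeeded":
        continue  # only count successful payments
    month = datetime.fromtimestamp(charge["created"], tz=timezone.utc).strftime("%Y-%m")
    monthly_cents[month] += charge["amount"]

# Convert cents to currency units for the forecast prompt
monthly_revenue = {m: cents / 100 for m, cents in sorted(monthly_cents.items())}
```

The resulting per-month series is what the AI agent extrapolates into the 3-month forecast.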
📦 **What You Get:**
- Complete n8n workflow with all nodes configured
- Detailed documentation and setup instructions
- Sample data structure and output formats
- Ready-to-use Google Sheets template

💡 **Need Help or Want to Learn More?**

Created by Yaron Been - Automation & AI Specialist
- 📧 Support: Yaron@nofluff.online
- 🎥 YouTube Tutorials: https://www.youtube.com/@YaronBeen/videos
- 💼 LinkedIn: https://www.linkedin.com/in/yaronbeen/

Get more automation tips, tutorials, and advanced workflows on my channels!

🏷️ **Tags:** AI, OpenAI, Stripe, Forecasting, Finance, CFO, Automation, Revenue, Analytics, GPT-4
by Deborah
Want to learn the basics of n8n? Our comprehensive quickstart tutorial is here to guide you through the fundamentals, step by step. Designed with beginners in mind, it provides a hands-on approach to learning n8n's basic functionality.
by Harshil Agrawal
This workflow allows you to receive updates from Wise and add information about a transfer to a base in Airtable.

- **Wise Trigger node:** Triggers the workflow when the status of your transfer changes.
- **Wise node:** Gets the information about the transfer.
- **Set node:** Ensures that only the data we set in this node gets passed on to the next nodes in the workflow. We set the values of Transfer ID, Date, Reference, and Amount here.
- **Airtable node:** Appends the data set in the previous node to a table.
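The Set node's job can be sketched as keeping only the four fields the Airtable base expects. The Wise payload field names below are illustrative assumptions; inspect your own trigger output for the exact structure:

```python
# Invented sample of a Wise transfer payload; field names are assumptions
transfer = {
    "id": 12345678,
    "created": "2024-01-15 10:30:00",
    "reference": "Invoice 42",
    "sourceValue": 250.0,
    "status": "outgoing_payment_sent",
}

# What the Set node passes on to the Airtable node
record = {
    "Transfer ID": transfer["id"],
    "Date": transfer["created"],
    "Reference": transfer["reference"],
    "Amount": transfer["sourceValue"],
}
```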
by Yaron Been
This workflow automatically identifies and tracks backlink opportunities by analyzing competitor link profiles and finding potential linking websites. It saves you time by eliminating manual backlink prospect research and provides a systematic approach to link building and SEO improvement.

## Overview
The workflow scrapes competitor backlink profiles and analyzes potential linking opportunities by examining referring domains, anchor text patterns, and link quality metrics. It uses Bright Data to access backlink data sources and AI to intelligently identify high-value linking opportunities for your SEO strategy.

## Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping backlink analysis platforms without being blocked
- **OpenAI**: AI agent for intelligent backlink opportunity analysis
- **Google Sheets**: For storing backlink opportunities and tracking data

## How to Install
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Bright Data**: Add your Bright Data credentials to the MCP Client node
3. **Set Up OpenAI**: Configure your OpenAI API credentials
4. **Configure Google Sheets**: Connect your Google Sheets account and set up your backlink tracking spreadsheet
5. **Customize**: Define target domains and backlink analysis parameters

## Use Cases
- **SEO Teams**: Identify high-quality backlink opportunities for link building campaigns
- **Content Marketing**: Find websites that might be interested in linking to your content
- **Competitive Analysis**: Analyze competitor link profiles to discover new opportunities
- **Digital PR**: Identify potential media outlets and industry websites for outreach

## Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #backlinks
#seo #linkbuilding #brightdata #webscraping #seotools #n8nworkflow #workflow #nocode #linkanalysis #backlinkresearch #seoautomation #linkprospecting #digitalmarketing #backlinkmonitoring #seoanalysis #linkopportunities #competitoranalysis #seoresearch #linkstrategy #backlinkanalysis #domainanalysis #linktracking #seomonitoring #searchmarketing #organicseo #linkbuilding #seocampaigns
by Sunny
## Workflow Description: Automated Content Publishing for WordPress

This n8n workflow automates the entire process of content generation, image selection, and scheduled publishing to a self-hosted WordPress website. It is designed for bloggers, marketers, and businesses who want to streamline their content creation and posting workflow.

### 🌟 Features
✅ **AI-Powered Content Generation**
- Uses ChatGPT to generate engaging, market-ready blog articles
- Dynamically incorporates high-search-volume keywords

✅ **Automated Image Selection**
- Searches for relevant stock images from Pexels
- Embeds images directly into posts
- (Optional) Supports the **Featured Image from URL (FIFU) plugin** for WordPress

✅ **Scheduled & Randomized Posting**
- Automatically schedules posts at predefined intervals
- Supports a randomized delay (0-6 hours) for natural publishing

✅ **WordPress API Integration**
- Uses the WordPress REST API to publish posts directly
- Configures featured images, categories, and metadata
- Supports SEO-friendly meta fields

✅ **Flexible & Customizable**
- Works with any self-hosted WordPress website
- Can be modified for other CMS platforms

### 🔧 How It Works
1️⃣ **Trigger & Scheduling**
- Runs automatically at preset times or on demand
- Supports cron-like scheduling

2️⃣ **AI Content Generation**
- Uses a well-crafted prompt to generate high-quality blog posts
- Extracts relevant keywords for both SEO and image selection

3️⃣ **Image Fetching from Pexels**
- Searches and retrieves high-quality images
- Embeds image credits and ensures proper formatting

4️⃣ **WordPress API Integration**
- Sends the post title, content, image, and metadata via HTTP Request
- Can include custom fields, categories, and tags

5️⃣ **Randomized Delay Before Publishing**
- Ensures natural posting behavior
- Avoids bulk-publishing issues

### 📌 Requirements
- **Self-hosted WordPress website** with the **REST API enabled**
- **FIFU plugin** (optional) for **external featured images**
- **n8n self-hosted or cloud instance**

### 🚀 Who Is This For?
✅ Bloggers who want to automate content publishing
✅ Marketing teams looking to scale content production
✅ Business owners who want to boost their online presence
✅ SEO professionals who need consistent, optimized content

💡 **Ready to Automate?**
👉 Click here to get this workflow! (Replace with Purchase URL)
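As a minimal sketch of steps 4️⃣ and 5️⃣, the publishing call uses the standard WordPress REST API posts endpoint; the site URL, category ID, and the FIFU meta key below are placeholders/assumptions to adjust for your site:

```python
import json
import random

# Randomized 0-6 hour delay before publishing (in seconds)
delay_seconds = random.randint(0, 6 * 3600)

# Sketch of the POST body sent to the WordPress REST API
endpoint = "https://example.com/wp-json/wp/v2/posts"  # placeholder site
post = {
    "title": "AI-Generated Post Title",
    "content": "<p>Generated article body with an embedded Pexels image.</p>",
    "status": "publish",
    "categories": [3],  # assumed category ID
    "meta": {"fifu_image_url": "https://images.pexels.com/photo.jpg"},  # FIFU field (assumption)
}
body = json.dumps(post)
```

In n8n, the delay maps to a Wait node and the POST maps to an HTTP Request node authenticated with your WordPress credentials.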
by Sami Abid
This workflow will trigger daily at 6am to retrieve your day's calendar events from Google Calendar and send them as a summary message to Slack. I've used a low-code method to filter the dates as I can't code much in JSON :) Contact me on https://twitter.com/sami_abid if you have any questions!
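The low-code date filtering can be sketched as computing today's boundaries and passing them as the calendar query's time window (parameter names like timeMin/timeMax are how Google Calendar expresses this; the exact node setup may differ):

```python
from datetime import datetime, time, timezone

# Build today's start/end timestamps to bound the calendar event query
today = datetime.now(timezone.utc).date()
time_min = datetime.combine(today, time.min, tzinfo=timezone.utc).isoformat()
time_max = datetime.combine(today, time.max, tzinfo=timezone.utc).isoformat()
```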
by Don Jayamaha Jr
Track NFT listings, offers, orders, and trait-based pricing in real time! This workflow integrates the OpenSea API, AI-powered analytics (GPT-4o-mini), and n8n automation to provide instant insights into NFT trading activity. Ideal for NFT traders, collectors, and investors looking to monitor the market and identify profitable opportunities.

## How It Works
1. A user submits a query about NFT listings, offers, or order history.
2. The OpenSea Marketplace Agent determines the correct API tool:
   - Retrieve active NFT listings for a collection.
   - Fetch valid offers for individual NFTs or entire collections.
   - Identify the cheapest NFT listings by collection or token ID.
   - Track the highest offer made for a single NFT.
   - Access detailed order history for a transaction.
3. The OpenSea API (requires an API key) is queried to fetch real-time data.
4. The AI engine processes and structures the response, making it easy to interpret.
5. The NFT marketplace insights are delivered via Telegram or Slack, or stored in a database.

## What You Can Do with This Agent
🔹 **Find the Best NFT Listings** → Retrieve the cheapest available listings in any collection.
🔹 **Track Offers on NFTs** → See all active offers, including the highest bids.
🔹 **Analyze Collection-Wide Market Data** → Compare listings, offers, and sales activity.
🔹 **Retrieve Order Details** → Search by order hash to check buyer, seller, and transaction status.
🔹 **Fetch NFT Trait-Based Offers** → Identify rare traits that receive premium bids.
🔹 **Monitor Multi-Chain Listings** → Works across Ethereum, Polygon (Matic), Arbitrum, Optimism, and more.

## Example Queries You Can Use
✅ "Show me the 10 cheapest listings for Bored Ape Yacht Club."
✅ "Find the highest bid for CryptoPunk #1234."
✅ "Track all open offers for Azuki NFTs."
✅ "Retrieve details for this OpenSea order: 0x123abc... on Ethereum."
✅ "List all NFTs for sale in the 'CloneX' collection."
## Available API Tools & Endpoints
1️⃣ **Get All Listings by Collection** → `/api/v2/listings/collection/{collection_slug}/all` (fetches active listings for a collection)
2️⃣ **Get All Offers by Collection** → `/api/v2/offers/collection/{collection_slug}/all` (retrieves all offers for a collection)
3️⃣ **Get Best Listing by NFT** → `/api/v2/listings/collection/{collection_slug}/nfts/{identifier}/best` (finds the lowest-priced listing for an NFT)
4️⃣ **Get Best Listings by Collection** → `/api/v2/listings/collection/{collection_slug}/best` (fetches the cheapest listings per collection)
5️⃣ **Get Best Offer by NFT** → `/api/v2/offers/collection/{collection_slug}/nfts/{identifier}/best` (retrieves the highest offer for an NFT)
6️⃣ **Get Collection Offers** → `/api/v2/offers/collection/{collection_slug}` (shows collection-wide offers)
7️⃣ **Get Item Offers** → `/api/v2/orders/{chain}/{protocol}/offers` (fetches active item-specific offers)
8️⃣ **Get Listings by Chain & Protocol** → `/api/v2/orders/{chain}/{protocol}/listings` (retrieves active listings across blockchains)
9️⃣ **Get Order Details by Hash** → `/api/v2/orders/chain/{chain}/protocol/{protocol_address}/{order_hash}` (checks order status using an order hash)
🔟 **Get Trait-Based Offers** → `/api/v2/offers/collection/{collection_slug}/traits` (fetches offers for specific NFT traits)

## Set Up Steps
1. **Get an OpenSea API key**: Sign up at OpenSea API and request an API key.
2. **Configure API credentials in n8n**: Add your OpenSea API key under HTTP Header Authentication.
3. **Connect the workflow to Telegram, Slack, or a database (optional)**: Use n8n integrations to send alerts to Telegram or Slack, or save results to Google Sheets, Notion, etc.
4. **Deploy and test**: Send a query (e.g., "Get the best listing for BAYC #5678") and receive instant insights!

Stay ahead of the NFT market and gain powerful insights with AI-powered OpenSea analytics!
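As a minimal sketch of how a tool call becomes an OpenSea API request, the agent fills the endpoint template and attaches the API key header. The `X-API-KEY` header follows OpenSea's documentation; the collection slug is just an example:

```python
# Fill endpoint 4 (best listings per collection) with an example slug
BASE_URL = "https://api.opensea.io"
template = "/api/v2/listings/collection/{collection_slug}/best"
url = BASE_URL + template.format(collection_slug="boredapeyachtclub")

headers = {
    "X-API-KEY": "YOUR_OPENSEA_API_KEY",  # placeholder credential
    "accept": "application/json",
}
```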
by Yohita
This workflow template creates an audio stream session on UltraVox that is compatible with Plivo and hands it off to Plivo.

How it works:
1. Plivo initiates a call and requests the Answer URL.
2. The workflow responds with Plivo XML to join the session.

Note: Make sure to update the UltraVox API key in the credentials, and update the system prompt based on your requirements. Check the YouTube video for a walkthrough.
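As a hedged sketch, the Answer URL response is Plivo XML that streams the call audio to the UltraVox session. The `<Stream>` element and its attributes below are assumptions (check Plivo's audio-streaming XML reference), and the join URL is a placeholder for the one returned when you create the UltraVox session:

```python
# Placeholder for the WebSocket join URL returned by UltraVox
join_url = "wss://example.com/ultravox-session"

# Hypothetical Plivo XML answer; element/attribute names are assumptions
answer_xml = (
    "<Response>"
    f'<Stream bidirectional="true" keepCallAlive="true">{join_url}</Stream>'
    "</Response>"
)
```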
by Lucas Perret
Enrich your company lists with OpenAI GPT-3 ↓

You'll get valuable information such as:
- Market (B2B or B2C)
- Industry
- Target Audience
- Value Proposition

This will help you to:
- add more personalization to your outreach
- make informed decisions about which accounts to target

I've made the process easy with an n8n workflow. Here is what it does:
1. Retrieve website URLs from Google Sheets
2. Extract the content of each website
3. Analyze it with GPT-3
4. Update Google Sheets with the GPT-3 data
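The analysis step (3) can be sketched as a prompt that asks the model for the four fields as JSON. The prompt wording and example output below are illustrative assumptions, not the template's exact prompt:

```python
import json

# Hypothetical enrichment prompt; wording is an assumption
prompt_template = (
    "Given the website content below, return a JSON object with the keys "
    "market (B2B or B2C), industry, target_audience, and value_proposition.\n\n"
    "Content:\n{content}"
)
prompt = prompt_template.format(content="We help sales teams automate outreach.")

# Invented example of the structured answer written back to Google Sheets
example_output = json.dumps({
    "market": "B2B",
    "industry": "Sales software",
    "target_audience": "Sales teams",
    "value_proposition": "Automated, personalized outreach at scale",
})
```

Asking for JSON keys that match your sheet's columns makes the final "update Google Sheets" step a direct field mapping.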
by Davide
Imagine having an AI chatbot on Slack that seamlessly integrates with your company's workflow, automating repetitive requests. No more digging through emails or documents to find answers about IT requests, company policies, or vacation days: just ask the bot, and it will instantly provide the right information. With its 24/7 availability, the chatbot ensures that team members get immediate support without waiting for a colleague to be online, making assistance faster and more efficient.

Moreover, this AI-powered bot serves as a central hub for internal communication, allowing everyone to quickly access procedures, documents, and company knowledge without searching manually. A simple Slack message is all it takes to get the information you need, enhancing productivity and collaboration across teams.

## How It Works
1. **Slack Trigger**: The workflow starts when a user mentions the AI bot in a Slack channel. The trigger captures the message and forwards it to the AI Agent.
2. **AI Agent Processing**: The AI Agent, powered by Anthropic's Claude 3.7 Sonnet model, processes the query. It uses Retrieval-Augmented Generation (RAG) to fetch relevant information from the company's internal knowledge base stored in Qdrant (a vector database). A Simple Memory buffer retains recent conversation context (the last 10 messages) for continuity.
3. **Knowledge Retrieval**: The RAG tool searches Qdrant's vector store using OpenAI embeddings to find the most relevant document chunks (top 10 matches).
4. **Response Generation**: The AI synthesizes the retrieved data into a concise, structured response (1-2 sentences for the answer, 2-3 supporting details, and a source citation). The response is formatted in Slack-friendly markdown (bullet points, blockquotes) and sent back to the user.

## Set Up Steps
1. **Prepare the Qdrant vector database**: Create a Qdrant collection via HTTP request (Create collection node). Optionally, refresh/clear the collection (Refresh collection node) before adding new documents.
2. **Load company documents**: Fetch files from a Google Drive folder (Get folder → Download Files), split the text into chunks (Token Splitter), generate embeddings (Embeddings OpenAI2), and store them in Qdrant (Qdrant Vector Store1).
3. **Configure the Slack bot**: Create a Slack bot via the Slack API with the required permissions, add the bot to the desired Slack channel, and note the channelId for the workflow.
4. **Deploy the AI components**: Connect the AI Agent to Anthropic's model, the RAG tool, and the memory buffer, and ensure OpenAI embeddings are configured for both RAG and document processing.
5. **Test & activate**: Use the manual trigger (When clicking 'Test workflow') to validate document ingestion, then activate the workflow to enable real-time Slack interactions.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
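The "Create collection" HTTP request mentioned above can be sketched as follows, following Qdrant's REST API. The collection name and host are placeholders, and the vector size of 1536 assumes OpenAI's standard embedding dimension:

```python
import json

# Sketch of the PUT request the "Create collection" node sends to Qdrant
collection = "company-docs"  # placeholder collection name
create_request = {
    "method": "PUT",
    "url": f"http://localhost:6333/collections/{collection}",  # placeholder host
    "body": json.dumps({"vectors": {"size": 1536, "distance": "Cosine"}}),
}
```

The vector size must match the embedding model used in both the ingestion and RAG branches, or similarity search will fail.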
by Dataki
This workflow allows you to easily evaluate and compare the outputs of two language models (LLMs) before choosing one for production. In the chat interface, both model outputs are shown side by side. Their responses are also logged into a Google Sheet, where they can be evaluated manually or automatically using a more advanced model.

## Use Case
You're developing an AI agent, and since LLMs are non-deterministic, you want to determine which one performs best for your specific use case. This template is designed to help you compare them effectively.

## How It Works
1. The user sends a message to the chat interface.
2. The input is duplicated and sent to two different LLMs.
3. Each model processes the same prompt independently, using its own memory context.
4. Their answers, along with the user input and previous context, are logged to Google Sheets.
5. You can review, compare, and evaluate the model outputs manually (or automate it later).
6. In the chat, both responses are also shown one after the other for direct comparison.

## How To Use It
1. Copy this Google Sheets template (File > Make a Copy).
2. Set up your System Prompt and Tools in the AI Agent node to suit your use case.
3. Start chatting! Each message will trigger both models and log their responses to the spreadsheet.

Note: This version is set up for two models. If you want to compare more, you'll need to extend the workflow logic and update the sheet.

## About Models
You can use OpenRouter or Vertex AI to test models across providers. If you're using a node for a specific provider, like OpenAI, you can compare different models from that provider (e.g., gpt-4.1 vs gpt-4.1-mini).

## Evaluation in Google Sheets
This is ideal for teams, allowing non-technical stakeholders (not just data scientists) to evaluate responses based on real-world needs. Advanced users can automate this evaluation using a more capable model (like o3 from OpenAI), but note that this will increase token usage and cost.
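The logging step can be sketched as appending one row per message with both answers side by side. The model names and column layout below are assumptions; match them to your copy of the spreadsheet template:

```python
# One chat message produces one comparison row in the sheet
user_input = "Summarize our refund policy."
model_a, model_b = "gpt-4.1", "gpt-4.1-mini"  # assumed model pair
answer_a = "<response from model A>"
answer_b = "<response from model B>"

row = [user_input, model_a, answer_a, model_b, answer_b]
```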
## Token Considerations
Since each input is processed by two different models, the workflow will consume more tokens overall. Keep an eye on usage, especially if working with longer prompts or running multiple evaluations, as this can impact cost.