by Zacharia Kimotho
**How it works**

This workflow fetches Google Search Console results data and exports it to Google Sheets, making it easier to visualize the data and handle other SEO tasks without logging into Search Console.

**Setup and use**

- Set your desired schedule
- Enter your desired domain
- Connect your Google Sheets account, or make a copy of this sheet

**Detailed setup**

**Inputs and outputs:**
- Input: API response from Google Search Console covering keyword, page, and date data
- Output: Rows written to Google Sheets containing keyword data, clicks, impressions, CTR, and positions

**Prerequisites:**
- An n8n instance set up and running
- An active Google account with access to Google Search Console and Google Sheets
- Google OAuth 2.0 credentials for API access

**Step-by-step setup:**
1. Open n8n and create a new workflow.
2. Add the nodes as described in the JSON.
3. Configure the Google OAuth2 credentials in n8n to enable API access.
4. Set your domain in the "Set your domain" node.
5. Point the Google Sheets document URLs to your own sheets.
6. Adjust the schedule in the Schedule Trigger node to your requirements.
7. Save the workflow.

**Configuration options:**
- Customize the date ranges in the body of the HTTP Request nodes (a request sketch follows this section).
- Adjust fields in the Edit Fields nodes for different data requirements.

**Use case examples:**
- Tracking website performance over time using Search Console metrics
- Digital marketers, SEO specialists, and web analytics professionals
- Compiling performance reports for stakeholders or team reviews

**Running and troubleshooting:**
- Running the workflow: trigger it manually or wait for the schedule to run it automatically.
- Monitoring execution: check the execution logs in n8n's dashboard to confirm all nodes complete successfully.
- Common issues: invalid OAuth credentials (check the credential setup), incorrect Google Sheets URLs (double-check document links and permissions), and scheduling conflicts (make sure the schedule does not overlap with other workflows).
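For reference, the HTTP Request nodes call the Search Console Search Analytics API. A minimal TypeScript sketch of an equivalent request; the property URL, date range, and row limit here are placeholder assumptions, not values from the template:

```typescript
// Placeholder property, dates, and row limit; adjust to your own setup.
const siteUrl = encodeURIComponent("https://example.com/"); // your verified property

const response = await fetch(
  `https://searchconsole.googleapis.com/webmasters/v3/sites/${siteUrl}/searchAnalytics/query`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GOOGLE_OAUTH_TOKEN}`, // OAuth2 access token
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      startDate: "2024-01-01", // this is the date range you customize in the node body
      endDate: "2024-01-31",
      dimensions: ["query", "page", "date"], // keyword, page, and date data
      rowLimit: 1000,
    }),
  }
);

const data = await response.json();
// Each row carries the metrics that end up as Google Sheets columns.
for (const row of data.rows ?? []) {
  console.log(row.keys, row.clicks, row.impressions, row.ctr, row.position);
}
```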
by simonscrapes
**Use Case**

Transform web pages into AI-friendly markdown format:
- You need to process webpage content for LLM analysis
- You want to extract both content and links from web pages
- You need clean, formatted text without HTML markup
- You want to respect API rate limits while crawling pages

**What this Workflow Does**

The workflow uses the Firecrawl.dev API to process webpages (a request sketch appears at the end of this description):
- Converts HTML content to markdown format
- Extracts all links from each webpage
- Handles API rate limiting automatically
- Processes URLs in batches from your database

**Setup**

1. Create a Firecrawl.dev account and get your API key
2. Add your Firecrawl API key to the HTTP Request node's Authorization header
3. Connect your URL database to the input node (the column name must be "Page"), or edit the array in "Example fields from data source"
4. Configure your preferred output database connection

**How to Adjust it to Your Needs**

- Modify the input source to pull URLs from different databases
- Adjust rate limiting parameters if needed
- Customize the output format for your specific use case

More templates and n8n workflows: @simonscrapes
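A minimal TypeScript sketch of the kind of call the HTTP Request node makes, assuming Firecrawl's v1 scrape endpoint; check the current Firecrawl docs before relying on the exact path or response shape:

```typescript
// Assumes the v1 scrape endpoint and response shape; verify against the docs.
const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com/some-page", // a "Page" value from your database
    formats: ["markdown", "links"],       // clean text plus extracted links
  }),
});

const { data } = await res.json();
console.log(data.markdown); // HTML converted to markdown
console.log(data.links);    // all links found on the page
```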
by Sebastian/OptiLever
Tired of spending HOURS writing product descriptions that don't rank or convert? This could be your solution.

This free Product Description Writer workflow for n8n uses a multi-agent AI system to turn your product list into conversion-focused, SEO-ready copy. It analyzes your product images, identifies key features, and writes optimized titles and descriptions for platforms like Shopify and Google Shopping. It can process your entire catalog in minutes, saving you countless hours of manual work.

This workflow is perfect for:
- 🛒 Shopify stores
- 🛒 Etsy sellers
- 🛒 Product managers
- 🛒 Digital marketers
- 🛒 Anyone who hates writing product copy manually!

**How it works**

This workflow automates the entire product description process in a few high-level steps:
1. **Reads your products**: The workflow starts by reading product data from your specified Google Sheet, including the product name, an image URL, and optional fields like brand voice or target market.
2. **Analyzes product images**: It downloads each product image and uses an AI vision model (GPT-4o-mini) to perform a detailed visual analysis, extracting objective information like materials, colors, features, and structure (a sketch of such a call appears at the end of this description).
3. **Writes optimized copy**: The visual analysis and your original data are passed to two specialized AI agents. The first drafts a Shopify-optimized title and description, while the second refines it and generates additional SEO-focused copy for Google Merchant Center.
4. **Updates your spreadsheet**: The final, optimized product titles and descriptions for both Shopify and Google are automatically written back to the original Google Sheet.

**Set up steps**

Setting up this workflow takes only a few minutes. You will need to configure credentials for the following services:
- **Google Sheets**: To allow the workflow to read your product list and write back the results.
- **OpenAI**: To power the AI agents that analyze images and generate the copy.

Detailed instructions and customization tips are included in the sticky notes inside the workflow itself.

**Benefits**

- **Automated vision-based copywriting**: Reduces manual description writing time.
- **Multi-channel ready**: Outputs are optimized for both Shopify and Google Merchant Center standards.
- **Brand alignment**: Uses optional user-provided draft descriptions and brand voice to maintain brand tone.
- **SEO and conversion focus**: Titles and descriptions are optimized for both search engines and consumer engagement.
- **Image-centric accuracy**: Uses actual product images for accurate attribute extraction, minimizing errors from missing or vague text data.

**Tips & Customization**

- To adjust brand voice or tone, modify the system prompts in the Shopify and GMC AI agents.
- To extend the workflow for scheduled runs, add a cron trigger or a Google Sheets "status column" filter.
- For QA/debugging, consider adding logging nodes to Slack or Discord, or export AI outputs to a review sheet before updating the main sheet.
- To improve Shopify or GMC field mappings, edit the final Google Sheets update node's column settings.
- For speed optimization, the batch size in the Loop Over Items node can be adjusted, but be mindful of API rate limits.
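For context, here is a TypeScript sketch of the kind of vision request the analysis step makes. The prompt text and the example image URL are illustrative assumptions, not the template's actual prompt:

```typescript
// Illustrative vision request; the template's real prompt lives in its AI node.
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Describe this product objectively: materials, colors, notable features, structure.",
          },
          // Placeholder image URL; in the workflow this comes from the sheet row.
          { type: "image_url", image_url: { url: "https://example.com/product.jpg" } },
        ],
      },
    ],
  }),
});

const analysis = (await res.json()).choices[0].message.content;
// `analysis` is what the Shopify and GMC copywriting agents receive next.
```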
by Yang
**📄 What this workflow does**

This workflow captures a full-page screenshot of any website added to a Google Sheet and automatically uploads the screenshot to a designated Google Drive folder. It uses Dumpling AI's screenshot API to generate the image and manages file storage through Google Drive.

**👤 Who is this for**

This is ideal for:
- Marketers and outreach teams capturing snapshots of client or lead websites
- Lead generation specialists tracking landing page visuals
- Researchers or analysts who need to archive website visuals from URLs
- Anyone looking to automate website screenshot collection at scale

**✅ Requirements**

- A Google Sheet with a column labeled Website where URLs will be added
- **Dumpling AI** API access for screenshot capture
- A connected Google Drive account with an accessible folder to store screenshots

**⚙️ How to set up**

1. Replace the Google Sheet and folder IDs in the workflow with your own.
2. Connect your Dumpling AI and Google credentials in n8n.
3. Make sure your sheet contains a Website column with valid URLs.
4. Activate the workflow to begin watching for new entries.

**🔁 How it works (Workflow Steps)**

1. **Watch New Row in Google Sheets**: Triggers when a new row is added to the sheet.
2. **Request Screenshot from Dumpling AI**: Sends the website URL to Dumpling AI and gets a screenshot URL (a hedged sketch of steps 2 and 3 follows at the end of this description).
3. **Download Screenshot**: Fetches the image file from the returned URL.
4. **Upload Screenshot to Google Drive**: Uploads the file to a selected folder in Google Drive.

**🛠️ Customization Ideas**

- Add timestamped filenames using the current date or domain name
- Append the Google Drive URL back to the same row in the sheet for easy access
- Extend the workflow to send Slack or email notifications when screenshots are saved
- Add filters to validate URLs before sending them to Dumpling AI
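A TypeScript sketch of the screenshot-then-download steps. The endpoint path and the response field name here are hypothetical; consult Dumpling AI's API documentation for the real payload and response shape:

```typescript
// Hypothetical endpoint and response field; Dumpling AI's real API may differ.
const shot = await fetch("https://app.dumplingai.com/api/v1/screenshot", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.DUMPLING_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ url: "https://example.com" }), // the Website cell value
});
const { screenshotUrl } = await shot.json(); // assumed response field name

// Step 3: download the image binary so it can be uploaded to Google Drive.
const image = await fetch(screenshotUrl);
const buffer = Buffer.from(await image.arrayBuffer());
console.log(`Downloaded ${buffer.length} bytes`);
```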
by Jason Guest
Automatically deploy n8n workflows by simply dropping JSON files into a Google Drive folder—this template watches for new exports, cleans and imports them into your n8n instance, applies a tag, and then archives the processed files.

**Who is this template for?**

This workflow template is designed for n8n power users and automation specialists who need a simple, reliable way to bulk-deploy or version-control n8n workflows via Google Drive. It's perfect if you:
- Manage multiple n8n instances (staging, production, etc.)
- Want an easy "drop-in" approach to publish new or updated workflows
- Prefer storing/exporting JSON in Drive rather than editing in the UI

**Use case**

Manually importing .json exports into n8n is slow and error-prone. With this template you can:
- Keep your workflows in a shared Drive folder (version control friendly)
- Automatically sanitize each file so only supported settings go through
- Tag deployed workflows consistently for easy filtering
- Move processed files to a "Deployed" folder for clear change tracking

**How it works**

1. **Watch** the "ToDeploy" folder in Google Drive for new .json files
2. **Download & parse** each file into a JSON object
3. **Clean** the payload: strip out everything except the allowed executionOrder (and timezone if you choose)
4. **POST** the cleaned workflow to your n8n instance via /api/v1/workflows (a sketch of steps 3 and 4 follows at the end of this description)
5. **PUT** a predefined tag onto the newly created workflow
6. **Move** the file to your "Deployed" folder when import succeeds, or capture the workflow name & error if it fails

**Setup instructions**

1. In Google Drive, create a ToDeploy folder and a Deployed folder:
   - Update "Google Drive Trigger - ToDeploy folder" to your ToDeploy folder
   - Update "Move JSON file to Deployed folder" to your Deployed folder
2. Create an n8n API key:
   - Go to Settings > n8n API
   - Select Create an API key
   - Copy the API key
3. In the "Get Existing Workflow Tags" node, create n8n API authentication:
   - Authentication: Predefined Credential Type
   - Credential Type: n8n API
   - Create a new credential: paste in the API key and set the Base URL to https://SUBDOMAIN.YOURDOMAINNAME.com/api/v1/
4. Add the n8n API authentication to the "Create n8n Workflow" and "Set Workflow Tag" nodes.
5. Add your n8n instance URL to the N8N_Instance_URL variable in the "Set n8n URL variable" node.
6. Run the "1. Get Workflow Tags" flow and copy the ID of your chosen tag.
7. In the "Set n8n API URL & Tag ID variables" node:
   - Add the workflow tag ID to the N8N_Instance_Tag variable
   - Add your n8n instance URL to the N8N_Instance_URL variable
8. Set the workflow to Active.

**How to adjust it to your needs**

- **Use different tags**: run Get Existing Workflow Tags on start-up to refresh available tags, or hard-code multiple tags in the Set Workflow Tag node.
- **Add notifications**: connect the error branch to Slack or Email nodes so you get alerted if an import fails.
- **Swap Drive for another storage**: replace the Google Drive nodes with Dropbox, S3, or GitHub triggers if you prefer a different source for your JSON files.
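A TypeScript sketch of the clean-and-import steps against the n8n public API. The allow-list mirrors the template's "keep only executionOrder" rule; the instance URL and the sample export are placeholders:

```typescript
// Stand-in for the file downloaded from Drive; a real export has nodes and connections.
const fileContents = '{"name":"My Flow","nodes":[],"connections":{},"settings":{}}';
const raw = JSON.parse(fileContents);

// Keep only what the import endpoint accepts, plus the allowed setting.
const cleaned = {
  name: raw.name,
  nodes: raw.nodes,
  connections: raw.connections,
  settings: { executionOrder: raw.settings?.executionOrder ?? "v1" },
};

const res = await fetch("https://SUBDOMAIN.YOURDOMAINNAME.com/api/v1/workflows", {
  method: "POST",
  headers: {
    "X-N8N-API-KEY": process.env.N8N_API_KEY!, // key created under Settings > n8n API
    "Content-Type": "application/json",
  },
  body: JSON.stringify(cleaned),
});

const created = await res.json();
console.log(created.id); // used by the follow-up PUT that applies the tag
```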
by Akhil Varma Gadiraju
**n8n Workflow: Sync Workflows with GitLab**

**How It Works**

This workflow ensures that your self-hosted n8n workflows are version-controlled in a GitLab repository. It compares each current workflow from n8n with its stored counterpart in GitLab. If any differences are detected, the GitLab file is updated with the latest version.

Core logic:
1. **Retrieve Workflows** – Fetch all workflows from the n8n REST API.
2. **Compare with GitLab** – For each workflow, fetch the corresponding file from GitLab and compare the JSON.
3. **Update if Changed** – If differences exist, commit the updated workflow to GitLab using its API (a sketch of this call follows at the end of this description).

**Setup**

Before using the workflow, ensure the following prerequisites:
- **n8n**: Self-hosted instance with access to the /rest/workflows API.
- **GitLab**: A repository where workflows will be stored, and a Personal Access Token (PAT) with api and write_repository permissions.
- **n8n nodes required**:
  - HTTP Request (to call the n8n and GitLab APIs)
  - Code or Function nodes (for diffing and formatting)
  - Looping (SplitInBatches or similar)

Configuration: set environment variables or workflow credentials for:
- GITLAB_TOKEN
- GITLAB_REPO
- GITLAB_BRANCH (e.g., main)
- GITLAB_FILE_PATH_PREFIX (e.g., n8n-workflows/)

**How to Use**

1. Import the workflow into your n8n instance.
2. Configure GitLab API credentials: set the GitLab PAT as a header in the HTTP Request node: Private-Token: {{ $env.GITLAB_TOKEN }}
3. Map workflows to GitLab paths: use the workflow name or ID to create the file path, e.g. n8n-workflows/workflow-name.json
4. Trigger the workflow: it can be triggered manually or scheduled to run at intervals (e.g., daily).
5. Review commits in GitLab: each updated workflow will be committed with a message like "Update workflow: Sample Workflow".

**Disclaimer**

- This workflow does not handle merge conflicts or manual edits made directly in GitLab. Always ensure proper coordination if multiple sources are modifying workflows.
- Only structural changes are tracked. Non-functional metadata (like timestamps or IDs) may trigger false positives unless filtered.
- Use at your own risk. Test in a safe environment before applying to production workflows.
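A TypeScript sketch of the commit step using GitLab's repository files API; the file path and workflow export shown here are examples, not values from the template:

```typescript
// Example workflow export; in the real flow this is the JSON fetched from n8n.
const workflowJson = { name: "Sample Workflow", nodes: [], connections: {} };

const filePath = encodeURIComponent("n8n-workflows/sample-workflow.json");
const project = encodeURIComponent(process.env.GITLAB_REPO!); // e.g. "group/repo"

const res = await fetch(
  `https://gitlab.com/api/v4/projects/${project}/repository/files/${filePath}`,
  {
    method: "PUT", // updates an existing file; use POST to create a new one
    headers: {
      "Private-Token": process.env.GITLAB_TOKEN!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      branch: process.env.GITLAB_BRANCH ?? "main",
      content: JSON.stringify(workflowJson, null, 2),
      commit_message: "Update workflow: Sample Workflow",
    }),
  }
);
console.log(res.status); // 200 on a successful commit
```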
by Angel Menendez
**Analyze & Sort Suspicious Email Contents with ChatGPT and Jira**

**Who is this for?**

This workflow is tailored for IT security teams, managed service providers (MSPs), and organizations aiming to streamline the detection and reporting of phishing emails. It's especially useful for teams handling high email volumes and requiring quick, automated analysis.

**What problem is this workflow solving?**

Phishing emails pose a significant cybersecurity threat, and manual review processes are time-consuming and prone to human error. This workflow automates the identification of malicious emails, provides AI-driven insights, and generates structured reports, enabling faster and more efficient responses to email-based threats.

**What this workflow does**

This workflow integrates Gmail or Microsoft Outlook to monitor and capture incoming emails. It processes the email content and headers, converts the email's body to a visual screenshot for clarity, and uses ChatGPT's advanced AI to analyze the email for phishing indicators. Based on the analysis, it categorizes emails as potentially malicious or benign, creating detailed Jira tickets for each case. Attachments, including the email body and screenshots, are automatically uploaded for comprehensive reporting.

Key steps include:
1. **Email integration**: Captures emails from Gmail or Microsoft Outlook.
2. **Content processing**: Extracts and organizes email content and metadata.
3. **AI analysis**: Uses ChatGPT to evaluate email content and headers (an illustrative call is sketched at the end of this description).
4. **Classification**: Categorizes emails as malicious or benign.
5. **Automated reporting**: Creates Jira tickets with detailed analysis and attachments.

**Setup**

1. **Authentication**: Configure Gmail or Microsoft Outlook credentials in n8n.
2. **API keys**: Add credentials for the HTML screenshot service (hcti.io) and OpenAI.
3. **Jira configuration**: Set up project and issue types in the Jira nodes.
4. **Customization**: Update sticky notes and nodes to fit your organizational requirements, such as modifying the AI prompt or Jira ticket fields.

**How to customize this workflow to your needs**

- Adjust email triggers to include or exclude specific senders or subjects.
- Refine the AI prompt in the ChatGPT node to tailor phishing detection criteria.
- Modify Jira ticket content to include additional fields or match specific workflows.

This workflow is ideal for automating email threat detection, reducing response times, and enhancing overall cybersecurity processes. By leveraging AI-powered insights, it helps organizations stay ahead of phishing attacks.
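An illustrative TypeScript sketch of the AI analysis step. The prompt wording, model choice, and output fields are assumptions for demonstration; the template's actual prompt lives in its ChatGPT node:

```typescript
// Stand-ins for the extracted email parts from the content-processing step.
const emailHeaders = "From: suspicious@example.com\nReply-To: attacker@evil.test";
const emailBody = "Your account is locked. Click here to verify your password.";

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    response_format: { type: "json_object" }, // force a parseable JSON verdict
    messages: [
      {
        role: "system",
        content:
          "You are a phishing analyst. Given raw email headers and body, " +
          'return JSON: {"malicious": boolean, "indicators": string[], "summary": string}.',
      },
      { role: "user", content: `Headers:\n${emailHeaders}\n\nBody:\n${emailBody}` },
    ],
  }),
});

const verdict = JSON.parse((await res.json()).choices[0].message.content);
// verdict.malicious drives which Jira ticket branch the workflow takes.
console.log(verdict);
```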
by Anderson Adelino
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Build intelligent AI chatbot with RAG and Cohere Reranker**

**Who is it for?**

This template is perfect for developers, businesses, and automation enthusiasts who want to create intelligent chatbots that can answer questions based on their own documents. Whether you're building customer support systems, internal knowledge bases, or educational assistants, this workflow provides a solid foundation for document-based AI conversations.

**How it works**

This workflow creates an intelligent AI assistant that combines RAG (Retrieval-Augmented Generation) with Cohere's reranking technology for more accurate responses:
1. **Chat interface**: Users interact with the AI through a chat interface
2. **Document processing**: PDFs from Google Drive are automatically extracted and converted into searchable vectors
3. **Smart search**: When users ask questions, the system searches through vectorized documents using semantic search (a direct-call sketch appears after the setup steps)
4. **Reranking**: Cohere's reranker ensures the most relevant information is prioritized
5. **AI response**: OpenAI generates contextual answers based on the retrieved information
6. **Memory**: Conversation history is maintained for context-aware interactions

**Setup steps**

**Prerequisites**
- n8n instance (self-hosted or cloud)
- OpenAI API key
- Supabase account with vector extension enabled
- Google Drive access
- Cohere API key

**1. Configure Supabase Vector Store**

First, create a table in Supabase with vector support:

```sql
CREATE TABLE cafeina (
  id SERIAL PRIMARY KEY,
  content TEXT,
  metadata JSONB,
  embedding VECTOR(1536)
);

-- Create a function for similarity search
CREATE OR REPLACE FUNCTION match_cafeina(
  query_embedding VECTOR(1536),
  match_count INT DEFAULT 10
)
RETURNS TABLE(
  id INT,
  content TEXT,
  metadata JSONB,
  similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    cafeina.id,
    cafeina.content,
    cafeina.metadata,
    1 - (cafeina.embedding <=> query_embedding) AS similarity
  FROM cafeina
  ORDER BY cafeina.embedding <=> query_embedding
  LIMIT match_count;
END;
$$;
```

**2. Set up credentials**

Add the following credentials in n8n:
- **OpenAI**: Add your OpenAI API key
- **Supabase**: Add your Supabase URL and service role key
- **Google Drive**: Connect your Google account
- **Cohere**: Add your Cohere API key

**3. Configure the workflow**

- In the "Download file" node, replace URL DO ARQUIVO (Portuguese for "file URL") with your Google Drive file URL
- Adjust the table name in both Supabase Vector Store nodes if needed
- Customize the agent's tool description in the "searchCafeina" node

**4. Load your documents**

- Execute the bottom workflow (starting with "When clicking 'Execute workflow'")
- This will download your PDF, extract text, and store it in Supabase
- You can repeat this process for multiple documents

**5. Start chatting**

Once documents are loaded, activate the main workflow and start chatting with your AI assistant through the chat interface.
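For reference, the same similarity search can be invoked outside n8n with supabase-js. A minimal sketch assuming the table and function defined above; the embedding here is a placeholder rather than a real OpenAI embedding:

```typescript
import { createClient } from "@supabase/supabase-js";

// In the workflow, the Supabase Vector Store node handles this call.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Placeholder embedding: in practice this comes from an OpenAI embedding
// model with 1536 dimensions, matching the VECTOR(1536) column.
const queryEmbedding = new Array(1536).fill(0);

const { data, error } = await supabase.rpc("match_cafeina", {
  query_embedding: queryEmbedding,
  match_count: 10,
});
if (error) throw error;

// Each row has id, content, metadata, and similarity; these top matches are
// what Cohere's reranker reorders before the agent composes its answer.
console.log(data);
```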
**How to customize**

- **Different document types**: Replace the Google Drive node with other sources (Dropbox, S3, local files)
- **Multiple knowledge bases**: Create separate vector stores for different topics
- **Custom prompts**: Modify the agent's system message for specific use cases
- **Language models**: Switch between different OpenAI models or use other LLM providers
- **Reranking settings**: Adjust the top-k parameter for more or fewer search results
- **Memory window**: Configure the conversation memory buffer size

**Tips for best results**

- Use high-quality, well-structured documents for better search accuracy
- Keep document chunks reasonably sized for optimal retrieval
- Regularly update your vector store with new information
- Monitor token usage to optimize costs
- Test different reranking thresholds for your use case

**Common use cases**

- **Customer support**: Create bots that answer questions from product documentation
- **HR assistant**: Build assistants that help employees find information in company policies
- **Educational tutor**: Develop tutors that answer questions from course materials
- **Research assistant**: Create tools that help researchers find relevant information in papers
- **Legal helper**: Build assistants that search through legal documents and contracts
by Trung Tran
**📒 Telegram Expense Tracker to Google Sheets with GPT-4.1**

**👤 Who's it for**

This workflow is for anyone who wants to log their daily expenses by simply chatting with a Telegram bot. Ideal for:
- Individuals who want a quick way to track spending
- Freelancers who log receipts and purchases on the go
- Teams or small business owners who want lightweight expense capture

**⚙️ How it works / What it does**

1. The user sends a text message on Telegram describing an expense (e.g., "Bought coffee for 50k at Highlands").
2. The message format is validated:
   - If the message is text, it proceeds to GPT-4.1 Mini for processing.
   - If it's not text (e.g. an image or file), the bot sends a fallback message.
3. OpenAI GPT-4.1 Mini parses the message and returns:
   - relevant: true/false
   - expense_record: structured fields (date, amount, currency, category, description, source)
   - message: a friendly confirmation or fallback
4. If valid:
   - The bot replies with a fun acknowledgment
   - The data is saved to a connected Google Sheet
5. If invalid:
   - A fallback message is sent to encourage proper input

A sketch of the structured output appears at the end of this description.

**🛠️ How to set up**

1. **Telegram bot setup**
   - Create a bot using BotFather on Telegram
   - Copy the bot token and paste it into the Telegram Trigger node
2. **Google Sheet setup**
   - Create a Google Sheet with these columns: Date | Amount | Currency | Category | Description | SourceMessage
   - Share the sheet with your n8n service account email
3. **OpenAI configuration**
   - Connect the OpenAI Chat Model node using your OpenAI API key
   - Use GPT-4.1 Mini as the model
   - Apply a system prompt that extracts structured JSON with: relevant, expense_record, and message
4. **Add parser**
   - Use the Structured Output Parser node to safely parse the JSON response
5. **Conditional logic nodes**
   - "Is text message?" checks if the message is in text format
   - "Supported scenario?" checks if relevant = true in the LLM response
6. **Final actions**
   - If **relevant**: send confirmation via Telegram and append a row to the Google Sheet
   - If **not relevant**: send a fallback message via Telegram

**✅ Requirements**

- Telegram bot token
- OpenAI GPT-4.1 Mini API access
- n8n instance (self-hosted or cloud)
- Google Sheet with access granted to n8n
- Basic understanding of n8n node configuration

**🧩 How to customize the workflow**

| Feature | How to Customize |
|---------|------------------|
| Add multi-currency support | Update the system prompt to detect and extract different currencies |
| Add more categories | Modify the list of categories in the system prompt |
| Track multiple users | Add a username or chat ID column to the Google Sheet |
| Trigger alerts | Add Slack, Email, or Telegram alerts for specific expense types |
| Weekly summaries | Use a cron node + Google Sheet query + Telegram message |
| Visual dashboards | Connect the sheet to Looker Studio or Google Data Studio |

Built with 💬 Telegram + 🧠 GPT-4.1 Mini + 📊 Google Sheets + ⚡ n8n
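A TypeScript sketch of the structured output the system prompt asks GPT-4.1 Mini to return. The field names match the description above; the example values (including the currency) are invented for illustration:

```typescript
// Shape of the LLM response that the Structured Output Parser node validates.
interface ExpenseRecord {
  date: string;        // ISO date, e.g. "2024-06-01"
  amount: number;
  currency: string;
  category: string;
  description: string;
  source: string;      // the original Telegram message
}

interface LlmResponse {
  relevant: boolean;
  expense_record: ExpenseRecord | null; // null when the message isn't an expense
  message: string;     // confirmation or fallback text sent back to the user
}

// Invented example for the message "Bought coffee for 50k at Highlands".
const example: LlmResponse = {
  relevant: true,
  expense_record: {
    date: "2024-06-01",
    amount: 50000,
    currency: "VND",
    category: "Food & Drink",
    description: "Coffee at Highlands",
    source: "Bought coffee for 50k at Highlands",
  },
  message: "Logged it! 50,000 VND at Highlands. Enjoy your coffee!",
};
console.log(example);
```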
by slow-groovin@api2o.com
**AI Comprehensive Research on User's Query with Gemini and Web Search**

**What is this?**

Perform comprehensive research on a user's query by dynamically generating search terms, querying the web using Google Search (via Gemini), reflecting on the results to identify knowledge gaps, and iteratively refining the search until it can provide a well-supported answer with citations (like Perplexity).

This workflow is a reproduction of gemini-fullstack-langgraph-quickstart in n8n. The gemini-fullstack-langgraph-quickstart is a demo by the Google Gemini team that showcases how to build a powerful full-stack AI agent using Gemini and LangGraph.

**How It Works**

1. **Generate Query 💬**: generates one or more search query tasks based on the user's question, using Gemini 2.0 Flash.
2. **Web Research 🌐**: executes web search tasks using the native Google Search API tool in combination with Gemini 2.0 Flash.
3. **Reflection 📚**: identifies knowledge gaps and generates potential follow-up queries.

**Setup**

1. Configure API credentials:
   - Create a Google Gemini (PaLM) API credential using your own Gemini key
   - Connect the credential with three nodes: Google Gemini Chat Model, GeminiSearch, and reflection
2. Configure the Redis source:
   - Prepare a Redis service that can be accessed by n8n
   - Create a Redis credential and connect it with all Redis nodes

**Customize**

- Try using different Gemini models.
- Try modifying the parameters number_of_initial_queries and max_research_loops.

**Why use Redis?**

Redis serves as external storage to maintain global variables (counter, search results, etc.). This workflow contains a loop, which needs global variables (like State in LangGraph), and managing global variables without external storage is difficult in n8n. A sketch of this pattern follows.
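A minimal TypeScript sketch of the Redis loop-state pattern, assuming key names of our own choosing; the template's actual Redis keys and node wiring may differ:

```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://127.0.0.1:6379");
const MAX_RESEARCH_LOOPS = 3; // mirrors the max_research_loops parameter

// Each iteration: bump the shared counter and append this round's results.
const loop = await redis.incr("research:loop_count");
await redis.rpush("research:results", JSON.stringify({ loop, summary: "..." }));

// The loop exits once the counter reaches the limit (or reflection finds no gaps).
if (loop >= MAX_RESEARCH_LOOPS) {
  const results = await redis.lrange("research:results", 0, -1);
  console.log(`Finished after ${loop} loops with ${results.length} result sets`);
  await redis.del("research:loop_count", "research:results"); // reset for next run
}
```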
by Onur
Proactively retain customers predicted to churn with this automated n8n workflow. Running daily, it identifies high-risk customers from your Google Sheet, uses Google Gemini to generate personalized win-back offers based on their churn score and preferences, sends these offers via Gmail, and logs all actions for tracking.

**What does this workflow do?**

This workflow automates the critical process of customer retention by:
- **Running automatically every day** on a schedule you define.
- **Fetching customer data** from a designated Google Sheet containing metrics like predicted churn scores and preferred categories.
- **Filtering** to identify customers with a high churn risk (score > 0.7) who haven't recently received a specific campaign (based on the created_campaign_date field; you might need to adjust this logic).
- Using **Google Gemini AI** to dynamically generate one of three types of win-back offers, personalized based on the customer's specific churn score and preferred product categories:
  - Informational (score 0.7-0.8): highlights new items in preferred categories.
  - Bonus points (score 0.8-0.9): offers points for purchases in a target category (e.g., Books).
  - Discount percentage (score 0.9-1.0): offers a percentage discount in a target category (e.g., Books).
- **Sending the personalized offer** directly to the customer via **Gmail**.
- **Logging** each sent offer, or the absence of eligible customers for the day, in a separate 'SYSTEM_LOG' Google Sheet for monitoring and analysis.

**Who is this for?**

- **CRM managers & retention specialists**: Automate personalized outreach to at-risk customers.
- **Marketing teams**: Implement data-driven retention campaigns with minimal manual effort.
- **E-commerce businesses & subscription services**: Proactively reduce churn and increase customer lifetime value.
- **Anyone** using customer data (especially churn prediction scores) who wants to automate personalized retention efforts via email.

**Benefits**

- **Automated retention**: Set it up once, and it runs daily to engage at-risk customers automatically.
- **AI-powered personalization**: Go beyond generic offers; tailor messages based on churn risk and customer preferences using Gemini.
- **Proactive churn reduction**: Intervene *before* customers leave by addressing high churn scores with relevant offers.
- **Scalability**: Handle personalized outreach for many customers without manual intervention.
- **Improved customer loyalty**: Show customers you value them with relevant, timely offers.
- **Action logging**: Keep track of which customers received offers and when the workflow ran.

**How it Works**

1. **Daily trigger**: The workflow starts automatically based on the schedule set (e.g., daily at 9 AM).
2. **Fetch data**: Reads all customer data from your 'Customer Data' Google Sheet.
3. **Filter customers**: Selects customers where predicted_churn_score > 0.7 AND created_campaign_date is empty (verify this condition fits your needs).
4. **Check for eligibility**: Determines if any customers passed the filter.
5. **If eligible customers are found**:
   - **Loop**: Processes each eligible customer one by one.
   - **Generate offer (Gemini)**: Sends the customer's predicted_churn_score and preferred_categories to Gemini. Gemini analyzes these against the defined rules to create the appropriate offer type, value, title, and detailed message, returning it as structured JSON (see the sketch at the end of this description).
   - **Log sent offer**: Records action_taken = SENT_WINBACK_OFFER, the timestamp, and customer_id in the 'SYSTEM_LOG' sheet.
   - **Send email**: Uses the Gmail node to send an email to the customer's user_mail, with the generated offer_title as the subject and offer_details as the body.
6. **If no eligible customers are found**:
   - **Set status**: Creates a record indicating system_log = NOT_FOUND.
   - **Log status**: Records this 'NOT_FOUND' status and the current timestamp in the 'SYSTEM_LOG' sheet.

**n8n Nodes Used**

- Schedule Trigger
- Google Sheets (x3 - read customers, log sent offer, log not found)
- Filter
- If
- SplitInBatches (used for looping)
- Langchain Chain - LLM (Gemini offer generation)
- Langchain Chat Model - Google Gemini
- Langchain Output Parser - Structured
- Set (prepare 'Not Found' log)
- Gmail (send offer email)

**Prerequisites**

- Active n8n instance (Cloud or self-hosted).
- **Google account** with access to Google Sheets and Gmail.
- **Google Sheets API credentials (OAuth2)** configured in n8n.
- **Two Google Sheets**:
  - 'Customer Data' sheet: must contain columns like customer_id, predicted_churn_score (numeric), preferred_categories (string, e.g., ["Books", "Electronics"]), user_mail (string), and potentially created_campaign_date (date/string).
  - 'SYSTEM_LOG' sheet: should have columns like system_log (string), date (string/timestamp), and customer_id (string, optional for 'NOT_FOUND' logs).
- **Google Cloud project** with the Vertex AI API enabled.
- **Google Gemini API credentials** configured in n8n (usually via Google Vertex AI credentials).
- **Gmail API credentials (OAuth2)** configured in n8n with permission to send emails.

**Setup**

1. Import the workflow JSON into your n8n instance.
2. Configure the Schedule Trigger: set the desired daily run time (e.g., Hours set to 9).
3. Configure the Google Sheets nodes: select your Google Sheets OAuth2 credentials for all three Google Sheets nodes.
   - "1. Fetch Customer Data...": enter your 'Customer Data' spreadsheet ID and sheet name.
   - "5b. Log Sent Offer...": enter your 'SYSTEM_LOG' spreadsheet ID and sheet name. Verify the column mapping.
   - "3b. Log 'Not Found'...": enter your 'SYSTEM_LOG' spreadsheet ID and sheet name. Verify the column mapping.
4. Configure the Filter node ("2. Filter High Churn Risk..."): crucially, review the second condition, {{ $json.created_campaign_date.isEmpty() }}. Ensure this field and logic correctly identify customers who should receive the offer based on your campaign strategy. Modify or remove it if necessary.
5. Configure the Google Gemini nodes: select your configured Google Vertex AI / Gemini credentials in the Google Gemini Chat Model node, and review the prompt in the "5a. Generate Win-Back Offer..." node to ensure the offer logic matches your business rules (especially category names like "Books").
6. Configure the Gmail node ("5c. Send Win-Back Offer..."): select your Gmail OAuth2 credentials.
7. Activate the workflow.

Ensure your 'Customer Data' and 'SYSTEM_LOG' Google Sheets are correctly set up and populated. The workflow will run automatically at the next scheduled time.

This workflow provides a powerful, automated way to engage customers showing signs of churn, using personalized AI-driven offers to encourage them to stay. Adapt the filtering and offer logic to perfectly match your business needs!
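To make the offer tiers concrete, here is a TypeScript sketch of the rules described above. The thresholds come from this description; the type names and return shape are our own assumptions, since in the workflow Gemini applies these rules inside its prompt:

```typescript
// Offer tiers as described: informational, bonus points, or a discount,
// chosen by the customer's predicted churn score.
type OfferType = "informational" | "bonus_points" | "discount_percentage";

function pickOfferType(churnScore: number): OfferType | null {
  if (churnScore > 0.9) return "discount_percentage"; // 0.9-1.0: strongest incentive
  if (churnScore > 0.8) return "bonus_points";        // 0.8-0.9
  if (churnScore > 0.7) return "informational";       // 0.7-0.8
  return null; // 0.7 and below: filtered out, no offer sent
}

// The Filter node excludes scores of 0.7 and below before this logic applies.
console.log(pickOfferType(0.85)); // "bonus_points"
console.log(pickOfferType(0.95)); // "discount_percentage"
```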
by Mark Shcherbakov
**Video Guide**

I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary. Youtube Link

**Who is this for?**

This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

**What problem does this workflow solve?**

Managing action items from meetings can often lead to missed tasks and poor follow-up. This automation alleviates that issue by automatically generating tasks from meeting transcripts, keeping everyone informed about their responsibilities and streamlining communication.

**What this workflow does**

The workflow leverages n8n to create a Smart Agent that listens for completed meeting transcripts, processes them using AI, and generates tasks in Airtable. Key functionalities include:
- Capturing completed meeting events through webhooks.
- Extracting relevant meeting details such as transcripts and participants using API calls.
- Generating structured tasks from meeting discussions and sending notifications to clients.

In more detail:
1. **Webhook integration**: Listens for meeting completion events to trigger subsequent actions.
2. **API requests for data**: Pulls necessary details like transcripts and participant information from Fireflies (a sketch of such a request follows at the end of this description).
3. **Task and notification generation**: Automatically creates tasks in Airtable and notifies clients of their responsibilities.

**Setup**

1. **Configure the webhook**: Set up a webhook to capture meeting completion events and integrate it with Fireflies.
2. **Retrieve meeting content**: Use GraphQL API requests to extract meeting details and transcripts, ensuring appropriate authentication through Bearer tokens.
3. **AI processing setup**: Define system messages for AI tasks and configure connections to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. **Task creation logic**: Create structured tasks based on AI output, ensuring necessary details are captured and records are created in Airtable.
5. **Client notifications**: Use an email node to notify clients about their tasks, ensuring communications are client-specific.
6. **Scheduling follow-up calls**: Set up Google Calendar events if follow-up meetings are required, populating details from the original meeting context.
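A TypeScript sketch of the transcript fetch against the Fireflies GraphQL API. The query fields shown are assumptions from memory, so verify them against Fireflies' published schema before use:

```typescript
// Stand-in for the transcript ID delivered in the webhook payload.
const meetingId = "MEETING_ID_FROM_WEBHOOK";

const res = await fetch("https://api.fireflies.ai/graphql", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.FIREFLIES_API_KEY}`, // Bearer token auth
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    query: `
      query Transcript($id: String!) {
        transcript(id: $id) {
          title
          sentences { text speaker_name }
        }
      }`,
    variables: { id: meetingId },
  }),
});

const { data } = await res.json();
// The joined sentences become the AI prompt input for task extraction.
const transcriptText = data.transcript.sentences
  .map((s: { text: string; speaker_name: string }) => `${s.speaker_name}: ${s.text}`)
  .join("\n");
console.log(transcriptText);
```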