by Phil
This workflow automates web scraping of Amazon search result pages by retrieving raw HTML, cleaning it to retain only the relevant product elements, and then using an LLM to extract structured product data (name, description, rating, reviews, and price) before saving the results back to Google Sheets. It integrates Google Sheets to supply and collect URLs, BrightData to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output.

Who Needs Amazon Search Result Scraping?

This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:

- **E-commerce Analysts** – Track competitor pricing, ratings, and inventory trends.
- **Market Researchers** – Collect data on product popularity and reviews for market analysis.
- **Data Teams** – Automate ingestion of product metadata into BI pipelines or data lakes.
- **Affiliate Marketers** – Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

Why Use This Workflow?

- **End-to-End Automation** – From URL list to clean JSON output in Sheets.
- **Robust HTML Cleaning** – Strips scripts, styles, unwanted tags, and noise.
- **Accurate Structured Parsing** – Leverages GPT-4 via LangChain for reliable extraction.
- **Scalable & Repeatable** – Processes thousands of URLs in batches.

Step-by-Step: How This Workflow Scrapes Amazon

1. Get URLs from Google Sheets – Reads a list of search result URLs.
2. Loop Over Items – Iterates through each URL in controlled batches.
3. Fetch Raw HTML – Uses BrightData’s Web Unlocker proxy to retrieve the page.
4. Clean HTML – A Function node removes the doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, and collapses extra whitespace (see the sketch at the end of this section).
5. Extract with LLM – Passes the cleaned HTML into LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. Save Results – Appends the JSON fields as columns back into a “results” sheet in Google Sheets.

Customization: Tailor to Your Needs

- **Adaptable Sites** – This workflow can be adapted to any e-commerce or other website, for example Walmart or eBay.
- **Whitelist Tags** – Modify the allowedTags array in the Code node to keep additional HTML elements.
- **Schema Changes** – Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- **Alternate Data Sink** – Instead of Sheets, route output to a database, CSV file, or webhook.

🔑 Prerequisites

- **Google Sheets Credentials** – OAuth credentials configured in n8n.
- **BrightData API token** – Stored in n8n credentials as BRIGHTDATA_TOKEN.
- **OpenRouter API Key** – Configured for the LangChain node to call GPT-4.
- **n8n Instance** – Self-hosted or cloud, with sufficient quota for HTTP requests and LLM calls.

🚀 Installation & Setup

1. **Configure Credentials**
   - In n8n, set up Google Sheets OAuth under “Credentials.”
   - Add the BrightData token as a new HTTP Request credential.
   - Create an OpenRouter API key credential for the LangChain node.
2. **Import the Workflow**
   - Copy the JSON workflow into n8n’s “Import” dialog.
   - Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders.
   - Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. **Test & Run**
   - Add a few Amazon search URLs to your “track” sheet.
   - Execute the workflow and verify product data appears in your “results” sheet.
   - Tweak the batch size or parser schema as needed.

⚠ Important

- **API Rate Limits** – Monitor your BrightData and OpenRouter usage to avoid throttling.
- **Amazon’s Terms** – Ensure your scraping complies with Amazon’s policies and legal requirements.

Summary

This workflow delivers a fully automated, scalable solution to extract structured product data from Amazon search pages directly into Google Sheets—streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole
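The exact cleaning logic lives in the workflow's own Function/Code node; as a rough illustration only, a minimal sketch of such a node might look like the following (the allowedTags list and the `html` input field name are assumptions — adapt them to the imported workflow):

```javascript
// n8n Code node, "Run Once for All Items" – minimal HTML-cleaning sketch.
// `item.json.html` and the allowedTags whitelist are assumptions; adjust to the real workflow.
const allowedTags = ['div', 'span', 'a', 'img', 'h2', 'p', 'ul', 'li'];

return items.map(item => {
  let html = item.json.html || '';

  html = html
    .replace(/<!DOCTYPE[^>]*>/gi, '')            // drop doctype
    .replace(/<script[\s\S]*?<\/script>/gi, '')  // drop scripts
    .replace(/<style[\s\S]*?<\/style>/gi, '')    // drop styles
    .replace(/<head[\s\S]*?<\/head>/gi, '')      // drop head
    .replace(/<!--[\s\S]*?-->/g, '')             // drop comments
    .replace(/\sclass="[^"]*"/gi, '');           // drop class attributes

  // Remove any tag not in the whitelist, keeping its inner text
  html = html.replace(/<\/?([a-z][a-z0-9]*)\b[^>]*>/gi, (match, tag) =>
    allowedTags.includes(tag.toLowerCase()) ? match : ''
  );

  // Collapse extra whitespace
  html = html.replace(/\s+/g, ' ').trim();

  return { json: { cleanedHtml: html } };
});
```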
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

How It Works

This workflow automates personalized WhatsApp message template delivery, triggered either by events in KlickTipp or by messages sent to the WhatsApp Business account. When a contact triggers an Outbound, the workflow uses a pre-approved WhatsApp message template to send dynamic, real-time messages through the WhatsApp Business Cloud API. When receiving messages, it checks whether a cancellation should be processed or whether an auto-response is sent. This setup is ideal for time-sensitive campaigns such as:

- Birthday greetings
- Discount or promo notifications
- Follow-ups on product or service interest

Key Features

- KlickTipp Trigger – Starts the workflow when a specific Outbound is triggered. Typical use case: a subscriber receives an activation tag and triggers an Outbound, which sends a webhook call to trigger WhatsApp messaging.
- WhatsApp Business Cloud – Message Trigger – Listens to messages from the contact and processes answers, either by replying with an auto-responder or by tagging the contact in KlickTipp (a minimal sketch of this check appears at the end of this section).
- WhatsApp Business Cloud – Sending Template Messages – Sends WhatsApp message templates using a pre-approved template. Template placeholders are filled with data from KlickTipp custom fields.

Setup Instructions

1. Set up the KlickTipp and WhatsApp nodes in your n8n instance.
2. Authenticate your WhatsApp and KlickTipp accounts.
3. Create the necessary custom fields to match the data structure.
4. Verify and customize field assignments in the workflow to align with your specific form and subscriber list setup.

| Field Label | Field Type |
|---------------------------------|-------------|
| Whatsapp_Produkt/Dienstleistung | Single line |
| Whatsapp_Name/Unternehmen | Single line |
| Whatsapp_Link_Endung | Single line |

Testing & Deployment

1. Use a real test contact with all required fields filled.
2. Trigger the Outbound in KlickTipp using the activation tag and answer the template with a message.
3. Run the scenario once in n8n to verify successful delivery of the WhatsApp message template to your test contact, as well as reception of the auto-responder and the subscription and tagging in KlickTipp that stops further messages.

Campaign Expansion Ideas

- Connect the campaign to process keywords like "STOP" from WhatsApp messages.
- Pair WhatsApp with a welcome email series for onboarding.
- Use tags like product_interest_X for precise segmentation.
- A/B test templates with different CTA formats or timings.
- Monitor CTRs via dynamic URLs in WhatsApp templates.

Benefits

- **Multi-channel engagement:** Adds WhatsApp to your marketing toolkit.
- **Dynamic content:** Personalizes messages using contact data.
- **KlickTipp campaign control:** WhatsApp contacts can, for example, signal with messages like "STOP" to receive the corresponding tag in KlickTipp in order to start or end automations.

> 💡 Pro Tip: Customize the domain link ending per campaign or product line. This allows targeted redirects, e.g., meinshop.de/ProduktA or mein…

Resources:

- Send WhatsApp Templates with KlickTipp
- Use KlickTipp Community Node in n8n
- Automate Workflows: KlickTipp Integration in n8n
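In the template this cancellation check is handled by the workflow's own branching; purely as a hypothetical illustration, an equivalent check in an n8n Code node might look like this (the path to the message text and the keyword list are assumptions — verify them against your WhatsApp trigger's actual output):

```javascript
// Hypothetical sketch ("Run Once for All Items"): flag incoming WhatsApp messages as cancellations.
// The path `messages[0].text.body` is an assumption about the trigger payload – adjust as needed.
return items.map(item => {
  const text = String(item.json.messages?.[0]?.text?.body ?? '').trim().toLowerCase();
  const isCancellation = ['stop', 'cancel', 'abbestellen'].includes(text);
  // Route on this flag: tag the contact in KlickTipp, or send the auto-responder instead.
  return { json: { ...item.json, isCancellation } };
});
```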
by Yang
Who is this for?

This workflow is perfect for marketers, SEO specialists, product teams, and competitive analysts who want to monitor and summarize public reviews of their competitors. It’s especially helpful for small teams who want fast insights from Google reviews without spending hours manually reading and sorting them.

What problem is this workflow solving?

Manually going through competitor reviews is time-consuming and repetitive. You risk missing patterns or insights, and it’s hard to share summaries with your team quickly. This workflow automatically scrapes reviews from Google and generates a structured summary of pain points and positive feedback. That way, you can focus on strategy instead of sorting through dozens of reviews.

What this workflow does

This automation watches for new competitor entries in a Google Sheet, then:

1. Uses Dumpling AI to scrape the latest Google reviews (up to 20) for each business.
2. Splits and cleans the reviews for analysis.
3. Sends them to GPT-4o, which summarizes the most common complaints and praises.
4. Saves the structured result back to the same Google Sheet.

You’ll instantly get an overview of what people are saying about any competitor.

Setup

Google Sheet Setup
- Create a Google Sheet with at least one column: Business.
- Add names or search queries for the competitors you want to analyze.
- Optional: Add columns for Summary of Reviews and Pain Points.

Connect Dumpling AI
- Sign up at Dumpling AI.
- Create an agent using the get-google-reviews endpoint.
- Copy your agent key.
- Use it in the HTTP Request node in this workflow.

OpenAI Setup
- Use your API key with GPT-4o access.
- The prompt is already structured to generate grouped summaries from reviews.

Run the Workflow
- Trigger it manually or schedule it.
- Make sure your Google Sheets, OpenAI, and Dumpling AI connections are active.

How to customize this workflow to your needs

- You can expand the number of reviews retrieved by changing the Dumpling AI agent config.
- Replace Google Sheets with Airtable if you want more robust data views.
- Add more fields like star ratings or review dates in your agent for richer analysis.
- Change the GPT prompt to highlight emotional tone, urgency, or feature mentions.

🧠 Node Details

- **Google Sheets Trigger**: Watches for new competitor names
- **HTTP Request (Dumpling AI)**: Scrapes 20 recent reviews from Google
- **SplitOut Node**: Breaks the review array into individual items
- **Code Node**: Extracts and combines review text (see the sketch at the end of this section)
- **Edit Fields Node**: Structures the review content before GPT
- **GPT-4o Node**: Analyzes and summarizes top pain points and praise
- **Google Sheets Output**: Saves the summary back to the same sheet

Dependencies

- Dumpling AI account and review scraping agent setup
- OpenAI API key with GPT-4o access
- Google Sheets OAuth2 credentials
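The Code node that combines review text could look roughly like the following sketch (the `text` field on each review item is an assumption about Dumpling AI's response shape — match it to the actual output of the SplitOut node):

```javascript
// Sketch of the "Code Node": combine individual review items into one text block for GPT-4o.
// `item.json.text` is an assumed field name – inspect the Dumpling AI response first.
const reviews = items
  .map(item => (item.json.text || '').trim())
  .filter(text => text.length > 0);

return [{
  json: {
    reviewCount: reviews.length,
    combinedReviews: reviews.map((r, i) => `${i + 1}. ${r}`).join('\n'),
  },
}];
```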
by Yang
Who is this for?

This workflow is perfect for operations teams, accountants, e-commerce businesses, or finance managers who regularly process digital invoices and need to automate data extraction and record-keeping.

What problem is this workflow solving?

Manually reading invoice PDFs, extracting relevant data, and entering it into spreadsheets is time-consuming and error-prone. This workflow automates that process—watching a Google Drive folder, extracting structured invoice data using Dumpling AI, and saving the results into Google Sheets.

What this workflow does

1. Watches a specific Google Drive folder for new invoices.
2. Downloads the uploaded invoice file.
3. Converts the file into Base64 format.
4. Sends the file to Dumpling AI’s extract-document endpoint with a detailed parsing prompt.
5. Parses Dumpling AI’s JSON response using a Code node (see the sketch at the end of this section).
6. Splits the items array into individual rows using the Split Out node.
7. Appends each invoice item to a preformatted Google Sheet along with the full header metadata (order number, PO, addresses, etc.).

Setup

Google Drive Setup
- Create or select a folder in Google Drive and place the folder ID in the trigger node.
- Make sure your n8n Google Drive credentials are authorized for access.

Google Sheets
- Create a Google Sheet with the following headers: Order number, Document Date, Po_number, Sold to name, Sold to address, Ship to name, Ship to address, Model, Description, Quantity, Unity price, Total price.
- Paste the Sheet ID and sheet name (Sheet1) into the Google Sheets node.

Dumpling AI
- Sign up at Dumpling AI.
- Go to your account settings and generate your API key.
- Paste this key into the HTTP header of the Dumpling AI request node.
- The endpoint used is: https://app.dumplingai.com/api/v1/extract-document

Prompt (already included)
- This prompt extracts: order number, document date, PO number, shipping/billing details, and detailed line items (model, quantity, unit price, total).

How to customize this workflow to your needs

- Adjust the Google Sheet fields to fit your invoice structure.
- Modify the Dumpling AI prompt if your invoices have additional or different data points.
- Add filtering logic if you want to handle different invoice types differently.
- Replace Google Sheets with Airtable or a database if preferred.
- Use a different trigger like an email attachment if invoices come via email.
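As a rough illustration of step 5, the parsing Code node might look like the sketch below; the `results` field and the inner property names are assumptions about Dumpling AI's extract-document response, so inspect a real response before relying on them:

```javascript
// Sketch of the Code node: parse Dumpling AI's extract-document response into header + line items.
// The `results` wrapper and property names are assumptions – adjust to the actual response.
return items.map(item => {
  const raw = item.json.results ?? item.json;
  const parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;

  return {
    json: {
      orderNumber: parsed.order_number,
      documentDate: parsed.document_date,
      poNumber: parsed.po_number,
      items: parsed.items ?? [], // handed to the Split Out node, one Google Sheets row per line item
    },
  };
});
```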
by Yang
Who is this for?

This workflow is perfect for customer support teams, sales departments, or solopreneurs who receive frequent email enquiries and want to automate the initial response process using AI. If you spend too much time answering similar questions, this system helps you respond faster and more intelligently—without writing a single line of code.

What problem is this workflow solving?

Manually responding to repeated customer enquiries slows productivity and increases delays. This workflow classifies whether an incoming email is a real enquiry, analyzes the content with a LangChain-powered agent, fetches helpful context using Dumpling AI, and sends a personalized reply using Gmail—all within minutes.

What this workflow does

1. Listens for new incoming Gmail messages using the Gmail Trigger node.
2. Classifies whether the email is an enquiry using a GPT-4o classification prompt.
3. Uses a Filter node to continue only if the email was classified as an enquiry (see the sketch at the end of this section).
4. Passes the email content to a LangChain Agent, enhanced with memory, AI tools, and Dumpling AI, to search for relevant information.
5. The agent constructs a smart, relevant response, then sends it to the original sender via Gmail.

Setup

Connect Gmail
- Use the Gmail Trigger node to connect to the Gmail account that receives enquiries.
- Make sure Gmail OAuth2 credentials are authenticated.

Configure Dumpling AI Agent
- Sign up at Dumpling AI.
- Create an agent trained to search your help docs, site content, or FAQs.
- Copy your Dumpling agent ID and API key.
- Paste them into the Dumpling AI Agent – Search for Relevant Info HTTP Request node.

Set Up LangChain Agent
- No extra setup is needed beyond connecting OpenAI credentials. GPT-4o is used for classification and reply generation.

Enable Gmail Reply Node
- The final Send Email Response via Gmail node will send the AI-generated reply back to the same thread.

How to customize this workflow to your needs

- Change the classification prompt to include other email types like “support”, “complaint”, or “sales”.
- Add additional logic if you want to CC someone or forward certain types of enquiries.
- Add a Notion or Google Sheets node to log the conversation for analytics.
- Replace Gmail with Outlook or another email provider by switching the nodes.
- Improve context by adding more AI tools like database queries or preloaded FAQs.
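The filter in step 3 simply checks the classifier's output; as a hypothetical equivalent written as a Code node (the `classification` field name and the "enquiry" label are assumptions about the GPT-4o node's structured output):

```javascript
// Hypothetical sketch of the filter step: only keep emails classified as an enquiry.
// `classification` is an assumed field from the GPT-4o classification output – verify the real field name.
return items.filter(item =>
  String(item.json.classification ?? '').toLowerCase() === 'enquiry'
);
```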
by Hugo
This workflow provides a robust solution for automatically backing up all your n8n workflows to a designated GitHub repository on a daily basis. By leveraging the n8n API and GitHub API, it ensures your workflows are version-controlled and securely stored, safeguarding against data loss and facilitating disaster recovery.

How it works

The automation follows these key steps:

1. Scheduled trigger: The workflow is initiated automatically every day at a pre-configured time.
2. List existing backups: It first connects to your GitHub repository to retrieve a list of already backed-up workflow files. This helps in determining whether a workflow's backup file needs to be created or updated.
3. Retrieve n8n workflows: The workflow then fetches all current workflows directly from your n8n instance using the n8n REST API.
4. Process and prepare: Each retrieved workflow is individually processed. Its data is converted into JSON format. This JSON content is then encoded to base64, a format suitable for GitHub API file operations.
5. Commit to GitHub: For each n8n workflow:
   - A standardized filename is generated (e.g., workflow-name-tag.json).
   - The workflow checks if a file with this name already exists in the GitHub repository (based on the list fetched in step 2).
   - If the file exists: it updates the existing file with the latest version of the workflow.
   - If it's a new workflow (the file doesn't exist): a new file is created in the repository.
   - Each commit is timestamped for clarity.

This process ensures that you always have an up-to-date version of all your n8n workflows stored securely in your GitHub version control system, providing peace of mind and a reliable backup history.

Pre-requisites

Before you can use this template, please ensure you have the following:

- An active n8n instance (self-hosted or cloud).
- A GitHub account.
- A GitHub repository created where you want to store the workflow backups.
- A GitHub Personal Access Token with repo scope (or a fine-grained token with read/write access to the specific backup repository). This token will be used for GitHub API authentication.
- n8n API credentials (API key) for your n8n instance.

Set up steps

Setting up this workflow should take approximately 10-15 minutes if you have your credentials ready.

1. Import the template: Import this workflow into your n8n instance.
2. Configure n8n API credentials:
   - Locate the "Retrieve workflows" node.
   - In the "Credentials" section for "n8n API", create new credentials (or select existing ones).
   - Enter your n8n instance URL and your n8n API key (you can create your n8n API key in the settings of your n8n instance).
3. Configure GitHub credentials:
   - Locate the "List files from repo" node (and subsequently the "Update file" / "Upload file" nodes, which will use the same credential).
   - In the "Credentials" section for "GitHub API", create new credentials.
   - Select the OAuth2/Personal Access Token authentication method.
   - Enter the GitHub Personal Access Token you generated as per the pre-requisites.
4. Specify repository details:
   - In the "List files from repo", "Update file", and "Upload file" GitHub nodes:
     - Set the Owner: your GitHub username or organization name.
     - Set the Repository: the name of your GitHub repository dedicated to backups.
     - Set the Branch (e.g., main or master) where backups should be stored.
     - (Optional) Specify a Path within the repository if you want backups in a specific folder (e.g., n8n_backups/). Leave blank to store in the root.
5. Adjust schedule (optional):
   - Select the "Schedule Trigger" node.
   - Modify the trigger interval (e.g., change the time of day or frequency) as needed. By default, it's set for a daily run.
6. Activate the workflow: Save and activate the workflow.

Explanation of nodes

Here's a detailed breakdown of each node used in this workflow:

- **Schedule trigger**
  - Type: n8n-nodes-base.scheduleTrigger
  - Purpose: This node automatically starts the workflow based on a defined schedule (e.g., daily at midnight).
- **List files from repo**
  - Type: n8n-nodes-base.github
  - Purpose: Connects to your specified GitHub repository and lists all files, primarily to check for existing workflow backups.
- **Aggregate**
  - Type: n8n-nodes-base.aggregate
  - Purpose: Consolidates the list of file names obtained from the "List files from repo" node into a single item for easier lookup later in the "Check if file exists" node.
- **Retrieve workflows**
  - Type: n8n-nodes-base.n8n
  - Purpose: Uses the n8n API to fetch a list of all workflows currently present in your n8n instance.
- **Json file**
  - Type: n8n-nodes-base.convertToFile
  - Purpose: Takes the data of each workflow (retrieved by the "Retrieve workflows" node) and converts it into a structured JSON file format.
- **To base64**
  - Type: n8n-nodes-base.extractFromFile
  - Purpose: Converts the binary content of the JSON file (from the "Json file" node) into a base64-encoded string. This encoding is required by the GitHub API for file content.
- **Commit date & file name**
  - Type: n8n-nodes-base.set
  - Purpose: Prepares metadata for the GitHub commit. It generates:
    - commitDate: the current date and time for the commit message.
    - fileName: a standardized file name for the workflow backup (e.g., my-workflow-vps-backups.json), typically using the workflow's name and its first tag.
- **Check if file exists**
  - Type: n8n-nodes-base.if
  - Purpose: A conditional node. It checks if the fileName (generated by "Commit date & file name") is present in the list of files aggregated by the "Aggregate" node. This determines if the workflow backup already exists in GitHub.
- **Update file**
  - Type: n8n-nodes-base.github
  - Purpose: If the "Check if file exists" node determines the file does exist, this node updates that existing file in your GitHub repository with the latest workflow content (base64 encoded) and a commit message.
- **Upload file**
  - Type: n8n-nodes-base.github
  - Purpose: If the "Check if file exists" node determines the file does not exist, this node creates and uploads a new file to your GitHub repository with the workflow content and a commit message.

Customization

Here are a few ways you can customize this template to better fit your needs:

- **Backup path**: In the GitHub nodes ("List files from repo", "Update file", "Upload file"), you can specify a Path parameter to store backups in a specific folder within your repository (e.g., workflows/ or daily_backups/).
- **Filename convention**: Modify the "Commit date & file name" node (specifically the expression for fileName) to change how backup files are named. For example, you might want to include the workflow ID or a different date format. A sketch of this naming logic appears at the end of this section.
- **Commit messages**: Customize the commit messages in the "Update file" and "Upload file" GitHub nodes to include more specific information if needed.
- **Error handling**: Consider adding error handling branches (e.g., using the "Error Trigger" node or checking for node execution failures) to notify you if a backup fails for any reason.
- **Filtering workflows**: If you only want to back up specific workflows (e.g., those with a particular tag or name pattern), you can add a "Filter" node after "Retrieve workflows" to include only the desired workflows in the backup process.
- **Backup frequency**: Adjust the "Schedule Trigger" node to change how often the backup runs (e.g., hourly, weekly, or on specific days).

Template was created in n8n v1.92.2
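For reference, the "Commit date & file name" step could be expressed as a Code node like the sketch below; the template itself uses a Set node with expressions, and the exact slugging rules here are an assumption:

```javascript
// Hypothetical sketch of the "Commit date & file name" step as a Code node.
// The slug logic is an assumption – the template's Set node expressions may differ.
return items.map(item => {
  const wf = item.json; // one workflow object returned by the n8n API
  const slug = String(wf.name || 'workflow')
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
  const tag = wf.tags?.[0]?.name ? `-${wf.tags[0].name.toLowerCase()}` : '';

  return {
    json: {
      ...wf,
      fileName: `${slug}${tag}.json`,      // e.g. "my-workflow-vps-backups.json"
      commitDate: new Date().toISOString(), // timestamp used in the commit message
    },
  };
});
```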
by Airtop
Define Your ICP from Customer LinkedIn Profiles

Use Case

This automation helps marketing and sales teams define their Ideal Customer Profile (ICP) using real LinkedIn profiles of current high-fit customers. By enriching and analyzing profile data, it generates a clear ICP definition and scoring methodology for future targeting.

What This Automation Does

This automation analyzes LinkedIn profiles of your existing customers and produces:

- A structured ICP definition
- A scoring model to evaluate future prospects
- A Google Boolean search string to find similar prospects

Input: LinkedIn profile URLs of existing high-fit customers (e.g., https://www.linkedin.com/in/amirashkenazi/)
Output: A Google Doc containing the ICP analysis and scoring methodology

How It Works

1. Trigger: Waits for a chat message containing one or more LinkedIn profile URLs.
2. AI Agent: Parses and processes the URLs.
3. Airtop Data Enrichment: Uses Airtop to extract structured information from each LinkedIn profile (e.g., job title, company, experience, skills).
4. Memory: Maintains state between inputs for consistent analysis.
5. LLM Analysis: Uses Claude 3.7 Sonnet to synthesize the enriched data into a meaningful ICP.
6. Google Docs: Automatically creates a new doc with a timestamped title and appends the ICP definition.

Setup Requirements

- An Airtop Profile connected to LinkedIn; insert the profile name in the Airtop Tool.
- Airtop API credentials. Get them free here.
- If you choose to activate saving the profiles in Google Docs, you will need OAuth2 credentials (or just copy the ICP definition from the chat).

Next Steps

- **Use the ICP for Scoring**: Feed new LinkedIn profiles through the same Airtop enrichment and use the scoring function to evaluate fit.
- **Automate Target Discovery**: Plug the Boolean search output into LinkedIn, Google, or People Data Labs for ICP-matching lead generation.
- **Refine Continuously**: Repeat the workflow as your customer base grows or segments evolve.

Read more about how to Define ICP from Customer Examples
by Airtop
Scoring LinkedIn Profiles Against Your ICP

Use Case

This automation scores individual LinkedIn profiles against your Ideal Customer Profile (ICP) based on interest in AI, technical depth, and seniority level. It's ideal for prioritizing leads and understanding how well a person fits your ICP criteria.

What This Automation Does

Given a LinkedIn profile and an Airtop profile, it:

- Extracts relevant data from the person's profile
- Determines levels of AI interest, seniority, and technical depth
- Calculates an ICP score based on weighted criteria
- Returns the full enriched profile with the score

Input parameters:

- **LinkedIn Profile URL** (e.g., https://linkedin.com/in/janedoe)
- **Airtop Profile** connected to LinkedIn
- **ICP scoring method** in the Airtop node prompt

Output fields in JSON format:

- Full name, job title, employer, company LinkedIn URL, location, number of connections and followers, about section content, and more
- Calculated ICP Score (out of 100)

How It Works

1. Form Trigger or Workflow Trigger: Accepts input from either a form or another workflow.
2. Parameter Assignment: Ensures proper variable names for downstream nodes.
3. Airtop Enrichment Tool: Extracts and scores the person based on a detailed prompt.
4. Scoring: Uses this point system (see the sketch at the end of this section):
   - AI Interest: beginner (5), intermediate (10), advanced (25), expert (35)
   - Technical Depth: basic (5), intermediate (15), advanced (25), expert (35)
   - Seniority Level: junior (5), mid-level (15), senior (25), executive (30)
5. Output Formatting: Cleans and returns the result as JSON.

Setup Requirements

- IMPORTANT: Enter your ICP scoring method in the prompt field of the Airtop node.
- An Airtop Profile connected to LinkedIn.
- Airtop API credentials configured in n8n.
- Optional: a front-end form to collect profile URLs and trigger the automation.

Next Steps

- **Embed in CRM**: Trigger this automation on new leads to auto-score them.
- **Batch Process Leads**: Run it over a list of profile URLs for segmentation.
- **Customize Scoring**: Adjust point weights based on your sales priorities.

Read more about Scoring LinkedIn Profiles Against Your ICP
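In the template the scoring itself is performed inside the Airtop/LLM prompt; purely as a reference for the arithmetic of the point system above, a JavaScript equivalent might look like this (the input field names are assumptions):

```javascript
// Reference sketch of the ICP point system described above (maximum 35 + 35 + 30 = 100).
// In the workflow this is computed by the prompt; field names here are assumptions.
const POINTS = {
  aiInterest:     { beginner: 5, intermediate: 10, advanced: 25, expert: 35 },
  technicalDepth: { basic: 5, intermediate: 15, advanced: 25, expert: 35 },
  seniority:      { junior: 5, 'mid-level': 15, senior: 25, executive: 30 },
};

function icpScore(profile) {
  return (
    (POINTS.aiInterest[profile.aiInterest] ?? 0) +
    (POINTS.technicalDepth[profile.technicalDepth] ?? 0) +
    (POINTS.seniority[profile.seniority] ?? 0)
  );
}

// Example: advanced AI interest, expert technical depth, executive seniority → 25 + 35 + 30 = 90.
console.log(icpScore({ aiInterest: 'advanced', technicalDepth: 'expert', seniority: 'executive' }));
```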
by Airtop
Monitor X for Relevant Posts

Use Case

This automation monitors X (formerly Twitter) search pages in real time and extracts high-signal posts that match your categories of interest. It’s ideal for community engagement, lead discovery, thought leadership tracking, or competitive analysis.

What This Automation Does

Given a search URL and a list of categories, it:

1. Logs into X using Airtop
2. Opens the specified search URL
3. Scrolls through the results
4. Extracts up to 10 valid, English-language posts
5. Filters and classifies each post by category (or marks it as [NA] if unrelated)
6. Returns the structured results as JSON

Input parameters:

- **airtop_profile** — An Airtop browser profile authenticated on X
- **x_url** — X search URL (e.g., https://x.com/search?q=ai agents&f=live)
- **relevant_categories** — Text-based list of categories to classify posts (e.g., "Web automation use cases", "Thought leadership")

Output: A JSON array of posts, each with:

- writer
- time
- text
- url
- category

How It Works

1. Trigger: This workflow is triggered by another workflow (e.g., a community engagement pipeline).
2. Input Setup: Accepts the Airtop profile, search URL, and categories to use for classification.
3. Session: Starts a browser session using the Airtop profile.
4. Window Navigation: Opens the provided X search URL.
5. Extraction: Scrapes up to 10 posts with /status/ in the URL and text in English.
6. Classification: Each post is labeled with a category if relevant, or [NA] otherwise.
7. Filtering: Discards [NA] posts (see the sketch at the end of this section).
8. Output: Returns the list of classified posts.

Setup Requirements

- Airtop profile with an active X login.
- Airtop API key connected in n8n.
- List of category definitions to guide post classification (used in the prompt).

Next Steps

- **Feed into Engagement Workflows**: Pass the results to workflows that reply, retweet, or track posts.
- **Use in Slack Alerts**: Push classified posts into Slack channels for review and reaction.
- **Customize Classifier**: Refine the categorization logic to include sentiment or company mentions.

Read more about Monitoring X for Relevant Posts
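The filtering step boils down to dropping everything the classifier marked [NA]; as a minimal sketch of that step in an n8n Code node (it assumes each incoming item's json is one post with the fields listed above):

```javascript
// Sketch of the filtering step: drop posts classified as [NA] and keep the structured fields.
// Assumes one post per item with writer, time, text, url and category fields – adjust if the
// classifier returns the whole array in a single item instead.
return items
  .filter(item => item.json.category && item.json.category !== '[NA]')
  .map(item => ({
    json: {
      writer: item.json.writer,
      time: item.json.time,
      text: item.json.text,
      url: item.json.url,
      category: item.json.category,
    },
  }));
```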
by Airtop
Automating LinkedIn Company Data Extraction

Use Case

This automation extracts detailed company insights from a LinkedIn company page, including identity, scale, classification, and funding data. Ideal for investors, sales teams, and market researchers.

What This Automation Does

This automation accepts the following inputs:

- **Company's LinkedIn URL**: The public LinkedIn page URL of the company.
- **Airtop Profile (connected to LinkedIn)**: Your Airtop Profile authenticated on LinkedIn.

It then extracts and returns structured data with:

1. Company Identity
   - Full name
   - Tagline
   - Headquarters location (city, state, country)
   - About section
   - Website
2. Company Scale
   - Current employee count
   - Employee size bracket: [0-9], [10-150], [150+]
3. Business Classification
   - Is the company an automation agency? (true/false)
   - AI implementation level: Low / Medium / High
   - Technical sophistication: Basic / Intermediate / Advanced / Expert
4. Funding Profile
   - Most recent funding round
   - Total amount raised
   - Key investors
   - Last funding update date

How It Works

1. Creates an Airtop session using the provided profile.
2. Navigates to the company LinkedIn page.
3. Executes an Airtop query to extract the data.
4. Outputs the result in a standardized JSON schema (a sketch of the shape follows this section).

Setup Requirements

- Airtop API Key
- A LinkedIn-authenticated Airtop Profile

Next Steps

- **Feed into CRM**: Enrich your accounts with detailed LinkedIn data.
- **Prioritize Leads**: Use classification and funding data to prioritize outreach.
- **Combine with People Data**: Integrate with individual-level enrichment for full context.

Read more about how to extract company data from LinkedIn with Airtop and n8n
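The exact key names in the template's output schema may differ; as an illustration of the structure described above, the returned JSON might look roughly like this (all property names and the example values are assumptions):

```javascript
// Illustrative shape of the standardized output (property names are assumptions, values are examples).
const companyProfile = {
  identity: {
    fullName: 'Example Corp',
    tagline: 'Automation for everyone',
    headquarters: { city: 'Berlin', state: null, country: 'Germany' },
    about: 'Short description from the About section…',
    website: 'https://example.com',
  },
  scale: {
    employeeCount: 87,
    employeeBracket: '[10-150]',
  },
  classification: {
    isAutomationAgency: true,
    aiImplementationLevel: 'Medium',      // Low / Medium / High
    technicalSophistication: 'Advanced',  // Basic / Intermediate / Advanced / Expert
  },
  funding: {
    lastRound: 'Series A',
    totalRaised: '$12M',
    keyInvestors: ['Example Ventures'],
    lastFundingUpdate: '2024-11-01',
  },
};
```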
by Automate With Marc
✉️ Telegram Email Agent with GPT + Gmail

Category: Messaging / AI Agent
Level: Beginner-Friendly
Tags: Telegram, Email Automation, AI Agent, Gmail, GPT Model

Watch the step-by-step video guide here: https://www.youtube.com/watch?v=nyI40s9QOuw&t=420s&pp=0gcJCb4JAYcqIYzv

🤖 What This Workflow Does

This workflow turns your Telegram bot into a personal email assistant powered by AI. With just a message on Telegram, users can:

- Send an email via Gmail
- Automatically generate the email content using OpenAI models
- Get confirmation or responses directly in Telegram

It's like ChatGPT meets Gmail, inside your Telegram chat.

🔧 How It Works

1. Telegram Trigger – Listens for incoming messages from your bot.
2. AI Agent – Processes the input using an OpenAI model and converts it into structured email content (To, Subject, Body); a sketch of that structure follows this section.
3. Memory Node – Stores short-term context per user (via chat ID), so the agent can hold simple conversations.
4. Gmail Node – Sends the generated email using your Gmail account.
5. Telegram Node – Replies to the user confirming the output or status.

🧠 Why This is Useful

Ever wanted to send an email while on the go, without typing the whole thing out in Gmail? This is a fast, intuitive, and AI-powered way to:

- Dictate or draft emails from anywhere
- Create an AI-powered virtual assistant via Telegram
- Integrate n8n's LangChain Agent with real-world productivity use cases

🪜 Setup Instructions

1. Connect your Telegram bot via BotFather and add the credentials in n8n.
2. Set up your OpenAI API key (GPT-4o-mini recommended).
3. Add your Gmail OAuth credentials.
4. Activate the workflow and start messaging your bot!
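As a purely hypothetical illustration, the intermediate object the AI Agent hands to the Gmail node might look like the sketch below; the field names are assumptions and should be aligned with the agent's output parser in the template:

```javascript
// Hypothetical shape of the AI agent's structured output, consumed by the Gmail node.
// Field names are assumptions – match them to the agent's output parser in the workflow.
const emailDraft = {
  to: 'recipient@example.com',
  subject: 'Meeting follow-up',
  body: 'Hi Anna,\n\nThanks for the call today. Here is a short summary…\n\nBest,\nMarc',
};

// In the Gmail node these values would typically be referenced with n8n expressions,
// e.g. {{ $json.to }}, {{ $json.subject }} and {{ $json.body }}.
```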
by Yang
Who is this for?

This template is designed for content creators, marketing teams, educators, or media managers who want to repurpose video content into written blog posts with visuals. It's ideal for anyone looking to automate the process of transforming YouTube videos into professional blog articles and custom images.

What problem is this workflow solving?

Creating written content from video material is time-consuming and manual. This workflow solves that by automating the entire pipeline: from detecting new YouTube video uploads to transcribing the audio, turning it into an engaging blog post, generating a matching visual, and saving both in Airtable. It saves hours of work while keeping your blog or social feed active and consistent.

What this workflow does

This automation listens for new YouTube videos added to a Google Drive folder, extracts the full transcript using Dumpling AI, and sends it to GPT-4o to generate a blog post and image prompt. Dumpling AI then turns the prompt into a 16:9 visual. The blog and visual are saved into Airtable for easy publishing or curation.

Setup

1. Google Drive Trigger
   - Create a folder in Google Drive and upload your YouTube videos there.
   - Link this folder in the "Watch Folder for New YouTube Videos" node.
   - Enable polling every minute or adjust as needed.
2. Download & Prepare the Video
   - The video is downloaded and converted into base64 format by the next two nodes: Download Video File and Convert Downloaded Video to Base64 (see the sketch at the end of this section).
3. Transcription with Dumpling AI
   - The base64 video is sent to Dumpling AI’s extract-video endpoint.
   - You must have a Dumpling AI account and an API key with access to this endpoint: Dumpling AI Docs
4. Generate Blog Content with GPT-4o
   - GPT-4o takes the transcript and generates:
     - A human-like blog post
     - A descriptive prompt for AI image generation
   - Make sure your OpenAI credentials are configured.
5. Generate the Visual
   - The prompt is passed to Dumpling AI’s generate-ai-image endpoint using the FLUX.1-pro model.
   - The result is a clean 1024x576 image.
6. Save to Airtable
   - Blog content is stored under the Content field in Airtable.
   - The image prompt is also added to the Attachments column as a visual reference.
   - Ensure the Airtable base and table are preconfigured with the correct field names.

How to customize this workflow to your needs

- Change the GPT prompt to alter the tone or format of the blog post (e.g., add bullet points or SEO tags).
- Modify the Dumpling AI prompt to generate different image styles.
- Add a scheduler or webhook trigger to run at different intervals or through other integrations.
- Connect this output to Ghost, Notion, or your CMS using additional nodes.

🧠 Sticky Note Summary

Part 1: Transcription & Blog Prompt
- Watches a Google Drive folder for new video uploads.
- Downloads and encodes the video.
- Transcribes the full audio with Dumpling AI.
- GPT-4o writes a blog post and a descriptive image prompt.

Part 2: Image Generation & Airtable Save
- Dumpling AI generates a visual from the image prompt.
- Blog content is saved to Airtable.
- The image prompt is patched into the Attachments field in the same record.

✅ Use this if you want to automate repurposing YouTube videos into blog content with zero manual work.
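One way the "Convert Downloaded Video to Base64" step can be done is with a small Code node like the sketch below; the binary property name `data` is an assumption, and the template itself may use a built-in Extract From File node instead:

```javascript
// Sketch of a "Convert Downloaded Video to Base64" Code node ("Run Once for All Items").
// Assumes the downloaded video sits in the binary property named "data" – check your Download node.
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');

return [{
  json: {
    fileName: items[0].binary?.data?.fileName ?? 'video.mp4',
    base64Video: buffer.toString('base64'), // sent on to Dumpling AI's extract-video endpoint
  },
}];
```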