by Yaron Been
**Description**

This workflow automates the process of finding local events and adding them directly to your Google Calendar. It eliminates the need for manual event tracking by automatically scraping event information and creating calendar entries.

**Overview**

The workflow uses Bright Data to scrape event information from a specified source and then creates new events in your calendar, ensuring you never miss out on what's happening around you.

**Tools Used**

- **n8n:** The automation platform that orchestrates the workflow.
- **Bright Data:** Scrapes event data from websites without getting blocked.
- **Google Calendar API:** Creates and manages calendar events.

**How to Install**

1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data credentials to the Bright Data node.
3. **Set Up Google Calendar:** Authenticate your Google Calendar account in the Google Calendar node.
4. **Customize:** Adjust the workflow to target the specific websites and event types you're interested in.

**Use Cases**

- **Community Managers:** Keep track of local meetups and community events.
- **Event Enthusiasts:** Never miss a concert, festival, or local gathering.
- **Marketing Professionals:** Monitor competitor events and industry conferences.
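Before the Google Calendar node creates an entry, each scraped event has to be mapped to the event shape the Google Calendar API expects. The sketch below is a minimal illustration; the scraped field names (`title`, `startsAt`, `endsAt`, `venue`) are assumptions, and the two-hour default duration is a placeholder choice:

```javascript
// Minimal sketch: map a scraped event (hypothetical field names) to the
// payload shape used by Google Calendar event creation.
function toCalendarEvent(scraped) {
  const start = new Date(scraped.startsAt);
  // Assume a 2-hour duration when the source lists no end time.
  const end = scraped.endsAt
    ? new Date(scraped.endsAt)
    : new Date(start.getTime() + 2 * 60 * 60 * 1000);
  return {
    summary: scraped.title,
    location: scraped.venue || '',
    description: scraped.url || '',
    start: { dateTime: start.toISOString() },
    end: { dateTime: end.toISOString() },
  };
}

const event = toCalendarEvent({
  title: 'Jazz in the Park',
  startsAt: '2025-06-01T18:00:00Z',
  venue: 'Central Park',
});
```

In the workflow, a Code node like this would sit between the Bright Data scrape and the Google Calendar node, so the calendar node only receives well-formed events.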
**Connect with Me**

- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #googlecalendar #brightdata #webscraping #events #eventautomation #localevents #calendarintegration #eventtracking #n8nworkflow #workflow #nocode #eventmanagement #productivitytools #timemanagement #eventplanning #automatedcalendar #eventdiscovery #techautomation #eventnotifications #eventscheduling #calendarsync #eventorganizer #automatedevents
by Davide
**How it Works**

This workflow automates the handling of job applications by extracting relevant information from submitted CVs, analyzing each candidate's qualifications against a predefined profile, and storing the results in a Google Sheet. Here's how it operates:

**Data Collection and Extraction**

- The workflow begins with a form submission (On form submission node), which triggers the extraction of data from the uploaded CV file using the Extract from File node.
- Two Information Extractor nodes (Qualifications and Personal Data) parse specific details such as educational background, work history, skills, city, birthdate, and telephone number from the text content of the CV.

**Processing and Evaluation**

- A Merge node combines the extracted personal and qualification data into a single output.
- The merged data is passed through a Summarization Chain that generates a concise summary of the candidate's profile.
- An HR Expert chain evaluates the candidate against a desired profile (Profile Wanted), assigning a score and providing considerations for hiring.
- Finally, all collected and processed data, including the evaluation results, are appended to a Google Sheets document via the Google Sheets node for further review or reporting.

**Set Up Steps**

To replicate this workflow within your own n8n environment, follow these steps:

**Configuration**

- Begin by setting up an n8n instance if you haven't already; you can sign up directly on the n8n website or self-host the application.
- Import the provided JSON configuration into your n8n workspace.
- Ensure that all necessary credentials (e.g., Google Drive, Google Sheets, OpenAI API keys) are correctly configured under the Credentials section, since some nodes require external service integrations such as the Google APIs and OpenAI for language-processing tasks.

**Customization**

- Adjust the parameters of each node according to your specific requirements.
- For example, modify the fields in the formTrigger node to match the information you wish to collect from applicants.
- Customize the prompts given to the AI models in nodes like Qualifications, Summarization Chain, and HR Expert so they align with the type of analysis you want performed on the candidates' profiles.
- Update the destination settings in the Google Sheets node to point to your own spreadsheet where you would like the final outputs recorded.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
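The Merge step and the final Google Sheets append can be pictured as two small transformations. This sketch uses assumed field names (`name`, `city`, `skills`, `score`, `consideration`); the actual columns depend on how you configure the extractor prompts and the spreadsheet:

```javascript
// Illustrative sketch (field names assumed): combine the two extractor
// outputs into one candidate record, then flatten it into the row that
// the Google Sheets node appends.
function mergeCandidate(personal, qualifications) {
  return { ...personal, ...qualifications };
}

function toSheetRow(candidate, evaluation) {
  return [
    candidate.name,
    candidate.city,
    candidate.telephone,
    (candidate.skills || []).join(', '),
    evaluation.score,
    evaluation.consideration,
  ];
}

const candidate = mergeCandidate(
  { name: 'A. Rossi', city: 'Milan', telephone: '+39 000 0000' },
  { skills: ['n8n', 'SQL'], education: 'MSc Computer Science' }
);
const row = toSheetRow(candidate, { score: 8, consideration: 'Strong fit' });
```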
by Emmanuel Bernard
**Automatically Add Captions to Your Video**

**Who Is This For?**

This workflow is ideal for content creators, marketers, educators, and businesses that regularly produce video content and want to enhance accessibility and viewer engagement by effortlessly adding subtitles.

**What Problem Does This Workflow Solve?**

Manually adding subtitles or captions to videos is tedious and time-consuming, yet accurate captions significantly boost viewer retention, accessibility, and SEO rankings.

**What Does This Workflow Do?**

This automated workflow adds accurate subtitles to your video content by leveraging the Json2Video API:

- It accepts a publicly accessible video URL as input.
- It makes an HTTP request to Json2Video, where AI analyzes the video, generates captions, and applies them seamlessly.
- The workflow returns a URL to the final subtitled video.
- The second part of the workflow polls the Json2Video API every 10 seconds to monitor the processing status.

👉🏻 Try Json2Video for Free 👈🏻

**Key Features**

- **Automatic & Synced Captions:** Captions are generated automatically and synchronized perfectly with your video.
- **Fully Customizable Design:** Easily adjust fonts, colors, sizes, and more to match your unique style.
- **Word-by-Word Display:** Supports precise, word-by-word captioning for improved clarity and viewer engagement.
- **Super Fast Processing:** Rapid caption generation saves time, allowing you to focus more on creating great content.

**Preconditions**

To use this workflow, you must have:

- A Json2Video API account.
- A video hosted at a publicly accessible URL.

**Why You Need This Workflow**

Adding subtitles to your videos significantly enhances their reach and effectiveness by:

- Improving SEO visibility, enabling search engines to effectively index your video content.
- Enhancing viewer engagement and accessibility, accommodating viewers who watch without sound or who have hearing impairments.
- Streamlining your content production process, allowing more focus on creativity.
**Specific Use Cases**

- **Social Media Content:** Boost viewer retention by adding subtitles.
- **Educational Videos:** Enhance understanding and improve learning outcomes.
- **Marketing Videos:** Reach broader and more diverse audiences.
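The 10-second status-polling part of the workflow boils down to a small decision per check. This sketch captures that logic only; the status values (`done`, `error`) and the `url` field are assumptions for illustration, so check the Json2Video response format in their docs before reusing them:

```javascript
// Sketch of the decision made on each poll of the Json2Video status
// endpoint. Response fields here are illustrative assumptions.
function pollDecision(statusResponse, attempt, maxTries = 60) {
  if (statusResponse.status === 'done') {
    return { action: 'finish', videoUrl: statusResponse.url };
  }
  if (statusResponse.status === 'error') {
    return { action: 'fail', reason: 'rendering failed' };
  }
  if (attempt >= maxTries) {
    return { action: 'fail', reason: 'timed out' };
  }
  // Otherwise a Wait node sleeps 10 seconds and the workflow checks again.
  return { action: 'wait', delayMs: 10000 };
}

const waiting = pollDecision({ status: 'running' }, 1);
const finished = pollDecision({ status: 'done', url: 'https://example.com/out.mp4' }, 2);
```

In n8n this maps to an IF node after the status request, looping back through a Wait node until the render completes.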
by Harshil Agrawal
This workflow demonstrates the use of the $item(index) method, which is useful when you want to reference an item at a particular index. The example workflow makes POST HTTP requests to a dummy URL.

- **Set node:** Sets the API key that will be used later in the workflow. It returns a single item. This node can be replaced with other nodes, based on the use case.
- **Customer Datastore node:** Returns the data of the customers that will be sent in the body of the HTTP request. It returns 5 items. This node can be replaced with other nodes, based on the use case.
- **HTTP Request node:** Uses the information from both the Set node and the Customer Datastore node. Since this node runs 5 times, once for each item from the Customer Datastore node, it needs to reference the API key 5 times; however, the Set node returns the API key only once. The expression {{ $item(0).$node["Set"].json["apiKey"] }} tells n8n to use the same API key for all 5 requests.
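The item-pairing behaviour can be simulated in plain JavaScript. This is not n8n code, just an illustration of why index 0 is pinned: one list has a single item, the other has five, and every request must reuse the element at index 0:

```javascript
// Simulation of the workflow's data flow: the Set node yields one item,
// the Customer Datastore yields five, and each of the five requests
// reuses the single API key at index 0 (the role played by $item(0)).
const setItems = [{ apiKey: 'secret-key' }];
const customers = [
  { name: 'Ada' },
  { name: 'Ben' },
  { name: 'Cleo' },
  { name: 'Dev' },
  { name: 'Eve' },
];

const requests = customers.map((customer) => ({
  method: 'POST',
  headers: { Authorization: `Bearer ${setItems[0].apiKey}` }, // like $item(0)
  body: customer,
}));
```

Without the explicit index, n8n would try to pair item N of the Set node with item N of the Customer Datastore node, and the key would only exist for the first pairing.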
by Tom
This workflow automatically deletes user data from different apps/services when a specific slash command is issued in Slack. Watch this talk and demo to learn more about this use case. The demo uses Slack, but Mattermost is Slack-compatible, so you can also connect Mattermost in this workflow.

**Prerequisites**

- Accounts and credentials for the apps/services you want to use.
- Some basic knowledge of JavaScript.

**Nodes**

- **Webhook node** triggers the workflow when a Slack slash command is issued.
- **IF nodes** confirm Slack's verification token and verify that the data has the expected format.
- **Set node** simplifies the payload.
- **Switch node** chooses the correct path for the operation to perform.
- **Respond to Webhook nodes** send responses back to Slack.
- **Execute Workflow nodes** call sub-workflows tailored to deleting data from each individual service.
- **Function node, Crypto node, and Airtable node** generate and store a log entry containing a hash value.
- **HTTP Request node** sends the final response back to Slack.
by Don Jayamaha Jr
Get deep insights into NFT market trends, sales data, and collection statistics, all powered by AI and OpenSea! This workflow connects GPT-4o-mini, the OpenSea API, and n8n automation to provide real-time analytics on NFT collections, wallet transactions, and market trends. It is ideal for NFT traders, collectors, and investors looking to make informed decisions based on structured data.

**How It Works**

- Receives user queries via Telegram, webhooks, or another connected interface.
- Determines the correct API tool based on the request (e.g., collection stats, wallet transactions, event tracking).
- Retrieves data from the OpenSea API (requires an API key).
- Processes the information using an AI-powered analytics agent.
- Returns structured insights in an easy-to-read format for quick decision-making.

**What You Can Do with This Agent**

🔹 **Retrieve NFT Collection Stats** → Get floor price, volume, sales data, and market cap.
🔹 **Track Wallet Activity** → Analyze transactions for a given wallet address.
🔹 **Monitor NFT Market Trends** → Track historical sales, listings, bids, and transfers.
🔹 **Compare Collection Performance** → View side-by-side market data for different NFT projects.
🔹 **Analyze NFT Transaction History** → Check real-time ownership changes for any NFT.
🔹 **Identify Market Shifts** → Detect sudden spikes in demand, price changes, and whale movements.

**Example Queries You Can Use**

✅ "Get stats for the Bored Ape Yacht Club collection."
✅ "Show me all NFT sales from the last 24 hours."
✅ "Fetch all NFT transfers for wallet 0x123...abc on Ethereum."
✅ "Compare the last 3 months of sales volume for Azuki and CloneX."
✅ "Track the top 10 wallets making the most NFT purchases this week."
**Available API Tools & Endpoints**

1️⃣ **Get Collection Stats** → /api/v2/collections/{collection_slug}/stats (Retrieve NFT collection-wide market data)
2️⃣ **Get Events** → /api/v2/events (Fetch global NFT sales, transfers, listings, bids, redemptions)
3️⃣ **Get Events by Account** → /api/v2/events/accounts/{address} (Track transactions by wallet)
4️⃣ **Get Events by Collection** → /api/v2/events/collection/{collection_slug} (Get sales activity for a collection)
5️⃣ **Get Events by NFT** → /api/v2/events/chain/{chain}/contract/{address}/nfts/{identifier} (Retrieve historical transactions for a specific NFT)

**Set Up Steps**

1. **Get an OpenSea API Key:** Sign up at OpenSea API and request an API key.
2. **Configure API Credentials in n8n:** Add your OpenSea API key under HTTP Header Authentication.
3. **Connect the Workflow to Telegram, Slack, or a Database (Optional):** Use n8n integrations to send alerts to Telegram or Slack, or save results to Google Sheets, Notion, etc.
4. **Deploy and Test:** Send a query (e.g., "Azuki latest sales") and receive instant NFT market insights!

Stay ahead in the NFT market—get real-time analytics with OpenSea's AI-powered analytics agent!
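A small helper can fill the `{placeholders}` in the endpoint templates above to build a request. The `X-API-KEY` header is OpenSea's documented header-authentication scheme; everything else here is a generic sketch:

```javascript
// Sketch: turn one of the endpoint templates above plus its parameters
// into a request description for the HTTP Request node.
const BASE = 'https://api.opensea.io';

function buildRequest(template, params, apiKey) {
  const path = template.replace(/\{(\w+)\}/g, (_, key) => {
    if (!(key in params)) throw new Error(`missing param: ${key}`);
    return encodeURIComponent(params[key]);
  });
  return { url: BASE + path, headers: { 'X-API-KEY': apiKey } };
}

const req = buildRequest(
  '/api/v2/collections/{collection_slug}/stats',
  { collection_slug: 'boredapeyachtclub' },
  'my-key'
);
```

Throwing on a missing parameter makes the agent's tool-selection errors visible immediately instead of producing a malformed URL.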
by bangank36
This workflow retrieves all Shopify customers and saves them into a Google Sheets spreadsheet using the Shopify Admin REST API. It uses pagination to ensure all customers are collected efficiently. n8n does not have built-in actions for customers, so the workflow is built around an HTTP Request node.

**How It Works**

- The workflow uses the HTTP Request node to fetch paginated chunks manually.
- Shopify uses cursor-based pagination (page_info) instead of traditional page numbers. The pagination cursor is returned in the response headers, so you need to enable Include Response Headers and Status in the HTTP Request node.
- The workflow processes the customer data, saves it to Google Sheets, and formats a compatible CSV for Squarespace Contacts import.
- The workflow can be run on demand or on a schedule to keep your data up to date.

**Parameters**

You can adjust these parameters in the HTTP Request node:

- **limit** – The number of customers per request (default: 50, max: 250).
- **fields** – Comma-separated list of fields to retrieve.
- **page_info** – Used for pagination.

📌 Note: When you query paginated chunks with page_info, only the limit and fields parameters are allowed.

**Credentials**

- **Shopify API Key** – Required for authentication.
- **Google Sheets API credentials** – Needed to insert data into the spreadsheet.

**Google Sheets Template**

Clone this spreadsheet: 📎 Google Sheets Template

According to the Squarespace documentation, your spreadsheet can have up to three columns and must be arranged in this order (no header row):

1. Email Address
2. First Name (optional)
3. Last Name (optional)
4. Shopify Customer ID (this field will be ignored)

**Exporting a Compatible CSV for Squarespace Contacts**

This workflow also generates a CSV file that can be imported into Squarespace Contacts.

How to import the CSV to Squarespace:

1. Open the Lists & Segments panel and click your mailing list.
2. Click Add Subscribers, then select Upload a list.
3. Click Add a CSV file and select the file to import.
4. Toggle These subscribers accept marketing to confirm permission.
5. Preview your list, then click Import.

**Who Is This For?**

- **Shopify store owners** who need to export all customers to Google Sheets.
- Anyone looking for a **flexible and scalable** Shopify customer extraction solution.
- **Squarespace website owners** who want to bulk-create their Contacts from a CSV.

**Explore More Templates**

👉 Check out my other n8n templates
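Shopify returns the page_info cursor in the `Link` response header, which is why the workflow enables Include Response Headers and Status. A minimal sketch of extracting the next-page cursor from that header:

```javascript
// Sketch: pull the next page_info cursor out of Shopify's Link response
// header (cursor-based pagination). Returns null on the last page.
function nextPageInfo(linkHeader) {
  if (!linkHeader) return null;
  for (const part of linkHeader.split(',')) {
    if (part.includes('rel="next"')) {
      const match = part.match(/[?&]page_info=([^&>]+)/);
      return match ? match[1] : null;
    }
  }
  return null;
}

const header =
  '<https://shop.myshopify.com/admin/api/2024-01/customers.json?limit=50&page_info=abc123>; rel="next"';
const cursor = nextPageInfo(header);
```

In the workflow, a null cursor ends the pagination loop; otherwise the cursor is fed back into the next HTTP Request as the page_info query parameter (alongside only limit and fields, per the note above).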
by Julian Reich
This n8n workflow automates the transformation of press releases into polished articles. It converts the content of an email and its attachments (PDF or Word documents) into an AI-written article/blog post.

**What does it do?**

This workflow assists editors and journalists in managing incoming press releases from governments, companies, NGOs, or individuals. The result is a draft article that can easily be reviewed by the editor, who receives it in a reply email containing both the original input and the output, plus an AI-generated self-assessment. This self-assessment is an additional feedback loop in which the AI compares the input with the output to evaluate the quality and accuracy of its transformation.

**How does it work?**

Triggered by incoming emails in Gmail, it first filters attachments, retaining only Word and PDF files while removing other formats such as JPGs. The workflow then follows one of three paths:

- If no attachments remain, it processes the inline email message directly.
- For PDF attachments, it uses an extractor to obtain the document content.
- For Word attachments, it extracts the text content via an HTTP request.

In each case, the extracted content is passed to an AI agent that converts the press release into a well-structured article according to predefined prompts. A separate AI evaluation step provides a self-assessment by comparing the output with the original input to ensure quality and accuracy. Finally, the workflow generates a reply email to the sender containing three components: the original input, the AI-generated article, and the self-assessment.

This streamlined process helps editors and journalists efficiently manage incoming press releases, delivering draft articles that require minimal additional editing.

**How to set it up**
1. **Configure Gmail Connection**
   - Create or use an existing Gmail address and connect it through the n8n credentials manager.
   - Configure the polling frequency according to your needs.
   - Set the trigger event to "Message Received".
   - Optional: filter incoming emails by specifying authorized senders.
   - Enable the "Download Attachments" option.
2. **Set Up AI Integration**
   - Create an OpenAI account if you don't have one.
   - Create a new AI assistant or use an existing one.
   - Customize the assistant with specific instructions, style guidelines, or response templates.
   - Configure your API credentials in n8n to enable the connection.
3. **Configure Google Drive Integration**
   - Connect your Google Drive credentials in n8n.
   - Set the operation mode to "Upload".
   - Configure the input data field name as "data".
   - Set the file naming format to dynamic: {{ $json.fileName }}
4. **Configure HTTP Request Node**
   - Set the request method to "POST".
   - Enter the appropriate Google API endpoint URL.
   - Include all required authorization headers.
   - Structure the request body according to the API specifications.
   - Ensure proper error handling for API responses.
5. **Configure HTTP Request Node 2**
   - Set the request method to "GET".
   - Enter the appropriate Google API endpoint URL.
   - Include all required authorization headers.
   - Configure query parameters as needed.
   - Implement response validation and error handling.
6. **Configure Self-Assessment Node**
   - Set the operation to "Message a Model".
   - Select an appropriate AI model (e.g., GPT-4, Claude).
   - Configure the following prompt (for example) in the Message field:

     Please analyze and compare the following input and output content:

     Original Input:
     {{ $('HTTP Request3').item.json.data }}
     {{ $('Gmail Trigger').item.json.text }}

     Generated Output:
     {{ $json.output }}

     Provide a detailed self-assessment that evaluates:
     - Content accuracy and completeness
     - Structure and readability improvements
     - Tone and style appropriateness
     - Any information that may have been omitted or misrepresented
     - Overall quality of the transformation
7. **Configure Reply Email Node**
   - Set the operation to "Send" and select your Gmail account.
   - Configure the "To" field to respond to the original sender: {{ $('Gmail Trigger').item.json.from }}
   - Set an appropriate subject line: RE: {{ $('Gmail Trigger').item.json.subject }}
   - Structure the email body with clear sections using the following template:

     *EDITED ARTICLE*
     {{ $('AI Article Writer 2').item.json.output }}

     *SELF-ASSESSMENT* (Rating: 1 (poor) to 5 (excellent))
     {{ $json.message.content }}

     *ORIGINAL MESSAGE*
     {{ $('Gmail Trigger').item.json.text }}

     *ATTACHMENT CONTENT*
     {{ $('HTTP Request3').item.json.data }}

Note: Adjust the template fields according to the input source (PDF, Word document, or inline message). For inline messages, you may not need the "ATTACHMENT CONTENT" section.
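The attachment-filtering step at the start of the workflow (keep Word and PDF, drop JPGs and everything else) can be sketched as a MIME-type whitelist. The MIME types are the standard ones for PDF and Word; the attachment item shape is an assumption for illustration:

```javascript
// Sketch of the attachment filter: keep only PDF and Word files from the
// Gmail trigger's attachments.
const KEEP = new Set([
  'application/pdf',
  'application/msword', // legacy .doc
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document', // .docx
]);

function filterAttachments(attachments) {
  return attachments.filter((a) => KEEP.has(a.mimeType));
}

const kept = filterAttachments([
  { fileName: 'release.pdf', mimeType: 'application/pdf' },
  { fileName: 'photo.jpg', mimeType: 'image/jpeg' },
  {
    fileName: 'release.docx',
    mimeType: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
  },
]);
```

If the filtered list comes back empty, the workflow falls through to the "process the inline email message" path described above.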
by Jonathan
This workflow is part of an MSP collection, which is publicly available on GitHub. It archives or unarchives a Clockify project, depending on a Syncro status. Note that Syncro should be set up with a webhook via 'Notification Set for Ticket - Status was changed'. The workflow doesn't handle merging of tickets, as Syncro doesn't support a 'Notification Set' for merged tickets, so you should change a ticket to 'Resolved' before merging it.

**Prerequisites**

- A Clockify account and credentials

**Nodes**

- **Webhook node** triggers the workflow.
- **IF node** filters projects that don't have the status 'Resolved'.
- **Clockify nodes** get all projects that do (or don't) have the status 'Resolved', based on the IF route.
- **HTTP Request nodes** unarchive unresolved projects and archive resolved projects, respectively.
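The archive/unarchive HTTP Request nodes reduce to a project-update call with an `archived` flag. The endpoint shape, `archived` field, and `X-Api-Key` header below follow Clockify's project-update API as I understand it; verify against the current Clockify API docs before relying on this sketch:

```javascript
// Sketch (assumed Clockify endpoint shape): build the request that
// archives or unarchives a project via a project update.
function buildArchiveRequest(workspaceId, projectId, archived, apiKey) {
  return {
    method: 'PUT',
    url: `https://api.clockify.me/api/v1/workspaces/${workspaceId}/projects/${projectId}`,
    headers: { 'X-Api-Key': apiKey, 'Content-Type': 'application/json' },
    body: { archived },
  };
}

const req = buildArchiveRequest('ws1', 'proj9', true, 'my-api-key');
```

The workflow would call this with `archived: true` on the 'Resolved' route and `archived: false` on the other route.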
by David w/ SimpleGrow
**Scheduled Trigger:** Every X day at Y pm, the workflow is automatically triggered.

**Fetch User Data:** The workflow retrieves all user records from the "WhatsApp Engagement Database" in Airtable. Each record contains the user's WhatsApp ID, current points, and the number of raffle vouchers.

**Personalized Message Preparation:** For each user, a personalized WhatsApp message is prepared. The message includes:

- The user's current point total
- The number of raffle vouchers they have for the week
- Encouragement to keep engaging for more chances to win
- Information about the weekly raffle and available prizes

**Send WhatsApp Message:** The workflow sends this personalized message to each user via the Whapi API, using their WhatsApp ID.

**Result:** Every active user receives a weekly update about their engagement status, raffle tickets, and a motivational message to encourage further participation. This helps boost engagement and keeps users informed about their progress and chances in the weekly raffle.
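The per-user message-preparation step can be sketched as a simple template function. The Airtable field names (`points`, `vouchers`) are assumptions for illustration; adapt them to your base:

```javascript
// Sketch of the personalized-message step (Airtable field names assumed).
function buildWeeklyMessage(user) {
  return [
    `Hi! You currently have ${user.points} points`,
    `and ${user.vouchers} raffle voucher${user.vouchers === 1 ? '' : 's'} this week.`,
    'Keep engaging for more chances to win in the weekly raffle!',
  ].join(' ');
}

const msg = buildWeeklyMessage({ points: 120, vouchers: 3 });
```

In the workflow this runs once per Airtable record, and the resulting text is passed to the Whapi send-message call addressed to the record's WhatsApp ID.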
by Ghaith Alsirawan
🧠 This workflow is designed for one purpose only: to bulk-upload structured JSON articles from an FTP server into a Qdrant vector database for use in LLM-powered semantic search, RAG systems, or AI assistants. The JSON files are pre-cleaned and contain metadata and rich text chunks, ready for vectorization.

**This workflow handles**

- Downloading from FTP
- Parsing & splitting
- Embedding with an OpenAI embedding model
- Storing in Qdrant for future querying

**JSON structure format for blog articles**

    {
      "id": "article_001",
      "title": "reseguider",
      "language": "sv",
      "tags": ["london", "resa", "info"],
      "source": "alltomlondon.se",
      "url": "https://...",
      "embedded_at": "2025-04-08T15:27:00Z",
      "chunks": [
        {
          "chunk_id": "article_001_01",
          "section_title": "Introduktion",
          "text": "Välkommen till London..."
        },
        ...
      ]
    }

**🧰 Benefits**

- ✅ **Automated Vector Loading:** Handles FTP → JSON → Qdrant in a hands-free pipeline.
- ✅ **Clean Embedding Input:** Supports pre-validated chunks with metadata: titles, tags, language, and article ID.
- ✅ **AI-Ready Format:** Perfect for Retrieval-Augmented Generation (RAG), semantic search, or assistant memory.
- ✅ **Flexible Architecture:** Modular and swappable: FTP can be replaced with GDrive/Notion/S3, and embeddings can switch to local models like Ollama.
- ✅ **Community Friendly:** This template helps others adopt best practices for vector DB feeding and LLM integration.
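The parse-and-split step essentially flattens each article's `chunks` array into one embedding input and metadata payload per chunk. The field names follow the JSON structure above; the point shape and the fake embedder are illustrative (in the workflow, the vector comes from the OpenAI embeddings call):

```javascript
// Sketch: turn one article into per-chunk records ready for upsert into
// Qdrant, pairing each embedded text with its searchable metadata.
function toQdrantPoints(article, embedFn) {
  return article.chunks.map((chunk) => ({
    id: chunk.chunk_id,
    vector: embedFn(`${chunk.section_title}\n${chunk.text}`),
    payload: {
      article_id: article.id,
      title: article.title,
      language: article.language,
      tags: article.tags,
      section_title: chunk.section_title,
      text: chunk.text,
    },
  }));
}

// Stand-in embedder for illustration only.
const fakeEmbed = (text) => [text.length % 7, 0.5];

const points = toQdrantPoints(
  {
    id: 'article_001',
    title: 'reseguider',
    language: 'sv',
    tags: ['london'],
    chunks: [
      { chunk_id: 'article_001_01', section_title: 'Introduktion', text: 'Välkommen till London...' },
    ],
  },
  fakeEmbed
);
```

Carrying the tags, language, and article ID in each payload is what makes filtered semantic search (e.g., only Swedish articles tagged "london") possible later.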
by Pablo
**Get Scaleway Server Info with Dynamic Filtering**

**Description**

This workflow is designed for developers, system administrators, and DevOps engineers who need to retrieve and filter Scaleway server information quickly and efficiently. It gathers data from Scaleway instances and baremetal servers across multiple zones and is ideal for:

- Quickly identifying servers by tags, names, public IPs, or zones.
- Automating server status checks in production, staging, or test environments.
- Integrating Scaleway data into broader monitoring or inventory systems.

**High-Level Steps**

- **Webhook Trigger:** Receives an HTTP POST request (with basic authentication) containing the search criteria (search_by and search).
- **Server Data Collection:** Fetches server data from Scaleway's API endpoints for both instances and baremetal servers across the defined zones.
- **Data Processing:** Aggregates and normalizes the fetched data using a Code node with helper functions.
- **Dynamic Filtering:** Routes data to dedicated filtering routines (by tags, name, public_ip, or zone) based on the input criteria.
- **Response:** Returns the filtered data (or an error message) via a webhook response.

**Set Up Steps**

1. **Insert Your Scaleway Token:** In the "Edit Fields" node, replace the placeholder Your personal Scaleway X Auth Token with your Scaleway API token.
2. **Configure Zones:** Review or update the zone lists (ZONE_INSTANCE and ZONE_BAREMETAL) to suit your environment.
3. **Send a Request:** Make a POST request to the workflow's webhook endpoint with a JSON payload, for example:

       { "search_by": "tags", "search": "Apiv1" }

4. **View the Results:** The workflow returns a JSON array of servers matching your criteria, including details like name, tags, public IP, type, state, zone, and user.
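The dynamic-filtering routines boil down to one dispatch over the search_by value. This is an illustrative sketch over the normalized record shape described above (name, tags, public_ip, zone), not the workflow's exact Code-node source:

```javascript
// Sketch of the dynamic filtering: select servers from the normalized
// list by the requested criterion.
function filterServers(servers, searchBy, search) {
  switch (searchBy) {
    case 'tags':
      return servers.filter((s) => (s.tags || []).includes(search));
    case 'name':
      return servers.filter((s) => s.name.includes(search));
    case 'public_ip':
      return servers.filter((s) => s.public_ip === search);
    case 'zone':
      return servers.filter((s) => s.zone === search);
    default:
      throw new Error(`unsupported search_by: ${searchBy}`);
  }
}

const servers = [
  { name: 'web-1', tags: ['Apiv1'], public_ip: '51.15.0.1', zone: 'fr-par-1' },
  { name: 'db-1', tags: ['db'], public_ip: '51.15.0.2', zone: 'fr-par-2' },
];
const matches = filterServers(servers, 'tags', 'Apiv1');
```

Throwing on an unknown search_by is what lets the workflow return an error message through the webhook response instead of silently returning an empty list.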