by Hojjat Jashnniloofar
Overview This n8n template helps you automatically search LinkedIn jobs. It uses AI (Gemini or OpenAI) to match your resume with each job description, write a sample cover letter for each job, and update the job Google Sheet. You can receive daily matched LinkedIn job alerts via Telegram. Prerequisites AI API key from one model such as: Google Gemini OpenAI Telegram Bot Token - Create via @BotFather Google Sheets - OAuth2 credentials Google Drive - OAuth2 credentials Setup 1. Upload your resume Upload your CV in PDF format to Google Drive and configure the Google Drive node to read your resume from the list of Google Drive files. You need to configure Google Drive OAuth2 and grant access to your drive first. You can find useful information about how to configure a Google OAuth2 API key in the n8n documentation. 2. Create a Google Sheet Create a Google Sheets document consisting of two sheets: one sheet to define the job filter criteria and a second sheet to store the job search results. You can download this Google Sheet Template and copy it into your personal space. Then you can add your job filters in the Google Sheet. You can search jobs by keywords, location, remote type, job type and easy apply. You need to configure Google Sheets OAuth2 and grant access to your drive first. 3. Configure Telegram Bot Create a new Telegram bot via @BotFather, insert the API key in the Telegram node, and set TELEGRAM_CHAT_ID to your Telegram ID.
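For illustration, here is a minimal sketch of how a job filter row and the daily Telegram alert text could be modeled; the field names below are assumptions, not the template's exact sheet columns:

```typescript
// Illustrative only: these field names are assumptions, not the template's exact schema.
interface JobFilter {
  keywords: string;
  location: string;
  remoteType: "remote" | "hybrid" | "on-site";
  jobType: "full-time" | "part-time" | "contract";
  easyApply: boolean;
}

interface MatchedJob {
  title: string;
  company: string;
  url: string;
  matchScore: number; // produced by the AI matching step
}

// Build the text of a daily Telegram alert from the matched jobs.
function buildTelegramAlert(filter: JobFilter, jobs: MatchedJob[]): string {
  const header = `Daily LinkedIn job alert for "${filter.keywords}" (${filter.location})`;
  const lines = jobs
    .sort((a, b) => b.matchScore - a.matchScore)
    .map((j) => `- ${j.title} at ${j.company} | match ${j.matchScore}% | ${j.url}`);
  return [header, ...lines].join("\n");
}
```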
by Dat Proto
Introduction This workflow backs up all of your existing workflows to a single GitHub repository. Backup folder names are based on the current backup date and use the default format "yyyy/MM/dd" (set up in the "Create sub path" node). Throughout the backup process, n8n informs the user via Discord with clear messages about the start, success and failure of the backups. Tech Stack The following nodes / services / libraries are used in this workflow: Nodes: Discord: To send messages to the configured channel. n8n: To get all workflows' information. GitHub: To store backup data. Code: To run data comparison (existing vs. latest workflow data). Wait: To avoid the Discord message rate limit. External libraries: Underscore.js: A JavaScript library that provides many common utility functions, helping you save time in the Code node. Guideline Open the "Config" node and set up the following information: repo_owner: Your GitHub username. repo_name: The repository where you want to store workflow backup data. Open the "Create sub path" node and change the naming and path format of the backup folder(s). Set up custom messages in the 3 Discord nodes: Starting Message: n8n informs the user when the workflow starts. Inform Success Flows: After each successful backup, n8n notifies the user. Inform Failed Flows: After each failed backup, n8n notifies the user so they can take appropriate action. Completed Notifications: Finally, the workflow gives the user a summary. Set up the "Schedule Trigger" node to change the default automated backup time. Screenshots Discord output
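As an illustration of the two pieces of logic mentioned above (the date-based sub path and the existing-vs-latest comparison in the Code node), here is a rough sketch using Underscore.js; it is not the template's actual node code:

```typescript
// Rough sketch of the "Create sub path" and comparison logic; names are illustrative.
import _ from "underscore";

// Build a backup folder path like "2024/05/31" from the current date.
function createSubPath(date: Date = new Date()): string {
  const yyyy = date.getFullYear();
  const MM = String(date.getMonth() + 1).padStart(2, "0");
  const dd = String(date.getDate()).padStart(2, "0");
  return `${yyyy}/${MM}/${dd}`;
}

// Decide whether a workflow needs to be re-uploaded to GitHub.
function hasChanged(existing: object | null, latest: object): boolean {
  if (existing === null) return true;  // not backed up yet
  return !_.isEqual(existing, latest); // deep comparison of workflow JSON
}
```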
by Angel Menendez
Who is this for? This workflow is designed for IT teams, service desk personnel, and incident management professionals who need a streamlined way to monitor and report on recent ServiceNow incidents directly within Slack. What problem is this workflow solving? / Use Case Manually monitoring incidents in ServiceNow can be time-consuming, and keeping teams updated about new or specific incidents often involves additional manual effort. This workflow automates the process of querying recent incidents from ServiceNow based on user-defined parameters and delivering formatted results directly to Slack. It ensures faster response times and improved incident visibility. What this workflow does This workflow integrates Slack and ServiceNow to provide an automated system for retrieving and presenting incident details. Slack User Interaction: Users initiate the workflow via a Slack modal form, selecting incident parameters like priority and state. ServiceNow Query: The workflow queries ServiceNow for incidents matching the selected criteria. Results Delivery: Incident results are sent back to Slack as a message formatted using Block Kit. If no results are found, the workflow notifies the user with a detailed message, either in a Slack channel or via direct message. Error Handling: If no channel is selected or any issues arise, the workflow ensures graceful fallback with appropriate notifications. Setup Instructions Slack Setup: Integrate Slack with n8n using a Slack app. Configure the modal form to accept parameters like priority and state. Check out this video for setting up a modal Slack app on YouTube. ServiceNow Integration: Use ServiceNow credentials to connect with n8n. Ensure appropriate permissions for querying incidents. n8n Workflow Configuration: Import this workflow into n8n. Verify all node configurations, particularly those for ServiceNow API queries and Slack outputs. Set up webhook URLs for Slack event handling. Testing: Trigger the workflow from Slack to test modal inputs and incident queries. Confirm the output is correctly formatted and delivered to the intended Slack channel or user. How to Customize this Workflow to Your Needs Modify the ServiceNow query logic to include additional filters or fields. Adjust the Slack Block Kit formatting to match your organization’s preferred notification style. Use conditional logic to add more advanced handling for specific priorities or states. Expand the workflow to include escalation steps, such as notifying a specific team or creating follow-up tasks. Workflow Highlights Slack Modal Form: Allows users to specify search criteria for incidents interactively. Dynamic Results Delivery: Automatically sends results to a Slack channel or direct message based on user input. Error Handling: Provides fallback notifications when no incidents are found or user inputs are incomplete. Customizable Integration: Easily adaptable to fit different organizational needs, including advanced filtering and formatting options.
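The Block Kit formatting step could look roughly like the sketch below; the incident fields assume the standard ServiceNow incident table and are not copied from the template:

```typescript
// A minimal sketch (not the template's exact code) of turning ServiceNow incident
// records into a Slack Block Kit message. Field names assume the standard
// ServiceNow incident table (number, short_description, priority, state).
interface Incident {
  number: string;
  short_description: string;
  priority: string;
  state: string;
}

function incidentsToBlocks(incidents: Incident[]) {
  if (incidents.length === 0) {
    return [{ type: "section", text: { type: "mrkdwn", text: "No matching incidents found." } }];
  }
  return incidents.map((i) => ({
    type: "section",
    text: {
      type: "mrkdwn",
      text: `*${i.number}* ${i.short_description}\nPriority: ${i.priority} | State: ${i.state}`,
    },
  }));
}
```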
by Boriwat Chanruang
Who is this for? This workflow is designed for: Content creators, artists, or hobbyists looking to experiment with AI-generated art. Small business owners or marketers using LEGO-style designs for branding or promotions. Developers or AI enthusiasts wanting to automate image transformations through messaging platforms like LINE. What problem is this workflow solving? Simplifies the process of creating custom AI-generated LEGO-style images. Automates the manual effort of transforming user-uploaded images into AI-generated artwork. Bridges the gap between messaging platforms (LINE) and advanced AI tools (DALL·E). Provides a seamless system for users to upload an image and receive an AI-transformed output without technical expertise. What this workflow does Image Upload via LINE: Users send an image to the LINE chatbot. AI-Powered Prompt Creation: GPT generates a prompt to describe the uploaded image for LEGO-style conversion. AI Image Generation: DALL·E 3 processes the prompt and creates a LEGO-style isometric image. Image Delivery: The generated image is returned to the user in LINE. Setup Prerequisites LINE Developer Account with API credentials. Access to OpenAI API with DALL·E and GPT-4 capabilities. A configured n8n instance to run this workflow. Steps Environment Setup: Add your LINE API Token and OpenAI credentials as environment variables (LINE_API_TOKEN, OPENAI_API_KEY) in n8n. Configure LINE Webhook: Point the LINE webhook to your n8n instance. Connect OpenAI: Set up OpenAI API credentials in the workflow nodes for GPT-4 and DALL·E. Test Workflow: Upload a sample image in LINE and ensure it returns the LEGO-style AI image. How to customize this workflow to your needs Localization: Modify response messages in LINE to fit your audience's language and tone. Integration: Add nodes to send notifications through other platforms like Slack or email. Image Style: Replace the LEGO-style image prompt with other artistic styles or themes. Advanced Use Cases Art Contests: Users upload images and receive AI-enhanced outputs for community voting or branding. Marketing Campaigns: Quickly generate creative visual content for ads and promotions using customer-submitted photos. Education: Use the workflow to teach students about AI-generated art and automation through a hands-on approach. Tips for Optimization Error Handling: Add fallback nodes to handle invalid images or API errors gracefully. Logging: Implement a logging mechanism to track requests and outputs for debugging and analytics. Scalability: Use queue-based systems or cloud scaling to handle large volumes of image requests. Enhancements Add sticky notes in n8n to provide inline instructions for configuring each node. Create a tutorial video or documentation for first-time users to set up and customize the workflow. Include advanced filters to allow users to select from multiple styles beyond LEGO (e.g., pixel art, watercolor). This workflow enables seamless interaction between messaging platforms and advanced AI capabilities, making it highly versatile for various creative and business applications.
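For reference, the image-delivery step boils down to a reply call against the LINE Messaging API, roughly like this sketch (variable names are assumptions about what earlier nodes provide):

```typescript
// Illustrative sketch of replying to the LINE user with the generated image via the
// public LINE Messaging API reply endpoint. replyToken and imageUrl are assumed to
// come from the webhook and DALL·E steps respectively.
async function replyWithImage(replyToken: string, imageUrl: string, channelToken: string) {
  await fetch("https://api.line.me/v2/bot/message/reply", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${channelToken}`,
    },
    body: JSON.stringify({
      replyToken,
      messages: [
        { type: "image", originalContentUrl: imageUrl, previewImageUrl: imageUrl },
      ],
    }),
  });
}
```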
by n8n Team
This workflow sends an OpenAI GPT reply when an email is received from specific email recipients. It then saves the initial email and the GPT response to an automatically generated Google spreadsheet. Subsequent GPT responses will be added to the same spreadsheet. Additionally, when feedback is given for any of the GPT responses, it will be recorded in the spreadsheet, which can then be used later to fine-tune the GPT model. Prerequisites OpenAI credentials Google credentials How it works This workflow is essentially a two-in-one workflow. It triggers from two different nodes and has very different functionality depending on the trigger. The flow triggered from the On email received node is as follows: Triggers on the On email received node. Extracts the email body from the email. Generates a response from the email body using the OpenAI node. Replies to the email sender using the Send reply to recipient node. A feedback link is also included in the email body, which will trigger the On feedback given node. This is used to fine-tune the GPT model. Saves the email body and OpenAI response to a Google Sheet. If a sheet does not exist, it will be created. The flow triggered from the On feedback given node is as follows: Triggers when a feedback link is clicked in the emailed GPT response. The feedback, either positive or negative, for that specific GPT response is then recorded in the Google Sheet.
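A minimal sketch of how the feedback link could be embedded in the emailed reply; the webhook path and query parameter names below are hypothetical, not the template's actual URLs:

```typescript
// Sketch of appending a feedback link to the emailed GPT reply.
// The webhook path and query parameter names are assumptions for illustration.
function buildEmailBody(gptReply: string, responseId: string, webhookBaseUrl: string): string {
  const positive = `${webhookBaseUrl}/webhook/feedback?responseId=${responseId}&rating=positive`;
  const negative = `${webhookBaseUrl}/webhook/feedback?responseId=${responseId}&rating=negative`;
  return [
    gptReply,
    "",
    "Was this reply helpful?",
    `Yes: ${positive}`,
    `No: ${negative}`,
  ].join("\n");
}
```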
by Mario
Purpose This solution enables you to manage all your Notion and Todoist tasks from different workspaces, as well as your calendar events, in a single place. All tasks can be managed in Todoist, and additionally Fantastical can be used to manage scheduled tasks & events all together. Demo & Explanation How it works The realtime sync consists of two workflows, both triggered by a registered webhook from either Notion or Todoist. To avoid overwrites by late-arriving webhook calls, the current task is retrieved from both sides every time. Redis is used to prevent endless loops, since an update in one system triggers another webhook call again. Using the ID of the task, the trigger is locked for 15 seconds. Depending on the detected changes, the other side is updated accordingly. Generally, Notion is treated as the main source. Using an "Obsolete" Status, it is guaranteed that tasks never get deleted entirely by accident. The Todoist ID is stored in the Notion task, so they stay linked together. An additional full sync workflow runs daily to fix inconsistencies, if any occurred, since webhooks cannot be trusted entirely. Since Todoist requires a more complex setup, a tiny workflow helps with activating the webhook. Another tiny workflow helps generate a global config, which is used by all workflows for mapping purposes. Mapping (Notion >> Todoist) Name: Task Name Priority: Priority (1: do first, 2: urgent, 3: important, 4: unset) Due: Date Status: Section (Done: completed, Obsolete: deleted) <page_link>: Description (read-only) Todoist ID: <task_id> Current limitations Changes on the same task cannot be made simultaneously in both systems within a 15-20 second time frame Subtasks are not linked automatically to their parent yet Recurring tasks are not supported yet Task names do not support URLs yet Prerequisites Notion A database must already exist (get a basic template here) with the following properties (case matters!): Text: "Name" Status: "Status", containing at least the options "Backlog", "In progress", "Done", "Obsolete" Select: "Priority", containing the options "do first", "urgent", "important" Date: "Due" Checkbox: "Focus" Text: "Todoist ID" Todoist A project must already exist with the same sections as defined in the Status in Notion (except Done and Obsolete) Redis Create a free Redis Cloud instance or self-host Setup The setup involves quite a lot of steps, yet many of them can be automated for business-internal purposes. Just follow the video or do the following steps: Set up credentials for Notion (access token), Todoist (access token) and Redis - you can also create empty credentials and populate these later during further setup Clone this workflow by clicking the "Use workflow" button and then choosing your n8n instance - otherwise you need to map the credentials of many nodes. Follow the instructions described within the bundle of sticky notes on the top left of the workflow How to use You can apply changes (create, update, delete) to tasks both in Notion and Todoist, which then get synced over within a couple of seconds (this is handled by the differential realtime sync). The daily running full sync resolves possible discrepancies in Todoist and sends a summary via email if anything needed to be updated. In case that contains an unintended change, you can jump to the task from the email directly to fix it manually.
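The 15-second Redis lock described above can be pictured as a simple SET NX EX call keyed by the task ID; this sketch uses the node-redis client and an assumed key naming scheme, not the template's exact code:

```typescript
// Minimal sketch of the loop-prevention idea: before processing a webhook, try to
// acquire a short-lived Redis lock keyed by the task ID. If the lock already exists,
// the event was most likely caused by our own recent update and is skipped.
import { createClient } from "redis";

async function shouldProcess(taskId: string): Promise<boolean> {
  const redis = createClient({ url: process.env.REDIS_URL });
  await redis.connect();
  try {
    // SET key NX EX 15: succeeds only if the key does not exist yet.
    const acquired = await redis.set(`sync-lock:${taskId}`, "1", { NX: true, EX: 15 });
    return acquired === "OK";
  } finally {
    await redis.disconnect();
  }
}
```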
by Marcelo Abreu
What this workflow does Runs automatically every Monday morning at 8 AM Collects your Google Search Console data from the last month and the month before that for a given URL (the date range is configurable) Formats the data, aggregating it by date, query, page, device and country Generates AI-driven analysis and insights on your results, providing actionable recommendations Renders the report as a visually appealing PDF with charts and tables Sends the report via Slack (you can also add email or WhatsApp) A sample of the first page of the report: Setup Guide Create an account on pdforge and use the pre-made Meta Ads template. Connect Google OAuth2 (guide on the template), OpenAI and Slack to n8n Set your site URL and date range (optional) Customize the scheduling date and time Requirements Google OAuth2 (via Google Search Console): Documentation pdforge access: Create an account AI API access (e.g. via OpenAI, Anthropic, Google or Ollama) Slack access (via OAuth2): Documentation Feel free to contact me via LinkedIn if you have any questions! 👋🏻
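For context, the underlying Search Console request aggregating by date, query, page, device and country looks roughly like this; the site URL and dates are placeholders, and OAuth2 is handled by the n8n credential:

```typescript
// Sketch of the kind of Search Analytics query the workflow's Google node would issue.
// All values below are placeholders for illustration.
const siteUrl = "https://example.com/";

const searchAnalyticsRequest = {
  url: `https://www.googleapis.com/webmasters/v3/sites/${encodeURIComponent(siteUrl)}/searchAnalytics/query`,
  method: "POST" as const,
  body: {
    startDate: "2024-04-01", // month before last
    endDate: "2024-05-31",   // last month
    dimensions: ["date", "query", "page", "device", "country"],
    rowLimit: 25000,
  },
};
```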
by Oneclick AI Squad
This automated n8n workflow sets up a complete MERN Stack development environment on a Linux server by installing core technologies, development tools, package managers, global npm packages, deployment tools, build tools, and security configurations. It creates a dedicated developer user and configures essential settings for MERN projects. What is MERN Stack Setup? MERN Stack setup involves installing and configuring Node.js, MongoDB, Express.js, and React, along with additional tools and packages, to create a fully functional development environment for building MERN-based applications on a Linux system. Good to Know The workflow triggers manually and takes 10-15 minutes to complete A dedicated developer user with proper permissions is created Firewall configuration secures development ports The environment variables template is provided All tools are installed and ready for immediate use How It Works Set Parameters - Configures server host, user, password, setup type, Node.js version, MongoDB version, username, and user password System Preparation - Prepares the system for installation Install Node.js - Installs Node.js (v20 by default) with npm Install MongoDB - Installs MongoDB (v7.0 by default) with Compass & Shell Install Git & GitHub CLI - Installs Git and GitHub CLI Install Development Tools - Installs VS Code, Docker, Docker Compose, Postman, Nginx, Redis, and PostgreSQL Create Dev User - Creates a development user account Install Additional Tools - Installs package managers (npm, Yarn, pnpm), global npm packages, deployment tools, build tools, and security tools Final Configuration - Configures firewall, SSH keys, and environment variables template Setup Complete - Marks the completion of the setup process How to Use Import the workflow into n8n Configure parameters in the Set Parameters node (server_host, server_user, server_password, setup_type, node_version, mongodb_version, username, user_password) Run the workflow SSH into the server as the new developer user Start building MERN applications Requirements Linux server access with SSH Administrative privileges (root access) Customizing This Workflow Adjust the setup_type parameter to customize the installation scope Modify node_version or mongodb_version to use different versions Change the username and user_password for the developer account
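An example of the values you might enter in the Set Parameters node; every value below is a placeholder to replace with your own server details before running:

```typescript
// Illustrative parameter set for the "Set Parameters" node; all values are placeholders.
const setupParameters = {
  server_host: "203.0.113.10",            // Linux server IP or hostname
  server_user: "root",                    // user with administrative privileges
  server_password: "<your-root-password>",
  setup_type: "full",                     // assumed value: controls installation scope
  node_version: "20",                     // Node.js major version (v20 by default)
  mongodb_version: "7.0",                 // MongoDB version (7.0 by default)
  username: "developer",                  // dedicated dev user to create
  user_password: "<strong-password-for-dev-user>",
};
```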
by Juan Carlos Cavero Gracia
This automation template turns any long video into multiple viral-ready short clips and auto-schedules them to TikTok, Instagram Reels, and YouTube Shorts. It works with both vertical and horizontal inputs and respects the original input resolution (no unnecessary upscaling), cropping or letterboxing intelligently when needed. The workflow automatically extracts between 3 and 6 clips (based on video length and the most engaging segments) and schedules one short per consecutive day—e.g., 3 clips → the next 3 days, 6 clips → the next 6 days. Note: This workflow uses OpenAI Whisper for word-level transcription, Google’s Gemini for clip selection and metadata, and Upload-Post’s FFmpeg API for GPU-accelerated cutting/cropping and social scheduling. You can use the same Upload-Post API token for both FFmpeg jobs and publishing uploads. Upload-Post also offers a generous free trial with no credit card required. Who Is This For? Creators & Editors: Batch-convert long talks/podcasts into daily Shorts/Reels/TikToks. Agencies & Social Teams: Turn webinars/interviews into a reliable short-form stream. Brands & Founders: Maintain a steady posting cadence with minimal hands-on editing. What Problem Does This Workflow Solve? Manual clipping is slow and inconsistent. This workflow: Finds Hooks Automatically: AI picks 3–6 high-retention segments from transcript + timestamps (count scales with video length/quality). Cuts Cleanly: Absolute-second FFmpeg timing to avoid mid-word cuts. Vertical & Horizontal Friendly: Handles both orientations and respects source resolution. Schedules for You: Posts one clip per day on consecutive days. How It Works Form Upload: Submit your long video. Audio Extraction: FFmpeg job extracts audio for accurate ASR. Whisper Transcription: Word-level timestamps enable precise clipping. AI Clip Mining (Gemini): Detects 3–6 “viral” moments (15–60s) and generates titles/descriptions. Cut & Crop (FFmpeg): GPU pipeline produces clean clips; preserves input resolution/orientation when possible and crops/pads appropriately for target platforms. Status & Download: Polls job status and retrieves the final clips. Auto-Scheduling (Consecutive Days): Schedules one short per day starting tomorrow, for as many days as clips were produced (e.g., 3 clips → 3 days, 6 clips → 6 days) at a configurable time (default 20:00 Europe/Madrid). Setup OpenAI (Whisper): Add your OpenAI API credentials. Google Gemini: Add Gemini credentials used by the AI Agent node. Upload-Post (free trial, no credit card required): Generate your API token at https://app.upload-post.com/, connect your social media accounts, and add your API token credentials in n8n (the same token works for FFmpeg jobs and publishing). Scheduling: Adjust posting time/intervals and timezone (Europe/Madrid by default). Metadata Mapping: Titles/descriptions are auto-generated per platform; tweak as needed. Requirements Accounts: n8n, OpenAI, Google (Gemini), Upload-Post, and social platform connections. API Keys: OpenAI token, Gemini credentials, Upload-Post token. Budget: Whisper + Gemini inference + FFmpeg compute + optional posting costs. Features Word-Accurate Cuts: Absolute-second timecodes with subtle pre/post-roll. Orientation-Aware: Supports vertical and horizontal inputs; preserves source resolution where possible. Platform-Optimized Output: 9:16-ready delivery with smart crop/pad behavior. Consecutive-Day Scheduler: 3–6 clips → 3–6 consecutive posting days, automatically.
Retry & Polling: Built-in waits and status checks for robust processing. Modular: Swap models, adjust clip count/length, or add/remove platforms quickly. Turn long-form video into a consistent sequence of Shorts/Reels/TikToks—automatically, day after day, while respecting your source resolution.
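The consecutive-day scheduling can be pictured with a small sketch like the one below; it uses a simplified date calculation and the default 20:00 posting time mentioned above, not the workflow's exact scheduling code:

```typescript
// Minimal sketch of the consecutive-day scheduling idea: given N generated clips,
// produce one publish date per clip starting tomorrow at a fixed local time.
// Simplified: uses the server's local time and ignores DST edge cases.
function scheduleDates(clipCount: number, hour = 20): string[] {
  const dates: string[] = [];
  for (let i = 1; i <= clipCount; i++) {
    const d = new Date();
    d.setDate(d.getDate() + i); // tomorrow, the day after, ...
    d.setHours(hour, 0, 0, 0);  // fixed posting time
    dates.push(d.toISOString());
  }
  return dates;
}

// e.g. 3 clips -> publish dates on the next 3 days
console.log(scheduleDates(3));
```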
by Sarfaraz Muhammad Sajib
This automation workflow captures incoming chat messages from your Tawk.to live chat widget and sends alert emails via Gmail to notify your support team instantly. It is designed to help you respond promptly to visitors and improve your customer support experience. Prerequisites Tawk.to account: You must have an active Tawk.to account with a configured live chat widget on your website. Gmail account: A Gmail account with API access enabled and configured in n8n for sending emails. n8n instance: Access to an n8n workflow automation instance where you will import and configure this workflow. Step-by-Step Setup Instructions 1. Configure Tawk.to Webhook Log in to your Tawk.to dashboard. Navigate to Administration > Webhooks. Click Add Webhook and enter the following: URL: Your n8n webhook URL from the Receive Tawk.to Request node (e.g., https://your-n8n-instance.com/webhook/a4bf95cd-a30a-4ae0-bd2a-6d96e6cca3b4) Method: POST Events: Select the chat message event (e.g., Visitor Message or Chat Message Received) Save the webhook configuration. 2. Configure Gmail Credentials in n8n In your n8n instance, go to Credentials. Add a new Gmail OAuth2 credential: Follow Google's instructions to create a project, enable Gmail API, and obtain client ID and secret. Authenticate and authorize n8n to send emails via your Gmail account. 3. Import and Activate Workflow Import the provided workflow JSON into n8n. Verify the Receive Tawk.to Request webhook node path matches the webhook URL configured in Tawk.to. Enter the email address you want the alerts sent to in the Send alert email node’s sendTo parameter. Activate the workflow. Workflow Explanation Receive Tawk.to Request: This webhook node listens for POST requests from Tawk.to containing chat message data. Format the message: Extracts relevant data from the incoming payload such as chat ID, visitor name, country, and message text, and assigns them to new fields for easy use downstream. Send alert email: Uses Gmail node to send a notification email to your support team with all relevant chat details formatted in a clear, concise text email. Customization Guidance Email Recipient: Update the sendTo field in the Send alert email node to specify your support team’s email address. Email Content: Modify the message template in the Send alert email node’s message parameter to suit your tone or include additional details like timestamps or chat URLs. Additional Processing: You can extend the workflow by adding nodes for logging chats, triggering Slack notifications, or storing messages in a database. By following these instructions, your support team will receive immediate email alerts whenever a new chat message arrives on your website, improving response times and customer satisfaction.
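The "Format the message" step can be pictured roughly like this; the Tawk.to payload fields shown are assumptions, so check the actual payload your webhook receives:

```typescript
// Sketch of the "Format the message" step. The exact shape of the Tawk.to webhook
// payload may differ by event type, so treat these field paths as assumptions.
interface TawkWebhookBody {
  chatId?: string;
  visitor?: { name?: string; country?: string };
  message?: { text?: string };
}

function formatAlert(body: TawkWebhookBody) {
  return {
    chatId: body.chatId ?? "unknown",
    visitorName: body.visitor?.name ?? "Anonymous visitor",
    visitorCountry: body.visitor?.country ?? "unknown",
    messageText: body.message?.text ?? "(empty message)",
  };
}
```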
by David Ashby
🛠️ SIGNL4 Tool MCP Server Complete MCP server exposing all SIGNL4 Tool operations to AI agents. Zero configuration needed - all 2 operations pre-built. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community Import this workflow into your n8n instance Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works • MCP Trigger: Serves as your server endpoint for AI agent requests • Tool Nodes: Pre-configured for every SIGNL4 Tool operation • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Uses official n8n SIGNL4 Tool tool with full error handling 📋 Available Operations (2 total) Every possible SIGNL4 Tool operation is included: 🔧 Alert (2 operations) • Send an alert • Resolve an alert 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Resource IDs and identifiers • Search queries and filters • Content and data payloads • Configuration options Response Format: Native SIGNL4 Tool API responses with full data structure Error Handling: Built-in n8n error management and retry logic 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • Other n8n Workflows: Call MCP tools from any workflow • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Complete Coverage: Every SIGNL4 Tool operation available • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n error handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Daniel Nolde
What it is: An automation to plan→draft→finalize and publish your textual blog post ideas to your WordPress blog Works in stages and hands control back to you in between those stages You can use a Google Spreadsheet for planning topics and configuring LLM models and prompts What it does: plans→drafts→finalizes blog post topics you specify in a Google Spreadsheet, using an LLM with prompts that are also configured in that spreadsheet (even which model to use) saves the results in the corresponding columns of the "Schedule" sheet in the spreadsheet hands control back to the user for inspecting or changing the results and for setting the next "Action" for the workflow Finally publishes the blog post to your WordPress instance Limitations Probably slightly over-engineered ;-) No media generation yet Some LLM models don't work because of their output format How it works: The workflow is triggered manually or scheduled every hour It ingests a Google Spreadsheet to get the config for prompts/context etc. and the blog topics with their status and next action Depending on each blog topic's "Status" and "Action", it then either uses an LLM for the next action ("plan"→"draft"→"final" actions) or publishes the written content to your WordPress instance ("publish" actions) Set up steps: Import the workflow Make your own copy of the Google Spreadsheet Update the credentials using your individual credentials for: Google Spreadsheets OpenRouter Edit the "Settings" node and enter your individual values for Your spreadsheet copy URL Your WordPress blog URL Your WordPress blog username Your WordPress blog app password (a 4x4 alphanumeric sequence) that you probably have to create first; your WordPress user has to have two-factor authentication enabled for that. In your own copy of the spreadsheet: individualize the "Config" sheet's "Value" column for the prompts/context/etc. Populate the "Schedule" sheet with at least one line in which you specify a "Topic", a "Scheduled" date (YYYY-MM-DD HH:mm:ss), a "Status" of "idea", and an "Action" of "plan" (to kick off that action)
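A rough sketch of the Status/Action routing described above; the column names mirror the "Schedule" sheet, but the status values and transition rules are illustrative assumptions rather than the template's exact logic:

```typescript
// Illustrative sketch of the Status/Action state machine: on each hourly run, every
// row is either sent to the LLM, published to WordPress, or skipped.
type Action = "plan" | "draft" | "final" | "publish";

interface ScheduleRow {
  Topic: string;
  Scheduled: string;   // "YYYY-MM-DD HH:mm:ss"
  Status: string;      // e.g. "idea" (further status values are assumptions)
  Action: Action | ""; // empty means the user has not requested the next step yet
}

function nextStep(row: ScheduleRow): "llm" | "wordpress" | "skip" {
  if (!row.Action) return "skip";                        // nothing requested
  if (new Date(row.Scheduled) > new Date()) return "skip"; // not due yet
  return row.Action === "publish" ? "wordpress" : "llm";   // plan/draft/final go to the LLM
}
```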