by EoCi - Mr.Eo
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Introduction

Tired of spending time crafting the perfect AI prompt? This workflow takes your simple ideas like "write a blog post" and automatically transforms them into detailed, structured prompts that actually work.

🎯 What This Does

Automatically converts simple user prompts like "write a blog post" into structured, professional AI prompts with metadata, variables, and clear instructions. Perfect for anyone, in any industry or organization, who wants to eliminate prompt-engineering work.

🔄 How It Works

- Google Sheets Trigger monitors for new prompts
- AI Enhancement Pipeline uses Gemini + Groq to add structure & context
- Field Completion auto-generates missing metadata (topic, categories)
- Quality Assurance validates & stores complete results

🚀 Setup Requirements

- **API keys**: Gemini and Groq for the AI steps, Telegram for notifications
- **Google Sheets**: 2 sheets (OriginalPrompts, ModifiedPrompts)
- **5 minutes setup time**: detailed instructions in the blue sticky notes

Set up steps

Setup time: < 5 minutes

1. Create a Google Spreadsheet with two tabs (sheets): OriginalPrompts and ModifiedPrompts.
   - OriginalPrompts columns: Original Prompt ID | Model | Original Prompt | Created Time
   - ModifiedPrompts columns (example): Modified Prompt ID | Original Prompt ID | Topic | Topic Categories | Modified Prompt | Prompt Title | Prompt Type | Model Used | Improvement Notes | Updated Time | Created Time | isProcessed
2. Add and attach credentials in n8n:
   - Google Sheets OAuth2 (required for fetching new prompts)
   - Gemini and Groq API credentials (required for the AI Agent)
   - Telegram credential (required for notifications)
3. Save & activate the workflow.
4. Add a test row to OriginalPrompts, for example: Original Prompt ID: 1, Original Prompt: "Write a short blog post about AI ethics".
5. Wait ~30–60s and check ModifiedPrompts for the enhanced output.

That's it! Once it's configured, drop short ideas into your sheet and get professional prompts back automatically. Your prompts get better, your AI outputs improve, and you save hours of manual prompt crafting.
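As a rough illustration of how the enhanced output can be shaped for the ModifiedPrompts append step, a Code node before the Google Sheets node might look like the minimal sketch below. All input field names (topic, modifiedPrompt, and so on) are assumptions for illustration, not values the template guarantees:

```javascript
// Minimal sketch: map the AI agent's output onto the ModifiedPrompts columns.
// Input property names are assumptions, not part of the template.
const now = new Date().toISOString();
return $input.all().map((item, i) => ({
  json: {
    'Modified Prompt ID': `${Date.now()}-${i}`,      // simple unique ID (assumed scheme)
    'Original Prompt ID': item.json.originalPromptId,
    'Topic': item.json.topic,
    'Topic Categories': item.json.topicCategories,
    'Modified Prompt': item.json.modifiedPrompt,
    'Prompt Title': item.json.promptTitle,
    'Prompt Type': item.json.promptType,
    'Model Used': item.json.modelUsed,
    'Improvement Notes': item.json.notes,
    'Updated Time': now,
    'Created Time': now,
    'isProcessed': true,
  },
}));
```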
by AiAgent
Disclaimer

This workflow contains a community node.

What It Does

Leverage the power of GPT-4o to seamlessly summarize a scientific research PDF of your choosing. By simply downloading a PDF of a scientific research article into a folder on your computer, this powerful workflow will automatically read the article and produce a detailed summary. The workflow then saves this summary onto your computer for future convenience.

Who Is This For?

This workflow is the perfect tool for all types of self-learners trying to improve their knowledge base as efficiently as possible. It is a way to rapidly build your knowledge from peer-reviewed scientific articles in a quick and efficient way. It provides a more detailed summary of a scientific research article than a typical abstract, while taking a fraction of the time it would take to read the entire paper. It gives you enough information to have a firm grasp of the article's contents and to decide whether you would like to dive deeper into it. This workflow is perfect for professionals who need to stay current on the most recent literature in their field, as well as for self-learners who enjoy diving deep into a specific topic. It can aid anyone performing academic research, a literature review, or anyone building their knowledge of a field from peer-reviewed sources.

How It Works

Utilizing the power of GPT-4o, the moment you save a PDF of a scientific research article to a predesignated folder, the workflow begins to read the article and produces a summary that is saved into another designated folder on your computer via the following steps:

1. Search the internet and your favorite journal databases for a scientific article that interests you.
2. With the n8n workflow activated, download a PDF of the scientific article and save it to the designated folder. Saving the article to this folder triggers the workflow.
3. The workflow extracts the contents of the PDF and passes the data to an AI agent powered by GPT-4o.
4. The AI agent produces a detailed summary of the scientific article, including:
   - Introduction heading discussing the importance of the article and the specific aims of the study.
   - Methods heading detailing how the study was conducted, what variables were evaluated, what the inclusion and exclusion criteria were, and what the measurement standards were.
   - Results heading providing the specific data reported in the study for all variables tested, as well as the statistical significance of each result.
   - Summary heading evaluating the importance of the results, how they compare to other scientific articles in the same field, and the authors' recommendations on how to interpret the data.
   - Conclusion heading summarizing the strengths and weaknesses of the article and identifying gaps in knowledge on the subject that would make good topics for future studies.
5. After the AI agent has completed its summary, the workflow converts the summary to text and saves it to a designated folder on your computer for future viewing.

Set Up Steps

Create a folder on your computer where you would like to save your scientific article PDFs, then copy the path to this folder into the Local File Trigger node.
Obtain an OpenAI API key from platform.openai.com/api-keys and connect it to the OpenAI Chat Model attached to the Summarizer Tools Agent. You will also need to fund your OpenAI account; running the workflow with GPT-4o costs roughly $0.01 per article. Finally, create a folder on your computer where you wish to have the summaries saved, and copy the path to this folder into the Save to Folder node.

Customization

This workflow is easy to customize to a specific area of research to provide the best possible summary. If you have expertise in a field of study, you can customize the output to provide data at a higher level of understanding for that field. For example, if you are a marine biologist, you can change the text prompt in the summarizer tool from "You are a research expert who is providing data to another researcher." to "You are a marine biologist expert who is providing data to another marine biologist."

Disclaimer

If the PDF is too large, OpenAI will not be able to summarize it and will return an error saying you have reached your request limit.
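As a small extension of the Save to Folder step, a Code node could derive the summary's file name from the watched PDF's path. This is a minimal sketch; the "path" and "output" property names are assumptions about the upstream nodes' output, not guaranteed by the template:

```javascript
// Minimal sketch: name the summary file after the source PDF.
const item = $input.first();
const pdfPath = item.json.path || '';                  // e.g. /Users/me/papers/article.pdf
const base = pdfPath.split('/').pop().replace(/\.pdf$/i, '');
return [{
  json: {
    fileName: `${base || 'article'}-summary.txt`,
    summary: item.json.output,                         // the AI agent's summary text (assumed field)
  },
}];
```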
by Friedemann Schuetz
Welcome to my AI Social Media Caption Creator Workflow!

What this workflow does

This workflow automatically creates a social media post caption for an editorial plan in Airtable, using background information on the target group, tonality, etc. that is also stored in Airtable. The workflow runs in the following sequence:

1. Airtable trigger (scans for new records every minute).
2. Wait 1 minute, so the Airtable record creator has time to fill in the Briefing field.
3. Retrieve the Airtable record data.
4. AI Agent writes a caption for a social media post. The agent is instructed to use the background information stored in Airtable (such as target group, tonality, etc.) to create the post.
5. Format the output and assign it to the correct field in Airtable (a sketch of this step follows below).
6. Post the caption back into the Airtable record.

Requirements

- Airtable database: Documentation
- AI API access (e.g. via OpenAI, Anthropic, Google, or Ollama)

Example of an editorial plan in Airtable: Editorial Plan example in Airtable

For this workflow you need the Airtable fields "created_at", "Briefing" and "SoMe_Text_AI".

Feel free to contact me via LinkedIn if you have any questions!
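For step 5 above, the formatting could look roughly like this in a Code node. The node name "Airtable" and the "output" property are assumptions for illustration:

```javascript
// Minimal sketch: trim the AI agent's caption and map it onto SoMe_Text_AI.
const caption = ($input.first().json.output || '').trim();
return [{
  json: {
    id: $('Airtable').first().json.id,  // record ID fetched earlier (node name assumed)
    SoMe_Text_AI: caption,
  },
}];
```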
by CustomJS
This n8n template demonstrates how to download multiple PDF files from public URLs and merge them into a single PDF using the PDF Toolkit from www.customjs.space (@custom-js/n8n-nodes-pdf-toolkit).

Notice

Community nodes can only be installed on self-hosted instances of n8n.

What this workflow does

- **Downloads** each PDF using an HTTP Request node.
- **Populates** the files into an array with n8n's Merge node.
- **Merges** all downloaded PDFs using the Merge PDF node from @custom-js/n8n-nodes-pdf-toolkit.
- **Writes** the final merged PDF to disk.

Requirements

- **Self-hosted** n8n instance
- **CustomJS API key** for merging multiple PDF files
- **PDF files to be merged**

Workflow Steps:

1. Manual Trigger: runs on user interaction.
2. HTTP Request nodes for PDF download: pass the URLs of the PDF files to merge.
3. Merge node for array population: populates the two files into an array.
4. Merge PDF files: uses the CustomJS node to merge the incoming PDF files into a single PDF file. If the size of the PDF files exceeds 6MB, you can simply pass an array of URLs for the PDF files, as sketched below.

Usage

- Get an API key from CustomJS:
  - Sign up to the CustomJS platform.
  - Navigate to your profile page.
  - Press the "Show" button to get your API key.
- Set credentials for the CustomJS API in n8n: copy and paste the API key generated by CustomJS.
- Design the workflow:
  - A Manual Trigger for starting the workflow.
  - Two HTTP Request nodes for downloading PDF files.
  - A Merge node for populating the files as an array.
  - A Merge PDFs node for merging the files.
  - A Write to Disk node for saving the merged PDF file.

You can replace the triggering and result-returning logic. For example, you can trigger this workflow by calling a webhook and get the result as the webhook response: simply replace the Manual Trigger and Write to Disk nodes.

Perfect for

- Bundling reports or invoices.
- Generating document sets from external sources.
- Automating PDF handling without writing custom code.
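For the over-6MB case mentioned in step 4, a Code node could supply the URL array instead of binary data. The URLs below are purely illustrative:

```javascript
// Minimal sketch: hand the Merge PDF node an array of URLs instead of binaries.
return [{
  json: {
    urls: [
      'https://example.com/report-part1.pdf',
      'https://example.com/report-part2.pdf',
    ],
  },
}];
```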
by Zacharia Kimotho
Create new ClickUp tasks from Slack commands

This workflow makes it easy to create new tasks in ClickUp from normal Slack messages using a simple Slack slash command. For example, we can have a slash command such as:

/newTask Set task to update new contacts on CRM and assign them to the sales team

This creates a new task in ClickUp with the same title and description. For most teams, getting tasks from Slack into ClickUp involves manually entering the new tasks into ClickUp. What if we could do this with a simple slash command?

Step 1

Create an endpoint URL for your Slack command by creating an events API app at https://api.slack.com/apps/

Step 2

Define the endpoint for your URL. Create a new webhook endpoint in your n8n instance with a POST method and paste the endpoint URL into your events API. This will send all slash commands associated with the slash to the desired endpoint.

Step 3

Log in to the Slack API (https://api.slack.com/) and create an application. This is the app we use to run all automation and commands from Slack. Once your app is ready, navigate to Slash Commands and create a new command. This includes the command, the webhook URL, and a description of what the slash command is all about. Now that this is saved, you can do a test by sending a demo task to your endpoint (a sketch of how the incoming payload can be parsed follows below).

Once you have confirmed that the slash command is working with the webhook, create new ClickUp API credentials that can be used to create tasks in ClickUp. This workflow creates a new task with the start dates in ClickUp that can be assigned to the respective team members.

More details about the document setup can be found in the document below.

Happy productivity!
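As a rough sketch of the parsing step, a Code node after the webhook could turn the slash-command text into a ClickUp task name and description. The body.text and body.user_name locations assume n8n's default webhook output for Slack's form-encoded POST:

```javascript
// Minimal sketch: build ClickUp task fields from the Slack slash-command payload.
const body = $input.first().json.body || {};
const text = (body.text || '').trim();    // e.g. "Set task to update new contacts on CRM..."
return [{
  json: {
    name: text.slice(0, 80),              // task title, trimmed to a sane length
    description: `${text}\n\nRequested by @${body.user_name || 'unknown'} in Slack.`,
  },
}];
```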
by CustomJS
This n8n template demonstrates how to convert HTML into a PDF, compress the generated PDF, and return it as a binary response using the PDF Toolkit from www.customjs.space (@custom-js/n8n-nodes-pdf-toolkit).

Notice

Community nodes can only be installed on self-hosted instances of n8n.

What this workflow does

- **Converts** the requested HTML to PDF.
- **Compresses** the PDF file.
- **Uses** a Code node to handle URLs pointing to PDF files if they exceed 6MB.
- **Compresses** the PDF pages.

Requirements

- **Self-hosted** n8n instance
- **CustomJS API key** for compressing PDF files
- **HTML** data to convert to PDF
- **Code node** for handling URLs that point to PDF files

Workflow Steps:

1. Manual Trigger: runs on user interaction.
2. HTML to PDF: requests the HTML data and converts it to PDF, or requests the PDF from a URL.
3. Compress Pages from PDF: compresses the PDF as a binary file.

Usage

- Get an API key from CustomJS:
  - Sign up to the CustomJS platform.
  - Navigate to your profile page.
  - Press the "Show" button to get your API key.
- Set credentials for the CustomJS API in n8n: copy and paste the API key generated by CustomJS.
- Design the workflow:
  - A Manual Trigger for starting the workflow.
  - HTTP Request nodes for downloading PDF files.
  - A Code node for handling URLs that point to PDF files (sketched below).
  - Compress the PDF files.

You can replace the triggering and result-returning logic. For example, you can trigger this workflow by calling a webhook and get the result as the webhook response: simply replace the Manual Trigger and Write to Disk nodes.
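The Code node that decides between binary and URL input could look roughly like this. The binary property name "data" and the json.pdfUrl field are assumptions for illustration:

```javascript
// Minimal sketch: if the generated PDF exceeds 6MB, pass a URL to the
// compressor instead of the binary itself.
const item = $input.first();
const bytes = item.binary?.data?.data
  ? Buffer.from(item.binary.data.data, 'base64').length
  : 0;
if (bytes > 6 * 1024 * 1024) {
  return [{ json: { url: item.json.pdfUrl } }];  // compressor fetches the PDF itself
}
return [item];
```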
by Tom
This workflow shows a low-code approach to parsing an XML file and storing its contents in a Google Sheets spreadsheet.

To run the workflow:

1. Make sure you are running n8n 0.197 or newer.
2. Have n8n authenticated with Google Sheets.

How it's done:

This workflow first downloads an example file using the HTTP Request node and reads this file using the XML node. It then runs the Item Lists node to split out the individual food items from the example file. It then branches off into a separate branch that creates a new spreadsheet file using the Google Sheets node. To read the column names, we're using the Object.keys() method inside a Set node. Once the spreadsheet is created (the workflow waits for this using the Merge node), the data is appended to the newly created sheet (again using the Google Sheets node).
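To make the Object.keys() idea concrete, here is the same pattern written as a Code node (a sketch of the pattern, not the template's exact Set node expression): it emits the first food item's property names as the spreadsheet's column headers.

```javascript
// Minimal sketch: derive column names from the first item's JSON keys.
const first = $input.first().json;
return [{ json: { columns: Object.keys(first).join(',') } }];
```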
by Tom
This workflow identifies new rows in Google Sheets using a separate column to keep track of already processed rows. For this approach to work, the sheet needs to meet two requirements:

1. A unique identifier for each row is required.
2. A column used to differentiate new/processed rows is present.

Our example sheet looks like this:

So the row identifier is named ID, and the new/processed column is called Processed. Update the workflow accordingly if your columns have different names. Now when the workflow runs, it discovers all three rows as new. After processing them, it adds a timestamp to the Processed column:

The next time the workflow is executed, it skips the existing rows and only processes newly added data:
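For readers who prefer code, a rough equivalent of the new/processed check in a Code node could look like this (column names match the example sheet above):

```javascript
// Minimal sketch: keep only rows whose Processed column is still empty.
return $input.all().filter(item => !item.json.Processed);
```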
by Polina Medvedieva
Who is this template for

This template is for marketers, SEO specialists, or content managers who need to analyze keywords to identify which ones contain references to a specific area or topic, in this case, IT software, services, tools, or apps.

Use case

Automating the process of scanning a large list of keywords to determine if they reference known IT products or services (like ServiceNow, Salesforce, etc.), and updating a Google Sheet with this classification. This helps in categorizing keywords for targeted SEO campaigns, content creation, or market analysis.

How this workflow works

1. Fetches keyword data from a Google Sheet.
2. Processes keywords in batches to prevent rate limiting.
3. Uses an AI agent (OpenAI) to analyze each keyword and determine if it contains a reference to an IT service/software.
4. Updates the original Google Sheet with the results in a "Service?" column.
5. Continues processing until all keywords are analyzed.

Set up steps

1. Connect your Google Sheets account credentials.
2. Set the Google Sheet document ID (currently using "Copy of Sheet1 1").
3. Configure the OpenAI API credentials for the AI agent.
4. Adjust the batch size (currently 6) if needed based on your API rate limits.
5. Ensure the Google Sheet has the required columns: "Number", "Keyword", and "Service?".

The AI agent's prompt is highly customizable to match different identification needs. For example, instead of looking for IT software/services, you could modify the prompt to identify:

- Industry-specific terms (healthcare, finance, education)
- Geographic references (cities, countries, regions)
- Product categories (electronics, clothing, food)
- Competitor brand mentions

Here's how you could modify the prompt for different use cases:

```
// For identifying educational content keywords
"Check the keyword I provided and define if this keyword relates to educational content, courses, or learning materials and return yes or no."

// For identifying local service keywords
"Check the keyword I provided and determine if it contains location-specific terms (city names, neighborhoods, regions) that suggest local service intent and return yes or no."

// For identifying competitor mentions
"Check the keyword I provided and determine if it mentions any of our competitors (CompetitorA, CompetitorB, CompetitorC) and return yes or no."
```
by Bela
Sync your Google Sheets data with your Postgres database table, requiring minimal adjustments. Follow these steps:

1. Retrieve data: pull data from Google Sheets and PostgreSQL.
2. Compare datasets: identify differences, focusing on new or updated entries (see the sketch below).
3. Update PostgreSQL: apply changes to ensure both platforms mirror each other.

Automate this process to regularly synchronize data. Before starting, grant the necessary access to both Google Sheets and PostgreSQL, and specify the data details for synchronization. This streamlined workflow enhances data consistency across platforms. This example is a one-way synchronization from Google Sheets into your Postgres database. With small adjustments, you can make it run the other way around, or two-way.
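The compare step could be written as a Code node along these lines. The node names ("Google Sheets", "Postgres") and the "id" key are assumptions for illustration:

```javascript
// Minimal sketch: find sheet rows that are new or changed relative to Postgres.
const dbRows = new Map($('Postgres').all().map(i => [String(i.json.id), i.json]));
const changed = $('Google Sheets').all()
  .map(i => i.json)
  .filter(row => {
    const existing = dbRows.get(String(row.id));
    // Naive full-row comparison; fine for a sketch, order-sensitive in general.
    return !existing || JSON.stringify(existing) !== JSON.stringify(row);
  });
return changed.map(row => ({ json: row }));
```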
by Nskha
An innovative n8n workflow that monitors cryptocurrency prices on Binance, identifies significant market movements, and sends customized alerts through Telegram. Ideal for traders and enthusiasts seeking real-time market insights.

How It Works

1. Trigger options: choose between a manual trigger or a scheduled trigger to start the workflow.
2. Fetch market data: the 'Binance 24h Price Change' node retrieves the latest 24-hour price changes for cryptocurrencies from Binance.
3. Identify significant changes: the 'Filter by 10% Change rate' node keeps only cryptocurrencies whose price changed by 10% or more.
4. Aggregate data: the 'Aggregate' node combines all significant changes into a single dataset.
5. Format data for Telegram: the 'Split By 1K chars' node formats this data into chunks suitable for Telegram's message size limit (a sketch of this chunking appears at the end of this description).
6. Send Telegram message: the 'Send Telegram Message' node broadcasts the formatted message to a specified Telegram chat.

Set Up Steps

- **Estimated time**: about 1-5 minutes for setup.
- **Initial configuration**: set up a Binance API connection (optional) and your Telegram bot credentials.
- **Customization**: adjust the trigger according to your preference (manual or scheduled) and update the Telegram chat ID.

Create Telegram bot steps

Setting up a Telegram bot and obtaining its token involves several steps. Here's a detailed guide:

1. Start a chat with BotFather:
   - Open Telegram and search for "BotFather". This is the official bot that allows you to create new bots.
   - Start a chat with BotFather by clicking the "Start" button at the bottom of the screen.
2. Create a new bot:
   - In the chat with BotFather, type /newbot and send the message.
   - BotFather will ask you to choose a name for your bot. This is a display name and can be anything you like.
   - Next, choose a username for your bot. This must be unique and end in bot, for example my_crypto_alert_bot.
3. Receive your token:
   - After you've set the name and username, BotFather will provide you with a token. This token is like a password for your bot, so keep it secure. The message will look something like this:

     Done! Congratulations on your new bot. You will find it at t.me/my_crypto_alert_bot. You can now add a description, about section and profile picture for your bot, see /help for a list of commands. Use this token to access the HTTP API: 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11

   - The token in this case is 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11.
4. Test your bot: find your bot by searching for its username in Telegram. Start a chat with your bot and try sending it a message. Although it won't respond yet, this step is essential to ensure it's set up correctly.
5. Use the token in n8n: in your n8n workflow, when setting up the Telegram node, you'll be prompted to enter credentials. Choose to add new credentials and paste the token you received from BotFather.
6. Get your chat ID: to send messages to a specific chat, you need to know the chat ID. The easiest way to find this is to first message your bot, then use a bot like @userinfobot to get your chat ID. Once you have the chat ID, configure it in the Telegram node in your n8n workflow.
7. Finalize your workflow: with the bot token and chat ID set up in n8n, your Telegram notifications should work as intended.

Remember, keep your bot token secure and never share it publicly. If your token is compromised, you can always generate a new one by chatting with BotFather and sending /token.
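The 'Split By 1K chars' step from the How It Works list could look roughly like this as a Code node. The "report" field name is an assumption; Telegram's actual hard limit is 4096 characters per message, and 1000 keeps each chunk well under it:

```javascript
// Minimal sketch: break the aggregated report into 1000-character chunks,
// one Telegram message per chunk.
const text = $input.first().json.report || '';
const chunks = [];
for (let i = 0; i < text.length; i += 1000) {
  chunks.push({ json: { message: text.slice(i, i + 1000) } });
}
return chunks;
```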
by Harshil Agrawal
This example workflow demonstrates how to handle pagination. It assumes that the API you are making requests to supports pagination and returns a cursor (something that points to the next page). The example makes requests to the HubSpot API to fetch contacts; you will have to modify the parameters based on your API.

- Config URL node: sets the URL that the HTTP Request node calls.
- HTTP Request node: makes the API call and returns the data from the API. Based on your API, you will have to modify the parameters of this node.
- NoOp node and Wait node: these nodes help avoid hitting rate limits. If your API has rate limits, make sure you configure an appropriate time in the Wait node.
- Check if pagination: this IF node checks whether the API returned a cursor. If the API doesn't return a cursor, there is no more data to fetch and the node returns false. If the API returns a cursor, there is still data to fetch and the node returns true.
- Set next URL: this Set node sets the URL that the HTTP Request node calls on the next cycle.
- Combine all data: this node combines all the data returned by the API calls from the HTTP Request node.

A condensed sketch of this loop follows below.
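For illustration, here is the cursor-pagination pattern the workflow builds from HTTP Request + IF + Set nodes, condensed into a single Code node. HubSpot really does return paging.next.link when more pages exist; authentication is omitted here, so treat this as a sketch rather than a drop-in replacement:

```javascript
// Minimal sketch: follow the cursor until the API stops returning one.
let url = 'https://api.hubapi.com/crm/v3/objects/contacts';
const contacts = [];
while (url) {
  const res = await this.helpers.httpRequest({ url, json: true });
  contacts.push(...(res.results || []));
  url = res.paging?.next?.link;   // undefined when there are no more pages
}
return contacts.map(c => ({ json: c }));
```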