by Trung Tran
Smart Interview Assistant: Tailored Questions Based on CV, JD, and Round

Watch the demo video below.

**Who's it for**

This workflow is designed for:
- **Recruiters** and **Talent Acquisition Specialists** who want to automate candidate interview prep.
- **Hiring Managers** conducting multiple interviews and needing personalized question sets.
- **Technical Interviewers** who want to save time and be well-prepared with relevant questions.

**How it works / What it does**

The Smart Interview Assistant automates the interview preparation process in a few clicks:
- Accepts multiple resumes (PDFs), a selected job role, and a chosen interview round
- Extracts structured data from the candidate's CV and the corresponding Job Description (JD)
- Uses GPT-4 to analyze the candidate profile, role requirements, and interview round context
- Generates tailored interview questions, expected answers, and a summarized interview prep report
- Sends the report directly to the hiring team via email (SMTP)

**Google Drive Structure**

    Root Folder
    ├── jd/                       # Stores all job descriptions in PDF format
    │   ├── Backend_Engineer.pdf
    │   ├── Azure_DevOps_Lead.pdf
    │   └── ...
    └── Positions (Google Sheet)  # Maps Job Role → JD File Link

Sample Mapping Sheet: Positions. Sheet columns:
- Job Role
- Job Description File URL (pointing to a PDF in the jd/ folder)

**How to Set Up**

Step 1: Configure API Integrations
- Connect your OpenAI GPT-4 API key
- Enable Google Cloud APIs: the Google Sheets API (to read job roles) and the Google Drive API (to access CV and JD files)
- Set up SMTP credentials (for email delivery)

Step 2: Prepare Google Drive & Mapping Sheet
- Create a root folder on Google Drive
- Inside the root folder, create a folder named jd/ and upload all job descriptions (PDFs)
- Create a Google Sheet named Positions with the following format:

| Job Role | Job Description File URL |
|-----------------------------|--------------------------------------|
| Azure DevOps Engineer | https://drive.google.com/xxx/jd1.pdf |
| Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf |

Step 3: Build the Application Form
Use any form tool (e.g., Typeform, Tally, or custom HTML) that collects:
- Resume file (PDF)
- Job Role (dropdown)
- Interview Round (dropdown)

Step 4: Resume & JD Extraction
- Use Extract from PDF to parse the resume content
- Retrieve the JD link from the Positions sheet based on the selected Job Role
- Use Download file to pull the PDF for processing

Step 5: Analyze with GPT-4
- Run both the resume and the JD through a Profile Analyzer Agent (GPT-4 with JSON output)
- Merge the results
- Add manual input or a mapping for the Interview Round metadata

Step 6: Generate Interview Report
Use a second GPT-4 agent (e.g., an HR Expert Agent) to:
- Generate 6–8 tailored interview questions
- Include expected answers and rationale
(A sketch of this agent call follows at the end of this section.)

Step 7: Deliver Final Report
- Format the content as a PDF (optional) or an email body
- Send the report to the recruiter, hiring manager, or interviewer via SMTP

**Requirements**
- OpenAI GPT-4 API key
- Google Drive (for resume and JD storage)
- Google Sheet (job role mapping)
- SMTP credentials (host, username, password)
- n8n self-hosted or cloud instance with: PDF parser, Google Sheets node, HTTP download node, email node

**How to Customize the Workflow**

| Part | Customization Options |
|----------------------------|-------------------------------------------------------------|
| Form UI | Modify the design, dropdown options, or input validations |
| Job Description Source | Replace the Google Sheet with Notion, Airtable, or a database |
| Interview Metadata | Add job level, region, or language preference |
| AI Prompt Tuning | Adjust prompt phrasing or temperature in GPT nodes |
| Report Format | Generate a PDF instead of an email body using the PDF node |
| Delivery Method | Add an internal HR portal webhook or generate a downloadable link |
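A minimal sketch of the Step 6 "HR Expert Agent" call in TypeScript, outside n8n. The prompt wording and the output JSON schema are illustrative assumptions (the template leaves both up to you), and the model id is a stand-in for whichever GPT-4-class, JSON-mode-capable model you connect:

```typescript
// Hedged sketch: ask a GPT-4-class model for tailored questions as JSON.
async function generateInterviewPack(cvSummary: string, jdSummary: string, round: string, apiKey: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o", // assumption: any JSON-mode-capable GPT-4 model works here
      response_format: { type: "json_object" }, // forces parseable output for the report step
      messages: [
        { role: "system", content: "You are an HR expert preparing interview packs." },
        {
          role: "user",
          content:
            `Candidate profile: ${cvSummary}\nJob description: ${jdSummary}\n` +
            `Interview round: ${round}\n` +
            `Return JSON {"questions":[{"question","expected_answer","rationale"}]} with 6-8 entries.`,
        },
      ],
    }),
  }).then((r) => r.json());

  return JSON.parse(res.choices[0].message.content); // feed into the email/PDF report step
}
```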
by John Alejandro Silva
Telegram AI Assistant with Multi-File Media Group Handling, Smart File Processing & PostgreSQL Integration

> AI-powered Telegram bot for text, voice, video, documents & media, with database-driven grouping and Telegram-safe formatting.

**Description**

This n8n template creates a next-generation Telegram AI assistant capable of handling text messages, media files, and documents with advanced processing, PostgreSQL integration, and AI-powered responses. It is designed to solve Telegram's media group challenge: when multiple files are sent together, they are stored, processed, and combined into one coherent AI-generated reply.

**Key Features**
- Multi-file media group management with PostgreSQL: media_group, media_queue, chat_histories
- Document parsing for CSV, HTML, ICS, JSON, ODS, PDF (with AI fallback), RTF, TXT, XML, and spreadsheets
- Voice & video transcription for AI analysis
- Image, audio, and video description for richer AI context
- Telegram-safe MarkdownV2 formatting with auto-splitting for messages over 4096 chars (a splitting sketch appears at the end of this section)
- Error fallback for unsupported file types

**Acknowledgment**

A huge thank you to Ezema Gingsley Chibuzo for the inspiration for the first version of this workflow: Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG. Your pioneering work laid the foundation for this improved, database-powered multi-modal assistant.

**Tags**

telegram ai-assistant postgresql multi-file media-group file-processing voice-transcription document-parser pdf-extraction markdown-formatting n8n-template

**Use Case**

Use this template if you need an AI-powered Telegram bot that can:
- Handle multiple files sent in a single message (albums, multiple PDFs, etc.)
- Extract & analyze content from many file formats
- Transcribe voice and video messages
- Maintain chat memory for contextual AI answers
- Avoid Telegram formatting errors and length limit issues

This workflow automates the full chain: Receive → Process → AI Analysis → Telegram-safe Reply.

**Example User Interactions**
- **Multiple PDFs with a caption**: the AI extracts and summarizes all PDFs in one combined reply
- **Voice message**: the AI transcribes it and replies with a contextual answer
- **CSV or spreadsheet file**: the AI parses and summarizes the data
- **Multiple images**: the AI describes each image and replies in a single message

**Required Credentials**
- **Telegram Bot API** (bot token)
- **PostgreSQL** (connection credentials)
- **AI Provider API** (OpenAI, Google Gemini, or compatible LLM)

**Setup Instructions**
1. Create the PostgreSQL tables (gray section SQL): media_group, media_queue, chat_histories
2. Configure the Telegram Trigger with your bot token
3. Connect your AI provider credentials
4. Set up PostgreSQL credentials in the database nodes
5. Deploy the workflow in n8n
6. Start sending messages and files to your bot

**Extra Notes**
- The green section ensures only one trigger per media group
- The yellow section guarantees captions and files are stored in the correct sequence
- The purple section formats AI output to be Telegram-safe and splits it if needed
- The AI prompt is not fixed, allowing full customization

**Need Assistance?**

If you'd like help customizing or extending this workflow, feel free to reach out:
Email: johnsilva11031@gmail.com
LinkedIn: John Alejandro Silva Rodríguez
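A minimal sketch of the "Telegram-safe" output step in TypeScript: escape MarkdownV2's reserved characters and split long replies at the 4096-character message limit. The escape set follows Telegram's MarkdownV2 spec; preferring newline boundaries for the split is a design choice, not something the template prescribes:

```typescript
// Characters Telegram's MarkdownV2 parser requires escaping with a backslash.
const MDV2_SPECIALS = /[_*[\]()~`>#+\-=|{}.!\\]/g;

function escapeMarkdownV2(text: string): string {
  return text.replace(MDV2_SPECIALS, (m) => `\\${m}`);
}

// Split into <=4096-char chunks, preferring newline boundaries so
// formatting entities are not cut mid-line.
function splitForTelegram(text: string, limit = 4096): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    const cut = rest.lastIndexOf("\n", limit);
    const at = cut > 0 ? cut : limit;
    chunks.push(rest.slice(0, at));
    rest = rest.slice(at);
  }
  chunks.push(rest);
  return chunks;
}
```

In practice: run escapeMarkdownV2 on the AI's plain-text answer, then send each chunk from splitForTelegram as a separate sendMessage call with parse_mode set to MarkdownV2.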
by Niklas Hatje
**Use Case**

When trying to maximize your outreach, website visitors are often an overlooked source of qualified new leads. This workflow allows you to track and enrich new website visitors and saves them to a Google Sheet once they meet pre-defined criteria.

**What this workflow does**

This workflow fires once a day and gets all your leads saved in Leadfeeder. It then takes the leads that meet a pre-defined engagement criterion, e.g. that they visited your site 3 times, and enriches them additionally with Clearbit. From there it filters the leads again by a criterion on the company, e.g. a minimum employee count, and saves matching leads into a Google Sheet document.

**Setup**
- Add your Leadfeeder credentials. The name should be Authorization and the value Token token=yourapitoken. You can find your token via Settings -> Personal -> API-Token. (A sketch of this API call appears at the end of this section.)
- Add your Google Sheets credentials
- Save the Leadfeeder account names you want to use in the Setup node
- Copy the Google Sheets template and add its URL to the Setup node

**How to adjust this to your needs**
- Adjust and/or remove the engagement and company criteria
- Add more ways to enrich a company

**Potential ideas to enhance the use of this workflow**
- Automatically reach out to users that meet the criteria / that get added to the sheet
- Create a workflow that finds the right employee in companies that are identified by this workflow
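For reference, a hedged TypeScript sketch of the daily Leadfeeder pull, using the exact "Token token=" header format described in the setup. The /accounts/{id}/leads path and date parameters follow Leadfeeder's public API, but the engagement field name (visits) is an assumption; check their docs:

```typescript
// Hedged sketch: fetch today's leads and keep only sufficiently engaged ones.
async function fetchEngagedLeads(accountId: string, apiToken: string, minVisits = 3) {
  const today = new Date().toISOString().slice(0, 10);
  const res = await fetch(
    `https://api.leadfeeder.com/accounts/${accountId}/leads?start_date=${today}&end_date=${today}`,
    { headers: { Authorization: `Token token=${apiToken}` } }, // note the "Token token=" scheme
  ).then((r) => r.json());

  // "visits" is an assumed attribute name for the engagement criterion.
  return (res.data ?? []).filter((lead: any) => (lead.attributes?.visits ?? 0) >= minVisits);
}
```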
by Manu
In Grist, when I mark a row as confirmed (via a toggle), a webhook notifies n8n, and this workflow creates derived records in the destination table.

**Design decisions**
- Confirmation-based: the source table has a boolean column "Confirmed" that triggers the transfer. This way there is a manual check involved & it's a conscious step to trigger the workflow.
- Runs once: if the destination table already contains an entry, we will not re-create/update it, as it might've already been changed manually. (This guard is sketched at the end of this section.)

**Setup**
1. Create a boolean column Confirmed in the source table
2. Add a webhook in Grist Settings
3. Add Grist API credentials in n8n
4. Set the document ID & source table ID/name in the 'get existing' node
5. Set docID, the destination table ID/name, and the columns & values you want in the Create Row node
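A minimal TypeScript sketch of the "runs once" guard against Grist's records API. The Destination table name and the SourceRef linking column are illustrative assumptions; your column names will differ:

```typescript
// Hedged sketch: create a derived record only if none exists for this source row.
async function transferOnce(
  baseUrl: string, // e.g. https://docs.getgrist.com or your self-hosted URL
  docId: string,
  apiKey: string,
  sourceRow: { id: number; fields: Record<string, unknown> },
) {
  const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" };
  const records = `${baseUrl}/api/docs/${docId}/tables/Destination/records`;

  // 'get existing': skip if the destination already references this source row.
  const existing = await fetch(records, { headers }).then((r) => r.json());
  if (existing.records.some((r: any) => r.fields.SourceRef === sourceRow.id)) return;

  // 'Create Row': write the derived record exactly once, never updating it afterwards.
  await fetch(records, {
    method: "POST",
    headers,
    body: JSON.stringify({ records: [{ fields: { SourceRef: sourceRow.id, ...sourceRow.fields } }] }),
  });
}
```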
by Bao Duy Nguyen
**Who is this for?**

This template is ideal for developers, DevOps engineers, and automation managers who manage their n8n workflows using GitHub. It helps teams streamline their CI/CD automation by syncing changes from GitHub directly into n8n after a pull request (PR) is merged.

**What problem is this workflow solving?**

Manually restoring workflows after reviewing and merging code in GitHub can be tedious and error-prone. This workflow solves that by automating the restore process, ensuring that any new or updated workflow committed to your GitHub repo is automatically imported into your n8n environment.

**What this workflow does**
- Triggers when a GitHub pull request is closed and merged
- Fetches the details of the merge commit
- Retrieves the list of added and modified workflow files
- Downloads and decodes each workflow file
- **Creates or updates** the corresponding workflow in your n8n instance automatically (see the sketch at the end of this section)

**Setup**
1. Connect GitHub: use the GitHub Trigger node and configure GitHub API credentials. Note: it's recommended to use a classic GitHub PAT (Personal Access Token) with the repo and admin:repo_hook permission scopes enabled.
2. Connect the n8n API: provide your n8n API credentials in the n8n nodes (check this doc).
3. Set repository variables: update github_owner and repo_name in the Define Local Variables node.
4. Enable the webhook: make sure your GitHub repository has a webhook for pull_request events pointing to this workflow.

**How to customize this workflow to your needs**
- Modify filters to handle only certain branches or file paths
- Add Slack or email notifications to confirm successful imports
- Insert logging or version tagging for better traceability
- Extend with conditional logic for workflow testing before applying changes

This automated flow provides a seamless CI/CD loop between GitHub and n8n, empowering teams to manage workflow versioning efficiently and securely.
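A hedged TypeScript sketch of the whole restore loop. The GitHub commits/contents endpoints and the n8n public API (X-N8N-API-KEY header, /api/v1/workflows) are as documented; matching workflows by their name field is an assumption, and the n8n update endpoint may require trimming the payload to name/nodes/connections/settings:

```typescript
// Hedged sketch: mirror a merged PR's workflow JSON files into an n8n instance.
const GH = "https://api.github.com";
const N8N = "https://your-n8n.example.com/api/v1"; // hypothetical instance URL

async function syncMergedPr(owner: string, repo: string, mergeSha: string) {
  const gh = { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` };
  const n8n = { "X-N8N-API-KEY": process.env.N8N_API_KEY!, "Content-Type": "application/json" };

  // 1. List files touched by the merge commit.
  const commit = await fetch(`${GH}/repos/${owner}/${repo}/commits/${mergeSha}`, { headers: gh })
    .then((r) => r.json());

  for (const file of commit.files ?? []) {
    if (!["added", "modified"].includes(file.status) || !file.filename.endsWith(".json")) continue;

    // 2. Download and decode the workflow file (the contents API returns base64).
    const blob = await fetch(
      `${GH}/repos/${owner}/${repo}/contents/${file.filename}?ref=${mergeSha}`,
      { headers: gh },
    ).then((r) => r.json());
    const workflow = JSON.parse(Buffer.from(blob.content, "base64").toString("utf8"));

    // 3. Upsert by name: update if a workflow with this name exists, else create.
    const existing = await fetch(`${N8N}/workflows`, { headers: n8n }).then((r) => r.json());
    const match = existing.data?.find((w: { name: string }) => w.name === workflow.name);
    await fetch(match ? `${N8N}/workflows/${match.id}` : `${N8N}/workflows`, {
      method: match ? "PUT" : "POST",
      headers: n8n,
      body: JSON.stringify(workflow),
    });
  }
}
```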
by Airtop
Automating File (Image) Upload to Postimages.org

**Use Case**

Manually uploading screenshots or other image files to hosting platforms like Postimages.org can be tedious and time-consuming. This automation simplifies the process by automatically uploading an image to Postimages.org and validating the result, which is especially useful for repetitive QA tasks, reporting, or archiving visual web data.

**What This Automation Does**

This automation uploads a screenshot to Postimages.org using the following steps:
1. Creates a browser session using Airtop.
2. Navigates to the Postimages.org upload page.
3. Takes a screenshot using the browser session.
4. Uploads the screenshot to the site via the "Choose images" button.
5. Waits briefly to ensure upload processing.
6. Captures a post-upload screenshot for validation.

**How It Works**
- Session Initialization: starts a browser session using the Airtop node.
- Navigation: opens the URL https://postimages.org/ in a new window.
- Screenshot Capture: takes a screenshot to use for the upload.
- File Upload: uploads the screenshot to the site using the file upload interaction.
- Validation: waits 5 seconds and then captures a second screenshot to confirm the image was uploaded successfully.

**Setup Requirements**
- Airtop API key, required for session creation and browser interactions.

**Next Steps**
- **Customize for other sites**: adapt this workflow to automate file uploads to different platforms.
- **Integrate with reporting tools**: combine this automation with workflows that require image reporting or archiving.
- **Enhance validation**: add logic to parse the upload confirmation or retrieve the image URL programmatically for logging or sharing.

Read more about how to automate file uploads to the web.
by ist00dent
This n8n template empowers you to instantly summarize long pieces of text by sending a simple webhook request. By integrating with ApyHub's summarization API, you can distil complex articles, reports, or messages into concise summaries, significantly boosting efficiency across various domains.

**How it works**
- **Receive Content Webhook:** This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing content (the long text you want to summarize) and an optional summary_length ('short', 'medium', or 'long'; defaults to 'medium'), plus a header containing your apy-token for the ApyHub API.
- **Start Summarization Job:** This node sends a POST request to ApyHub's summarization endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize). It passes the content and summary_length from the webhook body, along with your apy-token from the headers. ApyHub processes the text asynchronously, and this node immediately returns a job_id.
- **Get Summarization Result:** Since ApyHub's summarization is an asynchronous process, this node is crucial. It polls ApyHub's job status endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize/job/status/{{job_id}}) using the job_id obtained from the previous step. It continues to check the status until the summarization is finished, at which point it retrieves the final summarized text. (The polling pattern is sketched at the end of this section.)
- **Respond with Summarized Content:** This node sends the final, distilled summarized text back to the service that initiated the webhook.

**Who is it for?**

This workflow is extremely useful for:
- **Content Creators & Marketers:** Quickly summarize articles for social media snippets, email newsletters, or blog post intros.
- **Researchers & Students:** Efficiently get the gist of academic papers, reports, or long documents without reading every word.
- **Customer Support & Sales Teams:** Summarize customer inquiries, long email chains, or call transcripts to quickly understand key issues or discussion points.
- **News Aggregators & Media Monitoring:** Automatically generate summaries of news articles from various sources for quick consumption.
- **Business Professionals:** Condense lengthy reports, meeting minutes, or project updates into digestible summaries for busy stakeholders.
- **Legal & Compliance:** Summarize legal documents or regulatory texts to highlight critical clauses or changes.
- **Anyone Dealing with Information Overload:** Save time and extract key information from overwhelming amounts of text.

**Data Structure**

When you trigger the webhook, send a **POST** request with a **JSON body** and an apy-token in the headers:

    {
      "content": "Your very long text goes here. This could be an article, a report, a transcript, or any other textual content you want to summarize. The longer the text, the more valuable summarization becomes!",
      "summary_length": "medium" // Optional: "short", "medium", or "long"
    }

Headers:

    apy-token: YOUR_APYHUB_API_KEY

Note: You'll need to obtain an API key from ApyHub to use their API services. They typically offer a free tier for testing.

The workflow will return a JSON response similar to this (the summary content will vary based on input):

    {
      "summary": "Max Verstappen believes the Las Vegas Grand Prix is '99% show and 1% sporting event', not looking forward to the razzmatazz. Other drivers, like Fernando Alonso, were more equivocal about the hype, acknowledging the investment and spectacle. Lewis Hamilton praised the city's energy but emphasized it's 'a business, ultimately', believing there will still be good racing.",
      "status": "finished",
      "result_file_id": "..." // ApyHub might provide a file ID for larger results
    }

**Setup Instructions**
1. **Get an ApyHub API key:** Go to https://apyhub.com/ and sign up to get your API key.
2. **Import the workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure the webhook path:** Double-click the Receive Content Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /summarize-content).
4. **Activate the workflow:** Save and activate the workflow.

**Tips**

This content summarizer is a powerful component. Here's how to supercharge it and make it an indispensable part of your automation arsenal:
- **Integrate with document/file storage:** Automatically summarize documents uploaded to Google Drive, Dropbox, or OneDrive. Add a Watch New Files trigger (if available for your service) or a Cron node to regularly check for new files, read the file content, pass it to this summarizer, and save the summary back to a designated folder or as a comment on the original file. For CRM/CMS systems, pull long notes, customer interactions, or article drafts, summarize them, and update the records with the concise version.
- **Email processing & triage:** Use an Email node to trigger the workflow when new emails arrive. Extract the email body, summarize it, and then send a shortened summary as a notification to Slack or Telegram, add the summary to a task management tool (e.g., Trello, Asana) for quicker triaging, or create a summary for an email digest.
- **Slack/Discord bot integration:** Create a Slack/Discord command (using a custom webhook or a dedicated Slack/Discord node) where users can paste long text. The bot then sends the summarized version back to the channel.
- **Dynamic summary length & options:** Allow the user to specify summary_length (short, medium, long) in the webhook body, as already implemented. Explore ApyHub's documentation for more parameters (if any) and pass them dynamically.
- **Error handling & user feedback:** Add an IF node after Get Summarization Result to check for status: 'failed' or error messages. If an error occurs, send a helpful message back to the webhook caller or an internal alert. For very long texts that might exceed API limits, add a Function node to truncate the input content if it's too long, and notify the user.
- **Multi-language support (if ApyHub offers it):** If ApyHub supports summarization in multiple languages, extend the webhook to accept a language parameter and pass it to the API.
- **Web scraping & article summaries:** Combine this with an HTTP Request node to scrape content from a web page (e.g., a news article), then pass the extracted article text to this summarizer to get quick insights.
- **Data storage & archiving:** Store the original content alongside its summary in a database (e.g., PostgreSQL, MongoDB) or a simple spreadsheet (Google Sheets, Airtable). This creates a searchable, summarized archive of your content.
- **Automated report generation:** If you receive daily/weekly reports, use this workflow to summarize key sections, then compile these summaries into a concise digest or dashboard using a Merge node and send it out automatically.
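The start-then-poll pattern described above, as a hedged TypeScript sketch. The endpoint paths mirror the ones named in the node descriptions; the response field names (job_id, status, summary) are assumptions based on the example output:

```typescript
// Hedged sketch: start an ApyHub summarization job, then poll until finished.
async function summarize(content: string, apyToken: string, summaryLength = "medium"): Promise<string> {
  const headers = { "apy-token": apyToken, "Content-Type": "application/json" };

  // Start Summarization Job: returns immediately with a job_id.
  const start = await fetch("https://api.apyhub.com/sharpapi/api/v1/content/summarize", {
    method: "POST",
    headers,
    body: JSON.stringify({ content, summary_length: summaryLength }),
  }).then((r) => r.json());

  // Get Summarization Result: poll the job status endpoint until it finishes.
  for (;;) {
    const job = await fetch(
      `https://api.apyhub.com/sharpapi/api/v1/content/summarize/job/status/${start.job_id}`,
      { headers },
    ).then((r) => r.json());
    if (job.status === "finished") return job.summary;
    if (job.status === "failed") throw new Error("Summarization failed");
    await new Promise((res) => setTimeout(res, 2000)); // back off between polls
  }
}
```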
by Jimleuk
This n8n template demonstrates how to build a simple but effective vintage image restoration service using an AI model with image editing capabilities. With Gemini now capable of multimodal output, it's a great time to explore this capability for image or graphics automation. Let's see how well it does on a task such as image restoration.

**Good to know**
- At time of writing, each image generated costs $0.039 USD. See Gemini Pricing for updated info.
- The model used in this workflow is geo-restricted! If it says "model not found", it may not be available in your country or region.

**How it works**
- Images are imported into the workflow via the HTTP node and converted to base64 strings using the Extract from File node.
- The image data is then pipelined to Gemini's image generation model (the call is sketched at the end of this section).
- A prompt instructs Gemini to "restore" the image to near-new condition; of course, feel free to experiment with this prompt to improve the results!
- Gemini responds with the image as a base64 string, so a Convert to File node is used to transform the data to binary.
- With the restored image as a binary, we can then use the Google Drive node to upload it to the desired folder.

**How to use**
- This demonstration uses 3 random images sourced from the internet, but any typical image file will work.
- Use a Webhook node to allow integration from other applications.
- Use a Telegram trigger for instant mobile service!

**Requirements**
- Google Gemini for LLM/image generation
- Google Drive for upload storage

**Customising this workflow**
- AI image editing can be applied to many use cases, not just image restoration. Try using it to add watermarks or branding, or to modify an existing image for marketing purposes.
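For orientation, a hedged TypeScript sketch of the Gemini call behind the restoration step, outside n8n. The request/response shape follows Google's generateContent REST API; the model id is a placeholder assumption, since image-capable model names vary by region and release:

```typescript
// Hedged sketch: send a photo plus a restoration prompt, get an image back.
const MODEL = "gemini-2.0-flash-preview-image-generation"; // placeholder model id

async function restoreImage(base64Jpeg: string, apiKey: string): Promise<string> {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: "Restore this vintage photo to near-new condition: remove scratches and tears, repair fading, keep the subject faithful to the original." },
          { inlineData: { mimeType: "image/jpeg", data: base64Jpeg } },
        ],
      }],
      // Required so the model returns an image, not just text.
      generationConfig: { responseModalities: ["TEXT", "IMAGE"] },
    }),
  }).then((r) => r.json());

  // The restored image comes back as a base64 part, mirroring the input format.
  const part = res.candidates?.[0]?.content?.parts?.find((p: any) => p.inlineData);
  if (!part) throw new Error("No image returned");
  return part.inlineData.data; // hand this to the Convert to File step
}
```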
by Jimleuk
This n8n template demonstrates how to use AI to compose or "stitch" separate images together to generate a new image which retains the source assets and a consistent style. Use cases are many: try producing storyboard scenes with consistent characters, marketing material with existing product assets, or virtual try-ons of different fashion items!

**Good to know**
- At time of writing, each image generated costs $0.039 USD. See Gemini Pricing for updated info.
- The model used in this workflow is geo-restricted! If it says "model not found", it may not be available in your country or region.

**How it works**
- We import our required assets from cloud storage using the HTTP node.
- The images are then converted to base64 strings and aggregated so we can use them with our AI model.
- Gemini's image generation model is used, which takes all 3 images and a prompt that we define. The prompt instructs the model on how to compose the final image. (A payload helper is sketched at the end of this section.)
- Gemini generates a new image but uses the original 3 assets to do so. The consistency with the source images is very high and shows little sign of hallucination!
- Gemini's output is base64, so we use a Convert to File node to convert the data to binary.
- The final binary image is then uploaded to Google Drive to complete the demonstration.

**How to use**
- The manual trigger node is used as an example, but feel free to replace this with other triggers such as a webhook or even a form.
- Technically, you should be able to compose even more images, but of course the generation will take longer and cost more.

**Requirements**
- Gemini account for LLM and image generation
- Google Drive for upload

**Customising this workflow**
- AI image editing can be used for many use cases. Try a popular one such as virtual try-on for fashion, or applying branding when editing image assets.
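The composition step differs from the restoration sketch in the previous template only in the request payload: several inlineData parts instead of one, plus a composition prompt. A minimal hedged helper (PNG mime type assumed):

```typescript
// Hedged sketch: assemble a multi-image generateContent parts array.
function composeParts(base64Images: string[], prompt: string) {
  return [
    { text: prompt }, // e.g. "Dress the model from image 1 in the jacket from image 2, in the scene from image 3."
    // One inlineData part per source asset; the model composes them into one output image.
    ...base64Images.map((data) => ({ inlineData: { mimeType: "image/png", data } })),
  ];
}
```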
by Un tal Camilo Medina
Telegram Bot Webhook Configuration Tool

This workflow creates a simple web form that helps you configure Telegram bot webhooks quickly. Instead of manually constructing the Telegram API URL, this tool does it for you automatically.

**How It Works**

The workflow consists of three main steps:
1. Form Input: a web form collects your bot token and webhook URL
2. URL Construction: automatically builds the correct Telegram API URL (see the sketch at the end of this section)
3. Redirect: takes you directly to the Telegram API to complete the configuration

**What You Need**
- **Bot Token**: get this from @BotFather on Telegram (format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz)
- **Webhook URL**: your n8n webhook endpoint (must be HTTPS)

**Setup Instructions**
1. Import this workflow into your n8n instance
2. Activate the workflow
3. Access the generated form URL
4. Fill in your bot details and submit

**Form Fields**

| Field | Description | Example |
|-------|-------------|---------|
| Bot API Token | Token from BotFather | 123456789:ABCdefGHIjklMNOpqrsTUVwxyz |
| Webhook URL | Your n8n webhook endpoint | https://your-instance.app.n8n.cloud/webhook/telegram |

**What Happens**
1. You enter your bot token and webhook URL in the form
2. The workflow constructs this URL: https://api.telegram.org/bot{TOKEN}/setWebhook?url={WEBHOOK_URL}
3. You're redirected to that URL, where Telegram configures your webhook
4. Telegram shows you a success or error message

**Benefits**
- **No manual URL building**: eliminates copy-paste errors
- **Quick setup**: configure webhooks in seconds
- **Privacy focused**: no data is stored anywhere
- **Team friendly**: share the form URL with team members

**Common Webhook URLs**
- n8n Cloud: https://your-instance.app.n8n.cloud/webhook/telegram-bot
- Self-hosted: https://your-domain.com/webhook/telegram-bot

**Requirements**
- n8n with form trigger support
- Valid Telegram bot token
- Publicly accessible webhook URL (HTTPS required)

**Troubleshooting**
- Invalid Token Error: make sure you copied the complete token from BotFather
- Webhook Error: ensure your URL is publicly accessible and uses HTTPS
- SSL Error: verify your webhook URL has a valid SSL certificate

This tool simply automates the manual process of visiting the Telegram API URL to configure your bot's webhook. Perfect for developers who frequently set up or change Telegram bot configurations.
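The URL construction itself is a one-liner, but the webhook address must be URL-encoded since it contains "://" and "/" characters of its own. A minimal TypeScript sketch with hypothetical example values:

```typescript
// Build the setWebhook URL the workflow redirects to.
function buildSetWebhookUrl(botToken: string, webhookUrl: string): string {
  return `https://api.telegram.org/bot${botToken}/setWebhook?url=${encodeURIComponent(webhookUrl)}`;
}

// Example (hypothetical values):
// buildSetWebhookUrl(
//   "123456789:ABCdefGHIjklMNOpqrsTUVwxyz",
//   "https://your-instance.app.n8n.cloud/webhook/telegram",
// );
// On success Telegram responds with {"ok":true,"result":true,"description":"Webhook was set"}.
```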
by Ranjan Dailata
**Who is this for?**

Google SERP Tracker + Trends and Recommendations is an AI-powered n8n workflow that extracts Google search results via Bright Data, parses them into structured JSON using Google Gemini, and generates actionable recommendations and search trends. It outputs CSV reports and sends real-time webhook notifications.

This workflow is ideal for:
- SEO agencies needing automated rank & trend tracking
- Growth marketers seeking daily/weekly search-based insights
- Product teams monitoring brand or competitor visibility
- Market researchers performing search behavior analysis
- No-code builders automating search intelligence workflows

**What problem is this workflow solving?**

Traditional tracking of search engine rankings and search trends is often fragmented and manual. Analyzing SERP changes and trends requires:
- Manual extraction, or unstable scrapers
- Dealing with unstructured or cluttered HTML data
- Working without actionable insights or recommendations

This workflow solves the problem by:
- Automating real-time Google SERP data extraction using Bright Data
- Structuring unstructured search data using the Google Gemini LLM
- Generating actionable recommendations and trends
- Exporting both CSV reports automatically to disk for downstream use
- Notifying external systems via webhook

**What this workflow does**
- Accepts a search input, zone name, and webhook notification URL
- Uses Bright Data to extract Google search results (a sketch of this request appears at the end of this section)
- Uses the Google Gemini LLM to parse the SERP data into structured JSON
- Loops over structured results to extract recommendations and trends
- Saves both as .csv files, for example:
  - Google_SERP_Recommendations_Response_2025-06-10T23-01-50-650Z.csv
  - Google_SERP_Trends_Response_2025-06-10T23-01-38-915Z.csv
- Sends a webhook with the summary or file reference

**LLM Usage**

Google Gemini handles:
- Parsing Google Search HTML into structured JSON
- Summarizing recommendation data
- Deriving trends from the extracted SERP metadata

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Get a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set input fields with the search criteria, Bright Data zone name, and webhook notification URL.

**How to customize this workflow to your needs**
- Input customization: set your target keyword/phrase in the search field, and add your webhook_notification_url for external triggers or notifications
- SERP source: extend the Bright Data search logic to include other engines like Bing or DuckDuckGo
- Output format: edit the .csv structure in the Convert to File nodes if you want to include/exclude specific columns
- LLM prompt tuning: the Gemini prompts inside the recommendation and trends extractor nodes can be fine-tuned for domain-specific insight (e.g., SEO vs. eCommerce focus)
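A hedged TypeScript sketch of the Bright Data Web Unlocker request the workflow issues before handing raw HTML to Gemini. The api.brightdata.com/request endpoint and body fields follow Bright Data's request API; confirm the exact shape against your account's documentation:

```typescript
// Hedged sketch: fetch a Google SERP through a Web Unlocker zone.
async function fetchSerp(query: string, zone: string, token: string): Promise<string> {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`, // the Web Unlocker token from the Header Auth credential
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone, // your Web Unlocker zone name
      url: `https://www.google.com/search?q=${encodeURIComponent(query)}`,
      format: "raw", // raw HTML, which the Gemini step then parses into structured JSON
    }),
  });
  return res.text();
}
```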
by Amit Mehta
**How it Works**

This workflow reads sheet details from a source Google Spreadsheet, creates a new spreadsheet, replicates the sheet structure, reads the source data, and writes it into the corresponding sheets in the new spreadsheet. The process loops over every sheet, providing an automated way to duplicate and transform structured data.

**Use Case**
- Automate duplication and data enrichment for multi-sheet Google Spreadsheets
- Replicate templates across new documents with consistent formatting
- Data team workflows requiring repetitive structured Google Sheets setup

**Setup Instructions**

1. Required Google Sheets
- You must have a source spreadsheet with multiple sheets.
- The destination spreadsheet will be created automatically.

2. API Credentials
- **Google Sheets OAuth2**: connect to both read and write spreadsheets.
- **HTTP Request Auth**: if external API headers are needed.

3. Configure Fields in Write Sheet
- Ensure you define appropriate columns and mapping for the destination sheet.

**Workflow Logic**
1. Manual Trigger: starts the flow on user demand.
2. Create New Spreadsheet: generates a blank spreadsheet.
3. HTTP Request: retrieves all sheet names from the source spreadsheet (a sketch of this call appears at the end of this section).
4. JavaScript Code: extracts titles and metadata from the HTTP response.
5. Loop Over Sheets: iterates through each sheet retrieved.
6. Delete Default Sheet: removes the placeholder 'Sheet1'.
7. Create Sheets: replicates each original sheet in the new document.
8. Read Spreadsheet1: pulls data from the matching original sheet.
9. Write Sheet: appends the data to the newly created sheets.

**Node Descriptions**

| Node Name | Description |
|-----------|-------------|
| Manual Trigger | Starts the workflow manually (e.g., for testing). |
| Create New Spreadsheet | Creates a new Google Spreadsheet for output. |
| HTTP Request | Fetches metadata from the source spreadsheet, including sheet names. |
| Code | Processes sheet metadata into a list for iteration. |
| Loop Over Items | Loops over each sheet to replicate and populate. |
| Google Sheets2 | Deletes the default 'Sheet1' from the new spreadsheet. |
| Create Sheets | Creates a new sheet matching each source sheet. |
| Read Spreadsheet1 | Reads data from the source sheet. |
| Write sheet | Writes the data into the corresponding new sheet. |

**Customization Tips**
- Make the Google Sheet title dynamic or user-input driven
- Add filtering logic before writing data
- Append custom audit columns like 'Timestamp' or 'Processed By'
- Enable logging or Slack alerts after each sheet is created

**Required Files**

| File Name | Purpose |
|-----------|---------|
| My_workflow_4.json | Main workflow JSON file for sheet duplication and enrichment |

**Testing Tips**
- Test with a spreadsheet containing 2–3 simple sheets
- Validate that all sheets are duplicated
- Check whether columns and data structure remain intact
- Watch for authentication issues in Google Sheets nodes

**Suggested Tags & Categories**

#GoogleSheets #Automation #DataEnrichment #Workflow #Spreadsheet
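A minimal sketch of what the HTTP Request and Code steps do together: fetch the source spreadsheet's metadata from the Sheets API and reduce it to a list of sheet titles for the loop. Assumes an OAuth2 access token is already in hand:

```typescript
// Hedged sketch: list sheet titles from a spreadsheet's metadata.
async function listSheetTitles(spreadsheetId: string, accessToken: string): Promise<string[]> {
  const url = `https://sheets.googleapis.com/v4/spreadsheets/${spreadsheetId}?fields=sheets.properties`;
  const meta = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  }).then((r) => r.json());

  // Each entry carries sheetId, title, index, etc.; the loop only needs titles.
  return (meta.sheets ?? []).map((s: { properties: { title: string } }) => s.properties.title);
}
```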