by Mykolas Bartkus
What This Workflow Does

This n8n workflow reads backlinks from a Google Sheet, sends each one to the DataForSEO On-Page API, and checks:

- Whether the backlink is still live on the target page
- Whether it's dofollow or nofollow
- Whether it's missing (i.e., lost)

The result is then written back to the same Google Sheet under a Status column.

Step-by-Step Setup Instructions

1. Add your DataForSEO and Google Sheets credentials in n8n
2. Make sure your Google Sheet has these columns: Backlink URL, Landing page, and Status
3. Click the Test Workflow button to check a batch of backlinks

Workflow Breakdown

1. Trigger: Manual test start
2. Read Data: Pulls backlink URLs and target pages from Google Sheets
3. Format URLs: Extracts the domain from each URL
4. Send POST Request to DataForSEO: Triggers a crawl on the backlink URL
5. Wait 20 seconds: Allows the crawl to finish
6. Fetch Link Results: Retrieves backlink data from DataForSEO
7. Validate Backlink: Checks whether the backlink is live and whether it's dofollow (sketched below)
8. Update Google Sheets: Logs the status as Live, Lost, or Lost (Nofollow)
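The validation step boils down to a small piece of classification logic. Here is a minimal sketch of what it could look like in an n8n Code node; the input field names (links, url_to, dofollow) are illustrative assumptions rather than the exact DataForSEO response schema, so check the On-Page API docs before reusing it.

```javascript
// Hypothetical Code-node sketch: classify one backlink from crawl results.
// Field names below are assumptions, not the exact DataForSEO schema.
const backlinkUrl = $json.backlinkUrl;   // the URL we expect to find
const links = $json.links ?? [];         // links found on the landing page

const match = links.find(l => l.url_to === backlinkUrl);

let status;
if (!match) {
  status = 'Lost';                       // link no longer on the page
} else if (match.dofollow === false) {
  status = 'Lost (Nofollow)';            // present, but nofollow
} else {
  status = 'Live';
}

return [{ json: { backlinkUrl, status } }];
```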
by n8n Team
This template quickly shows how to use RAG in n8n.

Who is this for?

This template is for everyone who wants to start giving knowledge to their Agents through RAG.

Requirements

Have a PDF with custom knowledge that you want to provide to your agent.

Setup

No setup required. Just hit Execute Workflow, upload your knowledge document, and then start chatting.

How to customize this to your needs

- Add custom instructions to your Agent by changing its prompts.
- Add a different way to load knowledge into your vector store, e.g. by reading Google Drive files or loading knowledge from a table.
- Exchange the Simple Vector Store nodes with your own production-ready vector store tools.
- Add a more sophisticated way to rank documents found in the vector store.

For more information, read our docs on RAG in n8n.
by Oliver Bardenheier
Setup Guide: 'Get OVH Invoices to Google Sheets'

Author: Oliver Bardenheier

Who is this for?

This workflow is for all users who have services (domains, bare metal, VPS, cloud, etc.) with the provider OVH.com (European API). It automatically retrieves invoice data and files and puts the data in a Google Spreadsheet for further processing.

What problem is this workflow solving? / Use case

Currently, invoices from OVH do not arrive as a mail attachment, only as a link. So the recipient has to be logged in to the OVH account to download the file, which is even more effort when 2FA is in use. This workflow retrieves all information through the OAuth2 token.

What this workflow does

This workflow automatically retrieves invoice data and files from your OVH.com account and puts the data in a Google Spreadsheet for further processing. It also saves each invoice PDF to a yearly folder in your Google Drive.

Setup

1. Make a copy of this Google Sheet template.
2. Set the timeframe for the query to your liking in "Query Latest OVH Invoices". You could set an email trigger before it and narrow the frame to a single day.
3. Log into your OVH account and get your credentials here. Authentication uses OAuth2 Authorization Code ("Login with OVHcloud SSO"); you need to authorize the OVHcloud API console. If this worked fine, you'll see a green text: "Access Token Received".
4. Head over to the OVH API Console to get your token.
5. Set up header auth in the HTTP nodes: Authentication = Generic Credential Type, Generic Auth Type = Header Auth, Header Auth = your OVH header credentials:
   a) In every API call in the console you'll find a curl example; take the data from the line including: -H "authorization: Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......"
   b) Create a new credential in n8n for the header auth. Put authorization in the 'name' field and copy your token, including Bearer, into the value field: 'Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......'

How to customize this workflow to your needs

- Add a mail trigger that activates on every incoming invoice mail from OVH.
- Adjust the timeframe to get invoices from a certain period, or remove the time variables completely to get ALL invoices.
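For orientation, here is a rough sketch of the invoice query the HTTP nodes perform once the header credential is in place. The endpoint and parameter names follow the OVH EU API's /me/bill route, but verify them in the OVH API Console for your account before relying on this.

```javascript
// Sketch of the invoice-list request, assuming the OVH EU /me/bill route.
const token = 'Bearer eyJh...'; // your OVHcloud SSO bearer token

const params = new URLSearchParams({
  'date.from': '2024-01-01',    // adjust the timeframe to your liking
  'date.to':   '2024-12-31',
});

// Returns an array of bill IDs; fetch /me/bill/{billId} afterwards for
// details such as the PDF download URL.
const res = await fetch(`https://eu.api.ovh.com/1.0/me/bill?${params}`, {
  headers: { authorization: token },
});
console.log(await res.json());
```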
by Mutasem
Use Case

Track all Linear tickets in Google Sheets. Useful if you want to do some custom analysis but don't want to pay for Linear's Plus features (Linear Insights), or need something those features don't cover.

Setup

1. Add Linear API header key
2. Add Google Sheets credentials
3. Update which teams to get tickets from in the GraphQL nodes (an example query is sketched below)
4. Update which Google Sheets page to write all the tickets to. You only need to add one column, id, in the sheet; the Google Sheets node in automatic mapping mode will handle adding the rest of the columns.
5. Set any custom data on each ticket
6. Activate workflow

How to adjust this template

Set any custom fields you want to get out of this; that's quick to do in n8n.
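As a starting point for step 3, a query along these lines fetches issues for one team. The field names follow Linear's public GraphQL schema, but treat this as an illustrative sketch and check it against your workspace's schema before use.

```graphql
# Illustrative query for the GraphQL node; replace TEAM_ID with your
# Linear team ID.
query {
  team(id: "TEAM_ID") {
    issues(first: 100) {
      nodes {
        id
        title
        state { name }
        assignee { name }
        updatedAt
      }
    }
  }
}
```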
by Ranjan Dailata
Who is this for?

Extract & Summarize Yelp Business Review is an automated workflow that extracts Yelp business reviews using Bright Data Web Unlocker, processes and formats the raw data, summarizes it using Google Gemini's LLM, and forwards the concise summary with the review response to a specified webhook endpoint.

This workflow is tailored for:

- Local SEO Specialists who need structured insights from Yelp reviews to optimize listings.
- Business Owners wanting quick summaries of what customers love or complain about.
- Reputation Managers who monitor brand sentiment and identify customer pain points.
- Data Analysts & Researchers extracting Yelp review patterns at scale.
- AI Product Builders needing clean Yelp review data as input for their LLMs or recommender systems.

What problem is this workflow solving?

Yelp reviews are rich in customer sentiment but messy to work with manually. This workflow solves:

- The pain of scraping Yelp review content manually.
- The challenge of building structured data together with the summary.
- The need for structured outputs suitable for analysis, reports, or AI input.

What this workflow does

This automated pipeline does the following:

- **Bright Data Integration:** Queries Yelp and scrapes business listing data using Bright Data's Web Unlocker (sketched below).
- **Structured Data Formatting:** Formats the Yelp review data into a structured JSON response.
- **Google Gemini Summarization:** Sends the cleaned reviews to Google Gemini to generate a concise summary.
- **Output Delivery:** Returns the structured response with the concise summary over the webhook endpoint.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Yelp Business Review URL with the Bright Data zone by navigating to the Set Yelp URL with the Bright Data Zone node.
6. Update the Webhook Notifier for the merged response node with the webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible, whether you're a market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Target Specific Business Categories:** Update the Yelp Business Review input to scrape different businesses like gyms, salons, etc.
- **Limit Reviews:** Add filters by description, location, or page range to get the top reviews.
- **Tweak the Data Extraction Node:** Update the Structured Data Extractor node's Output Parser to build the JSON response with the appropriate fields or attributes.
- **Tweak the Summarization Prompt:** Modify the Gemini prompt to generate a more comprehensive summary.
- **Send Output to Other Destinations:** Replace the Webhook URL to forward output to Google Sheets, Airtable, Slack or Discord, or custom API endpoints.
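The Web Unlocker call itself is a single authenticated POST. A minimal sketch, following Bright Data's request API as documented at the time of writing; confirm the zone name and format options in your Bright Data dashboard.

```javascript
// Sketch of the Web Unlocker request behind the HTTP node.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer XXXXXXXXXXXXXX', // your Web Unlocker token
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',                          // your zone name
    url: 'https://www.yelp.com/biz/some-business',  // Yelp page to scrape
    format: 'raw',                                  // raw HTML back
  }),
});
const html = await res.text(); // passed on for formatting & summarization
```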
by Joseph LePage
Transform your local n8n instance into a powerful chat interface using any local & private Ollama model, with zero cloud dependencies. This workflow creates a structured chat experience that processes messages locally through a language model chain and returns formatted responses.

How it works

- Chat messages trigger the workflow
- Messages are processed through Llama 3.2 via Ollama (or any other Ollama-compatible model)
- Responses are formatted as structured JSON
- Error handling ensures robust operation

Set up steps

1. Install n8n and Ollama
2. Download the Llama 3.2 model (or another model)
3. Configure Ollama API credentials
4. Import and activate the workflow

This template provides a foundation for building AI-powered chat applications while maintaining full control over your data and infrastructure.
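Under the hood, everything stays on localhost. For reference, this is roughly the call the Ollama nodes make against Ollama's REST API; the endpoint and fields follow Ollama's documented /api/chat route, and format: 'json' is what nudges the model toward structured output.

```javascript
// Sketch of a local structured-output chat call to Ollama.
const res = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2',
    messages: [{ role: 'user', content: 'Summarize n8n in one JSON object.' }],
    format: 'json',   // ask the model for structured JSON output
    stream: false,
  }),
});
const { message } = await res.json();
console.log(message.content); // the model's JSON-formatted reply
```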
by Manu
In Grist, when I mark a row as confirmed (via a toggle), a webhook notifies n8n, and this workflow creates derived records in the destination table.

Design decisions

- Confirmation-based: In the source table there is a boolean column "Confirmed" that triggers the transfer. This way a manual check is involved, and triggering the workflow is a conscious step.
- Runs once: If the destination table already contains an entry, we will not re-create or update it (as it might already have been changed manually).

Setup

1. Create a boolean column Confirmed in the source table
2. Add a webhook in Grist settings
3. Add Grist API credentials in n8n
4. Set the document ID & source table ID/name in the 'get existing' node
5. Set docID, the destination table ID/name, and the columns & values you want in the Create Row node
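The "runs once" rule amounts to a guard between the 'get existing' lookup and the Create Row node. A minimal Code-node sketch, assuming the destination rows carry some reference back to the source row; the field name sourceRef is illustrative, use whatever key links your two tables.

```javascript
// Guard: only transfer rows that have not been transferred before.
const existing = $input.all();                 // rows already in destination
const sourceId = $('Webhook').first().json.id; // Grist row that was confirmed

const alreadyCreated = existing.some(
  item => item.json.sourceRef === sourceId     // sourceRef: illustrative key
);

// Skipping existing rows means manual edits in the destination
// table are never overwritten.
return alreadyCreated ? [] : [{ json: { sourceRef: sourceId } }];
```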
by ist00dent
This n8n template empowers you to instantly summarize long pieces of text by sending a simple webhook request. By integrating with ApyHub's summarization API, you can distil complex articles, reports, or messages into concise summaries, significantly boosting efficiency across various domains.

How it works

- **Receive Content Webhook:** This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing content (the long text you want to summarize) and an optional summary_length ('short', 'medium', or 'long'; defaults to 'medium'), plus a header containing your apy-token for the ApyHub API.
- **Start Summarization Job:** This node sends a POST request to ApyHub's summarization endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize). It passes the content and summary_length from the webhook body, along with your apy-token from the headers. ApyHub processes the text asynchronously, and this node immediately returns a job_id.
- **Get Summarization Result:** Since ApyHub's summarization is an asynchronous process, this node is crucial. It polls ApyHub's job status endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize/job/status/{{job_id}}) using the job_id obtained from the previous step. It continues to check the status until the summarization is finished, at which point it retrieves the final summarized text. (The submit-then-poll pattern is sketched after this section.)
- **Respond with Summarized Content:** This node sends the final, distilled summarized text back to the service that initiated the webhook.

Who is it for?

This workflow is extremely useful for:

- **Content Creators & Marketers:** Quickly summarize articles for social media snippets, email newsletters, or blog post intros.
- **Researchers & Students:** Efficiently get the gist of academic papers, reports, or long documents without reading every word.
- **Customer Support & Sales Teams:** Summarize customer inquiries, long email chains, or call transcripts to quickly understand key issues or discussion points.
- **News Aggregators & Media Monitoring:** Automatically generate summaries of news articles from various sources for quick consumption.
- **Business Professionals:** Condense lengthy reports, meeting minutes, or project updates into digestible summaries for busy stakeholders.
- **Legal & Compliance:** Summarize legal documents or regulatory texts to highlight critical clauses or changes.
- **Anyone Dealing with Information Overload:** Save time and extract key information from overwhelming amounts of text.

Data Structure

When you trigger the webhook, send a POST request with a JSON body and an apy-token in the headers:

{
  "content": "Your very long text goes here. This could be an article, a report, a transcript, or any other textual content you want to summarize. The longer the text, the more valuable summarization becomes!",
  "summary_length": "medium"
}

summary_length is optional: "short", "medium", or "long".

Headers: apy-token: YOUR_APYHUB_API_KEY

Note: You'll need to obtain an API key from ApyHub to use their API services. They typically offer a free tier for testing.

The workflow will return a JSON response similar to this (the summary content will vary based on input):

{
  "summary": "Max Verstappen believes the Las Vegas Grand Prix is '99% show and 1% sporting event', not looking forward to the razzmatazz. Other drivers, like Fernando Alonso, were more equivocal about the hype, acknowledging the investment and spectacle. Lewis Hamilton praised the city's energy but emphasized it's 'a business, ultimately', believing there will still be good racing.",
  "status": "finished",
  "result_file_id": "..."
}

(ApyHub might provide a file ID for larger results.)

Setup Instructions

1. Get an ApyHub API key: go to https://apyhub.com/ and sign up.
2. Import workflow: in your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. Configure webhook path: double-click the Receive Content Webhook node and, in the 'Path' field, set a unique and descriptive path (e.g., /summarize-content).
4. Activate workflow: save and activate the workflow.

Tips

This content summarizer is a powerful component. Here's how to supercharge it and make it an indispensable part of your automation arsenal:

- **Integrate with document/file storage:** Automatically summarize documents uploaded to Google Drive, Dropbox, or OneDrive. Add a Watch New Files trigger (if available for your service) or a Cron node to regularly check for new files; then read the file content, pass it to this summarizer, and save the summary back to a designated folder or as a comment on the original file. For CRM/CMS systems, pull long notes, customer interactions, or article drafts, summarize them, and update the records with the concise version.
- **Email processing & triage:** Use an Email node to trigger the workflow when new emails arrive. Extract the email body, summarize it, then send a shortened summary as a notification to Slack or Telegram, add it to a task management tool (e.g., Trello, Asana) for quicker triaging, or include it in an email digest.
- **Slack/Discord bot integration:** Create a Slack/Discord command (using a custom webhook or a dedicated Slack/Discord node) where users can paste long text; the bot then sends the summarized version back to the channel.
- **Dynamic summary length & options:** Allow the user to specify summary_length (short, medium, long) in the webhook body, as already implemented. Explore ApyHub's documentation for more parameters (if any) and pass them dynamically.
- **Error handling & user feedback:** Add an IF node after Get Summarization Result to check for status: 'failed' or error messages. If an error occurs, send a helpful message back to the webhook caller or an internal alert. For very long texts that might exceed API limits, add a Function node to truncate the input content and notify the user.
- **Multi-language support (if ApyHub offers it):** If ApyHub supports summarization in multiple languages, extend the webhook to accept a language parameter and pass it to the API.
- **Web scraping & article summaries:** Combine this with an HTTP Request node to scrape content from a web page (e.g., a news article), then pass the extracted article text to this summarizer for quick insights.
- **Data storage & archiving:** Store the original content alongside its summary in a database (e.g., PostgreSQL, MongoDB) or a simple spreadsheet (Google Sheets, Airtable). This creates a searchable, summarized archive of your content.
- **Automated report generation:** If you receive daily/weekly reports, use this workflow to summarize key sections, then compile these summaries into a concise digest or dashboard using a Merge node and send it out automatically.
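For clarity, here is the asynchronous submit-then-poll pattern the two ApyHub nodes implement, as a standalone sketch. The endpoint paths are taken from the node descriptions above; the response field names (job_id, status, summary) are assumptions, so verify them against ApyHub's documentation.

```javascript
// Sketch of the ApyHub submit-then-poll summarization flow.
const headers = {
  'Content-Type': 'application/json',
  'apy-token': 'YOUR_APYHUB_API_KEY',
};

// 1. Start the summarization job.
const start = await fetch('https://api.apyhub.com/sharpapi/api/v1/content/summarize', {
  method: 'POST',
  headers,
  body: JSON.stringify({ content: 'Your very long text...', summary_length: 'medium' }),
});
const { job_id } = await start.json();

// 2. Poll the job status until it is finished.
let result;
do {
  await new Promise(r => setTimeout(r, 2000)); // wait between polls
  const poll = await fetch(
    `https://api.apyhub.com/sharpapi/api/v1/content/summarize/job/status/${job_id}`,
    { headers },
  );
  result = await poll.json();
} while (result.status !== 'finished');

console.log(result.summary);
```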
by Jimleuk
This n8n template demonstrates how to use AI to compose or "stitch" separate images together to generate a new image which retains the source assets and a consistent style. Use cases are many: try producing storyboard scenes with consistent characters, marketing material with existing product assets, or trying on different articles of fashion!

Good to know

- At time of writing, each image generated will cost $0.039 USD. See Gemini Pricing for updated info.
- The model used in this workflow is geo-restricted! If it says model not found, it may not be available in your country or region.

How it works

- We import our required assets from cloud storage using the HTTP node.
- The images are then converted to base64 strings and aggregated so we can use them for our AI model.
- Gemini's image generation model is used, which takes all 3 images and a prompt that we define. Our prompt instructs the model on how to compose the final image (see the request sketch below).
- Gemini generates a new image but uses the original 3 assets to do so. The consistency with the source images is very high and shows little sign of hallucination!
- Gemini's output is base64, so we use a "Convert to File" node to convert the data to binary.
- The final binary image is then uploaded to Google Drive to complete the demonstration.

How to use

The manual trigger node is used as an example, but feel free to replace this with other triggers such as a webhook or even a form. Technically, you should be able to compose even more images, but of course the generation will take longer and cost more.

Requirements

- Gemini account for LLM and image generation
- Google Drive for upload

Customising this workflow

AI image editing can be used for many use cases. Try a popular one such as virtual try-on for fashion, or applying branding when editing image assets.
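A rough sketch of the multi-image request sent to Gemini's generateContent endpoint. The model name below is a placeholder; use whichever image-capable Gemini model is available in your region. The request shape follows the public REST API, but treat it as an illustration, not the exact node configuration.

```javascript
// Sketch: compose three base64 images with one prompt via Gemini's REST API.
const base64Image1 = '...'; // source assets as base64 strings,
const base64Image2 = '...'; // produced by the aggregation step
const base64Image3 = '...';

const model = 'gemini-2.0-flash-preview-image-generation'; // placeholder name
const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=YOUR_GEMINI_API_KEY`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: 'Compose one scene that places the product from image 1 on the model from image 2, using the backdrop from image 3.' },
          { inline_data: { mime_type: 'image/png', data: base64Image1 } },
          { inline_data: { mime_type: 'image/png', data: base64Image2 } },
          { inline_data: { mime_type: 'image/png', data: base64Image3 } },
        ],
      }],
      generationConfig: { responseModalities: ['TEXT', 'IMAGE'] },
    }),
  },
);
// The generated image comes back as base64 in the response parts, which the
// workflow's "Convert to File" node turns into binary for the Drive upload.
const data = await res.json();
```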
by Amit Mehta
How it Works

This workflow reads sheet details from a source Google Spreadsheet, creates a new spreadsheet, replicates the sheet structure, enriches the content by reading data, and writes it into the corresponding sheets in the new spreadsheet. The process loops over every sheet, providing an automated way to duplicate and transform structured data.

Use Case

- Automate duplication and data enrichment for multi-sheet Google Spreadsheets
- Replicate templates across new documents with consistent formatting
- Data team workflows requiring repetitive structured Google Sheets setup

Setup Instructions

1. Required Google Sheets: You must have a source spreadsheet with multiple sheets. The destination spreadsheet will be created automatically.
2. API Credentials: Google Sheets OAuth2 to connect to both read and write spreadsheets; HTTP Request auth if external API headers are needed.
3. Configure Fields in Write Sheet: Ensure you define appropriate columns and mapping for the destination sheet.

Workflow Logic

1. Manual Trigger: Starts the flow on user demand.
2. Create New Spreadsheet: Generates a blank spreadsheet.
3. HTTP Request: Retrieves all sheet names from the source spreadsheet.
4. JavaScript Code: Extracts titles and metadata from the HTTP response (sketched below).
5. Loop Over Sheets: Iterates through each sheet retrieved.
6. Delete Default Sheet: Removes the placeholder 'Sheet1'.
7. Create Sheets: Replicates each original sheet in the new document.
8. Read Spreadsheet1: Pulls data from the matching original sheet.
9. Write Sheet: Appends the data to the newly created sheets.

Node Descriptions

| Node Name | Description |
|-----------|-------------|
| Manual Trigger | Starts the workflow manually for user testing. |
| Create New Spreadsheet | Creates a new Google Spreadsheet for output. |
| HTTP Request | Fetches metadata from the source spreadsheet, including sheet names. |
| Code | Processes sheet metadata into a list for iteration. |
| Loop Over Items | Loops over each sheet to replicate and populate. |
| Google Sheets2 | Deletes the default 'Sheet1' from the new spreadsheet. |
| Create Sheets | Creates a new sheet matching each source sheet. |
| Read Spreadsheet1 | Reads data from the source sheet. |
| Write sheet | Writes the data into the corresponding new sheet. |

Customization Tips

- Make the Google Sheet title dynamic or user-input driven
- Add filtering logic before writing data
- Append custom audit columns like 'Timestamp' or 'Processed By'
- Enable logging or Slack alerts after each sheet is created

Required Files

| File Name | Purpose |
|-----------|---------|
| My_workflow_4.json | Main workflow JSON file for sheet duplication and enrichment |

Testing Tips

- Test with a spreadsheet containing 2-3 simple sheets
- Validate whether all sheets are duplicated
- Check that columns and data structure remain intact
- Watch for authentication issues in Google Sheets nodes

Suggested Tags & Categories

#GoogleSheets #Automation #DataEnrichment #Workflow #Spreadsheet
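A minimal sketch of the Code node from step 4, which turns the Sheets API metadata response into one item per sheet for the loop. The response shape (sheets[].properties) follows the Google Sheets v4 API when fetching spreadsheet metadata.

```javascript
// Code-node sketch: one output item per sheet in the source spreadsheet.
const sheets = $json.sheets ?? [];

return sheets.map(s => ({
  json: {
    sheetId: s.properties.sheetId,
    title: s.properties.title,   // used to create the matching sheet
    index: s.properties.index,
  },
}));
```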
by John Alejandro SIlva
Telegram AI Assistant with Multi-File Media Group Handling, Smart File Processing & PostgreSQL Integration

> AI-powered Telegram bot for text, voice, video, documents & media, with database-driven grouping and Telegram-safe formatting.

Description

This n8n template creates a next-generation Telegram AI assistant capable of handling text messages, media files, and documents with advanced processing, PostgreSQL integration, and AI-powered responses. It is designed to solve Telegram's media group challenge: when multiple files are sent together, they are stored, processed, and combined into one coherent AI-generated reply.

Key Features

- Multi-file media group management with PostgreSQL tables: media_group, media_queue, chat_histories
- Document parsing for CSV, HTML, ICS, JSON, ODS, PDF (with AI fallback), RTF, TXT, XML, and spreadsheets
- Voice & video transcription for AI analysis
- Image, audio, and video description for richer AI context
- Telegram-safe MarkdownV2 formatting with auto-splitting for messages over 4096 chars
- Error fallback for unsupported file types

Acknowledgment

A huge thank you to Ezema Gingsley Chibuzo for the inspiration for the first version of this workflow: Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG. Your pioneering work laid the foundation for this improved, database-powered multi-modal assistant.

Tags

telegram ai-assistant postgresql multi-file media-group file-processing voice-transcription document-parser pdf-extraction markdown-formatting n8n-template

Use Case

Use this template if you need an AI-powered Telegram bot that can:

- Handle multiple files sent in a single message (albums, multiple PDFs, etc.)
- Extract & analyze content from many file formats
- Transcribe voice and video messages
- Maintain chat memory for contextual AI answers
- Avoid Telegram formatting errors and length limit issues

This workflow automates the full chain: Receive -> Process -> AI Analysis -> Telegram-safe Reply.

Example User Interactions

- **Multiple PDFs with a caption:** AI extracts and summarizes all PDFs in one combined reply.
- **Voice message:** AI transcribes and replies with a contextual answer.
- **CSV or spreadsheet file:** AI parses and summarizes the data.
- **Multiple images:** AI describes each image and replies in a single message.

Required Credentials

- Telegram Bot API (bot token)
- PostgreSQL (connection credentials)
- AI Provider API (OpenAI, Google Gemini, or compatible LLM)

Setup Instructions

1. Create the PostgreSQL tables (gray section SQL): media_group, media_queue, chat_histories (a possible schema is sketched at the end of this section)
2. Configure the Telegram Trigger with your bot token
3. Connect your AI provider credentials
4. Set up PostgreSQL credentials in the database nodes
5. Deploy the workflow in n8n
6. Start sending messages and files to your bot

Extra Notes

- The green section ensures only one trigger fires per media group.
- The yellow section guarantees captions and files are stored in the correct sequence.
- The purple section formats AI output to be Telegram-safe and splits it if needed.
- The AI prompt is not fixed, allowing full customization.

Need Assistance?

If you'd like help customizing or extending this workflow, feel free to reach out:

Email: johnsilva11031@gmail.com
LinkedIn: John Alejandro Silva Rodríguez
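For orientation, an illustrative schema for the three tables, inferred from what the description implies (grouping by Telegram's media_group_id, queueing files in order, and storing chat memory). This is an assumption; use the gray-section SQL shipped with the template as the source of truth.

```sql
-- Illustrative schema only; column names are assumptions, not the
-- template's actual gray-section SQL.
CREATE TABLE media_group (
  media_group_id TEXT PRIMARY KEY,    -- Telegram's album identifier
  chat_id        BIGINT NOT NULL,
  caption        TEXT,
  created_at     TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE media_queue (
  id             SERIAL PRIMARY KEY,
  media_group_id TEXT REFERENCES media_group(media_group_id),
  file_id        TEXT NOT NULL,       -- Telegram file reference
  file_type      TEXT NOT NULL,       -- photo, document, video, voice, ...
  position       INT NOT NULL         -- keeps files in the order received
);

CREATE TABLE chat_histories (
  id         SERIAL PRIMARY KEY,
  chat_id    BIGINT NOT NULL,
  role       TEXT NOT NULL,           -- user / assistant
  message    TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);
```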
by Robert Breen
This n8n workflow finds experts on any topic, scrapes their websites, and pulls out contact emails automatically. Core services used: SerpAPI (Google search) · Apify (website crawler) · OpenAI (GPT-4o email extraction).

Step-by-Step Setup & Execution

1. Run Workflow (Manual Trigger)

| Node | Type | Purpose |
|------|------|---------|
| Run Workflow | Manual Trigger | Start the workflow on demand while you test. |

2. Set Your Topic

| Node | Type | How to configure |
|------|------|------------------|
| Set Topic | Set | Add a string field Topic, e.g. "n8n". This keyword drives every subsequent step. |

3. Search Google (Results 1-10)

| Node | Type | API Credential |
|------|------|----------------|
| Search Google (top 10) | SerpAPI | Create a SerpAPI credential: sign up, copy the API key, then n8n -> Credentials -> New -> SerpAPI -> paste. Select the credential in this node. |

Key params:

| Param | Value |
|-------|-------|
| q | ={{ $json.Topic }} Expert |
| location | Region code (e.g. 585069efee19ad271e9c9b36) |
| additionalFields.start | "10" (Google positions 1-10) |

4. Search Google (Results 11-20)

| Node | Type | Notes |
|------|------|-------|
| Search Google (11-20) | SerpAPI (same credential) | Remove start or set it to 20+ to fetch the next page. |

5. Extract URL Lists

| Node | Type | Script Purpose |
|------|------|----------------|
| Extract Url & Extract Url 2 | Code | Loop over data.organic_results and output { title, link, displayed_link } for each result (see the sketch at the end of this guide). |

6. Combine Both Result Sets

| Node | Type | Details |
|------|------|---------|
| Append Results | Merge (combineAll) | Merges the arrays from steps 3 & 4 into a single list for processing. |

7. Loop Over Every URL

| Node | Type | Configuration |
|------|------|---------------|
| Loop Over Items1 | Split In Batches | Default batch = 1 (process one page at a time). onError = continueRegularOutput keeps the loop alive on failures. |

8. Scrape Webpage Content (Apify)

| Node | Type | API Credential |
|------|------|----------------|
| Scrape URL with apify | HTTP Request | Create an Apify credential: sign up at https://console.apify.com, go to Account -> API tokens and copy, then n8n -> Credentials -> New -> HTTP Query Auth -> set query param token=YOUR_TOKEN. |

Request details:

| Field | Value |
|-------|-------|
| Method | POST |
| URL | https://api.apify.com/v2/acts/6sigmag~fast-website-content-crawler/run-sync-get-dataset-items |
| JSON Body | (see the actor's input in the workflow) |

9. Extract Email with OpenAI

| Node | Type | API Credential |
|------|------|----------------|
| Extract Email from webpage | LangChain Agent | Create an OpenAI credential: generate a key at https://platform.openai.com/account/api-keys, then n8n -> Credentials -> New -> OpenAI API -> paste key. |

The system prompt asks the model to find a contact email on the scraped page; Structured Output Parser2 expects { "email": "address OR null" }.

Loop Continues & Final Data

The extracted result returns to Loop Over Items1 until every URL is processed. A typical final item JSON:

{
  "title": "How to Build n8n Workflows",
  "link": "https://example.com",
  "email": "info@example.com"
}

Optional Enhancements

| Idea | How |
|------|-----|
| Save Leads | Add a Google Sheets or Airtable node after the loop. |
| Validate Emails | Chain a ZeroBounce / Hunter.io verification API before saving. |
| Parallel Crawling | Increase the Split In Batches size (watch Apify rate limits). |

Need More Help?

Robert Breen, Automation Consultant & n8n Expert
robert.j.breen@gmail.com
https://www.linkedin.com/in/robert-breen-29429625/
https://ynteractive.com
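A minimal sketch of the "Extract Url" Code node described in step 5: it flattens SerpAPI's organic results into one item per result. The data.organic_results path is taken from the step description; title, link, and displayed_link follow SerpAPI's documented Google search response.

```javascript
// Code-node sketch: one output item per Google organic result.
const results = $json.data?.organic_results ?? [];

return results.map(r => ({
  json: {
    title: r.title,
    link: r.link,
    displayed_link: r.displayed_link,
  },
}));
```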