You can import plain text files with product data to your product catalog in Connect. This option lets you add new products, update existing ones and create custom product attributes during import.
Typical usage:
- Import new product inventory from your e-commerce platform or inventory management system.
- Update product pricing, availability and descriptions based on supplier feeds.
- Add seasonal products with custom attributes for holiday promotions and campaigns.
How it works:
- If the ID of a product in your file is new, the product is added to the product catalog.
- If the ID of a product is already available in the product catalog, the existing product is updated. You can update all product attributes except for Product ID.
- You must map columns in the file to existing product attributes. If there is no suitable attribute in Connect, you can add it as part of the mutation.
File requirements
Your file must meet these specifications:
- Encoding: UTF-8
- Format: CSV (comma-separated), PSV (pipe-separated), or TSV (tab-separated). Use the same delimiter throughout the file. If the file contains mixed delimiters or other separating characters, the import job may fail.
- Header row: Optional
- Maximum size: 5 GB
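Before uploading, you can run a quick sanity check against these requirements. The sketch below is an illustrative Python helper (not part of Connect) that looks for one supported delimiter occurring a consistent number of times on every non-empty line. It ignores quoted fields, so treat it as a rough pre-flight check only.

```python
from typing import Optional

# Delimiters supported by the import job.
SUPPORTED_DELIMITERS = (",", "\t", "|")

def detect_delimiter(text: str) -> Optional[str]:
    """Return the delimiter that occurs a consistent, nonzero number of
    times on every non-empty line, or None if no candidate qualifies.
    Rough check only: quoted fields containing the delimiter are not handled."""
    lines = [line for line in text.splitlines() if line.strip()]
    for delim in SUPPORTED_DELIMITERS:
        counts = {line.count(delim) for line in lines}
        if len(counts) == 1 and counts.pop() > 0:
            return delim
    return None
```

A file that mixes delimiters, or varies the number of columns per row, returns `None`, which is a signal to fix the file before creating the import job.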
Dates
If a file contains dates, they must be in one of the following formats:
- yyyy-MM-dd
- MM/dd/yyyy
- dd/MM/yyyy
- yyyy/MM/dd
- dd.MM.yyyy
- yyyy-MM-dd'T'HH:mm:ssXXX
Tip
Be careful when opening text files in Excel, as it can automatically convert a date like 02/10/2025 to an unsupported format such as 2/10/25. To prevent this, use a text editing app such as Notepad or Visual Studio Code.
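You can also validate date strings against the accepted patterns before upload. This is an illustrative Python sketch (not part of Connect); the shape regexes guard against lenient parsing of short years like 2/10/25.

```python
import re
from datetime import datetime

# Supported date shapes (regex) paired with strptime patterns.
# MM/dd/yyyy and dd/MM/yyyy share a textual shape, so this check only
# confirms that a string matches *some* supported format.
SUPPORTED = [
    (r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(Z|[+-]\d{2}:\d{2})", "%Y-%m-%dT%H:%M:%S%z"),
    (r"\d{4}-\d{2}-\d{2}", "%Y-%m-%d"),
    (r"\d{2}/\d{2}/\d{4}", "%m/%d/%Y"),
    (r"\d{2}/\d{2}/\d{4}", "%d/%m/%Y"),
    (r"\d{4}/\d{2}/\d{2}", "%Y/%m/%d"),
    (r"\d{2}\.\d{2}\.\d{4}", "%d.%m.%Y"),
]

def is_supported_date(value: str) -> bool:
    """True if the string matches one of the supported date formats."""
    for pattern, fmt in SUPPORTED:
        if re.fullmatch(pattern, value):
            try:
                datetime.strptime(value, fmt)
                return True
            except ValueError:
                continue  # right shape, impossible date; try the next format
    return False
```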
Initial setup
To start importing products, you need to configure access to your Connect SFTP directory and upload your files there.
Examples
Importing products with existing attributes
A grocery retailer receives weekly inventory updates from suppliers. They import new seasonal fruits with pricing and availability information, enabling their marketing team to create targeted campaigns for fresh produce arrivals.
Their CSV file contains 6 products, 4 of which are new. All product attributes in the file are predefined in Connect except for "Available Offline," which is a custom attribute that already exists in the product catalog.
Name,ID,Available offline,Date added,Price,Currency
Apple,APP-1314,TRUE,20/11/2025,4.99,USD
Orange,OR-7810,TRUE,20/11/2025,6.99,USD
Pineapple,PIN-2341,FALSE,20/11/2025,12.99,USD
Mango,MAN-2351,TRUE,10/10/2025,11.45,USD
Lychee,LYC-0091,FALSE,10/10/2025,20,USD
Melon,MEL-1280,TRUE,20/11/2025,5.39,USD
They use the following mutation to import the file to Connect. Keep in mind that for predefined attributes, internal names are required. The complete list is available in the Mutation arguments section below.
mutation {
createImportJob(
importInput: {
dataSetId: "CAT-2025YG"
jobName: "New fruit"
importType: ADD_UPDATE
dateFormat: DAY_MONTH_YEAR_SLASH_SEPARATED
delimiter: ","
fileLocation: {
type: SFTP
filename: "new-fruit.csv"
folder: "products"
}
skipFirstRow: true
mappings: [
{ columnIndex: 1, attributeName: "productName" }
{ columnIndex: 2, attributeName: "productId" }
{ columnIndex: 3, attributeName: "Available Offline" }
{ columnIndex: 4, attributeName: "dateAdded" }
{ columnIndex: 5, attributeName: "unitPrice" }
{ columnIndex: 6, attributeName: "currency" }
]
notifications: [{ channel: EMAIL, destination: "[email protected]" }]
}
) {
id
}
}
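If your files always ship with a header row, the mappings block can be derived from that row instead of written by hand. The sketch below is an illustrative Python helper (not part of Connect) that builds the `mappings` entries for `createImportJob` from the header line of the fruit file above; the header-to-attribute lookup is an assumption based on this example.

```python
import csv
import io

# Hypothetical lookup from this example's header labels to catalog
# attribute names (internal names for predefined attributes,
# display name for the custom "Available Offline" attribute).
HEADER_TO_ATTRIBUTE = {
    "Name": "productName",
    "ID": "productId",
    "Available offline": "Available Offline",
    "Date added": "dateAdded",
    "Price": "unitPrice",
    "Currency": "currency",
}

def build_mappings(header_line: str, delimiter: str = ","):
    """Build the mappings argument from a header row.
    columnIndex is 1-based, matching the mutation examples in this guide."""
    headers = next(csv.reader(io.StringIO(header_line), delimiter=delimiter))
    return [
        {"columnIndex": i + 1, "attributeName": HEADER_TO_ATTRIBUTE[h]}
        for i, h in enumerate(headers)
    ]
```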
Creating custom attributes during import
If a custom attribute doesn't exist in the product catalog yet, you can create it as part of the mutation using the attributes field.
mutation {
createImportJob(
importInput: {
dataSetId: "CAT-2025YG"
jobName: "New fruit with custom attribute"
importType: ADD_UPDATE
dateFormat: DAY_MONTH_YEAR_SLASH_SEPARATED
delimiter: ","
fileLocation: {
type: SFTP
filename: "new-fruit.csv"
folder: "products"
}
skipFirstRow: true
attributes: { create: { name: "Available Offline", type: BOOLEAN } }
mappings: [
{ columnIndex: 1, attributeName: "productName" }
{ columnIndex: 2, attributeName: "productId" }
{ columnIndex: 3, attributeName: "Available Offline" }
{ columnIndex: 4, attributeName: "dateAdded" }
{ columnIndex: 5, attributeName: "unitPrice" }
{ columnIndex: 6, attributeName: "currency" }
]
notifications: [{ channel: EMAIL, destination: "[email protected]" }]
}
) {
id
}
}
Importing custom data
A retailer with a network of offline music shops tracks the availability of their products across the country. In their TSV (tab-separated) file, "Locations" is an array of string values.
| Name | ID | Availability | Category | Price | Currency | Locations |
|---|---|---|---|---|---|---|
| Piano | PIA-100 | In stock | Keys/Acoustic pianos | 3199.99 | GBP | ["Worcester", "Edinburgh", "Dundee", "Leeds", "York"] |
| Violin | VIO-017 | In stock | Strings/Acoustic violins | 116.36 | GBP | ["Cardiff", "Leicester", "Dundee", "Oxford"] |
| Flute | FLU-022 | Backorder | Winds/Flutes | 12.36 | GBP | [] |
| Drums | DRU-012 | Discontinued | Drums/E-Drums | 595.45 | GBP | [] |
| Saxophone | SAX-003 | Out of stock | Winds/Saxophones | 495.33 | GBP | [] |
To import all the data from their file, they need to add "Locations" as a custom attribute to their product catalog in Connect. Note the additional setting for arrays (validateAs), which improves validation.
mutation {
createImportJob(
importInput: {
dataSetId: "CAT-2025YG"
jobName: "New music instruments with array"
importType: ADD_UPDATE
delimiter: "\t"
fileLocation: { type: SFTP, filename: "music-instruments.tsv" }
skipFirstRow: true
attributes: {
create: { name: "Locations", type: ARRAY, validateAs: TEXT }
}
mappings: [
{ columnIndex: 1, attributeName: "productName" }
{ columnIndex: 2, attributeName: "productId" }
{ columnIndex: 3, attributeName: "availability" }
{ columnIndex: 4, attributeName: "category" }
{ columnIndex: 5, attributeName: "unitPrice" }
{ columnIndex: 6, attributeName: "currency" }
{ columnIndex: 7, attributeName: "Locations" }
]
}
) {
id
}
}
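Because the "Locations" values are JSON-style arrays embedded in tab-separated rows, it's worth confirming how such a row splits before uploading. The sketch below is an illustrative Python check (not part of Connect) using one row from the file above.

```python
import csv
import io
import json

# One tab-separated row from the music-instruments file. The 7th
# column holds a JSON array of strings for the "Locations" attribute.
row_text = (
    "Piano\tPIA-100\tIn stock\tKeys/Acoustic pianos\t3199.99\tGBP\t"
    '["Worcester", "Edinburgh", "Dundee", "Leeds", "York"]'
)

# Split the row on tabs; the JSON array contains no tabs, so it stays
# intact as a single field.
row = next(csv.reader(io.StringIO(row_text), delimiter="\t"))
locations = json.loads(row[6])  # decode the "Locations" column
```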
Mutation arguments
`importInput` (required): Object - A JSON object with import settings.
Import input object
- `attributes`: Object - Lets you add new custom attributes to the product catalog.
- `dataSetId` (required): String - The ID of your product catalog. Use the `dataSets` query to get it (instructions).
- `dateFormat`: Enum - Add this property if your file contains dates and their format isn't `yyyy-MM-dd'T'HH:mm:ssXXX`. Specify which date format is used in your file:
| Date format | Enum value |
|---|---|
| `yyyy-MM-dd'T'HH:mm:ssXXX` | `YEAR_MONTH_DAY_DASH_SEPARATED_WITH_TIME` (default) |
| `yyyy-MM-dd` | `YEAR_MONTH_DAY_DASH_SEPARATED` |
| `MM/dd/yyyy` | `MONTH_DAY_YEAR_SLASH_SEPARATED` |
| `dd/MM/yyyy` | `DAY_MONTH_YEAR_SLASH_SEPARATED` |
| `yyyy/MM/dd` | `YEAR_MONTH_DAY_SLASH_SEPARATED` |
| `dd.MM.yyyy` | `DAY_MONTH_YEAR_DOT_SEPARATED` |
- `delimiter` (required): String - The type of delimiter used in the file. Valid values: `,`, `\t`, and `|`.
- `fileFormat`: Enum - File format (valid value: `DELIMITED`).
- `fileLocation` (required): Object - Use this object to locate the file on the SFTP server.
- `importType` (required): Enum - The method of file processing. Valid value: `ADD_UPDATE`, which adds new products and updates existing ones in the product catalog.
- `jobName` (required): String - A name for the import job.
- `mappings` (required): Array of objects - Create a separate object for each imported attribute.
- `notifications`: Array of objects - Create an object for each recipient you want to add.
- `skipFirstRow`: Boolean - Set the value to `true` if the file has a header row (default: `false`).
Attributes object
Used within the importInput object to create new custom product attributes.
`create`: Object - Create a new custom product attribute during import. The object supports the following fields:
- `name` (required): String - The name of the new product attribute. Case-sensitive and unique within the product catalog. It will be used both as the internal name and the display name.
- `type` (required): Enum - The data format of the attribute. Valid values: `TEXT`, `NUMBER`, `BOOLEAN`, `DATE`, `JSON`, `ARRAY`.
- `validateAs`: Enum - Additional details about the data format. This improves validation during product import (only data properly formatted according to this type is accepted). Applies only to product attributes of the `TEXT` and `ARRAY` types. Valid values for arrays: `TEXT`, `NUMBER`, `BOOLEAN`, `DATE`, and `URL`. Valid values for text: `TEXT` (default) and `URL`.
File location object
Used within the importInput object to specify where the text file is located.
- `filename` (required): String - The name and extension of the file.
- `folder`: String - The subfolder on the SFTP server where the file is located. If you uploaded the file to the root folder, you don't need this field.
- `type` (required): Enum - The method of file delivery. Valid value: `SFTP`.
Mappings object
Used within the importInput object to map file columns to product catalog attributes.
Both fields are required for each object in the array:
- `attributeName` (required): String - The name of the product catalog attribute that the column from your file will be mapped to. For custom attributes, the internal name coincides with the display name. For predefined product attributes, you must use the internal names listed below.
| UI label | Internal name (for mutations) |
|---|---|
| Availability | availability |
| Brand Name | brandName |
| Brand Description | brandDescription |
| Category | category |
| Currency | currency |
| Date Added | dateAdded |
| Discount | discount |
| Image URLs | imageUrls |
| Inventory Quantity | inventoryQuantity |
| Model | model |
| MSRP | msrp |
| Product Description | productDescription |
| Product Id | productId |
| Product Name | productName |
| Product Rating | productRating |
| Product Status | productStatus |
| Product URLs | productUrls |
| SKU | sku |
| Tags | tags |
| Unit Price | unitPrice |
- `columnIndex` (required): Integer - The index number of the column in the import file. The first column is 1.
Notifications object
Used within the importInput object to configure job completion notifications.
- `channel` (required): Enum - The channel for the completion notification. Valid value: `EMAIL`.
- `destination` (required): String - The email address for notification delivery.
Response fields
The mutation returns a JSON response containing:
- `data` (required): Object - Root response object.
- `createImportJob` (required): Object - The operation performed.
- `id` (required): String - The ID assigned to the import job. Use this ID to check the current job status in Connect (Data management > Job monitoring). You will get a configuration summary and a report on how many records have been processed.
Example:
{
"data": {
"createImportJob": {
"id": "JOB-ID-OOO"
}
}
}
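If you create import jobs from a script, you'll typically pull the job ID out of this response to track the job afterwards. A minimal Python sketch, using the example payload above:

```python
import json

# The example response payload from this guide.
response_text = '{"data": {"createImportJob": {"id": "JOB-ID-OOO"}}}'

# Navigate data -> createImportJob -> id to get the job ID for
# later status checks in Job monitoring.
job_id = json.loads(response_text)["data"]["createImportJob"]["id"]
```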
