What "connect in minutes" actually covers
Setting up the API connection in OneSila is fast. But before a single product flows cleanly, four things need to happen:
- Pull existing products from the marketplace (and inherit their data model)
- Map your attribute schema to the marketplace's requirements
- Push products (errors surface where mapping is incomplete)
- Resolve errors and exceptions
The "connect in minutes" part is the API setup. Steps two, three, and four are where the weeks go. Every marketplace has its own language. What your PIM calls "Colour" the marketplace might call "item_colour" and expect only specific controlled values. None of this is hard. But all of it takes time, and the amount depends on a decision you need to make before you push anything.
Two approaches to the mapping work
In OneSila, before products flow, you have a choice.
Frontload everything. Map all your attributes to the marketplace's requirements before pushing a single product. Clean from day one, no launch delays, predictable. Cost: anywhere from one day for a small, clean catalogue to two weeks for a large or complex one. It also generates a lot of mappings that will never be used in practice.
Map on exceptions. Push products and resolve mapping gaps as they surface. The first ten or so products require back-and-forth. A value is not mapped, a product fails, someone resolves it. After that, one person handles all marketplace exceptions while the listing team continues working. The mapping work is absorbed into the flow. Most of the listing team never notices it.
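The exception-based flow can be sketched in a few lines. All names are hypothetical, and the `resolve_gap` function is a toy stand-in for the person who fixes each mapping as it surfaces:

```python
# Sketch of exception-based mapping: push, fail, resolve, retry.
VALUE_MAP = {"item_colour": {"Pink": "pink"}}

def translate(product):
    out = {}
    for field, value in product.items():
        mapped = VALUE_MAP.get(field, {}).get(value)
        if mapped is None:
            raise LookupError(f"{field}:{value}")  # a mapping gap
        out[field] = mapped
    return out

def resolve_gap(gap):
    # Stand-in for the operator who resolves the gap; here we just
    # add a lowercase mapping for the missing value.
    field, value = gap.split(":")
    VALUE_MAP.setdefault(field, {})[value] = value.lower()
    return True

def push_all(products):
    published, failed = [], []
    for p in products:
        try:
            published.append(translate(p))
        except LookupError as gap:
            if resolve_gap(str(gap)):
                published.append(translate(p))  # retry after the fix
            else:
                failed.append((p, str(gap)))
    return published, failed

pub, failed = push_all([{"item_colour": "Pink"}, {"item_colour": "Navy"}])
# pub == [{'item_colour': 'pink'}, {'item_colour': 'navy'}], failed == []
```

Each resolved gap enriches the mapping table, which is why the back-and-forth tapers off after the first batch of products.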
In our experience, exception-based mapping is generally the better approach. It keeps things moving and focuses effort on what actually comes up. The exception: teams that need a clean launch from day one, or whose workflow cannot tolerate any initial failures.
One additional step applies when you have existing products on the marketplace: map the values from the pulled data and re-run the import before starting new listing work. This ensures historic data is correctly ingested and builds the data trust needed for bulk operations later.
What you inherit when you pull from a marketplace
Here is something vendors do not mention. When you pull your existing products from a marketplace, you do not just import your product data. You import their data model too.
Years of seller submissions. Their attribute naming conventions. Their select value vocabulary. Their category assignments from years past. All of it comes across. A single attribute like colour, pulled from a large established marketplace account, can easily yield thousands of distinct values. Not because your data is bad. Because the marketplace accepted whatever sellers entered over the years, and you have now inherited it.
The same happens in reverse: years of your own team entering free-text values instead of selecting from dropdowns accumulate as mapping debt. A colour field that should contain "Pink" may contain a full product description. Neither the marketplace's mess nor your own is unusual. Both must be resolved before products flow cleanly.
For configurable attributes specifically (colour, size, style), marketplace configurator requirements often conflict with a unified internal attribute model. The fix most teams arrive at: channel-specific attributes. Amazon-colour, eBay-colour, Shein-colour, alongside a generic colour for simple channels and owned websites. Yes, it adds some duplication. But it stops the constant mapping adjustments required to push configurable products through. Recommended upfront, before the first products are pushed. The signal that you need it: finding yourself adjusting mappings or product rules repeatedly just to get products to publish.
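A minimal sketch of the channel-specific pattern, with hypothetical attribute names: each channel can carry its own value, with the generic attribute as fallback for simple channels and owned websites.

```python
# Hypothetical product record with channel-specific configurable
# attributes alongside a generic fallback.
product = {
    "colour": "Navy",             # generic, used by the owned website
    "amazon-colour": "Navy Blue"  # value from Amazon's configurator vocabulary
}

def channel_value(product, channel, attr):
    """Prefer the channel-specific attribute; fall back to the generic."""
    return product.get(f"{channel}-{attr}", product.get(attr))

print(channel_value(product, "amazon", "colour"))  # Navy Blue
print(channel_value(product, "ebay", "colour"))    # Navy
```

The duplication is deliberate: each marketplace's configurator vocabulary lives in its own attribute instead of forcing repeated mapping adjustments on one shared field.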
The cost that does not show up at launch
Here is the nuance that changes how you think about data quality: dirty attribute data does not significantly delay time to first product live. The launch timeline is roughly the same whether your data is clean or not.
The real cost sits elsewhere. It is daily drag. Black, black, BLACK. Small, small, S, XSmall, Extra-Small, XS. When attribute values have accumulated in multiple variants, every operator picks through them on every product. It takes a second to select "Small" from one clean value. It takes judgement and time to decide which of eight variants is correct. That drag compounds across the catalogue, indefinitely, until the data is cleaned.
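The cleanup work amounts to building a canonical-value table and collapsing variants through it. A sketch, with an illustrative synonym table; deciding what belongs in that table is the manual, judgement-heavy part described above:

```python
# Illustrative value cleanup: collapse accumulated variants into one
# canonical value per concept. The synonym table is built by hand.
CANONICAL = {
    "black": "Black",
    "small": "Small", "s": "Small",
    "xsmall": "XS", "extra-small": "XS", "xs": "XS",
}

def normalise(value: str) -> str:
    """Map a raw value to its canonical form; pass unknowns through."""
    return CANONICAL.get(value.strip().lower(), value)

raw = ["Black", "black", "BLACK", "Small", "S", "XSmall"]
cleaned = sorted({normalise(v) for v in raw})
print(cleaned)  # ['Black', 'Small', 'XS'] — six variants, three values
```

Once values are collapsed, operators pick from one clean dropdown instead of judging between eight variants on every product.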
Marketplaces enforce their own attribute structure, but that structure itself grew organically over years of platform evolution and seller submissions. Pulling from Amazon, eBay, or Shein imports that complexity. Tooling exists to clean it up, but it is a large process and not a one-time fix. Taxonomy is not easy to manage across marketplaces at scale. Anyone who tells you it is does not understand the complexity.
Where do you stand? The three-tier test
Before starting any new channel expansion project, ask: how clean is the product data, and how long will it take to get it where it needs to be?
In all three scenarios below, products flow to a new channel gradually — nobody switches on a thousand SKUs overnight. What differs is the pace of that flow and how much friction sits between each product and going live.
- Data well-structured and in a PIM: products start flowing within days. The pace is fast from the start because the mapping work is minimal and the data is ready.
- Data in a PIM but needs work: products can start flowing, but the pace is slower. Cleaning and mapping happen alongside deployment, which creates a longer runway to full coverage.
- No structured product data foundation: meaningful product flow can't start until the foundation is built. That work comes first, which is where the years go — not in the listing itself, but in what has to happen before listing is even possible.
This is not about the API. It is about the data preparation that precedes it. The three tiers describe your deployment cadence, not a go-live switch.
It sounds harder than it is
With the right approach, this work is visible, manageable, and done once per channel.
In OneSila, exception-based mapping means the listing team keeps working while one person handles the integration layer. The three-tier test tells you what you are getting into before you commit. Deciding upfront on channel-specific configurable attributes prevents weeks of mid-launch friction. Errors surface in real-time against the product record, so nothing sits in an unknown failed state.
The problem is not the complexity. The problem is that nobody tells you what to expect before you start.
Now you know.
Frequently Asked Questions
How long does it take to set up a new marketplace integration?
The API connection takes minutes. The data preparation that gates it takes one day to two weeks depending on catalogue size and complexity. Teams that frontload all attribute mapping in advance should expect the longer end of that range. Teams using an exception-based approach will find the first 10-20 products take back-and-forth over a few days, after which the process runs smoothly.
What is attribute mapping in ecommerce marketplace integration?
Attribute mapping is connecting your internal product data fields to the fields the marketplace expects. Your PIM might store "Colour"; the marketplace calls it "item_colour" and only accepts specific controlled values. For every attribute and every value, a mapping must exist before products can publish. The number of mappings required depends on catalogue size, product complexity, and how clean the existing data is.
Why are my marketplace product listings failing to publish?
Listing failures almost always trace back to mapping gaps: an attribute the marketplace requires that has no corresponding mapped value, or a value that does not match the marketplace's controlled vocabulary. The fix is to resolve the mapping for that attribute or value. If failures are frequent across many products, the underlying issue may be an attribute model that needs restructuring for that marketplace.
What causes thousands of unmapped attribute values when connecting to a new marketplace?
Two sources. Your own historical data: free-text values entered instead of selecting from dropdowns accumulate over years. The marketplace's own accumulated data: pulling existing products imports their select value vocabulary, years of seller submissions accepted regardless of consistency. Both sources combine to create the mapping debt that must be resolved before products flow cleanly.
How do you manage product data across multiple marketplaces without constant rework?
Three practices help: use exception-based mapping so one person handles the integration layer without disrupting the listing team; decide upfront on channel-specific attributes for configurables to avoid mid-launch incompatibilities; and invest in cleaning attribute data that causes daily drag. Ongoing maintenance is triggered by marketplace taxonomy changes and new product ranges.
What is the difference between a PIM and a listing app for multi-channel ecommerce?
Listing apps get products live quickly but offer no centralised attribute management, no framework for ongoing maintenance, and no way to re-use data across marketplaces. A PIM centralises the attribute model, maps it once to each marketplace, and handles exceptions in real-time. The upfront work is greater; the ongoing maintenance is significantly lower.