Two failure modes, not one
Most teams experiencing bad AI content assume there is one failure mode: the AI wrote something terrible. There are actually two, and they look completely different.
The first is fluff. This is what happens when product attributes are thin. The AI writes to fill the expected length, and what it fills with (when it has no specific material to work from) is category-level generality. "Comfortable and durable." "Ideal for everyday use." Technically safe. Converts nobody.
The second is the feature dump. The data is there, but the tool has not been set up to know who it is writing for. The AI includes everything it knows, in no particular order, for no particular reader. Accurate. Unreadable. Also converts nobody.
Both failures land in the same place: content that does not do its job. But they have different causes and need different fixes.
Fluff is not a style failure. It is a data failure. The AI writes to fill space; without specific attribute inputs, it fills with generic text that could describe any product in the category. A feature dump is a setup failure: rich data, wrong audience definition, everything included, nothing prioritised. More editing does not fix either. Tweaking the output on one product does not fix either. The fixes are upstream.
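To make the contrast concrete, here is a minimal sketch of what the generation step sees in each failure mode. The field names are illustrative, not any particular tool's schema:

```python
# Hypothetical inputs to an AI content step. Field names are illustrative.

# Failure mode 1: fluff. The configuration is fine, but the attributes
# are thin, so the model has nothing specific to write from and pads
# with category-level filler.
fluff_input = {
    "attributes": {"name": "Trail Jacket", "category": "Outdoor"},
    "config": {
        "audience": "hikers",
        "attribute_priority": ["material", "waterproofing"],
    },
}

# Failure mode 2: feature dump. The attributes are rich, but the
# configuration never says who the content is for or what to lead with.
dump_input = {
    "attributes": {
        "name": "Trail Jacket",
        "category": "Outdoor",
        "material": "3-layer ripstop nylon",
        "waterproofing": "20,000 mm hydrostatic head",
        "weight_g": 310,
        "pockets": 5,
        "packable": True,
    },
    "config": {},  # no audience, no priority: everything, in no order
}
```

The first fix lives in the attributes. The second lives in the config. Neither lives in the output text.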
Fixing one product at a time is not a catalogue solution
When AI content is bad, the natural instinct is to fix the output. Tweak the wording. Try adjusting the settings for that product.
This works, up to a point, on one product at a time. It does not scale.
Adjusting the output on a single product is a one-off fix. The configuration that governs how your AI generates content runs automatically across every product in the catalogue. Fix one product manually, and the next one has the same problem. Fix the underlying configuration, and every product benefits.
If your AI content has a quality problem across the catalogue, the question is not "how do I tweak this product's output?" It is "does my AI content tool know who it is writing for? Does it know which attributes matter most to that buyer?" If not, every product has the same problem. Fixing them one at a time is not a solution to a catalogue-level issue.
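One way to picture the difference, as a sketch with hypothetical names rather than a real tool's API: a per-product tweak edits one output, while the generation configuration is a single object every product passes through.

```python
# A hypothetical generation loop. Editing one product's description
# changes one output; editing channel_config changes every output
# from here on.
channel_config = {
    "audience": "parents buying for children aged 3 to 6",
    "attribute_priority": ["safety_certification", "age_range", "material"],
}

catalogue = [
    {"attributes": {"name": "Wooden Puzzle", "age_range": "3-6",
                    "material": "beech", "safety_certification": "EN 71"}},
]

def generate_description(attributes: dict, config: dict) -> str:
    """Stand-in for the AI call: writes from attributes, ordered by config."""
    ordered = [f"{k}: {attributes[k]}"
               for k in config.get("attribute_priority", []) if k in attributes]
    return f"For {config.get('audience', 'anyone')}. " + "; ".join(ordered)

for product in catalogue:  # the same config governs every product
    product["description"] = generate_description(product["attributes"], channel_config)

print(catalogue[0]["description"])
```

Hand-editing a description after the loop is the per-product tweak; the next run overwrites it. Changing `channel_config` is the catalogue-level fix.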
Three causes of bad attribute data, three different fixes
Before the AI writes content, attributes must exist. In most operations they arrive sparse. A product comes in from an ERP with a name, a category, sometimes an image. The AI fills in what it can infer from that. What is left, the reviewer resolves.
Bad attributes entering the content step come from three distinct sources, each requiring a different fix; a sketch of that routing follows the list:
- Wrong AI inference. The AI made a confident but incorrect guess. A product meant for adults was inferred as for children. A seasonal theme filed as Christmas instead of Easter. Fix: correct it at attribute review, and adjust how the AI enrichment is configured so it does not repeat the same mistake.
- Missing supplier data. The information does not exist in any digital form yet. It must be requested from a supplier or a colleague. This is a research and chase problem, not an AI problem. The fix is a process.
- Incomplete configuration. The AI had the information but was not told to use it. The product's Size attribute was correctly inferred, but the eBay Size field was left empty because nobody had set the tool up to carry the value across. Fix: update the configuration. That requires someone who understands how the AI is set up, and once done it fixes every product in the catalogue.
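A rough way to encode the distinction, with hypothetical names (your tool's model will differ): each cause routes to a different fix, and collapsing them into one "data quality" bucket loses exactly that routing.

```python
from enum import Enum, auto

class AttributeIssue(Enum):
    WRONG_INFERENCE = auto()    # AI guessed confidently and wrongly
    MISSING_SOURCE = auto()     # the data exists nowhere digital yet
    INCOMPLETE_CONFIG = auto()  # the data exists; the tool was never told to use it

def route_fix(issue: AttributeIssue) -> str:
    """Each cause gets a different intervention; one bucket would hide this."""
    return {
        AttributeIssue.WRONG_INFERENCE: "correct at review, then adjust enrichment setup",
        AttributeIssue.MISSING_SOURCE: "chase supplier or colleague (a process, not an AI fix)",
        AttributeIssue.INCOMPLETE_CONFIG: "update configuration once; fixes the whole catalogue",
    }[issue]
```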
All three produce the same symptom: a missing or wrong attribute, which becomes thin or incorrect content. Treating them as a single "data quality problem" leads to the wrong intervention every time.
Attribute review: the gate that does two jobs
Attribute review is the human gate between AI enrichment and content writing. It exists because the AI writes from whatever the attributes say. An error caught here costs minutes. The same error caught after content generation has spread across every channel version of that product and must be hunted down in each one.
What the reviewer is actually doing is two structurally different things. First: correcting AI guesses that are wrong. Second: surfacing gaps that need human research, such as a supplier to contact or a colleague to ask. These require different actions, but today they look identical in the workflow. An empty field could mean the AI could not infer the answer, or it could mean the answer does not exist anywhere yet. The reviewer judges which it is.
If the field is optional, it can slide. If it is required, it must be resolved before the product moves forward. Sound familiar?
One honest limitation worth naming: the workflow does not yet distinguish between "AI was uncertain" and "this data does not exist." Both appear as an empty field. Flagging them differently would make attribute review faster to scope. That is a capability gap, not a current feature.
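As a sketch of what that flagging could look like once built (all names hypothetical, since no such feature exists today):

```python
from dataclasses import dataclass
from enum import Enum, auto

class EmptyReason(Enum):
    AI_UNCERTAIN = auto()  # the model declined to guess; worth a second pass
    NO_SOURCE = auto()     # nothing to infer from; needs human research

@dataclass
class ReviewField:
    name: str
    value: str | None
    empty_reason: EmptyReason | None = None  # only set when value is None

# A reviewer could then scope the work before opening a single product:
fields = [
    ReviewField("material", None, EmptyReason.AI_UNCERTAIN),
    ReviewField("country_of_origin", None, EmptyReason.NO_SOURCE),
]
to_recheck = [f.name for f in fields if f.empty_reason is EmptyReason.AI_UNCERTAIN]
to_chase = [f.name for f in fields if f.empty_reason is EmptyReason.NO_SOURCE]
```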
What good content actually looks like
A well-configured AI content tool knows who it is writing for, and uses that to decide which attributes to introduce, in what order, and at what level of detail. The goal is not to include all attributes. It is to include the ones relevant to that buyer, in the order they care about them.
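As an illustration with a hypothetical persona setup, not a real tool's settings: "knowing who it is writing for" can be as simple as an ordered allowlist of attributes per buyer profile.

```python
# Two buyer profiles for the same product; same attributes, different
# selection and order. The goal is relevance, not completeness.
attributes = {
    "material": "full-grain leather",
    "weight_g": 890,
    "laptop_sleeve": "fits 16-inch",
    "warranty_years": 10,
    "colourways": 6,
}

profiles = {
    "commuter": ["laptop_sleeve", "weight_g", "warranty_years"],
    "gift_buyer": ["material", "colourways", "warranty_years"],
}

def select_for(profile: str) -> list[tuple[str, object]]:
    """Pick only the attributes this buyer cares about, in their order."""
    return [(k, attributes[k]) for k in profiles[profile] if k in attributes]

print(select_for("commuter"))    # leads with the laptop sleeve, not the leather
print(select_for("gift_buyer"))  # leads with the leather, not the weight
```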
The result is content that is specific and channel-adjusted. Not necessarily longer than thin-attribute content. More specific. Better targeted at the person making the purchase decision.
Quality product content is not measured in word count. It is measured in specificity and channel fit. A product description that tells the right buyer exactly what they need to know, without making them read through what they do not, will outperform a generic paragraph of the same length every time.
How much of the output quality comes from attribute completeness versus a well-defined brand voice? Honestly: nobody has measured it cleanly. Both matter. What is clear is that you cannot close the quality gap by tuning one and ignoring the other.
The sequence is load-bearing
The workflow is: enrich attributes, review them, then generate content. Each step depends on the one before it.
Product properties are the shared upstream dependency. Whether a human writer or an AI tool produces the content, both are working from the same attribute foundation. Thin attributes produce thin content. Wrong attributes produce wrong content. Neither is recoverable at the content step without going back upstream.
The choice between a more automated workflow and a more hands-on one is yours to make. More automation means faster throughput and lighter per-product control. More hands-on means more quality control and slower scale. That is a quality-versus-quantity tradeoff the operator makes. What does not change is the sequence: enrich, review, generate. Compressing or skipping the review step does not save time. It moves the cost downstream, where it is more expensive to fix.
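The sequence itself can be sketched as a pipeline in which the review gate is non-optional, whatever the level of automation. Function names here are hypothetical stubs, not a vendor's API:

```python
def enrich(product: dict) -> dict:
    """Step 1: AI fills in what it can infer. Stub for illustration."""
    product.setdefault("attributes", {})
    return product

def review(product: dict, auto_approve: bool = False) -> dict:
    """Step 2: the human gate. Automation tunes how light the gate is,
    not whether it exists; required fields must be resolved here."""
    missing = [k for k in ("name", "category") if k not in product["attributes"]]
    if missing and not auto_approve:
        raise ValueError(f"resolve before generation: {missing}")
    return product

def generate(product: dict) -> str:
    """Step 3: content is written from reviewed attributes only."""
    return f"Description built from {sorted(product['attributes'])}"

# The order is fixed; only the strictness of step 2 is a dial.
product = enrich({"attributes": {"name": "Trail Jacket", "category": "Outdoor"}})
print(generate(review(product)))
```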
Frequently Asked Questions
Why does AI produce generic product descriptions?
Generic AI content is almost always a data problem. When product attributes are thin or incomplete, the AI has no specific material to draw from and fills space with category-level generalities. Tweaking the output helps on one product. Completing the attributes fixes the problem across the catalogue.
What is the difference between fluff and a feature dump in AI content?
Fluff is generic content produced when attribute data is thin: the AI fills space with nothing specific. A feature dump is the opposite: accurate content from rich data, but without a defined audience, so the AI includes everything in no order useful to any particular buyer. Fluff is a data problem. A feature dump is a configuration problem.
What data does AI need to write good product descriptions?
Structured attributes that are complete and correct for the product type: material, target audience, use case, key specifications. Plus an AI tool that is configured to know who the content is for, so it knows which attributes to prioritise. Rich attributes with a vague setup produce feature dumps. Well-configured tools with thin attributes produce fluff.
What is attribute review and why does it happen before content generation?
Attribute review is the human check between AI enrichment and content writing. It corrects AI errors and surfaces gaps requiring supplier or team input. It matters because the AI writes from whatever the attributes say. Wrong attributes produce wrong content, which then propagates across every channel version of the product.
Does tweaking the output improve AI product content quality?
Per product, yes. Across the catalogue, no. Adjusting the output manually is a one-off fix. The configuration that governs how the AI generates content runs automatically at scale. Fix one product and the next one has the same problem. Fix the configuration and every product benefits. If you are fixing products one at a time, you are patching a catalogue-level problem item by item.
What causes attribute review to take longer than it should?
Two things. First, AI enrichment that is poorly configured: it fills fields incorrectly or leaves fillable fields empty. Both create more review work than well-configured enrichment would. Second, missing supplier data: the reviewer hits an empty field with no way to know whether the AI could not infer the answer or whether the information simply does not exist yet. Distinguishing these two cases in the workflow would make attribute review significantly faster to scope.