Seedbed: Separating the wheat from the chaff in product claims
Wednesday, January 16, 2008
Some tips to help you look critically at the glossy literature and all the product claims to see whether they really represent something that will work on your farm
by KEITH REID
The mailbox is heavy with brochures and flyers, full of glowing product claims, as each company works hard to get your business.
If all these claims were true, your yields would be sure to break all records, but if they're not true, does that mean that all marketing departments are liars?
We do have laws to protect us from outright fraud, so there has to be some support for the claims being made, but the job of a marketer is to present their product as favourably as possible. Your job is to sort out which of the many choices are best for your operation.
It is popular to bash statistics as the tool of the charlatan, as Mark Twain did when he said that there are "lies, damned lies and statistics." This is unfortunate because the original purpose of statistics is to sort out when a difference is probably real, and when it isn't. Statistics can be misused, but they can also be helpful in deciding how much you should believe a particular claim.
Many of the statistical methods used today were actually developed for agricultural trials, recognizing that there was random variability in every field. Some method was needed to separate out which differences were due to the treatments being tested, and which came from the "background." This is where the whole notion of "least significant differences" came from.
The hardest notion to get your head around is that a difference you can see may not be "significant," and this is probably where the use of statistical shorthand gets in the way of understanding. The measured difference is real - we've measured it, we can see it - but the uncertainty is around whether the difference is due to the treatments we are comparing, or to random variability.
How big a difference we need to be pretty sure it is due to our treatment will depend on the number of comparisons (more repetitions of the same comparison will detect smaller differences), and the amount of "background noise." More random variability equals more difficulty in telling if differences are actually due to the treatments. Replicated plots don't just increase the number of comparisons; they also provide a measure of how much variability there is between plots.
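As a sketch of the arithmetic behind a replicated comparison (the yield numbers here are hypothetical, and the t-value of 2.447 is the standard 95 per cent figure for six degrees of freedom), the scatter between replicate plots gives an estimate of the "background noise," and the least significant difference is built from that estimate:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical yields (bu/ac) from four replicated plots per treatment
control = [152.0, 148.0, 155.0, 150.0]
treated = [158.0, 151.0, 160.0, 154.0]

n = len(control)
diff = mean(treated) - mean(control)

# Plot-to-plot scatter within each treatment estimates background noise
pooled_var = (stdev(control) ** 2 + stdev(treated) ** 2) / 2
se_diff = sqrt(2 * pooled_var / n)

# Least significant difference at 95 per cent (t = 2.447 for 6 d.f.)
lsd = 2.447 * se_diff

print(f"Observed difference: {diff:.1f} bu/ac")
print(f"Least significant difference: {lsd:.1f} bu/ac")
print("Likely a treatment effect" if diff > lsd
      else "Could just be background noise")
```

With these numbers the treated plots average 4.5 bu/ac more, but the least significant difference works out to about 6.1 bu/ac, so the visible gap could still be nothing but plot-to-plot variability - exactly the situation described above.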
What does all this mean, when you are trying to decide whether to use the hottest new product on your acres?
It means that you should look critically at the glossy literature and all the product claims to see whether they really represent something that will work on your farm, at least most of the time. Some things to watch out for:
The data should be based on many trials, preferably over a number of years. One-site, one-year comparisons are really only valid if growing conditions on your farm match that single set of conditions.
The data should be from comparisons similar to your location. Trial results from Iowa or Israel are not as relevant to your farm as trials conducted in Ontario. They may not mean anything at all.
There should be some measure of statistical validity to any differences; that is, some indication of the margin of error, and whether the differences shown are likely to be real. Advertisements don't include this, because marketers don't like to portray uncertainty, so you will have to dig deeper to find this.
The comparison should be measuring something which really matters to you. It is not unusual to see graphs or tables built around whatever factor shows the product being sold to the greatest advantage, even if this is not something that will give any particular benefit to you.
Look carefully at the scale in any graphs. A wide spacing on a graph may just mean that the scale has been chosen to exaggerate the difference between treatments.
Not every statistically significant difference is due to the treatment. The usual statistical test declares a difference significant when there is a 95 per cent probability it is due to the treatment rather than to chance. This means that five per cent of the time, or once in 20 comparisons, a difference that large will show up purely by chance.
Compare prices as well as effectiveness to make sure that you are getting the product with the greatest potential benefit. Enough extra yield to pay for increased cost, or the same yield for a lower cost, will both help the bottom line.
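That once-in-20 figure can be seen in a small simulation (all numbers here are hypothetical): compare two "treatments" that are truly identical, many times over, and count how often a four-replicate comparison crosses the 95 per cent threshold by chance alone.

```python
import random
from statistics import mean, stdev
from math import sqrt

random.seed(1)

def significant(a, b, t_crit=2.447):
    # Two-sample comparison at 95 per cent confidence (6 d.f.)
    pooled_var = (stdev(a) ** 2 + stdev(b) ** 2) / 2
    se = sqrt(2 * pooled_var / len(a))
    return abs(mean(a) - mean(b)) > t_crit * se

# Two "products" that are actually identical: same mean yield, same noise
trials = 10_000
false_positives = 0
for _ in range(trials):
    a = [random.gauss(150, 5) for _ in range(4)]
    b = [random.gauss(150, 5) for _ in range(4)]
    if significant(a, b):
        false_positives += 1

print(f"{false_positives} of {trials} comparisons looked "
      "'significant' by chance alone")
```

Run this and roughly 500 of the 10,000 comparisons - about one in 20 - will look "significant" even though the two products are identical, which is why a single flattering trial result deserves skepticism.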
What about product endorsements? Companies use product endorsements because we tend to believe the statements of a real person who is supporting a particular product.
The trouble with endorsements is that they are always subjective and are not usually based on any comparisons between products at all. Statistically speaking, a product endorsement is a single-site, non-replicated observation with no comparison.
It is fine if they are used to add credibility to a solid set of data, but you should never be swayed by endorsements alone. BF
Keith Reid is soil fertility specialist with the Ontario Ministry of Agriculture, Food and Rural Affairs based in Stratford. keith.reid@ontario.ca