Some pertinent insights on the more stringent FTC rules can be found here: http://www.soundwise.org/gethelp/specratings.htm. Highlights mine. You can then draw your own conclusions from these.
2. Before the start of the output tests, the amplifier, whether it be a part of a receiver, integrated amplifier, or separate power amplifier, must be "pre-conditioned" by SIMULTANEOUS OPERATION of ALL channels at one-third its rated power for an hour, using a 1k Hz sine wave. Only after this pre-conditioning are the ratings made!!
This particular requirement is where the headaches and controversy for the manufacturers came into play!! Critics of this requirement argued that, even under the most demanding conditions, a home music amplifier would never have to produce one-third power for an hour while being driven by an uninterrupted sine wave. Operated in this mode, many otherwise excellent amplifiers overheated, and their protective systems automatically shut them off!!
When that happened, the amplifier flunked the test cycle, and the manufacturer then had to lower the unit's STATED power rating so that, at one-third its rated power, it could survive this portion of the test!!

3. With the "pre-conditioning" out of the way, the manufacturer then began the output rating process for the particular amplifier. This is where the amplifier HAD to produce the stated watts-per-channel rating, with all channels driven, and maintain that power rating throughout the frequency range SPECIFIED by the MANUFACTURER.
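The pre-conditioning and rated-power requirements above come down to simple arithmetic. Here is a minimal Python sketch; the 100-watt figure and the helper names are my own illustration, not anything from the rule itself:

```python
import math

def preconditioning_level(rated_watts):
    """FTC pre-conditioning level: one-third of rated power, all channels driven."""
    return rated_watts / 3.0

def rms_voltage(power_watts, load_ohms=8.0):
    """RMS voltage sustained into a resistive load: V = sqrt(P * R)."""
    return math.sqrt(power_watts * load_ohms)

# A hypothetical 100 W/channel amplifier into 8 ohms:
rated = 100.0
pre = preconditioning_level(rated)   # ~33.3 W per channel, held for one hour
v_full = rms_voltage(rated)          # ~28.3 V RMS at full rated power
```

So a "100 watts per channel" claim meant baking at roughly 33 watts per channel on every channel at once for an hour before the real test even started.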
The importance of this part of the testing is that amplifiers generally deliver maximum power at a mid frequency, such as 1k Hz, and power falls off drastically toward the extremities of the audio spectrum!
Under the FTC rules, amplifiers HAD to be able to deliver the stated power at ALL frequencies within their specified bandwidth, NOT just the mid-frequencies!! Now, even though the manufacturer got to state the "specified frequency bandwidth," it was the consumer who benefited. If one amp was rated at 50 watts per channel from 40-15k Hz, while another amp was rated at only 45 watts per channel BUT from 20-20k Hz, that told the consumer that "something is up" with the first one's rating: the second one's rating better covered the audio spectrum with its wider range of frequencies, and its manufacturer was being more conservative than the other. This requirement led manufacturers to generally adopt the "20-20k Hz" rating in order to "hold their own" in the industry and to delineate the "JUNK FROM THE GOOD STUFF" of that time!!
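The bandwidth comparison above can be reduced to a one-line check. A hypothetical helper (the function name and the two example ratings mirror the text, nothing more):

```python
def covers_full_audio_band(low_hz, high_hz):
    """True if a power-bandwidth spec spans the conventional 20 Hz - 20 kHz audio range."""
    return low_hz <= 20 and high_hz >= 20_000

# The two ratings from the example above:
amp_a = covers_full_audio_band(40, 15_000)   # 50 W/ch, but only 40 Hz - 15 kHz
amp_b = covers_full_audio_band(20, 20_000)   # 45 W/ch over the full audio band
```

The narrower spec hides how the amplifier behaves at the frequency extremes, which is exactly where power output tends to sag.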
4. Another requirement was that the ratings had to state the load impedance at which the rating was made...normally either 4, 8, or 16 ohms, since these were the general ohm ratings for loudspeakers...and most manufacturers settled into a standard 8 ohm rating for their testing.
5. The FTC also required that manufacturers state the MAXIMUM total harmonic distortion produced by the amplifier from 0.25 watts up to its FULL RATED POWER across its SPECIFIED BANDWIDTH. Like power ratings, distortion ratings are likely to look superior if measured only at mid-frequencies, and less impressive at the high- and low-frequency extremes!! All of the above tests had to be carried out in an environment with a temperature of 77 degrees Fahrenheit or higher.
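For reference, total harmonic distortion is just the RMS sum of the harmonic components relative to the fundamental. A minimal sketch; the measurement values are invented for illustration:

```python
import math

def thd_percent(fundamental_vrms, harmonic_vrms):
    """THD as a percentage: RMS sum of harmonics divided by the fundamental."""
    return 100.0 * math.sqrt(sum(v * v for v in harmonic_vrms)) / fundamental_vrms

# Hypothetical reading: 20 V RMS fundamental with small 2nd and 3rd harmonics.
thd = thd_percent(20.0, [0.02, 0.01])   # roughly 0.11 %
```

The FTC figure had to be the worst case over the whole power range and bandwidth, not a flattering single-point number like this one.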
Equipment manufacturers were allowed to provide additional ratings based on other test methods, provided they were "well-known and generally accepted by the industry." BUT, these other ratings had to be displayed LESS PROMINENTLY than the FTC-required ratings, IOW set in a much smaller typeface!