April 20, 2024

What's in the Secret VEIL Test Results?

I wrote last week about how the analog hole bill would mandate use of the secret VEIL technology. Because the law would require compliance with the VEIL specification, that spec would effectively be part of the law. Call me old-fashioned, but I think there’s something wrong when Congress is considering a bill that would impose a secret law. We’re talking about television here, not national security.

Monday’s National Journal Tech Daily had a story (subscribers only; sorry) by Sarah Lai Stirland about the controversy, in which VEIL executive Scott Miller said “the company is willing to provide an executive summary of test results of the system to anyone who wants them.”

Let’s take a look at that test summary. The first thing you’ll notice is how scanty the document is. This is all the testing they did to validate the technology?

The second thing you’ll notice is that the results don’t look very good for VEIL. For example, when they tested to see whether VEIL caused a visible difference in the video image, they found that viewers did report a difference 29% of the time (page 4).

More interesting, perhaps, are the results on removability of the VEIL watermark (page 2). They performed ten unspecified transformations on the video signal and measured how often each transformation made the VEIL watermark undetectable. Results were mixed, ranging from 0% success in removing the watermark up to 58%. What they don’t tell us is what the transformations (which they call “impairments”) were. So all we can conclude is that at least one of the transformations they chose for their test can remove the VEIL watermark most of the time. And if you have any experience at all in the industry, you know that vendor-funded “independent” studies like this tend to pick easy test cases. You have to wonder what would have happened if they had chosen more aggressive transformations to try. And notice that they’re refusing to tell us what the transformations were – another hint that the tests weren’t very strenuous.
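
To make the shape of that removability test concrete, here is a minimal sketch of the kind of per-impairment measurement the summary seems to describe. Everything in it is a stand-in: we don’t know what VEIL’s detector does or what the ten impairments were, so the impairment names, the detector, and the numbers below are invented purely for illustration.

```python
# Hypothetical sketch of a watermark-robustness harness.  The impairments,
# the detector, and the "strengths" below are invented; the summary does
# not say what VEIL's detector or its ten "impairments" actually are.

import random

random.seed(0)

# Stand-in "impairments", each with a made-up strength in [0, 1] that
# controls how likely it is to destroy the (pretend) watermark.
IMPAIRMENTS = {
    "recompress_low_bitrate": 0.55,
    "crop_and_rescale": 0.30,
    "resample_framerate": 0.10,
    "add_mild_noise": 0.05,
}

TRIALS = 100

def mark_survives(strength):
    """Pretend detector: the mark survives with probability 1 - strength."""
    return random.random() > strength

for name, strength in IMPAIRMENTS.items():
    removed = sum(not mark_survives(strength) for _ in range(TRIALS))
    print(f"{name}: watermark removed in {removed}/{TRIALS} trials "
          f"({100 * removed / TRIALS:.0f}%)")
```

Laid out this way, the problem with the summary is plain: a table of removal percentages with the impairment column effectively redacted tells us almost nothing about how robust the watermark really is.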

The VEIL people have more information about all of these tests, but they are withholding the full testing report from us, even while urging our representatives to subject us to the VEIL technology permanently.

Which suggests an obvious question: What is in the secret portion of the VEIL testing results? What are they hiding?

Comments

  1. And LIVE.

  2. It is also an anagram for VILE

  3. Is it just a coincidence that ‘VEIL’ is an anagram for ‘EVIL’ by the way?

  4. If we get stuck with it, hopefully it’s at least better than the horrible orange dots that movies have these days. I wonder what percentage of people noticed those in *that* set of consumer trials.

    Yep, I love paying $15 for a worse theater experience than I get at home.

  5. seaan: there are some hints about the technology used in the test: (page 5): A DVD player and a “broadcast video server”. No indication about whether the viewers were professional video engineers or randomly selected consumers. My experience is that it takes some training to recognise a specific type of image distortion, so even a video engineer without previous knowledge of the watermark and its effects would have had a slim chance of identifying which of the clips would have a watermark.

  6. As an audiophile I also noted the lack of details about what was shown. I would presume these were HDTV clips, but did not see confirmation of that in the brief.

    The display type could also make a big difference in visibility (some LCDs have problems with responsiveness, both plasma and LCD don’t render below black very well, etc.). Did they just choose people off the street and have them watch on a cheap SDTV, or did they get the “golden eye” videophile with $30,000 projectors? Since this was a customer-sponsored test I think I know the answer to that last question.

  7. If I were setting up such tests, I could easily bias the results by selecting test data which would tend to obscure the watermark, or test data that would maximize its visibility. I see no reason to suspect that people would have done the latter, and plenty of reason to suspect the former.

    Also, knowing that the “average” viewers identified a watermarked video some portion of the time is meaningless. What would be much more meaningful would be the data for each viewer. If there were any viewer, for example, who could correctly spot the watermarked video 95% of the time, that would suggest that the watermark produces visible degradation, even if most viewers didn’t happen to see it.

    Another thing to consider is that many such technologies have various adjustable “tolerance” values. It may be that one particular implementation can be “fooled” 42% of the time by impairing the video in some particular fashion, but Sony et al. might respond by requiring units to be more “aggressive” in thinking that they see the watermark. This might considerably increase the false-detection rate; a toy illustration of that trade-off is sketched below.
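
    A toy model makes that trade-off concrete. Everything in it is invented (the scores, the noise, the thresholds); it is not a description of how VEIL’s detector actually works, only of how any threshold-based detector behaves as you turn the knob.

    ```python
    # Toy model of a threshold-based watermark detector.  The scores, the
    # noise, and the thresholds are invented; this is not how VEIL works,
    # only how any detector behaves as its decision threshold is lowered.

    import random

    random.seed(0)

    def detection_score(marked, impaired):
        """Pretend correlation score between a clip and the watermark."""
        if marked:
            base = 0.45 if impaired else 0.80   # impairment weakens the mark
        else:
            base = 0.10                         # unmarked clips still score a little
        return base + random.gauss(0, 0.15)

    def rates(threshold, trials=10_000):
        hits = sum(detection_score(True, True) > threshold for _ in range(trials))
        false_pos = sum(detection_score(False, False) > threshold for _ in range(trials))
        return hits / trials, false_pos / trials

    for threshold in (0.60, 0.40, 0.25):
        hit, fp = rates(threshold)
        print(f"threshold {threshold:.2f}: finds the mark in impaired clips {hit:.0%}, "
              f"false positives on unmarked video {fp:.0%}")
    ```

    Whether VEIL’s detector behaves anything like this is exactly the kind of detail the withheld report would have to answer.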

  8. Prof Felten:

    The A-B vs A-B-X point is important I think — not being a psychologist I can’t comment with any authority, but if subjects are told “you will see two versions of a video clip, one of which has been modified in a way which may have degraded it; please try to identify the degraded one”, they might be predisposed to believe they can see a difference. (One could measure the strength of this effect by giving subjects the above instructions, then showing exactly the same clip twice, and seeing how many thought they could see a difference; a toy version of that control is sketched at the end of this comment.)

    I agree with the overall point that this executive summary lacks sufficient detail to meaningfully debate the issue, but I thought on this particular point, and emphasising my earlier caveat that they could have used carefully-chosen clips etc., they might be right.

    John:

    Agreed. That’s why I prefaced the fragment you quoted with “there might be doubts about their methodology, choice of video clips, etc., but on the face of it”. Of course they could have biased the study in various underhand ways, chosen subjects with poor eyesight, bribed the subjects, or for all we know could have made up the whole study, statistics and all. My point was that I thought Prof Felten’s original remark might have been an unduly harsh interpretation of the numbers.
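
    For what it’s worth, here is a toy version of that same-clip-twice control, under entirely invented assumptions about how biased primed subjects might be; nothing in it comes from the actual VEIL trials.

    ```python
    # Toy simulation of the "show the same clip twice" control.
    # The bias probabilities are invented for illustration only.

    import random

    random.seed(1)

    N_SUBJECTS = 200

    def control_rate(report_bias):
        """Identical clips shown twice: fraction who still report a difference."""
        reports = sum(random.random() < report_bias for _ in range(N_SUBJECTS))
        return reports / N_SUBJECTS

    for bias in (0.05, 0.15, 0.30):
        print(f"assumed response bias {bias:.0%}: identical clips judged "
              f"'different' by {control_rate(bias):.0%} of subjects")
    ```

    If such a control came out near the reported 29%, the headline figure would say little about visible degradation; if it came out near zero, the 29% would be more worrying.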

  9. “…I think the results are on their side on this point.”

    Ben, the results are FROM their side.

    Given your comments, one could deduce that you would buy a car from Ford just because they say it’s the safest, gets the best gas mileage, etc. True?

    Alternative, objective testing is required; otherwise, buyer beware.

  10. Armagon: Agreed. Non-Deterministic Abbreviations suck.

  11. I’m not sure if it is nit-picky or significant, but I really don’t like it when common acronyms are “overloaded”.

    If I can’t play a movie and I get a cryptic “Bad RAM” message, do they want me to realize that something is wrong with the “Rights Assertion Mark” or do they want me to think that the Random Access Memory in my video player is damaged? [i.e., do they want me to blame the video player instead of the content producer?]

    Ditto for “V-RAM” — “Veil Rights Assertion Mark,” which may be confused with VRAM — Video RAM.

    Maybe I’m being too sensitive, but there is a lot of power in framing an issue — choosing the words used to describe it — and no engineer would’ve come up with the term RAM by accident. One wonders if this is supposed to be a “veil” of confusion.

  12. It could be fairly important to find out which transformations erase the watermark and which don’t; otherwise we could find that certain digitization and compression schemes (either current or future) have been classified as unlawful DRM-circumvention techniques. In addition to mandating use of a particular company’s copy-protection scheme, Congress would then also be mandating the use of particular companies’ technologies for storing and transmitting formerly analog content.

    The 29% of people noticing a difference on tests designed by the promoter of the technology doesn’t bode well, imo. Whether clips were successfully identified as altered or unaltered is less important, since that decision would have been made either going on memories of the clip in question or on preconceptions about what kind of artifacts the watermarking would introduce.

  13. Since the world doesn’t seem to be bothered by logos in the corner or “Coming next” banners, I doubt they will be bothered by the occasional blip from an overt watermark.

    Bill, they bother me at least.

  14. Ben,

    I’m not sure I can agree. If viewers saw a difference 29% of the time, that could be a problem, even if the viewers can’t say reliably whether the modified or unmodified version looks better. It sounds like they had the viewers do A-B comparisons, rather than A-B-X comparisons (i.e., which two of these three are alike), which might have helped straighten this out; a toy illustration of the difference between the two protocols is sketched below.

    To have a proper debate about this, we’d need to know more about the tests. But that information is being withheld from us.
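
    To illustrate the difference between the two protocols, here is a toy simulation with invented perceptibility and bias numbers (chosen only so that the A-B figure lands near 29%); it is not a model of the actual study.

    ```python
    # Toy comparison of an A-B protocol ("do these two clips differ?") with
    # an A-B-X protocol ("which of A or B matches X?").  The perceptibility
    # and bias numbers are invented, not taken from the VEIL tests.

    import random

    random.seed(2)

    TRIALS = 10_000
    PERCEPTIBILITY = 0.05   # chance a subject genuinely notices the watermark
    REPORT_BIAS = 0.25      # chance a primed subject reports a difference anyway

    def ab_trial():
        """A-B: subject says whether the two clips look different."""
        sees_it = random.random() < PERCEPTIBILITY
        return sees_it or random.random() < REPORT_BIAS

    def abx_trial():
        """A-B-X: subject says which clip X matches; guesses when unsure."""
        sees_it = random.random() < PERCEPTIBILITY
        return True if sees_it else random.random() < 0.5

    ab_rate = sum(ab_trial() for _ in range(TRIALS)) / TRIALS
    abx_rate = sum(abx_trial() for _ in range(TRIALS)) / TRIALS
    print(f"A-B: {ab_rate:.0%} report a difference (perception plus bias)")
    print(f"A-B-X: {abx_rate:.0%} correct matches (50% would be pure guessing)")
    ```

    An A-B protocol mixes genuine perception with the urge to report a difference; A-B-X separates them, because anyone who truly cannot see the mark falls back to a 50% guess rate.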

  15. “For example, when they tested to see whether VEIL caused a visible difference in the video image, they found that viewers did report a difference 29% of the time (page 4).”

    True, but of those 29% of cases, the VRAM’d clip was correctly identified just less than half the time. Overall, in 71% of cases the viewer couldn’t tell any difference; in 29% of cases, they thought they could, but in fact did no better than chance at telling which was which. To be fair, isn’t that fairly good evidence that the system does not introduce visible degradation? (A rough check of the “no better than chance” arithmetic is sketched at the end of this comment.)

    As you say, there might be doubts about their methodology, choice of video clips, etc., but on the face of it, I think the results are on their side on this point.
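
    A rough check of that “no better than chance” reading, with made-up trial counts since the summary quotes only percentages:

    ```python
    # Rough check of the "no better than chance" reading, with made-up counts.
    # The summary (as quoted in the post) gives percentages, not raw numbers,
    # so N_REPORTED and N_CORRECT are purely illustrative.

    from math import comb

    def prob_at_least(n, k, p=0.5):
        """Exact probability of k or more successes in n trials of chance p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    N_REPORTED = 60   # hypothetical: trials where a difference was reported
    N_CORRECT = 28    # hypothetical: of those, the marked clip was picked

    tail = prob_at_least(N_REPORTED, N_CORRECT)
    print(f"{N_CORRECT}/{N_REPORTED} correct; probability of doing at least "
          f"this well by guessing = {tail:.2f}")
    # A tail probability around one half or larger is what pure guessing
    # looks like; a very small one would undercut the claim.
    ```

    Per-viewer breakdowns, as suggested elsewhere in these comments, would of course be far more informative than any pooled figure.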

  16. This looks less and less like a way to protect content and more like a scheme for a company to gain a monopoly by having the government make it compulsory to use their product.