Tim Wu: Computers Don't Inherit Their Programmers' Free Speech Rights. But Why Shouldn't They?

Tl;dr In an earlier post, I argued that it is important to recognize that search results should be treated more like editorial content than as mere assertions of fact. Titan of Internet Law, Prof. Tim Wu of Columbia Law, frames the argument differently, and though I think we fundamentally agree, it is worth clarifying why I find his characterization somewhat troublesome. Summary: It is more important to subject automated speech to First Amendment analysis, even if that analysis yields weaker protection because of the speech's commercial and automated nature, than to categorically exempt it from procedural constitutional protection.

Edit/Update: Techdirt has weighed in, and it is gratifying to see that a lot of the same arguments I made below are made by Mr. Masnick and the scholars he references.

Well, I'm stepping right into the line of fire on this one. Prof. Wu is pretty brilliant, I often agree with him, I find him deeply insightful, and, basically, for me to disagree with him is kind of like an ant pulling on Superman's cape. That aside, I do disagree with Prof. Wu's position on this issue, and I think that some law actually backs up my point. For the purposes of this post, I'll call the speech in question -- results of search engines, automated ranking algorithms, stock ratings, etc. -- automated speech.

To summarize Prof. Wu's argument, it appears to be something like the following:
1. Automated computer output, by default, should not receive full First Amendment protection;
2. It doesn't make sense to say that automated output should 'inherit' the constitutional rights of the authors of the automated processes; and
3. It would be troubling to see First Amendment defenses raised against antitrust claims.

So I'll make three points in response.

1. Analysis of 'automated speech' should begin with a determination as to whether it qualifies as speech. It should not assume that such output fails to qualify as speech by default. Established law demonstrates that many types of automated speech qualify for copyright protection, and it is very confusing to assert that something that is the subject matter of copyright should not be treated as speech.

• First off, Google's ranking database should qualify as a compilation, as defined in 17 USC 101 and brought within the subject matter of copyright by 17 USC 103(a). This alone should merit copyright protection.
• Second, in both CCC Information Services, Inc. v. Maclean Hunter Market Reports, Inc., 44 F.3d 61 (2d Cir. 1994), and CDN Inc. v. Kapes, 197 F.3d 1256 (9th Cir. 1999), it is clear that compilations whose contents reflect expert judgment about facts -- in the former case, predictions of car prices; in the latter, coin prices -- are sufficiently creative to merit copyright protection.
• In my opinion, Google's ranking output is therefore not only within the subject matter of copyright as a compilation, but, based upon the above case law, also creative, as it is the result of expert judgment. Both of the above cases are silent on whether that judgment can be exercised through an algorithmic process, and, in my view, that silence is irrelevant. (If anyone has any contradictory authority, please send it along!)

This leads us to a confusing position, wherein Prof. Wu is arguing that things that qualify for copyright protection should not be treated as speech by default -- I think his assertion is that they need to be 'justified' as speech. I am arguing the opposite: the default rule should be an assumption that automated speech is speech, and the burden would then be on the challenging party to prove otherwise.

I could be wrong on this, but, from what I understand, while there are many actions that count as "speech" that fall outside of copyright, there is no copyrightable subject matter that falls outside the realm of speech. The subject matter of copyright, for reference, is defined in 17 USC 102 as:
(a) Copyright protection subsists, in accordance with this title, in original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device. Works of authorship include the following categories:
(1) literary works;
(2) musical works, including any accompanying words;
(3) dramatic works, including any accompanying music;
(4) pantomimes and choreographic works;
(5) pictorial, graphic, and sculptural works;
(6) motion pictures and other audiovisual works;
(7) sound recordings; and
(8) architectural works.
(b) In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work.
I just cannot conceive of a hypothetical in which something that is copyrightable is not speech, but I may well be wrong. Either way, it makes me deeply uncomfortable to think that there could be a situation wherein a copyrighted work is categorically not speech -- perhaps I could come up with a really elaborate architectural example, but it would necessarily rest on the total separation of the form of an architectural work from any artistic, cultural, or social commentary whatsoever. That is deeply unsettling.

2. The assertion that the output of automated processes designed by humans exercising their First Amendment rights should not receive the same procedural analysis as the output of those humans themselves seems to be an arbitrary distinction.

I'd argue that Wu's reasoning here:
Defenders of Google’s position have argued that since humans programmed the computers that are “speaking,” the computers have speech rights as if by digital inheritance. But the fact that a programmer has the First Amendment right to program pretty much anything he likes doesn’t mean his creation is thereby endowed with his constitutional rights. Doctor Frankenstein’s monster could walk and talk, but that didn’t qualify him to vote in the doctor’s place.
is conclusory. It doesn't state why this should be, just that it is. Personally, I just do not see why the act of a human applying each set of transformations to a set of input data magically imbues the result with 'speech-hood,' while a computer performing the same process by proxy does not. It is perfectly reasonable to me, for instance, that the creativity necessary to reach the bar of speech is met in the creation of a ranking algorithm and the selection of its data inputs. Let's put this another way: a regulation covering algorithms that produce publicly viewable results is directly a regulation on the freedom of speech of the programmers who wrote the algorithm, and, additionally, such a regulation is unlikely to produce predictable results, much less the results desired by legislators or consumers. I simply cannot see a way around the free speech analysis as a result.
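
To make that point concrete, here is a deliberately toy sketch of a ranking function in Python. It is obviously not Google's algorithm; the signal names and weights are invented for illustration. But it shows how the 'opinion' a ranking expresses is baked in by the programmer's choices:

```python
# A toy illustration (not Google's algorithm): ranking as encoded editorial judgment.
# The signal names and weights below are invented for this example.

def score(page):
    """Combine hand-chosen signals with hand-chosen weights."""
    return (
        3.0 * page["query_term_matches"]      # how many query terms the page contains
        + 2.0 * page["inbound_links"] ** 0.5  # dampened popularity signal
        + 1.5 * page["freshness"]             # 1.0 = published today, 0.0 = very old
        - 5.0 * page["spam_probability"]      # the programmer's judgment about quality
    )

pages = [
    {"url": "a.example", "query_term_matches": 4, "inbound_links": 100, "freshness": 0.2, "spam_probability": 0.05},
    {"url": "b.example", "query_term_matches": 2, "inbound_links": 2500, "freshness": 0.9, "spam_probability": 0.01},
    {"url": "c.example", "query_term_matches": 5, "inbound_links": 10, "freshness": 0.1, "spam_probability": 0.70},
]

# The ordering that comes out is a direct consequence of the weights the
# programmer chose; change the weights and the "opinion" changes.
for page in sorted(pages, key=score, reverse=True):
    print(page["url"], round(score(page), 2))
```

Regulating what this function may output is, in substance, regulating the judgments its author encoded in it.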

It appears to me that Prof. Wu is basically asserting that when you have an intermediary process between human thought and output, you break the status of speech-hood. This doesn't sit well with me, as we rely increasingly on technology to express speech and, importantly, art.

For instance, I use Photoshop a lot, and it produces a tremendous variety of effects that I would in no way be able to create with my own hands -- yet my Photoshopped images receive full First Amendment protection, despite the fact that I have used a variety of totally automated effects and algorithmically based filters in their execution. The same applies to audio and video editing software.
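
Here is a minimal sketch of the same idea using the Pillow imaging library (standing in for Photoshop; "photo.jpg" is a placeholder path). Every step is fully algorithmic, yet no one would say the resulting image is not my expression:

```python
# Each step below is a deterministic, automated transformation chosen by the author.
from PIL import Image, ImageFilter, ImageOps

img = Image.open("photo.jpg")

edited = (
    ImageOps.autocontrast(img)            # automatic tonal adjustment
    .filter(ImageFilter.GaussianBlur(2))  # algorithmic blur
    .filter(ImageFilter.EDGE_ENHANCE)     # convolution-based sharpening filter
)

edited.save("photo_edited.jpg")
```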

I think Prof. Wu is hung up on the semantic nature of text, but, increasingly, as NLP becomes more sophisticated, this line will be harder and harder to draw. I currently have, in the "Services" menu on my Mac, a choice called "Summarize." It uses an algorithm to create synopses of text I have highlighted. Should this automated summary not receive protection as speech? What about the reverse? It will not be long before I can feed shorthand notes into MS Word and it will produce full English text, or at least a close enough approximation that it would fool an 8th grader. Should this not receive protection either? A colleague of mine has pointed out that he feels Prof. Wu is hung up on this because automated drafting of this kind is currently rudimentary, but it will increase in sophistication rapidly over the coming years. I agree.
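
For the curious, here is a sketch of the kind of frequency-based extractive summarization such a feature might use. This is a generic illustration, not Apple's implementation:

```python
# A minimal extractive summarizer: score each sentence by how often its words
# appear in the whole text, then keep the top-scoring sentences in original order.
import re
from collections import Counter

def summarize(text, max_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def sentence_score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=sentence_score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)

text = (
    "Search results are ranked by algorithms that encode human judgment. "
    "Those judgments are expressed through code rather than prose. "
    "The weather was pleasant on the day this post was written."
)
print(summarize(text, max_sentences=1))
```

The output is produced entirely by the algorithm, but the selection reflects judgments made by whoever wrote and tuned it.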

The point is, I think that the distinction Prof. Wu is insisting on is artificial and doesn't accurately map to real-world use or user expectations.

3. Google has not asserted a First Amendment defense against antitrust violations, but that is because there are no good antitrust claims and other defenses are available.
To Google’s credit, while it has claimed First Amendment rights for its search results, it has never formally asserted that it has the constitutional right to ignore privacy or antitrust laws.
I won't reproduce the whole argument here; I'll just refer again to my previous blog post. Suffice it to say (and, again, I'm not an expert in this topic) that I feel the antitrust arguments against Google are pretty thin.

Conclusion
To give computers the rights intended for humans is to elevate our machines above ourselves. ...
And that’s where theory hits reality. Consider that Google has attracted attention from both antitrust and consumer protection officials after accusations that it has used its dominance in search to hinder competitors and in some instances has not made clear the line between advertisement and results. Consider that the “decisions” made by Facebook’s computers may involve widely sharing your private information; or that the recommendations made by online markets like Amazon could one day serve as a means for disadvantaging competing publishers. Ordinarily, such practices could violate laws meant to protect consumers. But if we call computerized decisions “speech,” the judiciary must consider these laws as potential censorship, making the First Amendment, for these companies, a formidable anti-regulatory tool.

That first line is the one that gets me: it's nonsense, and it's fear-mongering. As Prof. Wu himself has stated many times (a point I agree with, and one of the foundational pillars of my argument), the analysis must start with First Amendment considerations, but may well, and probably should, wind up determining that automated speech has less protection than core speech for precisely the reasons Prof. Wu has cited. No one is saying that automated speech should have the same status as core speech, and absolutely no one is saying that machines should have more rights than humans. All I am saying is that the analysis needs to start at the same place, where Prof. Wu seems to be suggesting it can be circumvented altogether.

The issue that I believe Prof. Wu is getting at is that if we assume automated speech is, in fact, speech, it becomes harder to regulate. So? It should be hard to regulate. And, as Prof. Wu has himself asserted, because automated speech is often commercial in nature, and because it is, in fact, automated, it is entitled to less First Amendment protection than core political speech by a human. I am okay with this.

Additionally, I can see virtually no benefit from making it easier to regulate automated speech: legal attempts to regulate code are often ham-fisted, sometimes disastrous, and, at best, extraordinarily difficult to craft well, especially at the hands of a Luddite legislature. For instance, I can see no reason to assume that lawmakers telling Google how to mess with its ranking system would in any way benefit consumers -- Google's smartest competitors have failed to come up with a system as effective as Google's. Why should we assume a bunch of lawyers on Capitol Hill have anything meaningful to say about its algorithm?

If legislators want to regulate automated speech, the proposed regulations should have to face the rigor of First Amendment analysis rather than circumvent that analysis by default simply because there is an automated intermediary between the author and the content. If it turns out that automated speech deserves less protection than other types of speech, that should be a decision based on First Amendment analysis, not on skipping the analysis completely.
