


                IN THE UNITED STATES DISTRICT COURT
             FOR THE EASTERN DISTRICT OF PENNSYLVANIA



AMERICAN CIVIL LIBERTIES UNION,    :    CIVIL ACTION
    et al.,                        :
                                   :
               v.                  :
                                   :
JANET RENO, Attorney General of    :
   the United States               :    No. 96-963

_____________________________________________________________


AMERICAN LIBRARY ASSOCIATION,      :    CIVIL ACTION
  INC., et al.,                    :
                                   :
               v.                  :
                                   :
UNITED STATES DEP'T OF JUSTICE     :
   et al.                          :    No. 96-1458



         ALA PLAINTIFFS' POST-HEARING BRIEF IN SUPPORT OF
                 MOTION FOR PRELIMINARY INJUNCTION       
                              

                              Bruce J. Ennis, Jr.
                              Paul M. Smith
                              Donald B. Verrilli, Jr.
                              Ann M. Kappler
                              John B. Morris, Jr.
                              JENNER & BLOCK
                              601 Thirteenth Street, N.W.
                              Washington, D.C. 20005
                              (202) 639-6000
                         
                              Ronald P. Schiller
                                (Atty ID 41357)
                              David L. Weinreb
                                (Atty ID 75557)
                              PIPER & MARBURY, L.L.P.
                              3400 Two Logan Square
                              18th & Arch Streets
                              Philadelphia, PA  19103
                              (215) 656-3365

                              COUNSEL FOR ALA PLAINTIFFS

Ellen M. Kirsh
William W. Burrington
America Online, Inc.
COUNSEL FOR AMERICA ONLINE, INC. 

Richard M. Schmidt, Jr.
Allan R. Adler
Cohn and Marks
COUNSEL FOR AMERICAN SOCIETY OF NEWSPAPER EDITORS

Bruce Rich
Weil, Gotshal & Manges
COUNSEL FOR ASSOCIATION OF AMERICAN PUBLISHERS, INC.

James Wheaton
First Amendment Project
COUNSEL FOR ASSOCIATION OF PUBLISHERS, EDITORS AND WRITERS

Jerry Berman
Center for Democracy and Technology
Elliot M. Mincberg
Jill Lesser
People for the American Way
Andrew J. Schwartzman
Media Access Project
COUNSEL FOR CITIZENS INTERNET EMPOWERMENT COALITION

Ronald Plesser
Jim Halpert
Piper & Marbury
COUNSEL FOR COMMERCIAL INTERNET EXCHANGE ASSOCIATION

Steve Heaton
CompuServe Incorporated
COUNSEL FOR COMPUSERVE INCORPORATED

Gail Markels
Interactive Digital Software Association
COUNSEL FOR INTERACTIVE DIGITAL SOFTWARE ASSOCIATION

James R. Cregan
Magazine Publishers of America
COUNSEL FOR MAGAZINE PUBLISHERS OF AMERICA

Thomas W. Burt
Microsoft Corporation
COUNSEL FOR MICROSOFT CORPORATION
   AND THE MICROSOFT NETWORK, L.L.C. 

Mark P. Eissman
Eissman and Associated Counsel
COUNSEL FOR NATIONAL PRESS PHOTOGRAPHERS ASSOCIATION

Robert P. Taylor
Megan W. Pierson
Melissa A. Burke
Pillsbury, Madison & Sutro
COUNSEL FOR NETCOM ON-LINE COMMUNICATIONS SERVICE, INC. 

Rene Milam
Newspaper Association of America
COUNSEL FOR NEWSPAPER ASSOCIATION OF AMERICA

Marc Jacobson
Prodigy Services Company
Robert J. Butler
Clifford M. Sloan
Wiley, Rein & Fielding
COUNSEL FOR PRODIGY SERVICES COMPANY

Bruce W. Sanford
Henry S. Hoberman
Robert D. Lystad
Baker & Hostetler
COUNSEL FOR SOCIETY OF PROFESSIONAL JOURNALISTS

Michael Traynor
John W. Crittenden
Kathryn M. Wheble
Cooley, Godward, Castro, Huddleson & Tatum
COUNSEL FOR HOTWIRED VENTURES LLC AND WIRED VENTURES, LTD.
                    
Date:  April 29, 1996
                           INTRODUCTION
          Now that the evidence has been submitted, it is clear
there are remarkably few disagreements regarding the facts
material to resolution of plaintiffs' motions for preliminary
relief.  The essential facts regarding the nature of the
Internet, the nature of plaintiffs' speech, and the capabilities
of various parental control devices to block or filter speech
that might be inappropriate for minors, are largely undisputed.
          Based on this record, plaintiffs are entitled to
preliminary relief enjoining enforcement of the challenged
provisions of the Communications Decency Act of 1996 ("CDA" or
"Act").[1]  As we argued in our pre-hearing memorandum,[2] and as
the evidence now confirms, the CDA effectively bans an enormous
quantity of speech that is constitutionally protected for adults,
because the "safe harbor" defenses provided in the Act do not
provide technologically or economically feasible means for
plaintiffs, their members and patrons, or for most Internet
speakers, to shield themselves from liability.  Defendants, in
response, have attempted to avoid consideration of the CDA's
actual impact on plaintiffs' speech, because they cannot justify
that actual impact in view of the long line of Supreme Court
precedents holding that government is not permitted to reduce the
adult population to reading and viewing only what is appropriate
for children.[3]   Yet that is precisely what Congress did here,
and that is why the CDA must be enjoined.

                        STATEMENT OF FACTS
          The facts are set forth in the Combined Proposed
Findings of Fact of the ACLU and ALA Plaintiffs (April 29,
1996)("Pl. Pro. Find."), submitted herewith.  The "key" proposed
facts can be found at paragraphs 1 through 18.  

                        SUMMARY OF ARGUMENT
          The evidence shows that the CDA effectively bans a
broad range of speech by plaintiffs and their members and patrons
that might be deemed "indecent" or "patently offensive" for
minors but is fully constitutionally protected for adults.  For
some modes of online communication, e.g., newsgroups, there is
simply no way to comply with the Act.  As to other methods of
communication, those who wish to make their speech available to
the world, and for free, cannot use the "verified credit card"
defense specified in the CDA, and there is no other practical way
for them to screen out potential listeners who are minors.  Pl.
Pro. Find. Part IV.C.1.  Even if they could, the difficulties of
determining which material is actually subject to the Act, and
segregating that material, are virtually insurmountable.  Pl.
Pro. Find. Part IV.C.1.b.(1), IV.D.  As a result, if they wish to
engage in speech potentially covered by the Act, they risk
prosecution under the Act, which makes it a felony to use
interactive computer services "to display" "patently offensive"
speech "in a manner available to a person under 18 years of age." 
47 U.S.C. § 223(d)(1)(B).  As a practical matter, the Act thus
forces plaintiffs and their members and patrons to self-censor
any non-obscene speech that might be deemed patently offensive
for minors, even though all of that speech is fully
constitutionally protected for adults.
          The evidence also shows that such a ban on "patently
offensive" speech between adults is not necessary to protect
minors because parental control software, and other user-based
methods, enable adults to block access to that speech by minors. 
Pl. Pro. Find. Part III.F, IV.F.3.  The Government's expert
witness acknowledged, for example, that PICS technology enables
parents to block access by their children to all Internet speech
that has not been approved for minors by a third-party rating
bureau the parent trusts, without depriving the parent, or any
other adult, of access to that very same speech.  Olsen Test.,
Tr. Vol. IV, at 221:19 to 222:24.  The parental control features
offered by the major online service providers and by private
software companies also enable parents to block or filter the
material available to their children, without depriving any adult
of access to that same material.  Pl. Pro. Find. Part III.F.2, 3. 
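          For illustration only, the blocking model the Government's
expert described can be reduced to a short sketch.  The sketch
below is hypothetical and written in Python; the addresses,
labels, and function names are invented and are not drawn from
the record.  It shows how client software configured by a parent
consults a third-party rating bureau's labels and blocks any page
the bureau has not approved for minors, while an adult using
unfiltered software retrieves the same page without restriction.

     # Hypothetical sketch of third-party rating ("PICS-style") blocking.
     # All addresses and labels below are illustrative only; nothing here
     # is taken from the trial record or from any actual rating bureau.

     TRUSTED_BUREAU_LABELS = {
         "http://example.org/science-fair": "approved-for-minors",
         "http://example.org/adults-only-forum": "not-approved",
     }

     def minor_may_view(url, labels):
         # Block unless the parent's chosen bureau has affirmatively
         # approved the page; unrated pages are blocked by default.
         return labels.get(url) == "approved-for-minors"

     for url in sorted(TRUSTED_BUREAU_LABELS):
         action = "allow" if minor_may_view(url, TRUSTED_BUREAU_LABELS) else "block"
         print(action, url)
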
          The evidence also shows that the CDA will not protect
minors from the substantial percentage of indecent or patently
offensive speech that is posted abroad, whereas the user-based
methods plaintiffs advocate would block such speech regardless of
where it was posted.  Pl. Pro. Find. Part III.E.2.c, IV.F. 
Indeed, the Government's expert candidly volunteered that "we
would have to rely upon SurfWatch or Net Nanny technology for
foreign speakers."  Olsen Test., Tr. Vol. V, at 44:14-16.
          In short, the CDA will not be effective in protecting
minors from the significant percentage of indecent or patently
offensive speech posted abroad, and seeks to protect minors from
indecent or patently offensive speech posted domestically only by
banning that speech between adults as well.  In contrast, the
parental control technology the free market has developed without
governmental involvement would be more effective in protecting
minors from indecent or patently offensive speech, without
banning any of that speech between adults.
          Given these realities, defendants have not seriously
attempted to defend the constitutionality of the CDA as written. 
Instead, they have labored to craft and urge judicial adoption of
a quite different statute, which bears no resemblance to the
statute Congress enacted, and is actually contrary to the policy
choices reflected in the CDA.  Defendants argue, for example,
that the CDA was only (or primarily) intended to prohibit speech
by commercial entities distributing pornography that lacks
serious value.  But those arguments flatly ignore the text of the
CDA, its legislative history, and the Supreme Court precedent on
which Congress expressly relied.  
          When all is said and done, the Court is confronted with
a hastily drafted statute (the key provision of which -- "display
in a manner available" -- was added, without hearings, by the
Conference Committee) that fails to take account of the unique
nature of the Internet, and obviously sweeps far more broadly
than the First Amendment permits.  Defendants offer no plausible
construction that would narrow the statute's sweep in a
definitive and constitutionally permissible way.  Instead, they
essentially invite this Court to decide what can be done
constitutionally and what should be done, as a policy matter, and
then to rewrite the statute accordingly.  The appropriate
response, however, is to strike this statute down and give
Congress the opportunity to make those policy choices itself.
          Even if the CDA could withstand the strict First
Amendment scrutiny to which it must be subjected -- and it cannot
-- the challenged provisions would have to be struck down because
they are fatally vague in violation of the Fifth Amendment. 
Speakers cannot be forced to guess what constitutes prohibited
speech and what actions they must take to avail themselves of
the CDA's defenses.  But this is precisely the position in which
plaintiffs, their members and patrons find themselves.  The
evidence produced by defendants has only exacerbated the
ambiguity inherent in the CDA's use of the "indecency" and
"patently offensive" standards, and its nebulous "good faith"
defense.  The Fifth Amendment does not permit defendants to force
speakers to rely upon the good intentions of prosecutors in order
to avoid criminal liability for engaging in constitutionally
protected speech.

                             ARGUMENT
I.   THE CDA VIOLATES THE FIRST AMENDMENT BECAUSE IT EFFECTIVELY
     BANS A SUBSTANTIAL CATEGORY OF PROTECTED SPEECH FROM MOST
     PARTS OF THE INTERNET.

          The CDA effectively bans a broad but vaguely defined
category of protected speech from most parts of the Internet. 
Defendants have not demonstrated that this abridgment of the
First Amendment rights of speakers and their audiences directly
and materially serves any compelling governmental interest in the
least restrictive way.  Nor has the Government shown that the
Act's burdens on the rights of adults are outweighed by its
benefits to minors.  The Act is therefore plainly
unconstitutional.  See ALA P.I. Mem. at 25-42.
     A.   The Government Has Failed To Prove There Is Any
          Practical Way For Most Speakers To Transmit Speech That
          Is Potentially Covered By The Act.

          The key provision of the CDA challenged here, 47 U.S.C.
§ 223(d)(1)(B), makes it a felony to use an "interactive computer
service to display in a manner available to . . . person[s] under
18 years of age" any communication that "in context, depicts or
describes, in terms patently offensive as measured by
contemporary community standards, sexual or excretory activities
or organs."[4]  Defendants have never contended that this
provision, standing alone, would be constitutional.  That is not
surprising, because subsection (d), by itself, would effectively
eliminate constitutionally protected but "patently offensive"
speech from the Internet.  With the possible exception of private
e-mail to a known recipient, no method of communicating over the
Internet can be used with assurance that the message will not be
"available" to persons under age 18.  Pl. Pro. Find. Part
IV.C.1.a.  
          Instead, defendants' case hinges on the proposition
that the Act, taken as a whole, does not ban constitutionally
protected speech between adults -- i.e., that the defenses set
forth in section 223(e) make it feasible for would-be speakers to
use the Internet to transmit "patently offensive" speech to
adults.[5]  The key question, therefore, is whether the subsection
(e) defenses eliminate the otherwise unconstitutional sweep of
subsection (d).  In other words, it is undisputed that subsection
(d), by itself, is not a narrowly tailored law.  The Government
argues that subsection (e) narrows the sweep of subsection (d)
sufficiently that the Act as a whole becomes constitutional.  But
the Government has utterly failed to meet its burden of proving
that the subsection (e) defenses are in fact available to most 
Internet speakers.[6]  For that reason, the Government's defense
of the Act fails.  
     1.   Screening using credit cards and access codes is
          impossible for many speakers and is not a practical
          option for most other speakers.
     
          Section 223(e)(5)(B) provides a defense to conviction
for speakers who can screen prospective recipients "by requiring
use of a verified credit card, debit account, adult access code,
or adult personal identification number."  All the parties'
experts agreed, however, that this defense is not available for
several important modes of online communication.  Scott Bradner
testified that a speaker posting to a newsgroup, mail exploder
(e.g., listserv or Majordomo), or IRC chat channel (the
predominant form of Internet "real time" chat) has no control
over who receives the communication and, in normal usage, does
not even know who the recipients will be.[7]  Accordingly, Bradner
testified that credit card and adult access code screening is
technologically infeasible for these modes of communication; it
is impossible for the speaker to screen recipients to ensure they
are over 17 years of age.[8]  Dan Olsen, the Government's expert,
agreed.[9]  Thus, it is undisputed that requiring such screening
for any messages that might be "indecent" or "patently offensive"
for a minor would have the effect of banning such messages from
these types of online communication.  Pl. Pro. Find. Part IV.B.
          Even as to other modes of communication, including the
World Wide Web, where the defense is arguably technologically
feasible, defendants implicitly concede that this screening
method is only practicable for speakers who operate "'commercial'
adult sites that can take credit cards or devise access codes." 
US ALA Resp. at 17 (emphasis added).  For everyone else
attempting to communicate over the Internet -- including
individuals or entities who have non-commercial Web sites and, in
general, all those who seek to communicate in cyberspace for
reasons other than profit -- screening based on credit cards or
access codes is a practical impossibility.  Pl. Pro. Find. Part
IV.C.1.b.  
          This conclusion was confirmed by several trial
witnesses.  Scott Bradner testified, and defendants did not
dispute, that effective online screening of recipients is not
currently possible.  Bradner Test. Decl. ¶ 71.  Verification can
be accomplished only "offline."  However, the cost is so
substantial that offline verification is practical only for a
commercial entity that is charging for access to its
communications.  Id.; Pl. Pro. Find. Part IV.C.1.b.2.  Indeed,
credit card services do not presently verify non-commercial
online transactions.  Anker Test. Decl. ¶ 21; Croneberger Test.
Decl. ¶ 27; Olsen Test., Tr. Vol. IV, at 197:19-23, 198:6-9; Pl.
Pro. Find. Part IV.C.1.b.2.
          Thus, most World Wide Web speakers are left with the
option of adult access code screening.  But the administrative
burden of creating and maintaining the screening system, and the
ongoing costs for verification services, put this method beyond
the reach of most speakers.  Bradner Test. Decl. ¶ 71; Anker
Test. Decl. ¶ 13 (estimated cost of developing HotWired's member
registration and recognition system, which is not able to screen
for age, is at least $40,000, with ongoing maintenance costs of
$400 to $1000 a week).  Anker testified that the expense of
third-party adult access code verification would threaten the
financial viability even of a popular and well-established Web
site like HotWired.  Anker Test. Decl. ¶¶ 22-23 (estimating costs
of age verification for its current 300,000 subscribers at
$600,000, plus $2 for each new subscriber and $0.29 for every
individual access to the site).  Croneberger testified that it
would require 65 additional staff members to pre-register and
maintain data files just for the county population that has
physical access to The Carnegie Library, at an annual cost of
$845,000.  Croneberger Test. Decl. ¶ 29.  Since the library's Web
site is available to anyone in the world, the library would need
to maintain a 24-hour staff to adequately serve all users of its
site, at significant additional costs.  Id. ¶ 30.  Indeed,
Croneberger indicated that the library's new state-of-the-art
technology system, which cost $10 million, simply could not
handle the registration of all the individuals whom they know
visit their site.  Id. ¶ 37.
          HotWired's experience with a voluntary, unverified
registration system also shows that user resistance to
registration and verification requirements would substantially
reduce the number of subscribers to an online publication,
reducing in turn its value to advertisers.  Anker Test. Decl.
¶ 16; Anker Test., Tr. Vol. III, at 109:13 to 110:7.
          Finally, a registration requirement would destroy a
principal advantage of the Internet as a medium of communication: 
the ability of Internet users to research and communicate
seamlessly and without interruption across its vast variety of
available resources.[10]
          In any event, even if speakers such as HotWired or the
Carnegie Library could use credit cards or access codes to screen
recipients of their speech, they would still face the more
general problem of deciding which speech falls within the scope
of subsection (d) and thus may only be displayed to a pre-
screened audience of adults.  At the hearing, the testimony of
the Government's own witnesses reflected the difficulty of
deciding which speech is covered and which is not.  See Point II
infra (demonstrating that this vagueness renders the Act
unconstitutional in every application).  
          And even if speakers could determine which material is
covered by the Act, the burden of reviewing existing and future
material and categorizing it as either restricted or unrestricted
would be overwhelming for any speaker who engages in a
significant amount of online speech, and insurmountable for many
of those speakers.  Croneberger testified, for example, that it
would cost $30,000 just to purchase or create software that could
search Carnegie Library's online materials for "key words" that
could indicate material that might be deemed "indecent" or
"patently offensive."  Croneberger Test. Decl.  31.  And, as the
Government's expert agreed, such key word searches could not
ensure that all covered material was located; that task would
require human review and judgment.  Olsen Test., Tr. Vol. V, at
35:24 to 36:19, 40:7-20.[11]  Croneberger estimated that it would
require 180 staff members, at a cost of $3 million, to review the
over 2 million items in the library's catalog.  Croneberger Test.
Decl. ¶ 32.  And it would cost an additional $100,000 annually to
review the online information concerning the thousands of titles
the library adds to its collection.  Id. ¶ 33.  Monitoring and
reviewing the magazines and abstracts the library provides online
would cost an additional $625,000 annually.  Id. ¶ 35.
Croneberger testified that there simply is no way for the library
to allocate or raise these funds.  Id. ¶ 42.  If the cost is
"economically infeasible for the Carnegie Library," as
Croneberger testified, "the costs simply could not be borne by
smaller libraries with smaller budgets."  Id. ¶ 44.
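          The limits of automated "key word" review that the
Government's expert conceded can be illustrated with a
hypothetical sketch, written in Python; the word list and sample
passages are invented for illustration only.  A scan of this kind
flags only text containing a listed word, so it necessarily
misses material whose claimed offensiveness would arise "in
context" from combinations of individually innocuous words, which
is why human review and judgment would still be required.

     # Hypothetical sketch of a naive "key word" scan of the kind discussed
     # in the testimony.  The word list and sample passages are invented;
     # the point is only that such a scan cannot find material whose claimed
     # offensiveness arises from context rather than from any listed word.

     KEY_WORDS = {"exampleword1", "exampleword2"}   # placeholder word list

     def flag_for_review(text):
         # Flags a passage only if it contains a listed word verbatim.
         words = set(text.lower().split())
         return bool(words & KEY_WORDS)

     passages = [
         "a clinical discussion using exampleword1",               # flagged
         "innocuous words that only in combination might offend",  # missed
     ]

     for passage in passages:
         print(flag_for_review(passage), "-", passage)
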
          2.   The Government has not demonstrated that there is
               any other defense available to online speakers.

          The only other defense for a content provider is set
forth in section 223(e)(5)(A), which applies where a content
provider "has taken, in good faith, reasonable, effective, and
appropriate actions under the circumstances to restrict or
prevent access by minors."[12]  Nothing in the statutory text or
legislative history lends meaning to this defense.  The
Government has identified no method by which a speaker can comply
with subsection (e)(5)(A).  The most the Government has been
willing to say, through the testimony of its expert witness, Dan
Olsen, is that some sort of "speaker tagging" system might
constitute a safe-harbor defense at some point in the future. 
But it has pointedly refused to specify that speaker tagging will
constitute a defense, either now or in the future.[13]
          That refusal is not surprising, because speaker tagging
plainly is not a defense today, and, even down the road, speaker
tagging alone could not be viewed as a "reasonable, effective,
and appropriate action" that would prevent the tagged speech from
being "available" to minors.  For these reasons, the Government's
tagging evidence is irrelevant, and does not rebut plaintiffs'
showing that the CDA effectively bans covered speech on the
Internet by all speakers who are unable to use the subsection
(e)(5)(B) credit card or adult access code defense.   
          At the hearing, the Government's expert testified that
different types of communications -- from Web sites to e-mail --
could be labelled with the expression "-L18" if they contain
speech covered by the Act.  But his testimony did not explain how
such tagging could constitute a defense.  Tagging alone could
never be a defense.  Indeed, Olsen acknowledged that a speaker's
mere act of tagging or labeling expression -- with "-L18," or
"xxx," or anything else -- does not restrict or prevent minors
from gaining access.  Olsen Test., Tr. Vol. IV, at 211:8-11,
234:19-25.[14]  Tagging could not even arguably constitute a
defense under the Act unless it is effective in making the tagged
communication unavailable to minors, and the Government's expert
acknowledged that tagging cannot be effective without cooperative
filtering software on the receiving end.  Olsen Test., Tr. Vol.
IV, at 209:25 to 210:6.  Since the Government's expert
acknowledges that tagging will be ineffective without the
"cooperation of entities down the communication pipeline," Olsen
Test., Tr. Vol. V, at 42:10-14, speakers surely cannot rely on
tagging as a safe harbor defense unless and until those entities
cooperate.  And they are not required to cooperate.  To the
contrary, as Olsen acknowledged, "Congress made a considered
decision to impose no requirements on entities down the
communications chain."  Id. at 43:1-4; see id. at 42:15-18; Conf.
Rep. at 190 (no liability on access providers).
          It is undisputed that, today, the necessary cooperation
does not exist.  At present, there is not even common
understanding of what "tag" should be used to denominate indecent
or patently offensive material, and in most computers there is no
software in place that is configured to read tags that indicate a
particular communication is for adults only.  Olsen Test., Tr.
Vol. V, at 91:1-2; Bradner Test. Decl. ¶ 79.  Accordingly,
speaker tagging cannot possibly be a safe-harbor defense today.
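          The point can be made concrete with a hypothetical sketch,
written in Python.  The "-L18" tag is taken from the testimony,
but the document format, sample content, and function names below
are invented for illustration only.  The sketch shows that a
speaker's tag restricts nothing by itself; it has effect only if
filtering software on the recipient's computer is installed,
turned on, and configured to look for that tag, and on an
unfiltered computer the tagged material is delivered to a minor
exactly as it is delivered to any adult.

     # Hypothetical sketch: a speaker tag such as "-L18" restricts nothing
     # by itself.  It matters only if filtering software on the recipient's
     # computer is installed, turned on, and knows to look for the tag.

     TAG = "-L18"   # tag discussed in the testimony; the format here is invented

     def fetch(document, filter_installed):
         # With no filter installed (or turned off), the tagged document
         # reaches the minor exactly as it reaches any adult.
         if filter_installed and TAG in document.get("tags", []):
             return "blocked by local filter"
         return document["body"]

     page = {"tags": [TAG], "body": "tagged adult-oriented text"}

     print(fetch(page, filter_installed=False))   # delivered unchanged
     print(fetch(page, filter_installed=True))    # blocked by local filter
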
          Moreover, even if, in the future, there is common
agreement on what tags to use, and most computers have software
that is capable of recognizing those tags, the plain terms of the
Act would still preclude any potential speaker from relying with
confidence on speaker tagging as a defense.  Some "tagged"
communications would still be "available" to minors, since it is
certain that some minors will have access to computers that do
not have the screening software installed and turned on.  See
Olsen Test., Tr. Vol. V, at 17:14-22, 43:12-20 (acknowledging
that not all Web browsers would be configured to read tags). 
Accordingly, speakers would still have to guess whether their
tagging would constitute a "reasonable, effective, and
appropriate action[]" to prevent access by minors, within the
meaning of subsection (e)(5)(A), and will be forced to self-
censor based on well-grounded fears of prosecution.
          Nor can the Court solve this problem by construing the
statute to mean that a speaker's tagging of indecent speech will
be a complete defense.  It would be strange indeed for a court to
adopt a construction the Government steadfastly refuses to urge,
and Congress could have stated expressly if that was what it
wanted.  More importantly, such a construction would not save the
Act, for several independent reasons.
          First, for the reasons just stated, the Act cannot be
construed as providing a speaker tagging defense today, because
the necessary common understandings and software are not in
place.[15]  Second, a judicially recognized tagging defense would
not deal with the vagueness of subsection (d).  Speakers would
still be at a loss to know precisely when their speech crosses
the line and must be tagged.  Third, they would also face the
burden, insurmountable for many speakers, of segregating
regulated from unregulated speech -- even assuming the line
between the two could be discerned.  See Pl. Pro. Find. Part
IV.C.1.b.(1).
          In any event, a construction of the Act to the effect
that non-commercial speakers can engage in protected but
"patently offensive" speech only by self-labeling that speech
would raise additional First Amendment problems.  Any de jure or
de facto requirement that speakers label their own protected
speech "patently offensive" would amount to a pernicious form of
governmentally compelled speech.  The Government generally cannot
compel citizens to speak, particularly if the speaker is
compelled to attach a pejorative label the speaker does not
believe is warranted.  See Post-Trial Brief of ACLU Plaintiffs in
Support of Their Motion for a Preliminary Injunction ("ACLU Post-
Trial Br.") at 54-58.  Cf. Meese v. Keene, 481 U.S. 465, 483
(1987) (upholding mandatory labelling scheme in part on ground
that label in question was not "pejorative").  See, e.g., Riley
v. National Federation of the Blind, Inc., 487 U.S. 781, 797-800
(1988) (striking down under compelled speech doctrine requirement
that professional fundraisers disclose to potential donors
proportion of proceeds actually passed through to charity; noting
that compelled disclosure, whether of fact or opinion, "burdens
protected speech"); Wooley v. Maynard, 430 U.S. 705, 715 (1977)
(law requiring individuals to display patriotic message on
license plates violated the First Amendment "right of individuals
to hold a point of view different from the majority").  Such
compelled speech is particularly unwarranted here, where the free
market has developed a technology -- PICS -- that does not
require speakers to label their speech, because their speech can
be rated by private third parties selected by parents.[16]  
          In any event, the CDA does not lend itself to a
construction that it merely requires speaker tagging.  Although
courts can construe statutes to avoid or eliminate constitutional
defects, the statute must be "readily susceptible" to a narrowing
construction.  See Virginia v. American Booksellers Ass'n, 484
U.S. 383, 397 (1988); Erznoznik v. Jacksonville, 422 U.S. 205,
216 n.15 (1975).  The Supreme Court recently rejected an
invitation to "redraft" a statute to preserve possibly legitimate
applications.  See United States v. National Treasury Employees
Union, 115 S. Ct. 1003, 1019 (1995)("NTEU").  That task, the
Court concluded, would involve complex policy judgments and might
raise distinct constitutional concerns, and was therefore
"properly left to Congress."  Id.  
          If this Court were to attempt to rewrite the CDA as a
tagging law, it would have to make several quintessentially
legislative judgments.  It would be required, for example, to
determine empirically when the availability and use of blocking
software was sufficiently widespread to justify the conclusion
that speaker tagging would effectively make the tagged speech
unavailable to minors.[17]  And it would also have to determine
what kind of tagging would qualify for the defense.  These are
not matters a court should be resolving in the guise of
interpreting the general language of the CDA.  If Congress wants
to require, or approve as sufficient, tagging of a category of
protected speech, it should say so specifically -- rather than
passing a law that forces speakers to guess whether tagging would
be a "reasonable, effective, and appropriate" means of making
speech unavailable to minors, or forces courts to legislate all
the critical details. 
          Thus, the Government's effort to defend the CDA by
pointing to the mere possibility that speaker tagging "might"
provide a defense is wholly unpersuasive.  It follows that the
Act does operate -- today and for the foreseeable future -- as a
ban on a category of protected speech for most speakers.    
     B.   The Impact Of This Ban On Protected Speech Is Enormous.

          Contrary to the Government's claims, the effects of the
Act will extend far beyond "commercial" purveyors of
"pornography."  See generally ALA P.I. Mem. at 10-16.  Unlike the
dial-a-porn restrictions upon which the CDA is purportedly
modeled, the CDA is not expressly limited to commercial speakers
-- those who charge listeners a fee to obtain their speech.[18] 
To the contrary, the Conference Report confirms that Congress
intended "content regulation of both commercial and non-
commercial providers."[19]
          The Act also plainly applies to a far wider range of
speech than "pornography."  It is expressly made applicable to
"libraries and educational institutions."  47 U.S.C.  230(e)(2). 
As the evidence demonstrated, libraries are unlikely to carry (or
post online) even the illustrations from Playboy magazine, let
alone the hard-core material the Government claims is the
principal target of the Act.  See Croneberger Test., Tr. Vol. II,
at 105:21 to 106:5, 108:14-19. 
          The breadth of the expression covered by the Act is
apparent from the actual standard Congress chose to use in
subsection (d).  The "patently offensive" standard is a term of
art, coined by the FCC in its regulation of broadcast "indecency"
and discussed by the Supreme Court in FCC v. Pacifica Foundation,
438 U.S. 726 (1978).  Conf. Rep. at 188-89.[20]  This
Pacifica/patent offensiveness standard bars broadcast of a great
deal of expression that by no stretch of the imagination would
constitute pornography.  For example, the FCC has suggested an
indirect "rule of thumb" that R-rated movies are indecent.[21] 
And Pacifica itself involved crude language, not pornography. 
          Significantly, the "patently offensive" standard is
also broad enough to include not only graphic "depictions" but
textual "descriptions" of sexual or excretory activities.  The
inclusion of written text vastly expands the potential scope of
subsection (d), and shows that Congress was interested in more
than conventional "pornography."  Indeed, Pacifica itself
involved only words, not images.  Under this standard, it is
reasonable for speakers to fear that any communication containing
some or all of the "seven dirty words" that is "available" to
minors on the Internet now constitutes a felony.[22]  And the
danger is just as great that a combination of seemingly innocuous
words will be deemed "patently offensive" "in context."  See
Olsen Test., Tr. Vol. V, at 36:20 to 37:2 (acknowledging that
combinations of words may render otherwise innocuous text
"offensive").
          Nothing in the "patently offensive" standard prevents
its application to works with serious value.  The broad purpose
of that standard when first adopted was to prohibit images or
descriptions of sexual or excretory activities or organs that
might be deemed inappropriate for children in over-the-air
broadcasts, even if they had serious value.  Acknowledging that
broad purpose, the FCC has ruled, in the broadcast context, that
material may be found patently offensive even where the
information is presented "in the news" and is presented "in a
serious, newsworthy manner."  Letter to Merrell Hansen, 6 FCC
Rcd. 3689 (1990)(citation omitted).  It has expressly declined to
hold that "if a work has merit, it is per se not indecent." 
Id.[23]  Moreover, that standard does not require consideration of
the material as a whole.[24]  By adopting that standard, Congress
has now decreed it is a felony to offer comparable forms of
expression on the Internet.
          Not surprisingly, plaintiffs' trial witnesses expressed
serious concern that the Act could be applied to criminalize
their online posting of textual materials containing the seven
dirty words or discussing sexual matters.  Croneberger Test.
Decl. ¶¶ 16-18, 21-22; Anker Test. Decl. ¶¶ 4, 5, 8.  The
Government's witnesses confirmed the validity of those fears.    
Howard Schmidt (who according to the Government was called to
demonstrate that images subject to the Act could be accessed on
the Internet) utilized several images containing nothing more
than partial or total nudity.[25]  Schmidt testified that, in some
parts of the country, the publisher of Vanity Fair would have
potential liability under the Act for posting online a cover
picture of a naked Demi Moore (even though the print publication
of that image was entirely lawful).  See Schmidt Test., Tr. Vol.
IV, at 138:23 to 139:14.  Dan Olsen, in turn, acknowledged that
any use of "dirty words" might be covered -- stating that each
such use should be tagged just to be sure.  Olsen Test., Tr. Vol.
V, at 53:16 to 54:10.  He added that if he were the relevant
"community," a Playboy centerfold would be deemed indecent and
patently offensive.  Olsen Test., Tr. Vol. IV, at 235:9 to
236:14.
          As with "tagging," the Government appears to be
inviting this Court to rewrite the CDA to regulate only
"pornography" that lacks serious value.  But that would require
the Court to perform radical surgery on the statutory language,
and would be contrary to the evident intent of Congress.[26] 
Courts have no "license to rewrite a statutory scheme and `create
distinctions where none were intended.'"  Consumer Party v.
Davis, 778 F.2d 140, 146-47 (3d Cir. 1985) (quoting American
Tobacco Co. v. Patterson, 456 U.S. 63, 71 n.6 (1982)).[27] 
Moreover, such a judicial rewrite would not solve the
constitutional problem.  What the Government calls non-obscene
"pornography" is just as constitutionally protected as other
forms of speech.  Congress's effective ban of a broad and vaguely
defined category of protected speech from most of the Internet
cannot suddenly be made constitutional by narrowing its impact to
a smaller category of equally protected speech.  
     C.   The CDA's Restrictions On "Patently Offensive"
          Expression Fail The Strict Scrutiny Test.

          The Government does not dispute that the CDA's
prohibition of "patently offensive" expression is a content-based
restriction on speech.  It therefore triggers strict scrutiny. 
The Government must demonstrate that the CDA "directly and
materially" serves a compelling governmental interest in the
"least restrictive" way.  Sable Communications, Inc. v. FCC, 492
U.S. 115, 126 (1989); Fabulous Assocs. v. Pennsylvania Pub. Util.
Comm'n, 896 F.2d 780, 784-85 (3d Cir. 1990); see Turner
Broadcasting System v. FCC, 114 S. Ct. 2445, 2470 (1994); ALA
P.I. Mem. at 34-35.  The Government cannot meet that standard. 
          1.   The Government has failed to prove that the CDA
               directly and materially advances a compelling
               interest in protecting minors.

          In the abstract, the Government has a compelling
interest in shielding some minors from at least some forms of
"patently offensive" speech.  Sable, 492 U.S. at 126.  But the
CDA's effective ban of such speech from most of the Internet, far
from advancing that interest "in a direct and material way,"
Turner, 114 S. Ct. at 2470, "provides only the most limited
incremental support for the interest asserted."  Bolger v. Youngs
Drug Prods. Corp., 463 U.S. 60, 73 (1983).[28]
          It is critical to focus on exactly what is at stake
here:  expression that falls within the statutory definition of
subsection (d) but is not obscene or otherwise unprotected for
adults.  All obscene expression and all child pornography is
properly banned under existing law because it lacks First
Amendment protection even for adults.  To justify the restriction
on "patently offensive" expression at issue here, therefore, the
Government must prove that extending the statutory prohibition to
constitutionally protected, non-obscene expression advances its
interests in a direct and material way.[29]
          Moreover, the Government cannot ignore the facts about
existing protections for minors that have been developed as a
result of market forces without governmental coercion.  As
plaintiffs have demonstrated, there is a rapidly growing market
for user-based technologies that already provide a great deal of
protection.  For over a year, readily available software, such as
Cyber Patrol, SurfWatch, CYBERsitter, and Net Nanny, has enabled
parents and other adults to limit the access of children to
material on the Internet.[30]  Stip. ¶ 54; Pl. Pro. Find. Part
III.F.3.
          These products place "responsibility for making such
choices . . . where our society has traditionally placed it -- on
the shoulders of the parent."  Fabulous Assocs., 896 F.2d at 788. 
They are easy to install and use,[31] and have a variety of
features.  And, if parents so choose, they can use the software
to block access to all parts of the Internet that might
potentially be deemed inappropriate for children.
          Cyber Patrol, for example, is designed to enable
parents to block access to online material in any or all of 12
CyberNOT categories (including sexual content) with one simple
step.  Stip. ¶ 57.  In addition, parents can use Cyber Patrol to
prevent access to particular sites they deem inappropriate for
their children.  Burrington Test. Decl. ¶ 26.  See Pl. Pro. Find.
¶ 806.
          SurfWatch blocks access to more than 5,000 Internet
sites that are known or appear likely to contain text or graphics
of a sexual nature -- at any given time, 90 to 95 percent of all
Internet sites that contain sexual material that might be
inappropriate for children.  Duvall Test. Decl. ¶ 23; Duvall
Test., Tr. Vol. I, at 154:12-19.[32]  It also prevents children
from using popular Web search engines to search for common words
that would likely lead to listings of sexually explicit sites. 
Duvall Test. Decl. ¶ 19; Schmidt Test., Tr. Vol. IV, at 86:16 to
87:2.  SurfWatch, like Cyber Patrol, is routinely updated.  Stip.
¶ 58; Duvall Test. Decl. ¶ 11.  And a new feature of SurfWatch,
available soon, will allow adults to block access to all Internet
sites except those they affirmatively pre-select.  Duvall Test.
Decl. ¶ 26; Duvall Test., Tr. Vol. I, at 131:11-18, 147:24-148:3.
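          The user-based approach these products take can likewise be
illustrated with a hypothetical sketch, written in Python; the
site list, search terms, and function names are invented for
illustration and are not the products' actual data or code.  The
software consults a regularly updated list of sites known to
contain sexual material and separately refuses search requests
built from listed terms, and it does so regardless of whether the
underlying site is domestic or foreign.

     # Hypothetical sketch of user-based blocking of the kind described in
     # the testimony: a maintained list of known sites plus screening of
     # search terms.  The entries below are invented examples only.

     BLOCKED_SITES = {"badsite.example.com", "foreign-adult.example.net"}
     BLOCKED_SEARCH_TERMS = {"explicit-term-1", "explicit-term-2"}

     def allow_site(host):
         # Blocks a listed site wherever it is hosted, domestic or foreign.
         return host not in BLOCKED_SITES

     def allow_search(query):
         # Refuses searches likely to return listings of explicit sites.
         words = query.lower().split()
         return not any(term in words for term in BLOCKED_SEARCH_TERMS)

     print(allow_site("library.example.org"))         # True  (not on the list)
     print(allow_site("foreign-adult.example.net"))   # False (blocked wherever hosted)
     print(allow_search("science fair projects"))     # True
     print(allow_search("explicit-term-1 pictures"))  # False
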
          The major commercial online services also offer
parental control options free of charge to their members.  Stip.
¶ 69.  These controls permit parents to block their children's
access to elements of the companies' proprietary services, such
as chat rooms and newsgroups, that contain material parents might
consider inappropriate for their children.  Burrington Test.
Decl. ¶¶ 17, 18, 25; Burrington Test., Tr. Vol. III, at 43:15.
addition, parents using America Online ("AOL") can block all
binary downloads (pictures) from the Internet, Burrington Test.,
Tr. Vol. III, at 43:8-14, while CompuServe and Prodigy give their
subscribers the option of blocking all access to the Internet,
Burrington Test. Decl. ¶ 25.  The parental control options
available to online service subscribers are steadily increasing
in number and sophistication.  See generally Burrington Test.
Decl. ¶¶ 13, 29; see Pl. Pro. Find. Part III.F.2.
          The question, therefore, is whether the CDA's
imposition of speaker-focused controls will add significantly
greater protection.  It clearly will not.  First, even assuming
(as we have argued) that the Act operates to ban "patently
offensive" expression, it will have an impact only on domestic
speakers.  It will have little or no impact on the very
significant percentage of sexually explicit material available
over the Internet that is posted abroad.  See Hoffman Test., Tr.
Vol. II, at 68:1 to 69:14; Duvall Test. Decl. ¶ 41; Duvall Test.,
Tr. Vol. I, at 161:17 to 162:18.  See also Schmidt Test., Tr.
Vol. IV, at 68:1-4 (agreeing that a number of sites he accessed
originated abroad).  That is because foreign content providers --
certainly as a practical matter and possibly as a legal matter --
are beyond the reach of United States prosecutors.[33]
          For that reason, even the complete elimination of all
domestically posted sexually explicit material would make no
difference in the ability of a minor -- using a computer without
functioning parental control software -- to gain access to
potentially inappropriate material.  A simple computer search for
sexually explicit material will list both domestic and foreign
sites.  Schmidt Test., Tr. Vol. IV, at 124:21-25.  Even if the
total number of sites were reduced by 60-70 percent (the
approximate percentage of sexually explicit sites currently
posted in this country), there would still be ample "patently
offensive" expression available for any minor who is looking for
it.[34]  Pl. Pro. Find. ¶¶ 1166-67.  This undisputed fact
undercuts any effort to protect minors through controlling
speakers on the Internet.
           The Government, as we have noted, disavows any
intention to ban patently offensive speech, and does not agree
that the CDA will, in the long run, operate as a ban on such
speech.  But if that is true, the argument that the statute
advances a legitimate governmental interest unravels entirely. 
The Government's expert acknowledged that protecting minors from
indecent or patently offensive speech, while continuing to make
that speech available to adults, can be accomplished only through
the cooperation of entities other than the speaker, and those
entities will have to utilize screening software.  Olsen Test.,
Tr. Vol. V, at 42:10-14.  Olsen acknowledged that tagging by
speakers is just a method to facilitate screening by the end-
user.  Accordingly, tagging will not protect minors unless
parents or guardians exercise responsibility by using software
that responds to tags.
          Yet with the advent of PICS technology, the Government
has made no showing that there is any advantage to requiring
speakers to tag their own speech -- as opposed to relying on
third parties to provide this service to parents.[35]  And PICS
would be more effective at dealing with the problem of foreign
speakers, because the software could screen out all speech that
has not been affirmatively identified as appropriate for
children.[36]  Thus, under any reading of the CDA, the statute is
a remarkably ineffective means of advancing the Government's
asserted interest in  protecting minors from "patently offensive"
speech, in a direct and material way.
          2.   The CDA is not the least restrictive means of
               advancing the Government's interests.

          The evidentiary record also confirms plaintiffs'
argument that the CDA is not the least restrictive means of
advancing the Government's asserted interest.  See ALA P.I. Mem.
at 34-42; Pltfs Joint BOP Mem. at 6-11.[37] 
          As just noted, for example, the private market has
already developed methods -- such as user blocking software and
PICS -- that are far more effective than the CDA at restricting
minors' access to potentially inappropriate material.  Instead of
imposing the CDA's sweeping prohibition on speech, Congress could
simply have allowed these private markets to develop, and
intervened only if (contrary to current indications) these
methods did not work in specific areas.  See Turner, 114 S. Ct.
at 2470-72.  Congress could also have furthered these market-
driven developments by educating parents and school
administrators about the Internet and available screening options
(compare 47 U.S.C. § 551(a)(9)), by assisting parents or schools
in acquiring screening technologies and information, or by
supporting additional research and development.  Indeed, had
Congress undertaken any meaningful inquiry into the nature of
communication on the Internet, it might have developed still
other, less restrictive means of protecting minors.   
          The Government's arguments in this case suggest other
less drastic alternatives Congress could have considered.  If, as
the Government contends, the CDA was principally aimed at
"commercial providers of on-line pornography" (see US ALA Resp.
at 3, 11, 17), Congress could have passed a statute aimed
specifically at that activity.  Cf. Zauderer v. Office of
Disciplinary Counsel, 471 U.S. 626, 645-46 (1985) (government may
not enact a broad prophylactic rule "simply to spare itself the
trouble" of distinguishing speech that is the subject of
legitimate regulatory concern from speech that is not).[38]
          3.   Even if the CDA directly advanced a compelling
               Governmental interest, it would still violate the
               First Amendment.
     
          The Government does not argue that it can
constitutionally ban "patently offensive" expression between
adults in order to prevent minors from gaining access to that
expression, and any such argument would be foreclosed by
governing Supreme Court precedent.  The First Amendment rights of
adults would have to be protected, even if the statute were the
only way to advance the Government's interest in protecting
minors (and it is not).
          The Supreme Court has repeatedly made clear that
government may not suppress indecent speech altogether in order
to advance its interest in protecting children.  To the contrary,
"government may not `reduce the adult population . . . to . . .
only what is fit for children.'"[39]  "The level of discourse
reaching a mailbox simply cannot be limited to that which would
be suitable for a sandbox," and this is so "regardless of the
strength of the government's interest" in protecting children. 
Bolger, 463 U.S. at 74-75 (emphasis added).  Government may not
constitutionally "quarantin[e] the general reading public against
books not too rugged for grown men and women in order to shield
juvenile innocence . . . .  Surely this is to burn the house to
roast the pig."  Butler v. Michigan, 352 U.S. 380, 383 (1957). 
In Sable, the Court cited Butler to "reiterate" that point, and
stressed that a federal statute is unconstitutional if it has the
"effect of limiting the content of adult [communications] to that
which is suitable for children . . . ."  492 U.S. at 131.[40]
          No court has ever held, in the context of any medium,
that a ban on the transmission of indecent communications to
adults satisfies this exacting standard.  This Court should not
endorse such an unprecedented intrusion on First Amendment
rights.  Instead, it should recognize that the infringement of
First Amendment rights imposed by the Act far outweighs any
incremental advancement of the Government's legitimate interest. 
See Elrod v. Burns, 427 U.S. 347, 363 (1976)("the benefit gained
must outweigh the loss of constitutionally protected rights");
Sable, 492 U.S. at 131; id. at 133 (Scalia, J., concurring);
Fabulous Assocs., 896 F.2d at 787-88.  As in Bolger, "a
restriction of this scope is more extensive than the Constitution
permits."  463 U.S. at 73.
          4.   Strict scrutiny is the appropriate standard of
               review. 

          The Government's prehearing memorandum devotes 25 pages
to strict scrutiny and then appends a half-hearted argument that
an "intermediate scrutiny" standard, drawn from FCC v. Pacifica
Foundation, 438 U.S. 726 (1978), might apply instead. 
Ultimately, however, the Government does not appear to be asking
the Court to apply any standard other than strict scrutiny at
this stage of the case.  See U.S. ALA Br. at 25-26.
          If the Court nevertheless considers this issue, it
should reject defendants' Pacifica argument.  See ALA P.I. Mem.
at 51-57.  The intermediate standard of review in broadcasting
cases originated in Red Lion Broadcasting Co. v. FCC, 395 U.S.
367 (1969).  Its primary rationale is spectrum scarcity.  Because
"there are more would-be broadcasters than frequencies
available," the Supreme Court has allowed the Government "to
place limited content restraints, and impose certain affirmative
obligations, on broadcast licensees."  Turner, 114 S. Ct. at
2456-57.  It was this unique treatment of broadcasters that
formed the basis of the Court's ruling in Pacifica.  438 U.S. at
748.
          The Court has since refused to extend the intermediate-
scrutiny standard beyond broadcasting to cable television,
Turner, 114 S. Ct. at 2457, or to dial-a-porn services, Sable,
492 U.S. at 127-28.  There is no reason to believe it would
arrive at a different conclusion with respect to the Internet,
where there are no limitations on the number of speakers. 
Indeed, the Internet's characteristics are the antithesis of the
"scarcity" that underlies the Red Lion standard.  See Pl. Pro.
Find. Part III.A.9.c.
          Defendants contend that in Pacifica, the Court focused
not so much on scarcity as on broadcasting's "pervasiveness" and
unique accessibility to children.  438 U.S. at 748-49.  They
suggest that these characteristics are shared by the Internet. 
But Pacifica did not hold that the First Amendment can be
relaxed, in any medium, whenever those two characteristics are
present.  To the contrary, before mentioning those
characteristics, the Court made specific reference to Red Lion,
with its scarcity rationale, as the leading case establishing a
uniquely lower First Amendment standard for broadcasting.
          In any event, the Internet is not at all like
broadcasting, and is neither pervasive nor uniquely available to
children.  The Court's central concern in Pacifica was the fact
that broadcasts of "indecent" programming are likely to be turned
on accidentally, by unwilling adults and children alike.  See id.
By contrast, as the evidence here showed, users seeking sexually
explicit material over the Internet must affirmatively seek it
out.  See Schmidt Test., Tr. Vol. IV, at 103:7-12 ("odds are
slim" that an online user would "come across a sexually explicit
site by accident").  See Pl. Pro. Find. Part III.E.2.d.  Before
children will confront such materials on the Internet, they must
be able to perform tasks far more complex than turning on a
television or radio.[41]  
          Thus, strict scrutiny applies here.

II.  THE CDA IS UNCONSTITUTIONALLY VAGUE.
          The Act also is unconstitutionally vague, in violation
of the First and Fifth Amendments.  See ALA P.I. Mem. at 62-78. 
The evidence at the hearing powerfully confirmed this
conclusion.[42]
          The Act does not begin to supply the "definiteness,"
Kolender v. Lawson, 461 U.S. 352, 357 (1983), the Constitution
requires of criminal laws that regulate expression. See generally
Smith v. Goguen, 415 U.S. 566 (1974); Bouie v. City of Columbia,
378 U.S. 347 (1964).  It is exceptionally indefinite in two
fundamental respects.  First, the key statutory terms "indecent"
and "patently offensive," nowhere defined in the Act, are
hopelessly amorphous, and nothing in the text of the Act
clarifies their meaning.  To the contrary, the fact that the two
operative paragraphs of the Act use different terms would
naturally lead speakers to believe they have different meanings,
and thus to wonder what those different meanings might be.[43] 
The ambiguity of these key operative terms is compounded by the
Act's use of the entirely subjective concepts of "context" and
"community standards."  ALA P.I. Mem. at 64-75.  Second, the
statutory defenses -- defenses the Government has effectively
conceded must be truly practicable if the Act is to survive
constitutional scrutiny -- are themselves extraordinarily vague. 
For non-commercial speakers, subsection (e)(5)(A) provides the
only possible defense, but those speakers are given no meaningful
guidance whatsoever -- either by the text of that subsection or
by the Government's statements in this litigation -- regarding
concrete steps they can take to assure they qualify for the
defense.  See ALA P.I. Mem. at 75-78.   
          Although the Government has been unwilling to specify
the CDA's exact scope, what little the Government has said
underscores the Act's vagueness.  It has suggested, for example,
that the Act's principal (though not exclusive) target is
"pornographic materials."  US ALA Resp. at 11.  This term is
itself fatally indefinite, drawn as it is from colloquial usage
rather than any recognized legal source.  Moreover, even if
"pornography" were further defined as images of sexually explicit
conduct, the Government's suggestion is contradicted by the Act's
unquestioned applicability to textual as well as graphic
communications, by Congress's specific rejection of a "harmful to
minors" standard and its "prurient interest" element, and by the
Government's own position that the "patent offensiveness"
language is based on the FCC standard applied in Pacifica, a case
that did not involve "pornography."  See Point I.B. supra.  And
even if the Government were correct that Congress's principal
target was "pornography," that would not remedy the Act's failure
to make clear what other speech is covered by the more
encompassing statutory language Congress actually used.
          The Government has placed great weight on the statutory
term "context" as a way to limit the scope of subsection (d) and
to exempt from its prohibitions speech that has serious value. 
See US ALA Resp. at 10; US ACLU Resp. at 46-49.  In a civil
enforcement regime administered by an administrative agency with
expertise regulating a particular medium, see, e.g., Pacifica,
regulations or rulings regarding "context" could conceivably,
over time, develop some guidelines.[44]  But as a standard for
criminal liability to be applied by hundreds of prosecutors and
judged by diverse communities across the country, "context" is an
open invitation to arbitrary enforcement.  Because speech on the
Internet is communicated throughout the nation (and the world),
speakers will have to decide whether to speak based on how the
most aggressive prosecutor in the most censorious community may
view the "context" of their speech.[45]  The Act thus embodies the
worst evil of vague criminal prohibitions on speech: individuals
will "steer far wider of the unlawful zone," Speiser v. Randall,
357 U.S. 513, 526 (1958), than if the boundaries were "clearly
marked," Baggett v. Bullitt, 377 U.S. 360, 372 (1964).
          The Government has pointedly declined to explain what
it understands "context" to mean.[46]  Moreover, when questioned
by the Court, the Government's own witness (an expert on computer
crime) demonstrated in the most compelling terms the total
indeterminacy of "context" as a limiting principle.  Howard
Schmidt explained that in his opinion depictions of Indian
sculptures of couples engaged in explicit sexual conduct would
not be proscribed because such communications would have an
"educational" or "cultural" context, whereas a far less explicit
nude photograph on the cover of Vanity Fair magazine would be
proscribed because its context indicated it was "for fun more
than anything else."[47]  If, as the Government contends,
particularized inquiry into a communication's "context" is
necessary to determine what is lawful and what is not, then the
CDA starkly epitomizes the very perils of discriminatory,
capricious, unpredictable enforcement the void-for-vagueness
doctrine is designed to avoid.[48]
          The evidence at the hearing also made clear that the
Act's defenses are unconstitutionally vague.  The Government has
effectively conceded that most speakers (including all
noncommercial speakers) are unable to communicate "patently
offensive" material lawfully on the Internet unless they can
qualify for the subsection (e)(5)(A) defense ("good faith,
reasonable, effective, and appropriate actions").  Thus, it is
just as critical for speakers to know whether they qualify for
the (e)(5)(A) defense as it is for them to know whether their
speech is covered by the Act at all.   
          But the Government has done nothing to cure the
statute's utter failure to specify what speakers such as the
plaintiffs here can do to qualify for this defense.  As noted,
the Government has devoted considerable effort to show that
"tagging" is technically feasible in some circumstances, but it
does not suggest tagging could be a defense under (e)(5)(A) until
compatible end-user software is in widespread use, and it refuses
to take a position on whether tagging would be a defense even if
every end-user had compatible software in place today.  If the
Act is not a complete ban of "patently offensive" speech from
most of the Internet, it can only be because there are, today,
"reasonable, effective, and appropriate actions" speakers can
take to restrict access by minors while allowing access by
adults.  But the Government does not even hypothesize what those
actions would be.  It is a violation of due process to compel
speakers to risk their liberty in order to test what those
actions might be.  See Hynes, 425 U.S. at 620; Dombrowski v.
Pfister, 380 U.S. 479, 491 (1965).[49]
          In the end, the Government's only response regarding
the vagueness of the "indecent" and "patently offensive"
standards, and of the scope of the safe harbor and access
provider defenses, is that prosecutors, judges and jurors can be
trusted to construe these provisions reasonably.  But the void-
for-vagueness doctrine specifically rules out that argument.  See
Keyishian v. Board of Regents, 385 U.S. 589, 599 (1967)("It is no
answer" for government to defend a vague law by "say[ing] that
[it] would not be applied in such a case."); Baggett v. Bullitt,
377 U.S. at 373 ("[w]ell-intentioned prosecutors and judicial
safeguards do not neutralize the vice of a vague law").  Citizens
are entitled to fair and reasonably specific warning of what a
criminal law means before they are forced to risk prosecution,
and they cannot be relegated simply to "trusting" the government
to exercise lenity toward speech, particularly speech that may
well be unpopular, caustic, or intensely critical of the
governors themselves.  
          The CDA is therefore unconstitutionally vague. 

III. THE CHALLENGED PROVISIONS OF THE CDA ARE INVALID ON THEIR
     FACE AND AS APPLIED.  

          Because of the force of plaintiffs' constitutional
challenge in this case, the Government's primary goal seems to be
to avoid at all costs having to defend the actual, present impact
of the CDA on protected speech.  To avoid review, the Government
(1) argues that facial review of the Act is inappropriate and
then (2) assumes, without explanation, that the Court should not
review the provisions as applied to plaintiffs.  However,
established First Amendment doctrines strongly support facial
review in this case.  In any event, at a minimum, the Court
should invalidate the statute as applied to the current array of
plaintiffs.
     A.   The Challenged Provisions Are Facially Invalid In All
          Their Applications.

          Although it is defending a law that, on its face,
imposes content-based restrictions on fully protected speech, the
Government argues that the law cannot be facially challenged
because it contains a defense that renders its application
constitutional as to a narrow subset of speakers, i.e., "those
commercial vendors of online pornography who use credit card
authorization or similar means . . . to restrict access to their
material" and thus are able to use the subsection (e)(5)(B)
defense.  US ALA Resp. at 17.  See also id. at 19; Defendants'
Response to the Court's Inquiry Concerning Burden of Proof
Issues, at 3-4.
          This argument is utterly meritless.  As we show in the
next section, a facial challenge to the CDA's restrictions on
"patently offensive" communications is entirely appropriate
because that restriction violates the First Amendment in the
overwhelming majority of its applications.
          In any event, defendants' "commercial vendors" argument
ignores the fact that plaintiffs have mounted a facial challenge
to subsection (d) on vagueness grounds.  As the "void-for-
vagueness" doctrine's name signals, courts have not hesitated to
strike down on their face laws that fail to "define the criminal
offense with sufficient definiteness that ordinary people can
understand what conduct is prohibited and in a manner that does
not encourage arbitrary and discriminatory enforcement." 
Kolender v. Lawson, 461 U.S. at 357.  See id. at 360-61; Smith v.
Goguen, 415 U.S. at 582; Lanzetta v. New Jersey, 306 U.S. 451,
458 (1939).  See also Kreimer v. Bureau of Police, 958 F.2d 1242,
1266 (3d Cir. 1992) ("[A] meritorious First Amendment vagueness
challenge will annul an unclear law that 'chills' protected First
Amendment activities.").  The subsection (e)(5)(B) defense does
nothing to cure this vagueness problem.[50]  
          Furthermore, defendants also ignore the fact that
Congress authorized a facial challenge precisely because it did
not want to wait for years of as-applied challenges to be
resolved before it would know whether it had enacted a statute
that could be applied in more than a handful of cases.[51]  A
holding that the CDA is not subject to facial review because it
can constitutionally be applied to a small segment of speakers
would produce precisely the result Congress wanted to avoid: 
years of as-applied challenges before the constitutional scope of
the CDA would be known.[52]
     B.   The Challenged Provisions Are Facially Invalid Because
          They Are Overbroad.

          Even if the Court were to agree with defendants that
the Act is not invalid in all applications, it would still be
required to hold the Act facially invalid because it remains
invalid in the overwhelming majority of applications.  The
Government certainly has not carried its burden of showing that
non-commercial speakers or speakers utilizing newsgroups, mail
exploders or IRC chat will be able to use the safe-harbor
defenses.  Thus, in all of its applications to such speakers, the
Act imposes an effective ban on "patently offensive" expression
and is, as a result, much broader than the First Amendment
permits.  This means the Act should be struck down on its face on
overbreadth grounds.[53]
          Although in most other areas of constitutional law a
litigant challenging a law "on its face" must show that the law
is invalid in every application, see United States v. Salerno,
481 U.S. 739, 745 (1987), in First Amendment cases the law is
"nearly the opposite:  a statute is invalid in all its
applications if it is invalid in any of them, or at least enough
to make it 'substantially' overbroad."[54]  This rule is "based on
an appreciation that the very existence of some broadly written
laws has the potential to chill the expression of [persons] not
before the court."  Forsyth County v. Nationalist Movement, 112
S. Ct. 2395, 2401 (1992).  See also Secretary of State of
Maryland v. J.H. Munson Co., 467 U.S. 947, 958 (1984); Broadrick
v. Oklahoma, 413 U.S. 601, 612 (1973).  Where, as here, a statute
chills a substantial amount of protected speech, the overbreadth
doctrine ensures that affected speakers and listeners will not
have to self-censor their speech until each possible application
of the statute is litigated on a case-by-case basis.
          The Government has offered two responses to this
overbreadth argument, neither of which is persuasive.  First, the
Government has mischaracterized the argument -- claiming that
plaintiffs argue that the statute is "overbroad" only because it
extends beyond pornography to cover speech on "education,
scientific and/or literary issues."  US ALA Resp. at 14.  Based on
this mischaracterization, the Government argues that the solution
is to construe the statute narrowly to exclude coverage of such
"serious value" speech.  Id. at 15.  But see supra, I.B.
(pointing out that this construction is not supported by the
statute's text or history).
          This argument entirely misses plaintiffs' two basic
points.  First, plaintiffs challenge the CDA as it applies to the
entire spectrum of non-obscene communication, no part of which
may be banned for adults.  The Act's suppression of material with
serious value compounds, but does not exhaust, the Act's
unconstitutional effect.  The Act is no less unconstitutional for
material the Government would characterize as "pornography" than
it is for material with "serious value."  See Sable, 492 U.S. at
128-31 (striking down a "dial-a-porn" statute limited to
commercial sexually explicit telephone communications, without
any suggestion that the speech at issue had "serious value"). 
Second, the overbreadth of the Act arises most palpably from its
application to noncommercial speakers and all speakers utilizing
newsgroups, mail exploders or IRC chat, who, regardless of the
content or "value" of their protected expression, cannot
prescreen their potential audience.  That overbreadth problem
would not be eliminated through a narrowing construction of the
statute to exempt "serious value" speech (even if such a
construction were possible). 
          The Government's second response to the facial
overbreadth argument is its suggestion that plaintiffs lack
standing to maintain an overbreadth challenge because they allege
that their own speech is constitutionally protected.  US ALA Resp.
at 14.  The Supreme Court's most recent ruling on this issue,
however, flatly rejects the argument that overbreadth claims can
be brought only by parties to whom the statute may
constitutionally be applied.  In Board of Trustees v. Fox, 492
U.S. 469, 484 (1989), the Court stated that "while the
overbreadth doctrine was born as an expansion of the law of
standing, it would produce absurd results to limit its
application strictly to that context."  The Court reasoned that
it would make no sense to grant the right to mount a facial
challenge to parties whose speech has validly been regulated and
who are personally unaffected by the statute's overbreadth, while
denying that right to parties who claim to have been directly
affected by the statute's overbreadth themselves.  Id.[55]
          Thus, it is clear that plaintiffs may challenge the CDA
as facially overbroad, even though plaintiffs also claim that the
challenged provisions infringe their own First Amendment rights. 
Plaintiffs' claims obviously implicate the core concern of the
overbreadth doctrine -- the danger "that the statute's very
existence may cause others not before the court to refrain from
constitutionally protected speech or expression."  Broadrick, 413
U.S. at 612.  
          Accordingly, even if the Act does not unduly restrict
speech by a small class of commercial content providers who can
prescreen, it does effectively ban a great deal of protected
speech by everyone else.  Therefore, it is beyond reasonable
dispute that the CDA operates unconstitutionally for "a
substantial category" of the speakers it covers, Village of
Schaumburg v. Citizens for a Better Environment, 444 U.S. 620,
634 (1980), and that it "criminalizes a substantial amount of
constitutionally protected speech," City of Houston v. Hill, 482
U.S. 451, 466 (1987).
          This substantial overbreadth requires invalidation of
Section (d) on its face.  The provision cannot be saved by a
narrowing construction because it is not "easily susceptible" to
such a construction.  Erznoznik, 422 U.S. at 216 n.15.  This
Court would have to limit the Act's application to "pornography,"
despite Congress' clear intention that it apply far more broadly,
and to commercial speakers who, as a practical matter, can avail
themselves of credit card or adult access screening, despite
Congress' clear intention that the Act should govern all Internet
users.  This Court should not rewrite the Act in the face of such
clear indicia of contrary Congressional intent.  And without such
a rewrite, the Act will continue to suppress and deter vast
amounts of protected expression.  Thus, this Court must
invalidate Section (d) in toto.[56]

     C.   The Challenged Provisions Are Unconstitutional As
          Applied To Plaintiffs.

          The Government (for understandable reasons) never
attempts to defend the constitutionality of the CDA as applied to
plaintiffs.  However, even if the Government were correct that
the CDA should not be invalidated "facially" (i.e., in all
applications), plaintiffs would nonetheless be entitled to relief
because the challenged provisions are unconstitutional as applied
to them.  See NTEU, 115 S. Ct. at 1018-19.
          Plaintiffs have not limited their claims to "facial"
claims, and they are not required to choose between a facial
challenge and an as-applied challenge.  Indeed, First Amendment
plaintiffs commonly combine the two.  E.g., Fox, 492 U.S. at 484;
Board of Airport Comm'rs v. Jews for Jesus, Inc., 482 U.S. 569
(1987); Lind v. Grimmer, 30 F.3d 1115 (9th Cir. 1994), cert.
denied, 115 S. Ct. 902 (1995).  And Congress specifically
contemplated that plaintiffs bringing a facial challenge to the
CDA could also bring an as-applied challenge.  See Conf. Rep. at
197 ("[T]he three-judge district court could hear both a facial
challenge and an 'as-applied' challenge if they were combined in
the same action and facial validity had not yet been
determined.")
          Thus, if the Court were to conclude that the CDA should
not be invalidated in all its applications, plaintiffs' as-
applied challenge would have to be addressed.  Plaintiffs have
made a comprehensive (and largely unrebutted) showing that the
Act directly infringes their own First Amendment rights.[57]  No
plaintiff falls within the small category of content providers
the Government contends can feasibly comply with the Act by means
short of self-censorship -- "those commercial vendors of online
pornography who use credit card authorization or similar means .
. . to restrict access to their material."  US ALA Resp. at 17.
          Moreover, plaintiffs' as-applied challenge is fully
ripe even though no plaintiff has yet been prosecuted.  See,
e.g., NTEU, 115 S. Ct. at 1011; American Library Ass'n v. Reno,
33 F.3d 78, 83-84 (D.C. Cir. 1994), cert. denied, 115 S. Ct. 2610
(1995).  The CDA imposes direct compliance burdens on all of the
plaintiffs, and is suppressing their speech right now.  The
Government has given no assurance that it will not prosecute in
the future for current speech, and plaintiffs have "an actual and
well-founded fear that the law will be enforced against them." 
Virginia v. American Booksellers Ass'n, 484 U.S. 383, 393 (1988). 
That continuing threat of criminal liability is pressuring each
of them to engage in "self-censorship; a harm that can be
realized even without an actual prosecution."  Id.  A preliminary
injunction would prevent that harm.  At a minimum, therefore,
plaintiffs are entitled to a remedy sufficient to preclude
enforcement of the challenged provisions against them.
          If the Court concludes (as it should) that plaintiffs'
facial and as-applied challenges are both valid, it will have
discretion whether to grant "facial" relief or instead limit its
injunction to the CDA's application to the plaintiffs.[58]  Here,
for strong reasons of practicality and substantive First
Amendment policy, the Court should declare the challenged
provisions facially invalid.
          First, because plaintiffs represent a vast array of
speakers and listeners using all the modes of speech that exist
in cyberspace, the very judicial economy concerns that
"ordinarily" favor deciding an as-applied challenge before a
facial one, see Fox, 492 U.S. at 485, compel the opposite
conclusion here.  Second, Congress itself evinced a desire to
have the CDA's validity dispositively determined in an expedited
preenforcement proceeding.  See Point III.A. supra.  An "as-
applied" ruling, finding the CDA unconstitutional as applied to
plaintiffs but leaving its provisions nominally in force as to
non-parties, would be inconsistent with that desire.  Third, and
most important, an "as-applied" remedy would not eliminate the
chilling effect of the CDA's grossly overbroad and vague
provisions.[59]  Although the Government might not prosecute
individuals or entities whose situation is precisely the same as
one or more of the plaintiffs sheltered by such an injunction, a
remedy limited to the plaintiffs would inevitably leave many non-
parties in a state of perilous uncertainty about the CDA's
applicability to them.  Facial invalidation is appropriate where,
as here, the speech rights of non-parties would otherwise hinge
on the results of "a series of adjudications."  Jews for Jesus,
482 U.S. at 576.  "[T]he chilling effect of the [statute] on
protected speech in the meantime would make such a case-by-case
adjudication intolerable."  Id. at 575-76.

IV.  THE IRREPARABLE HARM TO PLAINTIFFS, THEIR MEMBERS, AND THEIR
     SUBSCRIBERS, PATRONS, AND CUSTOMERS FAR OUTWEIGHS ANY HARM
     TO THE GOVERNMENT IF AN INJUNCTION ISSUES, AND THE PUBLIC
     INTEREST FAVORS INJUNCTIVE RELIEF.  

          It is a bedrock legal principle that any deprivation of
First Amendment rights constitutes irreparable injury.  See ALA
P.I. Mem. at 78-80.  Plaintiffs reasonably fear prosecution under
the Act, and their only alternative to prosecution is self-
censorship.[60]
          The public also has a substantial, constitutionally
protected interest in having access to a robust, uninhibited flow
of constitutionally protected speech.  See Turner, 114 S. Ct. at
2458 ("At the heart of the First Amendment lies the principle
that each person should decide for him or herself the ideas and
beliefs deserving of expression, consideration and adherence
. . . .  Government action that stifles speech . . . ,
contravenes this essential right."); Virginia State Bd. of
Pharmacy v. Virginia Consumer Council, 425 U.S. 748 (1976).  This
interest is severely damaged by the Act.  Indeed, as shown above,
in addition to its suppressive effect on plaintiffs' speech, the
Act is currently having a widespread chilling effect on thousands
of speakers and listeners who are not before the Court.  Although
the public's interest in protecting minors from communications
that might be inappropriate for them is also substantial, that
interest is -- at best -- only marginally advanced by the
provisions plaintiffs ask this Court to enjoin.  Moreover, user
software will continue to block communications parents deem
inappropriate for their children, even if the challenged
provisions of the Act are enjoined.
          The damage to constitutionally protected expression
that has already occurred, and the further damage that will occur
if defendants are not enjoined, is incalculable.  Substantial
quantities of constitutionally protected speech are being and
will continue to be suppressed across the Nation.  The
concomitant negative impact on the burgeoning medium for
interactive computer services will be substantial.  Entry of the
relief requested, pending final resolution of the merits, will
cause defendants no significant harm, especially because there
are other criminal statutes (including this Act's application to
"obscene" communications) that prohibit the communications they
claim are the principal focus of the Act.  Any delay in entry of
relief, however, perpetuates and compounds the damage to
plaintiffs' First Amendment rights and the First Amendment rights
of millions of adults to send and receive communications that are
indisputably constitutionally protected for them.

                            CONCLUSION
          Plaintiffs have shown they are likely to prevail on the
merits of their constitutional challenge, and that they are
suffering irreparable injury.  Accordingly, preliminary
injunctive relief against enforcement of the challenged
provisions of 47 U.S.C. §§ 223(a)(1)(B)(ii), 223(a)(2) and 223(d)
is essential and amply justified.
                              Respectfully submitted,


                              _____________________________
                              Bruce J. Ennis, Jr.
                              Paul M. Smith
                              Donald B. Verrilli, Jr.
                              Ann M. Kappler
                              John B. Morris, Jr.
                              JENNER & BLOCK
                              601 Thirteenth Street, N.W.
                              Washington, D.C. 20005
                              (202) 639-6000

                              Ronald P. Schiller
                                (Atty ID 41357)
                              David L. Weinreb
                                (Atty ID 75557)
                              PIPER & MARBURY, L.L.P.
                              3400 Two Logan Square
                              18th & Arch Streets
                              Philadelphia, PA  19103
                              (215) 656-3365

                              COUNSEL FOR ALA PLAINTIFFS


Date:  April 29, 1996

Ellen M. Kirsh
William W. Burrington
America Online, Inc.
COUNSEL FOR AMERICA ONLINE, INC. 

Richard M. Schmidt, Jr.
Allan R. Adler
Cohn and Marks
COUNSEL FOR AMERICAN SOCIETY OF NEWSPAPER EDITORS

Bruce Rich
Weil, Gotshal & Manges
COUNSEL FOR ASSOCIATION OF AMERICAN PUBLISHERS, INC.

James Wheaton
First Amendment Project
COUNSEL FOR ASSOCIATION OF PUBLISHERS, EDITORS AND WRITERS

Jerry Berman
Center for Democracy and Technology
Elliot M. Mincberg
Jill Lesser
People for the American Way
Andrew J. Schwartzman
Media Access Project
COUNSEL FOR CITIZENS INTERNET EMPOWERMENT COALITION

Ronald Plesser
Jim Halpert
Piper & Marbury
COUNSEL FOR COMMERCIAL INTERNET EXCHANGE ASSOCIATION

Steve Heaton
Compuserve Incorporated
COUNSEL FOR COMPUSERVE INCORPORATED

Gail Markels
Interactive Digital Software Association
COUNSEL FOR INTERACTIVE DIGITAL SOFTWARE ASSOCIATION

James R. Cregan
Magazine Publishers of America
COUNSEL FOR MAGAZINE PUBLISHERS OF AMERICA

Thomas W. Burt
Microsoft Corporation
COUNSEL FOR MICROSOFT CORPORATION
   AND THE MICROSOFT NETWORK, L.L.C. 

Mark P. Eissman
Eissman and Associated Counsel
COUNSEL FOR NATIONAL PRESS PHOTOGRAPHERS ASSOCIATION

Robert P. Taylor
Megan W. Pierson
Melissa A. Burke
Pillsbury, Madison & Sutro
COUNSEL FOR NETCOM ON-LINE COMMUNICATIONS SERVICE, INC. 

Rene Milam
Newspaper Association of America
COUNSEL FOR NEWSPAPER ASSOCIATION OF AMERICA

Marc Jacobson
Prodigy Services Company
Robert J. Butler
Clifford M. Sloan
Wiley, Rein & Fielding
COUNSEL FOR PRODIGY SERVICES COMPANY

Bruce W. Sanford
Henry S. Hoberman
Robert D. Lystad
Baker & Hostetler
COUNSEL FOR SOCIETY OF PROFESSIONAL JOURNALISTS

Michael Traynor
John W. Crittenden
Kathryn M. Wheble
Cooley, Godward, Castro, Huddleson & Tatum
COUNSEL FOR HOTWIRED VENTURES LLC AND WIRED VENTURES, LTD.


1. Section 502 of the Telecommunications Act of 1996, Pub. L.
No. 104-104, 110 Stat. 56, 133 (1996) (to be codified at 47
U.S.C. § 223(a)(1)(B), (a)(2) and (d)).
2. ALA Plaintiffs' Memorandum of Law in Support of Their Motion
for a Preliminary Injunction (March 1, 1996) ("ALA P.I. Mem."),
at 16-21, and 25-42.  Plaintiffs respectfully refer the Court to
that Memorandum for a more comprehensive discussion of our First
and Fifth Amendment arguments and other legal points.
3. See ALA P.I. Mem. at 27-30, and Point I.C.3., infra.
4. Section 223(a)(1)(B) bans use of a "telecommunications
device" for any "obscene or indecent" communication with a person
known to be under age 18, and Section 223(a)(2) imposes liability
on a person who "knowingly permits any telecommunications
facility under his control to be used for any activity prohibited
by [Section 223(a)(1)]."  47 U.S.C. § 223(a)(1)(B)(ii), (a)(2). 
The primary problem with these provisions is their overbreadth: 
for example, they would punish a university professor for
transmitting to 17-year-old freshmen sexually explicit course
materials.  See ALA P.I. Mem. at 45-48; Boissé Supp. Decl. ¶¶ 3-
4, 6. 
5. See Defendants' Opposition to Plaintiff American Library
Association, et al.'s Motion for a Preliminary Injunction ("US
ALA Resp.") at 16-17 ("[S]ection 223(d) . . . by no means is a
'ban' on adult expression.  Rather, it is a command to
information content providers on the Internet to undertake
reasonable measures to direct their communications to other
adults, rather than to minors.").
6. It is undisputed that the burden of demonstrating that the
Act is narrowly tailored rests squarely on the Government.  See
ACLU and ALA Plaintiffs' Joint Response to the Court's Questions
Regarding the Burden of Proof ("Pltfs Joint BOP Mem."), at 6-11;
Defendants' Response to the Court's Inquiry Concerning Burden of
Proof Issues, at 5.
7. Bradner Test. Decl. ¶¶ 46-49 (newsgroup); 39-41 (mail
exploder); 56-59 (IRC chat).
8. Bradner Test. Decl. ¶¶ 41, 42, 49-50, 59; see also Schmidt
Test., Tr. Vol. IV, at 73:15 to 74:2; cf. Bradner Test. Decl.
¶¶ 22-33 (same technological infeasibility applies to one-to-one
e-mail).
9. Olsen Test., Tr. Vol. IV, at 219:23 to 220:8; Olsen Test.
Decl. ¶ 19.
10. The Government's expert witness testified that even a one-
minute delay in Internet access would be intolerable for the
average user.  See Olsen Test., Tr. Vol. V, at 73:3-9.
11. Imposing this task on content providers is constitutionally
suspect.  See Smith v. California, 361 U.S. 147, 152-55 (1959).
12. The defenses in section 223(e)(1)-(4) are irrelevant.  
13. Plaintiffs specifically asked the Government whether a
speaker rating or labeling system would constitute a defense. 
ALA Plaintiffs' First Set of Interrogatories to Defendants,
Interrogatory 1 (March 18, 1996).  The Government objected to
that interrogatory and refused to provide a specific response. 
Defendants' Responses to ALA Plaintiffs' First Set of
Interrogatories (March 20, 1996).

     The Government's ambiguous posture mirrors language in the
Conference Report, which acknowledges that "content selection
standards" and "other technologies" that are "currently under
development, might qualify" as defenses, but makes clear they
would qualify only "if they are effective at protecting minors
from exposure to indecent material via the Internet."  H.R. Conf.
Rep. No. 458, 104th Cong., 2d Sess., at 190 (1996)(emphasis
added)("Conf. Rep.").
14. He confirmed, for example, that there is currently no way to
post an "indecent" message in a newsgroup and avoid the risk of
prosecution under the Act, because, even if the message is
tagged, there is no way of insuring that the message will not be
"available" to persons under 18.  Id. at 234:19-25.

     Olsen acknowledged that the tag "-L18" is technologically
and functionally identical to the tag "xxx."  Olsen Test., Tr.
Vol. IV, at 208:19-209:3.  Of course, some speakers already tag
their speech "xxx," or in other ways that indicate its sexually
oriented content (such as "alt.sex.erotica"), and Senator Exon,
the sponsor of the CDA, certainly did not consider those tags to
be a defense; to the contrary, he considered them to be a
problem, because minors can use those tags to find inappropriate
speech.  See 141 Cong. Rec. S8089 (daily ed. June 9,
1995)(statement of Sen. Exon).
15. The CDA specifies that measures taken under subsection (e)
to restrict or prevent access by minors must be "effective" and
"feasible under available technology."  47 U.S.C.  223(e)(5)(A).
16. See Pl. Pro. Find. Part III.F.4; Olsen Test., Tr. Vol. IV,
at 222:10 to 224:11; id., Tr. Vol. V, at 89:5-8 (acknowledging
that PICS works in this manner and will be available in 2-3
months).
17. There is no evidence in this record on that point.  See
Sable Communications, Inc. v. FCC, 492 U.S. 115, 129-30 (1989).
18. See 47 U.S.C. § 223(b)(2)(A) (criminalizing making any
indecent telephone communication "for commercial purposes"
(emphasis added)).
19. Conf. Rep. at 191.  Indeed, the Government's hearing
evidence included repeated references to newsgroups, even though
newsgroups and newsgroup speakers are not commercial speakers. 
Senator Exon also referred to newsgroups as a principal target of
the legislation.  See 141 Cong. Rec. S8089 (daily ed. June 9,
1995).
20. Congress deliberately rejected the narrower "harmful-to-
minors" standard that originated in Ginsberg v. New York, 390
U.S. 629 (1968).  See Conf. Rep. 189.  Indeed, Congress rejected
an amendment that would have specified that speech could not be
found indecent or patently offensive if it had serious value for
minors.  See ACLU Post-Trial Br., at 9-15 & n. 15.  Thus, the
Government now asks the Court to adopt an interpretation
approximating the very standard Congress rejected.
21. See, e.g., Enforcement of Prohibitions Against Broadcast
Indecency in 18 U.S.C. § 1464, 5 FCC Rcd. 5297, 5308-09 (1990). 
See also Gillett Communications of Atlanta v. Becker, 807 F.
Supp. 757 (N.D. Ga. 1992)(holding TV broadcast of videotape
"Abortion in America:  The Real Story," as part of a political
advertisement by a candidate for public office, was indecent),
appeal dismissed, 5 F.3d 1500 (11th Cir. 1993).
22. Thus, although the Pacifica Court emphasized that that case
did not involve a criminal sanction, 438 U.S. at 750, the ACLU
now faces potential prosecution for posting the same George
Carlin monologue online.  Stip. ¶ 78.  Similarly, it was not a
crime to publish the "Rimm" study in print, but online
publication of that study puts libraries at risk of prosecution
under the Act.  Croneberger Test. Decl. ¶¶ 15, 17.  The
Government has shown no justification for criminalizing speech on
the Internet that would not be criminally sanctioned if it were
broadcast on television or radio, or published in a print medium.
23. See also KLOL(FM), 8 FCC Rcd. 3228 (1993); WVIC-FM, 6 FCC
Rcd. 7484 (1991).  Indeed, in Pacifica, the Supreme Court did not
suggest that George Carlin's satirical monologue lacked serious
value.  See Action For Children's TV v. FCC, 852 F.2d 1332, 1340
n.13 (D.C. Cir. 1988) (concluding monologue "may be an example of
indecent material possessing significant social value").
24. Illinois Citizens Committee for Broadcasting v. FCC, 515
F.2d 397, 406 (D.C. Cir. 1974); Implementation of Section 10 of
the Cable Consumer Protection and Competition Act of 1992, 8 FCC
Rcd. 998, 1004 (1993) ("We do not agree that any determination of
indecency [in the cable medium] is required [to] take into
account the work as a whole."); WIOD, 6 FCC Rcd. 3704, 3705
(1989)(less than 5 percent of a program devoted to sexually-
oriented material supports an indecency finding "[w]hether or not
the context of the entire Neil Rogers Show dwelt on sexual
themes").
25. See Schmidt Test. Decl. Exs. 3, 8, 11, 12, 25, 26, 27, 30,
31, 38, 41, 42, 43, 44, 47, 52.  At least one online image was
virtually identical to, but less explicit than, an image in the
print version of Playboy Magazine.  Compare Def. Ex. 42, at 5
(print version) with Def. Ex. 43, at 3 (online image); see
Schmidt Test., Tr. Vol. IV, at 98:13-21; Schmidt Test. Decl.
¶ 46.  Senator Exon confirmed that the prohibition would apply to
an online version of Playboy Magazine.  141 Cong. Rec. S8330
(June 14, 1995) (Sen. Exon).  Thus, publication of the print
version of the image was entirely lawful; publication of the
online version is apparently a felony.
26. See n. 20 supra, and accompanying text.
27. It would not be possible to narrow the scope of the Act by
severing "invalid" language.  This case is thus unlike Brockett
v. Spokane Arcades, Inc., 472 U.S. 491 (1985), where the Court
held that it was permissible to sever one problematic word --
"lust" -- from an otherwise complete and valid law in order to
assure that the law covered only obscenity.  See also Lind v.
Grimmer, 30 F.3d 1115, 1122 (9th Cir. 1994) (discussing
Brockett), cert. denied, 115 S. Ct. 902 (1995).
28. As explained more fully in our prehearing memorandum (see
ALA P.I. Mem. at 38-42), here, as in Sable, the legislative
record provides no support for the Government on this point.  The
Government's case therefore depends entirely on the factual
record put before this Court.
29. Some of the images referred to by Howard Schmidt involve
depictions of sexually explicit conduct.  See Schmidt Test. Decl.
Exs. 4, 14, 19, 22, 24, 46, 49, 51.  These images meet the
threshold requirement for an obscenity prosecution, and could
therefore be found obscene if a jury determined they meet the
Miller test.  Miller v. California, 413 U.S. 15 (1973).  The
Government already has ample means to punish such communications
under existing laws banning obscenity and child pornography.  See
ALA P.I. Mem. at 38.  To the extent purveyors of such material
are already willing to violate these other laws, the CDA will not
deter them (although user blocking would prevent their material
from reaching children).  See, e.g., Riley v. National Fed'n of
the Blind, Inc., 487 U.S. 781, 800 (1988) (statute is not
narrowly tailored if "vigorously enforc[ing]" existing laws would
have substantially the same effect).
30. The Government made no attempt to rebut this evidence -- or
even to "test" these control mechanisms.  See Schmidt Test., Tr.
Vol. IV, at 79:4 to 80:1 (tested only SurfWatch); Olsen Test.,
Tr. Vol. IV, at 196:17-21.
31. Duvall Test. Decl. ¶ 13; Pl. Pro. Find. ¶¶ 788, 807.
32. SurfWatch blocks all of the sites Senator Exon used to
justify the Act.  141 Cong. Rec. S8089 (daily ed. June 9, 1995). 
For example, it blocks access to all "alt.sex" newsgroups. 
Duvall Test. Decl. ¶ 24.  It also blocks all of the sites
submitted by defendants as exhibits in opposition to the ACLU's
TRO motion, including excerpted material such as the "Internet
Yellow Pages."  Id.  25.  See Pl. Pro. Find.  800.
33. It is highly doubtful whether the United States even has the
authority under customary international law to restrict
"indecent" or "patently offensive" speech placed on the Internet
by foreign speakers, especially where the speech was not
specifically directed to the United States.  See 1 Restatement
(3d) of the Foreign Relations Law of the United States § 403
(limitations on nations' "jurisdiction to prescribe").  It is
unlikely that the United States would wish to -- or even that it
could consistent with the First Amendment -- subscribe to a
principle of reciprocity in international law whereby American
Internet users would be subject to foreign nations' often very
draconian speech restrictions for any Internet speech accessible
in those nations.  Surely the United States would not agree that
a domestic poster of Internet speech critical of China's human
rights record could be prosecuted by that country for violation
of its law prohibiting speech critical of the government.

     Domestic caching of foreign material (to the extent it
occurs) is not a solution to this problem.  The operator of a
cache on the Internet (such as the operator of a transatlantic
cable) could not be treated as a republisher of the overseas
content for the purpose of the CDA.  As defendants' expert
admits, the cache operator does not have any knowledge of the
data that is temporarily cached.  Olsen Test., Tr. Vol. V., at
55:24 to 56:1.  Caching is an automatic process that takes place
without any knowledge on the part of the cache operator of the
content being cached -- the content is simply a "stream of bits." 
Id. at 56:1.  This type of "intermediate storage" is precisely
what is excluded from coverage by the CDA in § 223(e)(1).  Olsen
testified that, if a screening requirement were imposed on
cachers, they would simply stop caching.  Olsen Test., Tr. Vol.
V, at 55:19 to 57:6.
34. In fact, to the extent the same or virtually identical
content is available from both domestic and foreign sites, the
Act will not diminish minors' ability to access that content at
all.  Much "pornography" is functionally indistinguishable from
other "pornography."  And as more and more domestic sites move
abroad, or use anonymous foreign re-mailers, the problem of
foreign postings will be compounded.
35. PICS technology enables parents or other adults to select
which Internet material children can access, based on labels that
assign ratings to the content.  Vezza Test. Decl. ¶ 5.  PICS will
provide the ability for third parties, as well as individual
content providers, to rate content on the Internet in a variety
of ways.  Stip. ¶ 50; Vezza Test. Decl. ¶ 5.  For example,
material could be rated as inappropriate for children under 10;
or inappropriate because of violence or sexual content.  This
flexibility enables parents to give older children access to more
material than younger children.  Vezza Test. Decl. ¶ 12.  PICS-
compatible Web browsers, Usenet News Group readers, and other
Internet applications, will provide parents the ability to choose
from a variety of rating services, or a combination of services. 
Stip. ¶ 50.  See Pl. Pro. Find. ¶¶ 826-28, 837.

          The PICS working group participants include many of the
major online services providers, commercial Internet access
providers, hardware and software companies, major Internet
content providers, and consumer organizations.  Stip. ¶¶ 51, 52. 
Given the powerful market forces driving development of PICS, the
chance that all vendors will adhere to the PICS standard is
"extremely high."  Vezza Test., Tr. Vol. IV, at 182:23 to 183:20. 
Indeed, it is anticipated that all of the major online services
and Web browsing software programs will be PICS-compatible by
early summer 1996, and that at least four independent content
ratings will be available (three of them at no cost) by that
time.  Burrington Test. Decl. ¶ 29.
36. No system dependent on tagging by speakers of their own
speech is ever going to be fully effective.  For example, Olsen
acknowledged it is unlikely that foreign speakers would tag their
own speech.  Olsen Test., Tr. Vol. IV, at 226:4-18.  And Schmidt
agreed that foreign countries where sexual material often
originates have more lenient views about the legality of such
material.  Schmidt Test., Tr. Vol. IV, at 131:14-24.
37. Congress enacted the CDA after only cursory examination of
alternative, less restrictive approaches, and without making any
findings concerning the nature of the problem it sought to
address.  See ALA P.I. Mem. at 35-37 & n.72.  Congress's failure
to recognize user blocking as a less restrictive alternative in
the computer context is especially indefensible in light of
Congress's explicit endorsement of user blocking as a "narrowly
tailored" approach to addressing violence and sex on television. 
See Pltfs Joint BOP Mem. at 9-10.
38. The Government has essentially conceded that a great deal of
the expression covered by the CDA is at the outer periphery of
Congress's concern.  It has not seriously contended, for example,
that purely textual "indecency" -- including the satirical use of
profanity at issue in Pacifica -- was viewed by Congress as a
serious concern.  Although eliminating the CDA's application to
textual material would not by itself have resolved the CDA's
constitutional defects, it would have dramatically reduced the
amount of constitutionally protected speech covered by the
statute, without material harm to the Government's asserted
interests.  See ALA P.I. Mem. at 41. 
39. Sable Communications v. FCC, 492 U.S. at 128 (quoting from
Bolger v. Youngs Drug Products Corp., 463 U.S. 60, 73 (1983),
which in turn quoted from Butler v. Michigan, 352 U.S. 380, 383
(1957)).
40. Thus, this bedrock principle has already been applied in the
context of sale or distribution of immoral books and other
printed materials (Butler), use of the postal service to mail
unsolicited commercial advertisements about contraceptives to
homes (Bolger), and use of the telephone to sell sexually
explicit communications (Sable).
41. Anker Test. Decl. ¶ 28; see Duvall Test., Tr. Vol. I, at
107:8 to 132:14 (demonstrating complexity of computer and
Internet tasks).  Moreover, as noted, parents have a range of
software options to limit children's access to the Internet.  See
Point I.C.1. supra.
42. The record contains ample evidence of plaintiffs' inability
to define the scope of the CDA's prohibitions and defenses.  Pl.
Pro. Find. Part IV.D.1.  See also ALA P.I. Mem. at 79-80.
43. Furthermore, the Conference Report (which most speakers will
never see) compounds the confusion by ascribing to the FCC
"indecency" precedents that the Act purportedly adopts a gloss on
those terms that cannot be found in those precedents.  See, e.g.,
Conf. Rep. at 189 (stating that the FCC's "patent offensiveness"
standard requires the "intention to be offensive").  In fact, it
does not.  Moreover, although the Government elsewhere invokes
the Conference Report in an effort to give meaning to the CDA's
terms, e.g., US ALA Resp. at 11 ("the Conference Report ... makes
crystal-clear the statute's intended scope"), it nowhere
contends, notwithstanding the explicit statement in the
Conference Report, that "patent offensiveness" under subsection
(d) requires proof of "intention to offend."  It is
understandable that the Government would choose to ignore this
language from the Conference Report, because an "intent to
offend" requirement would provide a defense even for
"pornographers" if they could show their only intent was to
titillate or to make money by selling their speech to adults.  
44. In fact, however, the FCC has had extreme difficulty
determining the meaning of indecency even in the broadcast
context.  See, e.g., William J. Byrnes, Esq., 63 Rad. Reg. 2d
(P&F) 216 (Mass Media Bureau 1987)(declining to clarify whether
Pacifica radio's broadcast of its annual Bloomsday reading from
James Joyce's Ulysses would be indecent).  See also ACLU Post-
Trial Br. at 9-15.
45. The Government's expert witness acknowledged that community
standards vary throughout the country.  Schmidt Test., Tr. Vol.
IV, at 137-38.
46. It might refer, for example, to the nature of the rest of
the communication of which the potentially "offensive" message is
a part, the time of day it was conveyed, the purpose of the
communication, the intended, probable or actual audience, the
elegance or inelegance of its presentation, the identity of the
speaker, the presence or absence of warnings or other indications
that the communication may not be appropriate for minors, or the
specific mode of communication.  
47. Schmidt Test., Tr. Vol. III at 140-41.  As Schmidt's
testimony suggests, one variant of "context" that some
prosecutors may draw upon to decide whether a communication is
prohibited is whether the communication constitutes "art" and
"entertainment."  That highly subjective decision should not be
left to government officials, whether prosecutors or judges.  Cf.
Bleistein v. Donaldson Lithographing Co., 188 U.S. 239, 251
(1903) ("It would be a dangerous undertaking for persons trained
only to the law to constitute themselves final judges of the
worth of pictorial illustrations, outside of the narrowest and
most obvious limits.").    
48. See Hynes v. Mayor & Council, 425 U.S. 610, 622 (1976);
Community Television of Utah v. Wilkinson, 611 F. Supp. 1099,
1117 (D. Utah 1985), aff'd, 800 F.2d 989 (10th Cir. 1986), aff'd,
480 U.S. 926 (1987)("context" does not adequately limit scope of
"indecency" regulation).
49. Nor has the Government done anything to obviate the
ambiguities concerning application of the access-provider defense
to service providers, such as Carnegie Library of Pittsburgh, who
set up Web sites for other speakers on their servers.  See
Croneberger Test. Decl. ¶ 39; 47 U.S.C. § 223(e)(1); ALA P.I.
Mem. at 77-78.
50. The Court may "invalidate [for vagueness] a criminal statute
on its face even when it could conceivably have had some valid
applications."  Kolender, 461 U.S. at 358 n.8.
51. See Telecommunications Act of 1996, § 561; 142 Cong. Rec.
S694 (daily ed. Feb. 1, 1996) (statement of Sen. Leahy) (Because
conferees "have doubts about [the CDA's] constitutionality," they
"added a section to speed up judicial review to see if the
legislation passes muster.").  See also 142 Cong. Rec. S2095
(daily ed. March 14, 1996) (statement of Sen. Exon); 142 Cong.
Rec. S715 (daily ed. Feb. 1, 1996) (statement of Sen. Moynihan);
142 Cong. Rec. S1180 (daily ed. Feb. 9, 1996) (statement of Sen.
Leahy).  Congress made clear that it did not want to repeat its
experience with the dial-a-porn statute, the constitutionality of
which was litigated for over a decade.  See 141 Cong. Rec. S8346
(daily ed. June 14, 1995) (statement of Sen. Biden);  id. S8342
(statement of Sen. Leahy).
52. Thus, to the extent judge-made limitations on facial
challenges are intended to insulate legislative enactments from
broad judicial scrutiny, those policies do not apply where, as
here, Congress has specifically authorized a facial challenge.
53. Plaintiffs also have demonstrated that subsections
(a)(1)(B)(ii) and (d) are overbroad because they prohibit all
minors from receiving material deemed inappropriate for the
youngest users of the Internet.  See ALA P.I. Mem. 45-48; Schmidt
Test., Tr. Vol. IV, at 130:24 to 131:7 (material inappropriate
for his son at age 11 will not be harmful at age 17).
54. National Treasury Employees Union v. United States, 990 F.2d
1271, 1280 (D.C. Cir. 1993) (Randolph, J., concurring) (citing
numerous cases), aff'd, 115 S. Ct. 1003 (1995).  See Salerno, 481
U.S. at 745 (noting that "every application" rule applies
"outside the limited context of the First Amendment"); see also
Michael C. Dorf, Facial Challenges to State and Federal Statutes,
46 Stan. L. Rev. 235, 265 (1994).
55. In several cases, the Supreme Court has addressed
overbreadth claims without finding it necessary to determine
first whether the plaintiffs' own expression was or was not
validly regulated.  See, e.g., Board of Airport Comm'rs v. Jews
for Jesus, Inc., 482 U.S. 569, 572, 575-76 (1987) (holding
restriction on expressive activity overbroad and therefore
facially invalid without finding it necessary to rule on
plaintiffs' claim that their own activity was constitutionally
protected); Village of Schaumburg v. Citizens for a Better
Environment, 444 U.S. 620, 634 & n.8 (1980) (same).  
56. Plaintiffs do not seek invalidation of the entire CDA. 
Their facial challenge is limited to subsections (a)(1)(B)(ii),
(a)(2) and (d).  Under ordinary principles of severability, see
Alaska Airlines, Inc. v. Brock, 480 U.S. 678, 684-86 (1987), the
remainder of the CDA may be left intact.  Moreover, plaintiffs
challenge these provisions only insofar as they apply to speech
that is constitutionally protected for adults.  Subsection
(a)(1)(B)(ii) (and hence Subsection (a)(2)) may be left intact
insofar as it proscribes "obscene" communications.  However,
there is no evidence or reason to believe that Congress intended
Subsection (d) -- which does not refer separately to "obscene"
communications -- to address material that is legally obscene;
Congress dealt with obscenity in other provisions, which would
render Subsection (d) superfluous as an anti-obscenity measure. 
See CDA
Section 507 (amending 18 U.S.C. § 1462 to "clarify" criminal
prohibition on transmitting "obscene" materials by "interactive
computer service"); United States v. Thomas, 74 F.3d 701 (6th
Cir. 1996) (affirming conviction under §§ 1462 and 1465 for
transmitting obscenity by computer).  Therefore, subsection (d)
should be set aside in toto. 
57. Pl. Pro. Find. Part IV.A.  
58. The Supreme Court has observed that a court should
"ordinarily" decide an as-applied challenge before proceeding to
rule on a facial one, Fox, 492 U.S. at 485, and should generally
restrict its remedy to what is sufficient to protect the
litigants, e.g., NTEU, 115 S. Ct. at 1018.  However, in the First
Amendment context, the preference for litigant-specific relief
over facial relief is not an inflexible rule, e.g., Fox, 492 U.S.
at 485; see also Renne v. Geary, 501 U.S. 312, 345 (1991)
(Marshall, J., dissenting), so that in appropriate cases a court
may strike a law on its face even though a narrower remedy would
be sufficient to protect the particular litigant's speech. 
Indeed, the Court has often held laws facially invalid without
ruling on the litigants' alternative claim that the law was also
unconstitutional "as applied" to them.  E.g., Board of Airport
Comm'rs v. Jews for Jesus, Inc., 482 U.S. 569 (1987); Houston v.
Hill, 482 U.S. 451 (1987); Schaumburg v. Citizens for a Better
Environment, 444 U.S. 620 (1980).
59."Self-censorship is immune to an `as-applied' challenge, for
it derives from the individual's own actions, not an abuse of
government power."  City of Lakewood v. Plain Dealer Pub. Co.,
486 U.S. 750, 757 (1988).  See also Dombrowski v. Pfister, 380
U.S. 479, 491 (1965) (persons covered by a law that restricts
expression in vague and overbroad terms should not be expected to
"hammer[] out the structure of the statute piecemeal, with no
likelihood of obviating similar uncertainty for others").
60. Under the terms of the Stipulation entered between the
parties, the Government has retained the right to prosecute any
of the plaintiffs for speech posted online during the pendency of
this proceeding in the event plaintiffs' motion for a preliminary
injunction is denied.

