GNU Wget 1.17.1
The non-interactive download utility
Updated for Wget 1.17.1, 10 December 2015
by Hrvoje Nikšić and others

This file documents the GNU Wget utility for downloading network data.

Copyright © 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2015 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.

1 Overview

GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.

This chapter is a partial overview of Wget’s features.

• Wget is non-interactive, meaning that it can work in the background while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user’s constant presence, which can be a great hindrance when transferring a lot of data.

• Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as “recursive downloading.” While doing that, Wget respects the Robot Exclusion Standard (‘/robots.txt’). Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.

• File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since the last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring FTP sites, as well as home pages.

• Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports resuming (“regetting”), it will instruct the server to continue the download from where it left off.

• Wget supports proxy servers, which can lighten the network load, speed up retrieval, and provide access from behind firewalls. Wget uses passive FTP downloading by default, active FTP being an option.

• Wget supports IP version 6, the next generation of IP. IPv6 support is autodetected at compile time, and can be disabled at either build or run time. Binaries built with IPv6 support work well in both IPv4-only and dual-family environments.

• Built-in features offer mechanisms to tune which links you wish to follow (see [Following Links]).

• The progress of individual downloads is traced using a progress gauge. Interactive downloads are tracked using a “thermometer”-style gauge, whereas non-interactive ones are traced with dots, each dot representing a fixed amount of data received (1KB by default). Either gauge can be customized to your preferences.
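  For instance, the gauge and its style can be selected with the ‘--progress’ option; a minimal sketch, with a placeholder URL:

        wget --progress=dot:mega http://example.com/big-file.iso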
• Most of the features are fully configurable, either through command-line options or via the initialization file ‘.wgetrc’ (see [Startup File]). Wget allows you to define global startup files (‘/usr/local/etc/wgetrc’ by default) for site settings. You can also specify the location of a startup file with the ‘--config’ option.

• Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (see the file ‘COPYING’ that came with GNU Wget, for details).

2 Invoking

By default, Wget is very simple to invoke. The basic syntax is:

        wget [option]... [URL]...

Wget will simply download all the URLs specified on the command line. URL is a Uniform Resource Locator, as defined below.

However, you may wish to change some of the default parameters of Wget. You can do it in two ways: permanently, adding the appropriate command to ‘.wgetrc’ (see [Startup File]), or specifying it on the command line.

2.1 URL Format

URL is an acronym for Uniform Resource Locator. A uniform resource locator is a compact string representation for a resource available via the Internet. Wget recognizes the URL syntax as per RFC 1738. This is the most widely used form (square brackets denote optional parts):

        http://host[:port]/directory/file
        ftp://host[:port]/directory/file

You can also encode your username and password within a URL:

        ftp://user:password@host/path
        http://user:password@host/path

Either user or password, or both, may be left out. If you leave out either the HTTP username or password, no authentication will be sent. If you leave out the FTP username, ‘anonymous’ will be used. If you leave out the FTP password, your email address will be supplied as a default password.(1)

Important Note: if you specify a password-containing URL on the command line, the username and password will be plainly visible to all users on the system, by way of ps. On multi-user systems, this is a big security risk. To work around it, use ‘wget -i -’ and feed the URLs to Wget’s standard input, each on a separate line, terminated by C-d; a sketch of this appears at the end of this section.

You can encode unsafe characters in a URL as ‘%xy’, xy being the hexadecimal representation of the character’s ASCII value. Some common unsafe characters include ‘%’ (quoted as ‘%25’), ‘:’ (quoted as ‘%3A’), and ‘@’ (quoted as ‘%40’). Refer to RFC 1738 for a comprehensive list of unsafe characters.

Wget also supports the ‘type’ feature for FTP URLs. By default, FTP documents are retrieved in binary mode (type ‘i’), which means that they are downloaded unchanged. Another useful mode is the ‘a’ (ASCII) mode, which converts the line delimiters between different operating systems, and is thus useful for text files. Here is an example:

        ftp://host/directory/file;type=a

Two alternative variants of URL specification are also supported, because of historical (hysterical?) reasons and their widespread use.

FTP-only syntax (supported by NcFTP):

        host:/dir/file

HTTP-only syntax (introduced by Netscape):

        host[:port]/dir/file

These two alternative forms are deprecated, and may cease being supported in the future. If you do not understand the difference between these notations, or do not know which one to use, just use the plain ordinary format you use with your favorite browser, like Lynx or Netscape.

(1) If you have a ‘.netrc’ file in your home directory, the password will also be searched for there.
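A minimal sketch of the standard-input workaround mentioned in the Important Note above (the user, password, host and path are placeholders); with a here-document the credentials never appear on the command line, so they never show up in ps output:

        wget -i - <<EOF
        ftp://user:password@host/path/file
        EOF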
2.2 Option Syntax

Since Wget uses GNU getopt to process command-line arguments, every option has a long form along with the short one. Long options are more convenient to remember, but take time to type. You may freely mix different option styles, or specify options after the command-line arguments. Thus you may write:

        wget -r --tries=10 http://fly.srk.fer.hr/ -o log

The space between the option accepting an argument and the argument may be omitted. Instead of ‘-o log’ you can write ‘-olog’.

You may put several options that do not require arguments together, like:

        wget -drc URL

This is completely equivalent to:

        wget -d -r -c URL

Since the options can be specified after the arguments, you may terminate them with ‘--’. So the following will try to download URL ‘-x’, reporting failure to ‘log’:

        wget -o log -- -x

The options that accept comma-separated lists all respect the convention that specifying an empty list clears its value. This can be useful to clear the ‘.wgetrc’ settings. For instance, if your ‘.wgetrc’ sets exclude_directories to ‘/cgi-bin’, the following example will first reset it, and then set it to exclude ‘/~nobody’ and ‘/~somebody’. You can also clear the lists in ‘.wgetrc’ (see [Wgetrc Syntax]).

        wget -X '' -X /~nobody,/~somebody

Most options that do not accept arguments are boolean options, so named because their state can be captured with a yes-or-no (“boolean”) variable. For example, ‘--follow-ftp’ tells Wget to follow FTP links from HTML files and, on the other hand, ‘--no-glob’ tells it not to perform file globbing on FTP URLs. A boolean option is either affirmative or negative (beginning with ‘--no’). All such options share several properties.

Unless stated otherwise, it is assumed that the default behavior is the opposite of what the option accomplishes. For example, the documented existence of ‘--follow-ftp’ assumes that the default is to not follow FTP links from HTML pages.

Affirmative options can be negated by prepending ‘--no-’ to the option name; negative options can be negated by omitting the ‘--no-’ prefix. This might seem superfluous—if the default for an affirmative option is to not do something, then why provide a way to explicitly turn it off? But the startup file may in fact change the default. For instance, using follow_ftp = on in ‘.wgetrc’ makes Wget follow FTP links by default, and using ‘--no-follow-ftp’ is the only way to restore the factory default from the command line.

2.3 Basic Startup Options

‘-V’
‘--version’
    Display the version of Wget.

‘-h’
‘--help’
    Print a help message describing all of Wget’s command-line options.

‘-b’
‘--background’
    Go to background immediately after startup. If no output file is specified via ‘-o’, output is redirected to ‘wget-log’.

‘-e command’
‘--execute command’
    Execute command as if it were a part of ‘.wgetrc’ (see [Startup File]). A command thus invoked will be executed after the commands in ‘.wgetrc’, thus taking precedence over them. If you need to specify more than one wgetrc command, use multiple instances of ‘-e’.

2.4 Logging and Input File Options

‘-o logfile’
‘--output-file=logfile’
    Log all messages to logfile. The messages are normally reported to standard error.

‘-a logfile’
‘--append-output=logfile’
    Append to logfile. This is the same as ‘-o’, only it appends to logfile instead of overwriting the old log file. If logfile does not exist, a new file is created.
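    For instance, a sketch of a background retrieval that appends its messages to a persistent log, with a wgetrc command supplied via ‘-e’ (the log name and URL are placeholders):

        wget -b -a mirror.log -e tries=5 http://example.com/archive/file.iso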
‘-d’
‘--debug’
    Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case ‘-d’ will not work. Please note that compiling with debug support is always safe—Wget compiled with debug support will not print any debug info unless requested with ‘-d’. See [Reporting Bugs] for more information on how to use ‘-d’ for sending bug reports.

‘-q’
‘--quiet’
    Turn off Wget’s output.

‘-v’
‘--verbose’
    Turn on verbose output, with all the available data. The default output is verbose.

‘-nv’
‘--no-verbose’
    Turn off verbose without being completely quiet (use ‘-q’ for that), which means that error messages and basic information still get printed.

‘--report-speed=type’
    Output bandwidth as type. The only accepted value is ‘bits’.

‘-i file’
‘--input-file=file’
    Read URLs from a local or external file. If ‘-’ is specified as file, URLs are read from the standard input. (Use ‘./-’ to read from a file literally named ‘-’.) If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command line will be the first ones to be retrieved. If ‘--force-html’ is not specified, then file should consist of a series of URLs, one per line. However, if you specify ‘--force-html’, the document will be regarded as HTML. In that case you may have problems with relative links, which you can solve either by adding ‘<base href="url">’ to the documents or by specifying ‘--base=url’ on the command line. If the file is an external one, the document will be automatically treated as HTML if the Content-Type matches ‘text/html’. Furthermore, the file’s location will be implicitly used as base href if none was specified.

‘--input-metalink=file’
    Downloads files covered in local Metalink file. Metalink versions 3 and 4 are supported.

‘--metalink-over-http’
    Issues HTTP HEAD request instead of GET and extracts Metalink metadata from response headers. Then it switches to Metalink download. If no valid Metalink metadata is found, it falls back to ordinary HTTP download.

‘--preferred-location’
    Set preferred location for Metalink resources. This has effect if multiple resources with same priority are available.

‘-F’
‘--force-html’
    When input is read from a file, force it to be treated as an HTML file. This enables you to retrieve relative links from existing HTML files on your local disk, by adding ‘<base href="url">’
to HTML, or using the ‘--base’ command-line option.

‘-B URL’
‘--base=URL’
    Resolves relative links using URL as the point of reference, when reading links from an HTML file specified via the ‘-i’/‘--input-file’ option (together with ‘--force-html’, or when the input file was fetched remotely from a server describing it as HTML). This is equivalent to the presence of a BASE tag in the HTML input file, with URL as the value for the href attribute. For instance, if you specify ‘http://foo/bar/a.html’ for URL, and Wget reads ‘../baz/b.html’ from the input file, it would be resolved to ‘http://foo/baz/b.html’.

‘--config=FILE’
    Specify the location of a startup file you wish to use.

‘--rejected-log=logfile’
    Logs all URL rejections to logfile as comma-separated values. The values include the reason of rejection, the URL and the parent URL it was found in.

2.5 Download Options

‘--bind-address=ADDRESS’
    When making client TCP/IP connections, bind to ADDRESS on the local machine. ADDRESS may be specified as a hostname or IP address. This option can be useful if your machine is bound to multiple IPs.

‘-t number’
‘--tries=number’
    Set number of tries to number. Specify 0 or ‘inf’ for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like “connection refused” or “not found” (404), which are not retried.

‘-O file’
‘--output-document=file’
    The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If ‘-’ is used as file, documents will be printed to standard output, disabling link conversion. (Use ‘./-’ to print to a file literally named ‘-’.)

    Use of ‘-O’ is not intended to mean simply “use the name file instead of the one in the URL;” rather, it is analogous to shell redirection: ‘wget -O file http://foo’ is intended to work like ‘wget -O - http://foo > file’; ‘file’ will be truncated immediately, and all downloaded content will be written there.

    For this reason, ‘-N’ (for timestamp-checking) is not supported in combination with ‘-O’: since file is always newly created, it will always have a very new timestamp. A warning will be issued if this combination is used.

    Similarly, using ‘-r’ or ‘-p’ with ‘-O’ may not work as you expect: Wget won’t just download the first file to file and then download the rest to their normal names: all downloaded content will be placed in file. This was disabled in version 1.11, but has been reinstated (with a warning) in 1.11.2, as there are some cases where this behavior can actually have some use.

    A combination with ‘-nc’ is only accepted if the given output file does not exist.

    Note that a combination with ‘-k’ is only permitted when downloading a single document, as in that case it will just convert all relative URIs to external ones; ‘-k’ makes no sense for multiple URIs when they’re all being downloaded to a single file; ‘-k’ can be used only when the output is a regular file.

‘-nc’
‘--no-clobber’
    If a file is downloaded more than once in the same directory, Wget’s behavior depends on a few options, including ‘-nc’. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.

    When running Wget without ‘-N’, ‘-nc’, ‘-r’, or ‘-p’, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named ‘file.1’. If that file is downloaded yet again, the third copy will be named ‘file.2’, and so on.
    (This is also the behavior with ‘-nd’, even if ‘-r’ or ‘-p’ are in effect.) When ‘-nc’ is specified, this behavior is suppressed, and Wget will refuse to download newer copies of ‘file’. Therefore, “no-clobber” is actually a misnomer in this mode—it’s not clobbering that’s prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that’s prevented.

    When running Wget with ‘-r’ or ‘-p’, but without ‘-N’, ‘-nd’, or ‘-nc’, re-downloading a file will result in the new copy simply overwriting the old. Adding ‘-nc’ will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.

    When running Wget with ‘-N’, with or without ‘-r’ or ‘-p’, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file (see [Time-Stamping]). ‘-nc’ may not be specified at the same time as ‘-N’.

    A combination with ‘-O’/‘--output-document’ is only accepted if the given output file does not exist.

    Note that when ‘-nc’ is specified, files with the suffixes ‘.html’ or ‘.htm’ will be loaded from the local disk and parsed as if they had been retrieved from the Web.

‘--backups=backups’
    Before (over)writing a file, back up an existing file by adding a ‘.1’ suffix (‘_1’ on VMS) to the file name. Such backup files are rotated to ‘.2’, ‘.3’, and so on, up to backups (and lost beyond that).

‘-c’
‘--continue’
    Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:

        wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

    If there is a file named ‘ls-lR.Z’ in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.

    Note that you don’t need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. ‘-c’ only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around. Without ‘-c’, the previous example would just download the remote file to ‘ls-lR.Z.1’, leaving the truncated ‘ls-lR.Z’ file alone.

    Beginning with Wget 1.7, if you use ‘-c’ on a non-empty file, and it turns out that the server does not support continued downloading, Wget will refuse to start the download from scratch, which would effectively ruin existing contents. If you really want the download to start from scratch, remove the file.

    Also beginning with Wget 1.7, if you use ‘-c’ on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)—because “continuing” is not meaningful, no download occurs. On the other side of the coin, while using ‘-c’, any file that’s bigger on the server than locally will be considered an incomplete download and only (length(remote) - length(local)) bytes will be downloaded and tacked onto the end of the local file.
    This example shows how to log in to a server using POST, saving the resulting cookies, and then proceed to download pages presumably only accessible to authorized users:

        # Log in to the server.  This can be done only once.
        wget --save-cookies cookies.txt \
             --post-data 'user=foo&password=bar' \
             http://server.com/auth.php

        # Now grab the page or pages we care about.
        wget --load-cookies cookies.txt \
             -p http://server.com/interesting/article.php

    If the server is using session cookies to track user authentication, the above will not work because ‘--save-cookies’ will not save them (and neither will browsers) and the ‘cookies.txt’ file will be empty. In that case use ‘--keep-session-cookies’ along with ‘--save-cookies’ to force saving of session cookies.

‘--method=HTTP-Method’
    For the purpose of RESTful scripting, Wget allows sending of other HTTP Methods without the need to explicitly set them using ‘--header=Header-Line’. Wget will use whatever string is passed to it after ‘--method’ as the HTTP Method to the server.

‘--body-data=Data-String’
‘--body-file=Data-File’
    Must be set when additional data needs to be sent to the server along with the Method specified using ‘--method’. ‘--body-data’ sends string as data, whereas ‘--body-file’ sends the contents of file. Other than that, they work in exactly the same way.

    Currently, ‘--body-file’ is not for transmitting files as a whole. Wget does not currently support multipart/form-data for transmitting data; only application/x-www-form-urlencoded. In the future, this may be changed so that wget sends the ‘--body-file’ as a complete file instead of sending its contents to the server. Please be aware that Wget needs to know the contents of BODY Data in advance, and hence the argument to ‘--body-file’ should be a regular file. See ‘--post-file’ for a more detailed explanation. Only one of ‘--body-data’ and ‘--body-file’ should be specified.

    If Wget is redirected after the request is completed, Wget will suspend the current method and send a GET request till the redirection is completed. This is true for all redirection response codes except 307 Temporary Redirect, which is used to explicitly specify that the request method should not change. Another exception is when the method is set to POST, in which case the redirection rules specified under ‘--post-data’ are followed.

‘--content-disposition’
    If this is set to on, experimental (not fully-functional) support for Content-Disposition headers is enabled. This can currently result in extra round-trips to the server for a HEAD request, and is known to suffer from a few bugs, which is why it is not currently enabled by default.

    This option is useful for some file-downloading CGI programs that use Content-Disposition headers to describe what the name of a downloaded file should be.

‘--content-on-error’
    If this is set to on, wget will not skip the content when the server responds with an HTTP status code that indicates error.

‘--trust-server-names’
    If this is set to on, on a redirect the last component of the redirection URL will be used as the local file name. By default, the last component of the original URL is used.

‘--auth-no-challenge’
    If this option is given, Wget will send Basic HTTP authentication information (plaintext username and password) for all requests, just like Wget 1.10.2 and prior did by default.

    Use of this option is not recommended, and is intended only to support a few obscure servers, which never send HTTP authentication challenges, but accept unsolicited auth info, say, in addition to form-based authentication.

2.8 HTTPS (SSL/TLS) Options

To support encrypted HTTP (HTTPS) downloads, Wget must be compiled with an external SSL library. The current default is GnuTLS. In addition, Wget also supports HSTS (HTTP Strict Transport Security). If Wget is compiled without SSL support, none of these options are available.
‘--secure-protocol=protocol’
    Choose the secure protocol to be used. Legal values are ‘auto’, ‘SSLv2’, ‘SSLv3’, ‘TLSv1’, ‘TLSv1_1’, ‘TLSv1_2’ and ‘PFS’. If ‘auto’ is used, the SSL library is given the liberty of choosing the appropriate protocol automatically, which is achieved by sending a TLSv1 greeting. This is the default.

    Specifying ‘SSLv2’, ‘SSLv3’, ‘TLSv1’, ‘TLSv1_1’ or ‘TLSv1_2’ forces the use of the corresponding protocol. This is useful when talking to old and buggy SSL server implementations that make it hard for the underlying SSL library to choose the correct protocol version. Fortunately, such servers are quite rare.

    Specifying ‘PFS’ enforces the use of the so-called Perfect Forward Security cipher suites. In short, PFS adds security by creating a one-time key for each SSL connection. It has a bit more CPU impact on client and server. We use ciphers that are known to be secure (e.g. no MD4) and the TLS protocol.

‘--https-only’
    When in recursive mode, only HTTPS links are followed.

‘--no-check-certificate’
    Don’t check the server certificate against the available certificate authorities. Also don’t require the URL host name to match the common name presented by the certificate.

    As of Wget 1.10, the default is to verify the server’s certificate against the recognized certificate authorities, breaking the SSL handshake and aborting the download if the verification fails. Although this provides more secure downloads, it does break interoperability with some sites that worked with previous Wget versions, particularly those using self-signed, expired, or otherwise invalid certificates. This option forces an “insecure” mode of operation that turns the certificate verification errors into warnings and allows you to proceed.

    If you encounter “certificate verification” errors or ones saying that “common name doesn’t match requested host name”, you can use this option to bypass the verification and proceed with the download. Only use this option if you are otherwise convinced of the site’s authenticity, or if you really don’t care about the validity of its certificate. It is almost always a bad idea not to check the certificates when transmitting confidential or important data. For self-signed/internal certificates, you should download the certificate and verify against that instead of forcing this insecure mode. If you are really sure of not desiring any certificate verification, you can specify ‘--check-certificate=quiet’ to tell wget not to print any warning about invalid certificates, albeit in most cases this is the wrong thing to do.

‘--certificate=file’
    Use the client certificate stored in file. This is needed for servers that are configured to require certificates from the clients that connect to them. Normally a certificate is not required and this switch is optional.

‘--certificate-type=type’
    Specify the type of the client certificate. Legal values are ‘PEM’ (assumed by default) and ‘DER’, also known as ‘ASN1’.

‘--private-key=file’
    Read the private key from file. This allows you to provide the private key in a file separate from the certificate.

‘--private-key-type=type’
    Specify the type of the private key. Accepted values are ‘PEM’ (the default) and ‘DER’.

‘--ca-certificate=file’
    Use file as the file with the bundle of certificate authorities (“CA”) to verify the peers. The certificates must be in PEM format. Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.
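    For example, a minimal sketch of the approach recommended above for self-signed or internal certificates, verifying against a downloaded CA file rather than disabling verification (the file name and URL are placeholders):

        wget --ca-certificate=./internal-ca.pem https://intranet.example.com/report.pdf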
‘--ca-directory=directory’
    Specifies directory containing CA certificates in PEM format. Each file contains one CA certificate, and the file name is based on a hash value derived from the certificate. This is achieved by processing a certificate directory with the c_rehash utility supplied with OpenSSL. Using ‘--ca-directory’ is more efficient than ‘--ca-certificate’ when many certificates are installed because it allows Wget to fetch certificates on demand. Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.

‘--crl-file=file’
    Specifies a CRL file in file. This is needed for certificates that have been revoked by the CAs.

‘--random-file=file’
    [OpenSSL and LibreSSL only] Use file as the source of random data for seeding the pseudo-random number generator on systems without ‘/dev/urandom’. On such systems the SSL library needs an external source of randomness to initialize. Randomness may be provided by EGD (see ‘--egd-file’ below) or read from an external source specified by the user. If this option is not specified, Wget looks for random data in $RANDFILE or, if that is unset, in ‘$HOME/.rnd’. If you’re getting the “Could not seed OpenSSL PRNG; disabling SSL.” error, you should provide random data using some of the methods described above.

‘--egd-file=file’
    [OpenSSL only] Use file as the EGD socket. EGD stands for Entropy Gathering Daemon, a user-space program that collects data from various unpredictable system sources and makes it available to other programs that might need it. Encryption software, such as the SSL library, needs sources of non-repeating randomness to seed the random number generator used to produce cryptographically strong keys. OpenSSL allows the user to specify his own source of entropy using the RAND_FILE environment variable. If this variable is unset, or if the specified file does not produce enough randomness, OpenSSL will read random data from the EGD socket specified using this option. If this option is not specified (and the equivalent startup command is not used), EGD is never contacted. EGD is not needed on modern Unix systems that support ‘/dev/urandom’.

‘--no-hsts’
    Wget supports HSTS (HTTP Strict Transport Security, RFC 6797) by default. Use ‘--no-hsts’ to make Wget act as a non-HSTS-compliant UA. As a consequence, Wget would ignore all the Strict-Transport-Security headers, and would not enforce any existing HSTS policy.

‘--hsts-file=file’
    By default, Wget stores its HSTS database in ‘~/.wget-hsts’. You can use ‘--hsts-file’ to override this. Wget will use the supplied file as the HSTS database. Such file must conform to the correct HSTS database format used by Wget. If Wget cannot parse the provided file, the behaviour is unspecified.

    Wget’s HSTS database is a plain text file. Each line contains an HSTS entry (i.e. a site that has issued a Strict-Transport-Security header and that therefore has specified a concrete HSTS policy to be applied). Lines starting with a hash (‘#’) are ignored by Wget. Please note that in spite of this convenient human-readability, hand-hacking the HSTS database is generally not a good idea.

    An HSTS entry line consists of several fields separated by one or more whitespace characters:

        <hostname> SP [<port>] SP <include subdomains> SP <created> SP <max-age>

    The hostname and port fields indicate the hostname and port to which the given HSTS policy applies. The port field may be zero, and it will be, in most cases.
    That means that the port number will not be taken into account when deciding whether such HSTS policy should be applied on a given request (only the hostname will be evaluated). When port is different from zero, both the target hostname and the port will be evaluated and the HSTS policy will only be applied if both of them match.

    This feature has been included for testing/development purposes only. The Wget testsuite (in ‘testenv/’) creates HSTS databases with explicit ports with the purpose of ensuring Wget’s correct behaviour. Applying HSTS policies to ports other than the default ones is discouraged by RFC 6797 (see Appendix B "Differences between HSTS Policy and Same-Origin Policy"). Thus, this functionality should not be used in production environments and port will typically be zero.

    The last three fields do what they are expected to. The field include subdomains can either be 1 or 0, and it signals whether the subdomains of the target domain should be part of the given HSTS policy as well. The created and max-age fields hold the timestamp values of when such entry was created (first seen by Wget) and the HSTS-defined value ‘max-age’, which states how long that HSTS policy should remain active, measured in seconds elapsed since the timestamp stored in created. Once that time has passed, that HSTS policy will no longer be valid and will eventually be removed from the database.

    If you supply your own HSTS database via ‘--hsts-file’, be aware that Wget may modify the provided file if any change occurs between the HSTS policies requested by the remote servers and those in the file. When Wget exits, it effectively updates the HSTS database by rewriting the database file with the new entries.

    If the supplied file does not exist, Wget will create one. This file will contain the new HSTS entries. If no HSTS entries were generated (no Strict-Transport-Security headers were sent by any of the servers) then no file will be created, not even an empty one. This behaviour applies to the default database file (‘~/.wget-hsts’) as well: it will not be created until some server enforces an HSTS policy.

    Care is taken not to override possible changes made by other Wget processes at the same time over the HSTS database. Before dumping the updated HSTS entries on the file, Wget will re-read it and merge the changes.

    Using a custom HSTS database and/or modifying an existing one is discouraged. For more information about the potential security threats arising from such practice, see section 14 "Security Considerations" of RFC 6797, especially section 14.9 "Creative Manipulation of HSTS Policy Store".

‘--warc-file=file’
    Use file as the destination WARC file.

‘--warc-header=string’
    Insert string into the warcinfo record.

‘--warc-max-size=size’
    Set the maximum size of the WARC files to size.

‘--warc-cdx’
    Write CDX index files.

‘--warc-dedup=file’
    Do not store records listed in this CDX file.

‘--no-warc-compression’
    Do not compress WARC files with GZIP.

‘--no-warc-digests’
    Do not calculate SHA1 digests.

‘--no-warc-keep-log’
    Do not store the log file in a WARC record.

‘--warc-tempdir=dir’
    Specify the location for temporary files created by the WARC writer.

2.9 FTP Options

‘--ftp-user=user’
‘--ftp-password=password’
    Specify the username user and password password on an FTP server. Without this, or the corresponding startup option, the password defaults to ‘-wget@’, normally used for anonymous FTP.
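    For instance, a sketch of an explicit anonymous login (the host, path and e-mail address are placeholders):

        wget --ftp-user=anonymous --ftp-password=you@example.com \
             ftp://ftp.example.com/pub/README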
    Another way to specify username and password is in the URL itself (see [URL Format]). Either method reveals your password to anyone who bothers to run ps. To prevent the passwords from being seen, store them in ‘.wgetrc’ or ‘.netrc’, and make sure to protect those files from other users with chmod. If the passwords are really important, do not leave them lying in those files either—edit the files and delete them after Wget has started the download. See [Security Considerations] for more information about security issues with Wget.

‘--no-remove-listing’
    Don’t remove the temporary ‘.listing’ files generated by FTP retrievals. Normally, these files contain the raw directory listings received from FTP servers. Not removing them can be useful for debugging purposes, or when you want to be able to easily check on the contents of remote server directories (e.g. to verify that a mirror you’re running is complete).

    Note that even though Wget writes to a known filename for this file, this is not a security hole in the scenario of a user making ‘.listing’ a symbolic link to ‘/etc/passwd’ or something and asking root to run Wget in his or her directory. Depending on the options used, either Wget will refuse to write to ‘.listing’, making the globbing/recursion/time-stamping operation fail, or the symbolic link will be deleted and replaced with the actual ‘.listing’ file, or the listing will be written to a ‘.listing.number’ file.

    Even though this situation isn’t a problem, though, root should never run Wget in a non-trusted user’s directory. A user could do something as simple as linking ‘index.html’ to ‘/etc/passwd’ and asking root to run Wget with ‘-N’ or ‘-r’ so the file will be overwritten.

‘--no-glob’
    Turn off FTP globbing. Globbing refers to the use of shell-like special characters (wildcards), like ‘*’, ‘?’, ‘[’ and ‘]’ to retrieve more than one file from the same directory at once, like:

        wget ftp://gnjilux.srk.fer.hr/*.msg

    By default, globbing will be turned on if the URL contains a globbing character. This option may be used to turn globbing on or off permanently. You may have to quote the URL to protect it from being expanded by your shell. Globbing makes Wget look for a directory listing, which is system-specific. This is why it currently works only with Unix FTP servers (and the ones emulating Unix ls output).

‘--no-passive-ftp’
    Disable the use of the passive FTP transfer mode. Passive FTP mandates that the client connect to the server to establish the data connection rather than the other way around. If the machine is connected to the Internet directly, both passive and active FTP should work equally well. Behind most firewall and NAT configurations passive FTP has a better chance of working. However, in some rare firewall configurations, active FTP actually works when passive FTP doesn’t. If you suspect this to be the case, use this option, or set passive_ftp=off in your init file.

‘--preserve-permissions’
    Preserve remote file permissions instead of permissions set by umask.

‘--retr-symlinks’
    By default, when retrieving FTP directories recursively and a symbolic link is encountered, the symbolic link is traversed and the pointed-to files are retrieved. Currently, Wget does not traverse symbolic links to directories to download them recursively, though this feature may be added in the future.

    When ‘--retr-symlinks=no’ is specified, the linked-to file is not downloaded.
    Instead, a matching symbolic link is created on the local filesystem. The pointed-to file will not be retrieved unless this recursive retrieval would have encountered it separately and downloaded it anyway. This option poses a security risk where a malicious FTP server may cause Wget to write to files outside of the intended directories through a specially crafted ‘.listing’ file.

    Note that when retrieving a file (not a directory) because it was specified on the command line, rather than because it was recursed to, this option has no effect. Symbolic links are always traversed in this case.

2.10 FTPS Options

‘--ftps-implicit’
    This option tells Wget to use FTPS implicitly. Implicit FTPS consists of initializing SSL/TLS from the very beginning of the control connection. This option does not send an AUTH TLS command: it assumes the server speaks FTPS and directly starts an SSL/TLS connection. If the attempt is successful, the session continues just like regular FTPS (PBSZ and PROT are sent, etc.). Implicit FTPS is no longer a requirement for FTPS implementations, and thus many servers may not support it. If ‘--ftps-implicit’ is passed and no explicit port number is specified, the default port for implicit FTPS, 990, will be used, instead of the default port for the "normal" (explicit) FTPS, which is the same as that of FTP, 21.

‘--no-ftps-resume-ssl’
    Do not resume the SSL/TLS session in the data channel. When starting a data connection, Wget tries to resume the SSL/TLS session previously started in the control connection. SSL/TLS session resumption avoids performing an entirely new handshake by reusing the SSL/TLS parameters of a previous session. Typically, the FTPS servers want it that way, so Wget does this by default. Under rare circumstances however, one might want to start an entirely new SSL/TLS session in every data connection. This is what ‘--no-ftps-resume-ssl’ is for.

‘--ftps-clear-data-connection’
    All the data connections will be in plain text. Only the control connection will be under SSL/TLS. Wget will send a PROT C command to achieve this, which must be approved by the server.

‘--ftps-fallback-to-ftp’
    Fall back to FTP if FTPS is not supported by the target server. For security reasons, this option is not asserted by default. The default behaviour is to exit with an error. If a server does not successfully reply to the initial AUTH TLS command, or in the case of implicit FTPS, if the initial SSL/TLS connection attempt is rejected, it is considered that such server does not support FTPS.

2.11 Recursive Retrieval Options

‘-r’
‘--recursive’
    Turn on recursive retrieving. See [Recursive Download] for more details. The default maximum depth is 5.

‘-l depth’
‘--level=depth’
    Specify recursion maximum depth level depth (see [Recursive Download]).

‘--delete-after’
    This option tells Wget to delete every single file it downloads, after having done so. It is useful for pre-fetching popular pages through a proxy, e.g.:

        wget -r -nd --delete-after http://whatever.com/~popular/page/

    The ‘-r’ option is to retrieve recursively, and ‘-nd’ to not create directories.

    Note that ‘--delete-after’ deletes files on the local machine. It does not issue the ‘DELE’ command to remote FTP sites, for instance. Also note that when ‘--delete-after’ is specified, ‘--convert-links’ is ignored, so ‘.orig’ files are simply not created in the first place.
‘-k’
‘--convert-links’
    After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.

    Each link will be changed in one of the two ways:

    • The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link. Example: if the downloaded file ‘/foo/doc.html’ links to ‘/bar/img.gif’, also downloaded, then the link in ‘doc.html’ will be modified to point to ‘../bar/img.gif’. This kind of transformation works reliably for arbitrary combinations of directories.

    • The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they point to. Example: if the downloaded file ‘/foo/doc.html’ links to ‘/bar/img.gif’ (or to ‘../bar/img.gif’), then the link in ‘doc.html’ will be modified to point to ‘http://hostname/bar/img.gif’.

    Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.

    Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by ‘-k’ will be performed at the end of all the downloads.

‘--convert-file-only’
    This option converts only the filename part of the URLs, leaving the rest of the URLs untouched. This filename part is sometimes referred to as the "basename", although we avoid that term here in order not to cause confusion.

    It works particularly well in conjunction with ‘--adjust-extension’, although this coupling is not enforced. It proves useful to populate Internet caches with files downloaded from different hosts.

    Example: if some link points to ‘//foo.com/bar.cgi?xyz’ with ‘--adjust-extension’ asserted and its local destination is intended to be ‘./foo.com/bar.cgi?xyz.css’, then the link would be converted to ‘//foo.com/bar.cgi?xyz.css’. Note that only the filename part has been modified. The rest of the URL has been left untouched, including the net path (‘//’) which would otherwise be processed by Wget and converted to the effective scheme (i.e. ‘http://’).

‘-K’
‘--backup-converted’
    When converting a file, back up the original version with a ‘.orig’ suffix. Affects the behavior of ‘-N’ (see [HTTP Time-Stamping Internals]).

‘-m’
‘--mirror’
    Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to ‘-r -N -l inf --no-remove-listing’.

‘-p’
‘--page-requisites’
    This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.

    Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded. Using ‘-r’ together with ‘-l’ can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left with “leaf documents” that are missing their requisites.
    For instance, say document ‘1.html’ contains an ‘<IMG>’ tag referencing ‘1.gif’ and an ‘<A HREF>’ tag pointing to external document ‘2.html’. Say that ‘2.html’ is similar but that its image is ‘2.gif’ and it links to ‘3.html’. Say this continues up to some arbitrarily high number.

    If one executes the command:

        wget -r -l 2 http://site/1.html

    then ‘1.html’, ‘1.gif’, ‘2.html’, ‘2.gif’, and ‘3.html’ will be downloaded. As you can see, ‘3.html’ is without its requisite ‘3.gif’ because Wget is simply counting the number of hops (up to 2) away from ‘1.html’ in order to determine where to stop the recursion. However, with this command:

        wget -r -l 2 -p http://site/1.html

    all the above files and ‘3.html’’s requisite ‘3.gif’ will be downloaded. Similarly,

        wget -r -l 1 -p http://site/1.html

    will cause ‘1.html’, ‘1.gif’, ‘2.html’, and ‘2.gif’ to be downloaded. One might think that:

        wget -r -l 0 -p http://site/1.html

    would download just ‘1.html’ and ‘1.gif’, but unfortunately this is not the case, because ‘-l 0’ is equivalent to ‘-l inf’—that is, infinite recursion. To download a single HTML page (or a handful of them, all specified on the command line or in a ‘-i’ URL input file) and its (or their) requisites, simply leave off ‘-r’ and ‘-l’:

        wget -p http://site/1.html

    Note that Wget will behave as if ‘-r’ had been specified, but only that single page and its requisites will be downloaded. Links from that page to external documents will not be followed. Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to ‘-p’:

        wget -E -H -k -K -p http://site/document

    To finish off this topic, it’s worth knowing that Wget’s idea of an external document link is any URL specified in an ‘<A>’ tag, an ‘<AREA>’ tag, or a ‘<LINK>’ tag other than ‘<LINK REL="stylesheet">’.

‘--strict-comments’
    Turn on strict parsing of HTML comments. The default is to terminate comments at the first occurrence of ‘-->’.

    According to specifications, HTML comments are expressed as SGML declarations. Declaration is special markup that begins with ‘<!’ and ends with ‘>’, such as ‘<!DOCTYPE ...>’, that may contain comments between a pair of ‘--’ delimiters. HTML comments are “empty declarations”, SGML declarations without any non-comment text. Therefore, ‘<!--comment-->’ is a valid comment, and so is ‘<!--one-- --two-->’, but ‘<!--1--2-->’ is not.

    On the other hand, most HTML writers don’t perceive comments as anything other than text delimited with ‘<!--’ and ‘-->’, which is not quite the same. For example, something like ‘<!------------ hey ---------->’ works as a valid comment as long as the number of dashes is a multiple of four (!). If not, the comment technically lasts until the next ‘--’, which may be at the other end of the document. Because of this, many popular browsers completely ignore the specification and implement what users have come to expect: comments delimited with ‘<!--’ and ‘-->’.

    Until version 1.9, Wget interpreted comments strictly, which resulted in missing links in many web pages that displayed fine in browsers, but had the misfortune of containing non-compliant comments. Beginning with version 1.9, Wget has joined the ranks of clients that implement “naive” comments, terminating each comment at the first occurrence of ‘-->’.

    If, for whatever reason, you want strict comment parsing, use this option to turn it on.
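Tying the preceding options together, a sketch of a typical mirror that remains browsable offline might combine ‘-m’, ‘-k’, ‘-K’ and ‘-E’ (the URL is a placeholder):

        wget -m -k -K -E http://example.com/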
2.12 Recursive Accept/Reject Options

‘-A acclist --accept acclist’
‘-R rejlist --reject rejlist’
    Specify comma-separated lists of file name suffixes or patterns to accept or reject (see [Types of Files]). Note that if any of the wildcard characters, ‘*’, ‘?’, ‘[’ or ‘]’, appear in an element of acclist or rejlist, it will be treated as a pattern, rather than a suffix. In this case, you have to enclose the pattern in quotes to prevent your shell from expanding it, like in ‘-A "*.mp3"’ or ‘-A '*.mp3'’.

‘--accept-regex urlregex’
‘--reject-regex urlregex’
    Specify a regular expression to accept or reject the complete URL.

‘--regex-type regextype’
    Specify the regular expression type. Possible types are ‘posix’ or ‘pcre’. Note that to be able to use the ‘pcre’ type, wget has to be compiled with libpcre support.

‘-D domain-list’
‘--domains=domain-list’
    Set domains to be followed. domain-list is a comma-separated list of domains. Note that it does not turn on ‘-H’.

‘--exclude-domains domain-list’
    Specify the domains that are not to be followed (see [Spanning Hosts]).

‘--follow-ftp’
    Follow FTP links from HTML documents. Without this option, Wget will ignore all the FTP links.

‘--follow-tags=list’
    Wget has an internal table of HTML tag / attribute pairs that it considers when looking for linked documents during a recursive retrieval. If a user wants only a subset of those tags to be considered, however, he or she should specify such tags in a comma-separated list with this option.

‘--ignore-tags=list’
    This is the opposite of the ‘--follow-tags’ option. To skip certain HTML tags when recursively looking for documents to download, specify them in a comma-separated list.

    In the past, this option was the best bet for downloading a single page and its requisites, using a command line like:

        wget --ignore-tags=a,area -H -k -K -r http://site/document

    However, the author of this option came across a page with tags like ‘<LINK REL="home" HREF="/">’ and came to the realization that specifying tags to ignore was not enough. One can’t just tell Wget to ignore ‘<LINK>’, because then stylesheets will not be downloaded. Now the best bet for downloading a single page and its requisites is the dedicated ‘--page-requisites’ option.

‘--ignore-case’
    Ignore case when matching files and directories. This influences the behavior of the ‘-R’, ‘-A’, ‘-I’, and ‘-X’ options, as well as globbing implemented when downloading from FTP sites. For example, with this option, ‘-A "*.txt"’ will match ‘file1.txt’, but also ‘file2.TXT’, ‘file3.TxT’, and so on. The quotes in the example are to prevent the shell from expanding the pattern.

‘-H’
‘--span-hosts’
    Enable spanning across hosts when doing recursive retrieving (see [Spanning Hosts]).

‘-L’
‘--relative’
    Follow relative links only. Useful for retrieving a specific home page without any distractions, not even those from the same hosts (see [Relative Links]).

‘-I list’
‘--include-directories=list’
    Specify a comma-separated list of directories you wish to follow when downloading (see [Directory-Based Limits]). Elements of list may contain wildcards.

‘-X list’
‘--exclude-directories=list’
    Specify a comma-separated list of directories you wish to exclude from download (see [Directory-Based Limits]). Elements of list may contain wildcards.
‘-np’
‘--no-parent’
    Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded. See [Directory-Based Limits] for more details.

2.13 Exit Status

Wget may return one of several error codes if it encounters problems.

0   No problems occurred.
1   Generic error code.
2   Parse error—for instance, when parsing command-line options, the ‘.wgetrc’ or ‘.netrc’...
3   File I/O error.
4   Network failure.
5   SSL verification failure.
6   Username/password authentication failure.
7   Protocol errors.
8   Server issued an error response.

With the exceptions of 0 and 1, the lower-numbered exit codes take precedence over higher-numbered ones when multiple types of errors are encountered.

In versions of Wget prior to 1.12, Wget’s exit status tended to be unhelpful and inconsistent. Recursive downloads would virtually always return 0 (success), regardless of any issues encountered, and non-recursive fetches only returned the status corresponding to the most recently-attempted download.

3 Recursive Download

GNU Wget is capable of traversing parts of the Web (or a single HTTP or FTP server), following links and directory structure. We refer to this as recursive retrieval, or recursion.

With HTTP URLs, Wget retrieves and parses the HTML or CSS from the given URL, retrieving the files the document refers to, through markup like href or src, or CSS URI values specified using the ‘url()’ functional notation. If the freshly downloaded file is also of type text/html, application/xhtml+xml, or text/css, it will be parsed and followed further.

Recursive retrieval of HTTP and HTML/CSS content is breadth-first. This means that Wget first downloads the requested document, then the documents linked from that document, then the documents linked by them, and so on. In other words, Wget first downloads the documents at depth 1, then those at depth 2, and so on until the specified maximum depth. The maximum depth to which the retrieval may descend is specified with the ‘-l’ option. The default maximum depth is five layers.

When retrieving an FTP URL recursively, Wget will retrieve all the data from the given directory tree (including the subdirectories up to the specified depth) on the remote server, creating its mirror image locally. FTP retrieval is also limited by the depth parameter. Unlike HTTP recursion, FTP recursion is performed depth-first.

By default, Wget will create a local directory tree corresponding to the one found on the remote server.

Recursive retrieving can find a number of applications, the most important of which is mirroring. It is also useful for WWW presentations, and any other opportunities where slow network connections should be bypassed by storing the files locally.

You should be warned that recursive downloads can overload the remote servers. Because of that, many administrators frown upon them and may ban access from your site if they detect very fast downloads of big amounts of content. When downloading from Internet servers, consider using the ‘-w’ option to introduce a delay between accesses to the server. The download will take a while longer, but the server administrator will not be alarmed by your rudeness.

Of course, recursive download may cause problems on your machine. If left to run unchecked, it can easily fill up the disk.
If downloading from the local network, it can also take bandwidth on the system, as well as consume memory and CPU.

Try to specify the criteria that match the kind of download you are trying to achieve. If you want to download only one page, use ‘--page-requisites’ without any additional recursion. If you want to download things under one directory, use ‘-np’ to avoid downloading things from other directories. If you want to download all the files from one directory, use ‘-l 1’ to make sure the recursion depth never exceeds one. See [Following Links] for more information about this.

Recursive retrieval should be used with care. Don’t say you were not warned.

4 Following Links

When retrieving recursively, one does not wish to retrieve loads of unnecessary data. Most of the time the users bear in mind exactly what they want to download, and want Wget to follow only specific links.

For example, if you wish to download the music archive from ‘fly.srk.fer.hr’, you will not want to download all the home pages that happen to be referenced by an obscure part of the archive.

Wget possesses several mechanisms that allow you to fine-tune which links it will follow.

4.1 Spanning Hosts

Wget’s recursive retrieval normally refuses to visit hosts different than the one you specified on the command line. This is a reasonable default; without it, every retrieval would have the potential to turn your Wget into a small version of Google.

However, visiting different hosts, or host spanning, is sometimes a useful option. Maybe the images are served from a different server. Maybe you’re mirroring a site that consists of pages interlinked between three servers. Maybe the server has two equivalent names, and the HTML pages refer to both interchangeably.

Span to any host—‘-H’
    The ‘-H’ option turns on host spanning, thus allowing Wget’s recursive run to visit any host referenced by a link. Unless sufficient recursion-limiting criteria are applied (such as the maximum depth), these foreign hosts will typically link to yet more hosts, and so on until Wget ends up sucking up much more data than you have intended.

Limit spanning to certain domains—‘-D’
    The ‘-D’ option allows you to specify the domains that will be followed, thus limiting the recursion only to the hosts that belong to these domains. Obviously, this makes sense only in conjunction with ‘-H’. A typical example would be downloading the contents of ‘www.server.com’, but allowing downloads from ‘images.server.com’, etc.:

        wget -rH -Dserver.com http://www.server.com/

    You can specify more than one address by separating them with a comma, e.g. ‘-Ddomain1.com,domain2.com’.

Keep download off certain domains—‘--exclude-domains’
    If there are domains you want to exclude specifically, you can do it with ‘--exclude-domains’, which accepts the same type of arguments as ‘-D’, but will exclude all the listed domains. For example, if you want to download all the hosts from the ‘foo.edu’ domain, with the exception of ‘sunsite.foo.edu’, you can do it like this:

        wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
             http://www.foo.edu/

4.2 Types of Files

When downloading material from the web, you will often want to restrict the retrieval to only certain file types. For example, if you are interested in downloading GIFs, you will not be overjoyed to get loads of PostScript documents, and vice versa.

Wget offers two options to deal with this problem. Each option description lists a short name, a long name, and the equivalent command in ‘.wgetrc’.
‘-A acclist’
‘--accept acclist’
‘accept = acclist’
‘--accept-regex urlregex’
‘accept-regex = urlregex’
     The argument to the ‘--accept’ option is a list of file suffixes or patterns that Wget will download during recursive retrieval. A suffix is the ending part of a file name, and consists of “normal” letters, e.g. ‘gif’ or ‘.jpg’. A matching pattern contains shell-like wildcards, e.g. ‘books*’ or ‘zelazny196[0-9]*’.
     So, specifying ‘wget -A gif,jpg’ will make Wget download only the files ending with ‘gif’ or ‘jpg’, i.e. gifs and jpegs. On the other hand, ‘wget -A "zelazny196[0-9]*"’ will download only files beginning with ‘zelazny’ and containing numbers from 1960 to 1969 anywhere within. Look up the manual of your shell for a description of how pattern matching works.
     Of course, any number of suffixes and patterns can be combined into a comma-separated list, and given as an argument to ‘-A’.
     The argument to the ‘--accept-regex’ option is a regular expression which is matched against the complete URL.

‘-R rejlist’
‘--reject rejlist’
‘reject = rejlist’
‘--reject-regex urlregex’
‘reject-regex = urlregex’
     The ‘--reject’ option works the same way as ‘--accept’, only its logic is the reverse; Wget will download all files except the ones matching the suffixes (or patterns) in the list. So, if you want to download a whole page except for the cumbersome mpegs and .au files, you can use ‘wget -R mpg,mpeg,au’. Analogously, to download all files except the ones beginning with ‘bjork’, use ‘wget -R "bjork*"’. The quotes are to prevent expansion by the shell.
     The argument to the ‘--reject-regex’ option is a regular expression which is matched against the complete URL.

The ‘-A’ and ‘-R’ options may be combined to achieve even better fine-tuning of which files to retrieve. E.g. ‘wget -A "*zelazny*" -R .ps’ will download all the files having ‘zelazny’ as a part of their name, but not the PostScript files.

Note that these two options do not affect the downloading of html files (as determined by a ‘.htm’ or ‘.html’ filename suffix). This behavior may not be desirable for all users, and may be changed for future versions of Wget.

Note, too, that query strings (strings at the end of a URL beginning with a question mark, ‘?’) are not included as part of the filename for accept/reject rules, even though these will actually contribute to the name chosen for the local file. It is expected that a future version of Wget will provide an option to allow matching against query strings.

Finally, it’s worth noting that the accept/reject lists are matched twice against downloaded files: once against the URL’s filename portion, to determine if the file should be downloaded in the first place; then, after it has been accepted and successfully downloaded, the local file’s name is also checked against the accept/reject lists to see if it should be removed. The rationale was that, since ‘.htm’ and ‘.html’ files are always downloaded regardless of accept/reject rules, they should be removed after being downloaded and scanned for links, if they did match the accept/reject lists. However, this can lead to unexpected results, since the local filenames can differ from the original URL filenames in the following ways, all of which can change whether an accept/reject rule matches:
• If the local file already exists and ‘--no-directories’ was specified, a numeric suffix will be appended to the original name.
• If ‘--adjust-extension’ was specified, the local filename might have ‘.html’ appended to it. If Wget is invoked with ‘-E -A.php’, a filename such as ‘index.php’ will match and be accepted, but upon download it will be named ‘index.php.html’, which no longer matches, and so the file will be deleted.
• Query strings do not contribute to URL matching, but are included in local filenames, and so do contribute to filename matching.

This behavior, too, is considered less-than-desirable, and may change in a future version of Wget.

4.3 Directory-Based Limits

Regardless of other link-following facilities, it is often useful to restrict which files are retrieved based on the directories those files are placed in. There can be many reasons for this—the home pages may be organized in a reasonable directory structure; or some directories may contain useless information, e.g. ‘/cgi-bin’ or ‘/dev’ directories. Wget offers three different options to deal with this requirement. Each option description lists a short name, a long name, and the equivalent command in ‘.wgetrc’.

‘-I list’
‘--include list’
‘include_directories = list’
     The ‘-I’ option accepts a comma-separated list of directories included in the retrieval. Any other directories will simply be ignored. The directories are absolute paths.
     So, if you wish to download from ‘http://host/people/bozo/’ following only links to bozo’s colleagues in the ‘/people’ directory and the bogus scripts in ‘/cgi-bin’, you can specify:
          wget -I /people,/cgi-bin http://host/people/bozo/

‘-X list’
‘--exclude list’
‘exclude_directories = list’
     The ‘-X’ option is exactly the reverse of ‘-I’—this is a list of directories excluded from the download. E.g. if you do not want Wget to download things from the ‘/cgi-bin’ directory, specify ‘-X /cgi-bin’ on the command line.
     The same as with ‘-A’/‘-R’, these two options can be combined to get a better fine-tuning of downloading subdirectories. E.g. if you want to load all the files from the ‘/pub’ hierarchy except for ‘/pub/worthless’, specify ‘-I/pub -X/pub/worthless’.

‘-np’
‘--no-parent’
‘no_parent = on’
     The simplest, and often very useful, way of limiting directories is disallowing retrieval of links that refer to the hierarchy above the beginning directory, i.e. disallowing ascent to the parent directory/directories.
     The ‘--no-parent’ option (short ‘-np’) is useful in this case. Using it guarantees that you will never leave the existing hierarchy. Supposing you issue Wget with:
          wget -r --no-parent http://somehost/~luzer/my-archive/
     You may rest assured that none of the references to ‘/~his-girls-homepage/’ or ‘/~luzer/all-my-mpegs/’ will be followed. Only the archive you are interested in will be downloaded. Essentially, ‘--no-parent’ is similar to ‘-I/~luzer/my-archive’, only it handles redirections in a more intelligent fashion.
     Note that, for HTTP (and HTTPS), the trailing slash is very important to ‘--no-parent’. HTTP has no concept of a “directory”—Wget relies on you to indicate what’s a directory and what isn’t. In ‘http://foo/bar/’, Wget will consider ‘bar’ to be a directory, while in ‘http://foo/bar’ (no trailing slash), ‘bar’ will be considered a filename (so ‘--no-parent’ would be meaningless, as its parent is ‘/’).

4.4 Relative Links

When ‘-L’ is turned on, only the relative links are ever followed. Relative links are here defined as those that do not refer to the web server root.
For example, these links are relative:
     <a href="foo.gif">
     <a href="foo/bar.html">
These links are not relative:
     <a href="/foo.gif">
     <a href="/foo/bar.html">

Using this option guarantees that recursive retrieval will not span hosts, even without ‘-H’. In simple cases it also allows downloads to “just work” without having to convert links. This option is probably not very useful and might be removed in a future release.

4.5 Following FTP Links

The rules for ftp are somewhat specific, as it is necessary for them to be. ftp links in html documents are often included for purposes of reference, and it is often inconvenient to download them by default.

To have ftp links followed from html documents, you need to specify the ‘--follow-ftp’ option. Having done that, ftp links will span hosts regardless of the ‘-H’ setting. This is logical, as ftp links rarely point to the same host where the http server resides. For similar reasons, the ‘-L’ option has no effect on such downloads. On the other hand, domain acceptance (‘-D’) and suffix rules (‘-A’ and ‘-R’) apply normally.

Also note that followed links to ftp directories will not be retrieved recursively further.

5 Time-Stamping

One of the most important aspects of mirroring information from the Internet is updating your archives.

Downloading the whole archive again and again, just to replace a few changed files, is expensive in terms of wasted bandwidth and money, as well as the time needed to do the update. This is why all the mirroring tools offer the option of incremental updating.

Such an updating mechanism means that the remote server is scanned in search of new files. Only those new files will be downloaded in place of the old ones. A file is considered new if one of these two conditions is met:
1. A file of that name does not already exist locally.
2. A file of that name does exist, but the remote file was modified more recently than the local file.
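In practice this check is enabled with the ‘-N’ (‘--timestamping’) option. As a minimal sketch (the URL here is hypothetical), running the same command again later will re-download the file only if the remote copy is newer than the local one:

     wget -N http://www.example.com/archive/files.tar.gz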
7 Examples

The examples are divided into three sections loosely based on their complexity.

7.1 Simple Usage

• Say you want to download a url. Just type:
     wget http://fly.srk.fer.hr/
• But what will happen if the connection is slow, and the file is lengthy? The connection will probably fail before the whole file is retrieved, more than once. In this case, Wget will try getting the file until it either gets the whole of it, or exceeds the default number of retries (this being 20). It is easy to change the number of tries to 45, to ensure that the whole file will arrive safely:
     wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
• Now let’s leave Wget to work in the background, and write its progress to log file ‘log’. It is tiring to type ‘--tries’, so we shall use ‘-t’.
     wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
  The ampersand at the end of the line makes sure that Wget works in the background. To unlimit the number of retries, use ‘-t inf’.
• The usage of ftp is just as simple. Wget will take care of the login and password.
     wget ftp://gnjilux.srk.fer.hr/welcome.msg
• If you specify a directory, Wget will retrieve the directory listing, parse it and convert it to html. Try:
     wget ftp://ftp.gnu.org/pub/gnu/
     links index.html

7.2 Advanced Usage

• You have a file that contains the URLs you want to download? Use the ‘-i’ switch:
     wget -i file
  If you specify ‘-’ as file name, the urls will be read from standard input.
• Create a five levels deep mirror image of the GNU web site, with the same directory structure the original has, with only one try per document, saving the log of the activities to ‘gnulog’:
     wget -r http://www.gnu.org/ -o gnulog
• The same as the above, but convert the links in the downloaded files to point to local files, so you can view the documents off-line:
     wget --convert-links -r http://www.gnu.org/ -o gnulog
• Retrieve only one html page, but make sure that all the elements needed for the page to be displayed, such as inline images and external style sheets, are also downloaded. Also make sure the downloaded page references the downloaded links.
     wget -p --convert-links http://www.server.com/dir/page.html
  The html page will be saved to ‘www.server.com/dir/page.html’, and the images, stylesheets, etc., somewhere under ‘www.server.com/’, depending on where they were on the remote server.
• The same as the above, but without the ‘www.server.com/’ directory. In fact, I don’t want to have all those random server directories anyway—just save all those files under a ‘download/’ subdirectory of the current directory.
     wget -p --convert-links -nH -nd -Pdownload \
          http://www.server.com/dir/page.html
• Retrieve the index.html of ‘www.lycos.com’, showing the original server headers:
     wget -S http://www.lycos.com/
• Save the server headers with the file, perhaps for post-processing.
     wget --save-headers http://www.lycos.com/
     more index.html
• Retrieve the first two levels of ‘wuarchive.wustl.edu’, saving them to ‘/tmp’.
     wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
• You want to download all the gifs from a directory on an http server. You tried ‘wget http://www.server.com/dir/*.gif’, but that didn’t work because http retrieval does not support globbing. In that case, use:
     wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
  More verbose, but the effect is the same. ‘-r -l1’ means to retrieve recursively (see [Recursive Download]), with maximum depth of 1.
‘--no-parent’ means that references to the parent directory are ignored (see [Directory-Based Limits]), and ‘-A.gif’ means to download only the gif files. ‘-A "*.gif"’ would have worked too.
• Suppose you were in the middle of downloading, when Wget was interrupted. Now you do not want to clobber the files already present. It would be:
     wget -nc -r http://www.gnu.org/
• If you want to encode your own username and password to http or ftp, use the appropriate url syntax (see [URL Format]).
     wget ftp://hniksic:mypassword@unix.server.com/.emacs
  Note, however, that this usage is not advisable on multi-user systems because it reveals your password to anyone who looks at the output of ps.
• You would like the output documents to go to standard output instead of to files?
     wget -O - http://jagor.srce.hr/ http://www.srce.hr/
  You can also combine the two options and make pipelines to retrieve the documents from remote hotlists:
     wget -O - http://cool.list.com/ | wget --force-html -i -

7.3 Very Advanced Usage

• If you wish Wget to keep a mirror of a page (or ftp subdirectories), use ‘--mirror’ (‘-m’), which is the shorthand for ‘-r -l inf -N’. You can put Wget in the crontab file asking it to recheck a site each Sunday:
     crontab
     0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
• In addition to the above, you want the links to be converted for local viewing. But, after having read this manual, you know that link conversion doesn’t play well with timestamping, so you also want Wget to back up the original html files before the conversion. The Wget invocation would look like this:
     wget --mirror --convert-links --backup-converted \
          http://www.gnu.org/ -o /home/me/weeklog
• But you’ve also noticed that local viewing doesn’t work all that well when html files are saved under extensions other than ‘.html’, perhaps because they were served as ‘index.cgi’. So you’d like Wget to rename all the files served with content-type ‘text/html’ or ‘application/xhtml+xml’ to ‘name.html’.
     wget --mirror --convert-links --backup-converted \
          --html-extension -o /home/me/weeklog \
          http://www.gnu.org/
  Or, with less typing:
     wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog

8 Various

This chapter contains all the stuff that could not fit anywhere else.

8.1 Proxies

Proxies are special-purpose http servers designed to transfer data from remote servers to local clients. One typical use of proxies is lightening network load for users behind a slow connection. This is achieved by channeling all http and ftp requests through the proxy, which caches the transferred data. When a cached resource is requested again, the proxy will return the data from its cache. Another use for proxies is for companies that separate (for security reasons) their internal networks from the rest of the Internet. In order to obtain information from the Web, their users connect and retrieve remote data using an authorized proxy.

Wget supports proxies for both http and ftp retrievals. The standard way to specify proxy location, which Wget recognizes, is using the following environment variables:

http_proxy
https_proxy
     If set, the http_proxy and https_proxy variables should contain the urls of the proxies for http and https connections respectively.

ftp_proxy
     This variable should contain the url of the proxy for ftp connections. It is quite common that http_proxy and ftp_proxy are set to the same url.
no_proxy
     This variable should contain a comma-separated list of domain extensions for which the proxy should not be used. For instance, if the value of no_proxy is ‘.mit.edu’, the proxy will not be used to retrieve documents from MIT.

In addition to the environment variables, proxy location and settings may be specified from within Wget itself.

‘--no-proxy’
‘proxy = on/off’
     This option and the corresponding ‘.wgetrc’ command may be used to suppress the use of a proxy, even if the appropriate environment variables are set.

‘http_proxy = URL’
‘https_proxy = URL’
‘ftp_proxy = URL’
‘no_proxy = string’
     These startup file variables allow you to override the proxy settings specified by the environment.

Some proxy servers require authorization to enable you to use them. The authorization consists of a username and password, which must be sent by Wget. As with http authorization, several authentication schemes exist. For proxy authorization, only the Basic authentication scheme is currently implemented.

You may specify your username and password either through the proxy url or through the command-line options. Assuming that the company’s proxy is located at ‘proxy.company.com’ at port 8001, a proxy url containing authorization data might look like this:

     http://hniksic:mypassword@proxy.company.com:8001/

Alternatively, you may use the ‘proxy-user’ and ‘proxy-password’ options, and the equivalent ‘.wgetrc’ settings proxy_user and proxy_password, to set the proxy username and password.

8.2 Distribution

Like all GNU utilities, the latest version of Wget can be found at the master GNU archive site ftp.gnu.org, and its mirrors. For example, Wget 1.17.1 can be found at
     ftp://ftp.gnu.org/pub/gnu/wget/wget-1.17.1.tar.gz

8.3 Web Site

The official web site for GNU Wget is at http://www.gnu.org/software/wget/. However, most useful information resides at “The Wget Wgiki”, http://wget.addictivecode.org/.

8.4 Mailing Lists

Primary List

The primary mailing list for discussion, bug reports, and questions about GNU Wget is at bug-wget@gnu.org. To subscribe, send an email to bug-wget-join@gnu.org, or visit http://lists.gnu.org/mailman/listinfo/bug-wget.

You do not need to subscribe to send a message to the list; however, please note that unsubscribed messages are moderated, and may take a while before they hit the list—usually around a day. If you want your message to show up immediately, please subscribe to the list before posting. Archives for the list may be found at http://lists.gnu.org/pipermail/bug-wget/.

An NNTP/Usenettish gateway is also available via Gmane. You can see the Gmane archives at http://news.gmane.org/gmane.comp.web.wget.general. Note that the Gmane archives conveniently include messages from both the current list, and the previous one. Messages also show up in the Gmane archives sooner than they do at lists.gnu.org.

Bug Notices List

Additionally, there is the wget-notify@addictivecode.org mailing list. This is a non-discussion list that receives bug report notifications from the bug tracker. To subscribe to this list, send an email to wget-notify-join@addictivecode.org, or visit http://addictivecode.org/mailman/listinfo/wget-notify.

Obsolete Lists

Previously, the mailing list wget@sunsite.dk was used as the main discussion list, and another list, wget-patches@sunsite.dk, was used for submitting and discussing patches to GNU Wget.
Messages from wget@sunsite.dk are archived at http://www.mail-archive.com/wget%40sunsite.dk/ and at http://news.gmane.org/gmane.comp.web.wget.general (which also continues to archive the current list, bug-wget@gnu.org). Messages from wget-patches@sunsite.dk are archived at http://news.gmane.org/gmane.comp.web.wget.patches.

8.5 Internet Relay Chat

In addition to the mailing lists, we also have a support channel set up via IRC at irc.freenode.org, #wget. Come check it out!

8.6 Reporting Bugs

You are welcome to submit bug reports via the GNU Wget bug tracker (see http://wget.addictivecode.org/BugTracker). Before actually submitting a bug report, please try to follow a few simple guidelines.
1. Please try to ascertain that the behavior you see really is a bug. If Wget crashes, it’s a bug. If Wget does not behave as documented, it’s a bug. If things behave strangely, but you are not sure about the way they are supposed to behave, it might well be a bug, but you might want to double-check the documentation and the mailing lists (see [Mailing Lists]).
2. Try to repeat the bug in as simple circumstances as possible. E.g. if Wget crashes while downloading ‘wget -rl0 -kKE -t5 --no-proxy http://yoyodyne.com -o /tmp/log’, you should try to see if the crash is repeatable, and if it will occur with a simpler set of options. You might even try to start the download at the page where the crash occurred to see if that page somehow triggered the crash.
Also, while I will probably be interested to know the contents of your ‘.wgetrc’ file, just dumping it into the debug message is probably a bad idea. Instead, you should first try to see if the bug repeats with ‘.wgetrc’ moved out of the way. Only if it turns out that ‘.wgetrc’ settings affect the bug, mail me the relevant parts of the file.
3. Please start Wget with the ‘-d’ option and send us the resulting output (or relevant parts thereof). If Wget was compiled without debug support, recompile it—it is much easier to trace bugs with debug support on.
Note: please make sure to remove any potentially sensitive information from the debug log before sending it to the bug address. ‘-d’ won’t go out of its way to collect sensitive information, but the log will contain a fairly complete transcript of Wget’s communication with the server, which may include passwords and pieces of downloaded data. Since the bug address is publicly archived, you may assume that all bug reports are visible to the public.
4. If Wget has crashed, try to run it in a debugger, e.g.
     gdb `which wget` core
and type ‘where’ to get the backtrace. This may not work if the system administrator has disabled core files, but it is safe to try.

8.7 Portability

Like all GNU software, Wget works on the GNU system. However, since it uses GNU Autoconf for building and configuring, and mostly avoids using “special” features of any particular Unix, it should compile (and work) on all common Unix flavors.

Various Wget versions have been compiled and tested under many kinds of Unix systems, including GNU/Linux, Solaris, SunOS 4.x, Mac OS X, OSF (aka Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some of those systems are no longer in widespread use and may not be able to support recent versions of Wget. If Wget fails to compile on your system, we would like to know about it.

Thanks to kind contributors, this version of Wget compiles and works on 32-bit Microsoft Windows platforms. It has been compiled successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC compilers. Naturally, it lacks some features available on Unix, but it should work as a substitute for people stuck with Windows. Note that Windows-specific portions of Wget are not guaranteed to be supported in the future, although this has been the case in practice for many years now. All questions and problems in Windows usage should be reported to the Wget mailing list at bug-wget@gnu.org, where the volunteers who maintain the Windows-related features might look at them.

Support for building on MS-DOS via DJGPP has been contributed by Gisle Vanem; a port to VMS is maintained by Steven Schweda, and is available at http://antinode.org/.

8.8 Signals

Since the purpose of Wget is background work, it catches the hangup signal (SIGHUP) and ignores it. If the output was on standard output, it will be redirected to a file named ‘wget-log’. Otherwise, SIGHUP is ignored. This is convenient when you wish to redirect the output of Wget after having started it.

     $ wget http://www.gnus.org/dist/gnus.tar.gz &
     ...
     $ kill -HUP %%
     SIGHUP received, redirecting output to ‘wget-log’.

Other than that, Wget will not try to interfere with signals in any way. C-c, kill -TERM and kill -KILL should kill it alike.

9 Appendices

This chapter contains some references I consider useful.

9.1 Robot Exclusion

It is extremely easy to make Wget wander aimlessly around a web site, sucking up all the available data as it goes. ‘wget -r site’, and you’re set. Great? Not for the server admin.

As long as Wget is only retrieving static pages, and doing it at a reasonable rate (see the ‘--wait’ option), there’s not much of a problem. The trouble is that Wget can’t tell the difference between the smallest static page and the most demanding CGI. A site I know has a section handled by a CGI Perl script that converts Info files to html on the fly. The script is slow, but works well enough for human users viewing an occasional Info file. However, when someone’s recursive Wget download stumbles upon the index page that links to all the Info files through the script, the system is brought to its knees without providing anything useful to the user. (This task of converting Info files could be done locally, and access to Info documentation for all installed GNU software on a system is available from the info command.)
To avoid this kind of accident, as well as to preserve privacy for documents that need to be protected from well-behaved robots, the concept of robot exclusion was invented. The idea is that the server administrators and document authors can specify which portions of the site they wish to protect from robots and which they will permit access to.

The most popular mechanism, and the de facto standard supported by all the major robots, is the “Robots Exclusion Standard” (RES) written by Martijn Koster et al. in 1994. It specifies the format of a text file containing directives that instruct the robots which URL paths to avoid. To be found by the robots, the specifications must be placed in ‘/robots.txt’ in the server root, which the robots are expected to download and parse.

Although Wget is not a web robot in the strictest sense of the word, it can download large parts of the site without the user’s intervention to download an individual page. Because of that, Wget honors RES when downloading recursively. For instance, when you issue:

     wget -r http://www.server.com/

First the index of ‘www.server.com’ will be downloaded. If Wget finds that it wants to download more documents from that server, it will request ‘http://www.server.com/robots.txt’ and, if found, use it for further downloads. ‘robots.txt’ is loaded only once per server.

Until version 1.8, Wget supported the first version of the standard, written by Martijn Koster in 1994 and available at http://www.robotstxt.org/wc/norobots.html. As of version 1.8, Wget has supported the additional directives specified in the internet draft titled “A Method for Web Robots Control”. The draft, which has, as far as I know, never made it to an rfc, is available at http://www.robotstxt.org/wc/norobots-rfc.txt.

This manual no longer includes the text of the Robot Exclusion Standard.

The second, less-known mechanism enables the author of an individual document to specify whether they want the links from the file to be followed by a robot. This is achieved using the META tag, like this:

     <meta name="robots" content="nofollow">
This is explained in some detail at http://www.robotstxt.org/wc/meta-user.html. Wget supports this method of robot exclusion in addition to the usual ‘/robots.txt’ exclusion.

If you know what you are doing and really really wish to turn off the robot exclusion, set the robots variable to ‘off’ in your ‘.wgetrc’. You can achieve the same effect from the command line, using the ‘-e’ switch, e.g. ‘wget -e robots=off url...’.

9.2 Security Considerations

When using Wget, you must be aware that it sends unencrypted passwords through the network, which may present a security problem. Here are the main issues, and some solutions.
1. The passwords on the command line are visible using ps. The best way around it is to use ‘wget -i -’ and feed the urls to Wget’s standard input, each on a separate line, terminated by C-d. Another workaround is to use ‘.netrc’ to store passwords; however, storing unencrypted passwords is also considered a security risk.
2. Using the insecure basic authentication scheme, unencrypted passwords are transmitted through the network routers and gateways.
3. The ftp passwords are also in no way encrypted. There is no good solution for this at the moment.
4. Although the “normal” output of Wget tries to hide the passwords, debugging logs show them, in all forms. This problem is avoided by being careful when you send debug logs (yes, even when you send them to me).

9.3 Contributors

GNU Wget was written by Hrvoje Nikšić <hniksic@xemacs.org>. However, the development of Wget could never have gone as far as it has, were it not for the help of many people, either with bug reports, feature proposals, patches, or letters saying “Thanks!”.

Special thanks goes to the following people (no particular order):

• Dan Harkless—contributed a lot of code and documentation of extremely high quality, as well as the --page-requisites and related options. He was the principal maintainer for some time and released Wget 1.6.
• Ian Abbott—contributed bug fixes, Windows-related fixes, and provided a prototype implementation of the breadth-first recursive download. Co-maintained Wget during the 1.8 release cycle.
• The dotsrc.org crew, in particular Karsten Thygesen—donated system resources such as the mailing list, web space, ftp space, and version control repositories, along with a lot of time to make these actually work. Christian Reiniger was of invaluable help with setting up Subversion.
• Heiko Herold—provided high-quality Windows builds and contributed bug and build reports for many years.
• Shawn McHorse—bug reports and patches.
• Kaveh R. Ghazi—on-the-fly ansi2knr-ization. Lots of portability fixes.
• Gordon Matzigkeit—‘.netrc’ support.
• Zlatko Čalušić, Tomislav Vujec and Dražen Kačar—feature suggestions and “philosophical” discussions.
• Darko Budor—initial port to Windows.
• Antonio Rosella—help and suggestions, plus the initial Italian translation.
• Tomislav Petrović, Mario Mikočević—many bug reports and suggestions.
• François Pinard—many thorough bug reports and discussions.
• Karl Eichwalder—lots of help with internationalization, Makefile layout and many other things.
• Junio Hamano—donated support for Opie and http Digest authentication.
• Mauro Tortonesi—improved IPv6 support, adding support for dual family systems. Refactored and enhanced FTP IPv6 code. Maintained GNU Wget from 2004–2007.
• Christopher G. Lewis—maintenance of the Windows version of GNU Wget.
• Gisle Vanem—many helpful patches and improvements, especially for Windows and MS-DOS support.
• Ralf Wildenhues—contributed patches to convert Wget to use Automake as part of its build process, and various bugfixes.
• Steven Schubiger—many helpful patches, bugfixes and improvements. Notably, conversion of Wget to use the Gnulib quotes and quoteargs modules, and the addition of password prompts at the console, via the Gnulib getpasswd-gnu module.
• Ted Mielczarek—donated support for CSS.
• Saint Xavier—support for IRIs (RFC 3987).
• People who provided donations for development—including Brian Gough.

The following people have provided patches, bug/build reports, useful suggestions, beta testing services, fan mail and all the other things that make maintenance so much fun:

Tim Adam, Adrian Aichner, Martin Baehr, Dieter Baron, Roger Beeman, Dan Berger,
T. Bharath, Christian Biere, Paul Bludov, Daniel Bodea, Mark Boyns, John Burden, Julien Buty, Wanderlei Cavassin, Gilles Cedoc, Tim Charron, Noel Cragg, Kristijan Čonkaš, John Daily, Andreas Damm, Ahmon Dancy, Andrew Davison, Bertrand Demiddelaer, Alexander Dergachev, Andrew Deryabin, Ulrich Drepper, Marc Duponcheel, Damir Džeko, Alan Eldridge, Hans-Andreas Engel, Aleksandar Erkalović, Andy Eskilsson, João Ferreira, Christian Fraenkel, David Fritz, Mike Frysinger, Charles C. Fu, FUJISHIMA Satsuki, Masashi Fujita, Howard Gayle, Marcel Gerrits, Lemble Gregory, Hans Grobler, Alain Guibert, Mathieu Guillaume, Aaron Hawley, Jochen Hein, Karl Heuer, Madhusudan Hosaagrahara, HIROSE Masaaki, Ulf Harnhammar, Gregor Hoffleit, Erik Magnus Hulthen, Richard Huveneers, Jonas Jensen, Larry Jones, Simon Josefsson, Mario Jurić, Hack Kampbjørn, Const Kaplinsky, Goran Kezunović, Igor Khristophorov, Robert Kleine, KOJIMA Haime, Fila Kolodny, Alexander Kourakos, Martin Kraemer, Sami Krank, Jay Krell, Simos KSenitellis, Christian Lackas, Hrvoje Lacko, Daniel S. Lewart, Nicolás Lichtmeier, Dave Love, Alexander V. Lukyanov, Thomas Lußnig, Andre Majorel, Aurelien Marchand, Matthew J. Mellon, Jordan Mendelson, Ted Mielczarek, Robert Millan, Lin Zhe Min, Jan Minar, Tim Mooney, Keith Moore, Adam D. Moss, Simon Munton, Charlie Negyesi, R. K. Owen, Jim Paris, Kenny Parnell, Leonid Petrov, Simone Piunno, Andrew Pollock, Steve Pothier, Jan Přikryl, Marin Purgar, Csaba Ráduly, Keith Refson, Bill Richardson, Tyler Riddle, Tobias Ringstrom, Jochen Roderburg, Juan José Rodríguez, Maciej W. Rozycki, Edward J. Sabol, Heinz Salzmann, Robert Schmidt, Nicolas Schodet, Benno Schulenberg, Andreas Schwab, Steven M. Schweda, Chris Seawood, Pranab Shenoy, Dennis Smit, Toomas Soome, Tage Stabell-Kulo, Philip Stadermann, Daniel Stenberg, Sven Sternberger, Markus Strasser, John Summerfield, Szakacsits Szabolcs, Mike Thomas, Philipp Thomas, Mauro Tortonesi, Dave Turner, Gisle Vanem, Rabin Vincent, Russell Vincent, Željko Vrba, Charles G Waldman, Douglas E. Wegscheid, Ralf Wildenhues, Joshua David Williams, Benjamin Wolsey, Saint Xavier, YAMAZAKI Makoto, Jasmin Zainul, Bojan Ždrnja, Kristijan Zimmer, Xin Zou.

Apologies to all who I accidentally left out, and many thanks to all the subscribers of the Wget mailing list.

Appendix A Copying this manual

A.1 GNU Free Documentation License

Version 1.3, 3 November 2008

Copyright © 2000, 2001, 2002, 2007, 2008, 2015 Free Software Foundation, Inc. http://fsf.org/

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG.
Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only. The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text. The “publisher” means any person or entity that distributes copies of the Document to the public. A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled “History” in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the “History” section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled “Endorsements”. Such a section may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.

You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.”
6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
11. RELICENSING

“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A “Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of copyrightable works thus published on the MMC site.

“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

“Incorporate” means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is “eligible for relicensing” if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

ADDENDUM: How to use this License for your documents

To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:

     Copyright (C) year your name.
     Permission is granted to copy, distribute and/or modify this document
     under the terms of the GNU Free Documentation License, Version 1.3
     or any later version published by the Free Software Foundation;
     with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
     Texts. A copy of the license is included in the section entitled
     ‘‘GNU Free Documentation License’’.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:

     with the Invariant Sections being list their titles, with the
     Front-Cover Texts being list, and with the Back-Cover Texts being list.

If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
  2.9 FTP Options
  2.10 FTPS Options
  2.11 Recursive Retrieval Options
  2.12 Recursive Accept/Reject Options
  2.13 Exit Status
3 Recursive Download
4 Following Links
  4.1 Spanning Hosts
  4.2 Types of Files
  4.3 Directory-Based Limits
  4.4 Relative Links
  4.5 Following FTP Links
5 Time-Stamping
  5.1 Time-Stamping Usage
  5.2 HTTP Time-Stamping Internals
  5.3 FTP Time-Stamping Internals
6 Startup File
  6.1 Wgetrc Location
  6.2 Wgetrc Syntax
  6.3 Wgetrc Commands
  6.4 Sample Wgetrc
7 Examples
  7.1 Simple Usage
  7.2 Advanced Usage
  7.3 Very Advanced Usage
8 Various
  8.1 Proxies
  8.2 Distribution
  8.3 Web Site
  8.4 Mailing Lists
    Primary List
    Bug Notices List
    Obsolete Lists
  8.5 Internet Relay Chat
  8.6 Reporting Bugs
  8.7 Portability
  8.8 Signals
9 Appendices
  9.1 Robot Exclusion
  9.2 Security Considerations
  9.3 Contributors
Appendix A Copying this manual
  A.1 GNU Free Documentation License
Concept Index