Simple Usage
Get the main page from a web-server:
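A likely command, with example.com standing in for the real host:
  curl http://www.example.com/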
Get the README file from the user's home directory at funet's ftp-server:
Get a web page from a server using port 8000:
Get a directory listing of an FTP site:
Get the definition of curl from a dictionary:
Fetch two documents at once:
Get a file off an FTPS server:
or use the more appropriate FTPS way to get the same file:
Get a file from an SSH server using SFTP:
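For instance (host, user and path are placeholders):
  curl -u username sftp://example.com/etc/issue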
Get a file from an SSH server using SCP using a private key (not password-protected) to authenticate:
Get a file from an SSH server using SCP using a private key (password-protected) to authenticate:
Get the main page from an IPv6 web server:
Get a file from an SMB server:
Download to a File
Get a web page and store in a local file with a specific name:
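Something like this, where thatpage.html is whatever local name you pick:
  curl -o thatpage.html http://www.example.com/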
Get a web page and store in a local file, make the local file get the name of the remote document (if no file name part is specified in the URL, this will fail):
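For example, using -O so the remote name index.html is reused locally:
  curl -O http://www.example.com/index.html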
Fetch two files and store them with their remote names:
Using Passwords
FTP
To ftp files using name+passwd, include them in the URL like:
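For example (placeholder credentials and host):
  curl ftp://name:passwd@ftp.example.com/full/path/to/file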
or specify them with the -u flag like
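For example:
  curl -u name:passwd ftp://ftp.example.com/full/path/to/file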
FTPS
It is just like for FTP, but you may also want to specify and use SSL-specific options for certificates etc.
Note that using FTPS:// as prefix is the 'implicit' way as described in the standards, while the recommended 'explicit' way is done by using FTP:// and the --ftp-ssl option.
SFTP / SCP
This is similar to FTP, but you can use the --key option to specify a private key to use instead of a password. Note that the private key may itself be protected by a password that is unrelated to the login password of the remote system; this password is specified using the --pass option. Typically, curl will automatically extract the public key from the private key file, but in cases where curl does not have the proper library support, a matching public key file must be specified using the --pubkey option.
HTTP
Curl also supports user and password in HTTP URLs, thus you can pick a file like:
or specify user and password separately like in
HTTP offers many different methods of authentication and curl supports several: Basic, Digest, NTLM and Negotiate (SPNEGO). Without telling which method to use, curl defaults to Basic. You can also ask curl to pick the most secure ones out of the ones that the server accepts for the given URL, by using --anyauth.
Note! According to the URL specification, HTTP URLs can not contain a user and password, so that style will not work when using curl via a proxy, even though curl allows it at other times. When using a proxy, you must use the -u style for user and password.
HTTPS
Probably most commonly used with private certificates, as explained below.
Proxy
curl supports both HTTP and SOCKS proxy servers, with optional authentication. It does not have special support for FTP proxy servers since there are no standards for those, but it can still be made to work with many of them. You can also use both HTTP and SOCKS proxies to transfer files to and from FTP servers.
Get an ftp file using an HTTP proxy named my-proxy that uses port 888:
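Probably something like (my-proxy is a placeholder proxy host):
  curl -x my-proxy:888 ftp://ftp.example.com/README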
Get a file from an HTTP server that requires user and password, using the same proxy as above:
Some proxies require special authentication. Specify by using -U as above:
A comma-separated list of hosts and domains which do not use the proxy can be specified as:
If the proxy is specified with --proxy1.0 instead of --proxy or -x, then curl will use HTTP/1.0 instead of HTTP/1.1 for any CONNECT attempts.
curl also supports SOCKS4 and SOCKS5 proxies with --socks4 and --socks5.
See also the environment variables Curl supports that offer further proxy control.
Most FTP proxy servers are set up to appear as a normal FTP server from the client's perspective, with special commands to select the remote FTP server. curl supports the -u, -Q and --ftp-account options that can be used to set up transfers through many FTP proxies. For example, a file can be uploaded to a remote FTP server using a Blue Coat FTP proxy with the options:
See the manual for your FTP proxy to determine the form it expects to set up transfers, and curl's -v option to see exactly what curl is sending.
Ranges
HTTP 1.1 introduced byte-ranges. Using this, a client can request to get only one or more subparts of a specified document. Curl supports this with the -r flag.
Get the first 100 bytes of a document:
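For instance:
  curl -r 0-99 http://www.example.com/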
Get the last 500 bytes of a document:
Curl also supports simple ranges for FTP files as well. Then you can only specify start and stop position.
Get the first 100 bytes of a document using FTP:
Uploading
FTP / FTPS / SFTP / SCP
Upload all data on stdin to a specified server:
Upload data from a specified file, login with user and password:
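For example (placeholders throughout):
  curl -T uploadfile -u user:passwd ftp://ftp.example.com/myfile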
Upload a local file to the remote site, and use the local file name at the remote site too:
Upload a local file to get appended to the remote file:
Curl also supports ftp upload through a proxy, but only if the proxy is configured to allow that kind of tunneling. If it does, you can run curl in a fashion similar to:
SMB / SMBS
HTTP
Upload all data on stdin to a specified HTTP site:
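A sketch, assuming the server accepts a PUT at that path:
  curl -T - http://www.example.com/myfile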
Note that the HTTP server must have been configured to accept PUT before this can be done successfully.
For other ways to do HTTP data upload, see the POST section below.
Verbose / Debug
If curl fails where it isn't supposed to, if the servers don't let you in, if you can't understand the responses: use the -v flag to get verbose fetching. Curl will output lots of info and what it sends and receives in order to let the user see all client-server interaction (but it won't show you the actual data).
To get even more details and information on what curl does, try using the --trace or --trace-ascii options with a given file name to log to, like this:
Detailed Information
Different protocols provide different ways of getting detailed information about specific files/documents. To get curl to show detailed information about a single file, you should use the -I/--head option. It displays all available info on a single file for HTTP and FTP. The HTTP information is a lot more extensive.
For HTTP, you can get the header information (the same as -I would show) shown before the data by using -i/--include. Curl understands the -D/--dump-header option when getting files from both FTP and HTTP, and it will then store the headers in the specified file.
Store the HTTP headers in a separate file (headers.txt in the example):
Note that headers stored in a separate file can be very useful at a later time if you want curl to use cookies sent by the server. More about that in the cookies section.
POST (HTTP)
It's easy to post data using curl. This is done using the
-d <data>
option. The post data must be URL-encoded.
Post a simple 'name' and 'phone' guestbook:
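A sketch of such a post (guest.cgi and the field values are made-up placeholders):
  curl -d "name=Daniel&phone=3320780" http://www.example.com/guest.cgi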
How to post a form with curl, lesson #1:
Dig out all the <input> tags in the form that you want to fill in.
If there's a 'normal' post, you use -d to post. -d takes a full 'post string', which is in the format
variable1=data1&variable2=data2&...
The 'variable' names are the names set with 'name=' in the <input> tags, and the data is the contents you want to fill in for the inputs. The data must be properly URL encoded. That means you replace space with + and that you replace weird letters with %XX where XX is the hexadecimal representation of the letter's ASCII code.
Example:
(page located at http://www.formpost.com/getthis/)
We want to enter user 'foobar' with password '12345'.
To post to this, you enter a curl command line like:
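Perhaps something like this, assuming the form posts to a script named post.cgi (the script name is a guess; the field data follows from the form above):
  curl -d "user=foobar&pass=12345" http://www.formpost.com/getthis/post.cgi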
While -d uses the application/x-www-form-urlencoded mime-type, generally understood by CGI's and similar, curl also supports the more capable multipart/form-data type. This latter type supports things like file upload.
-F accepts parameters like -F 'name=contents'. If you want the contents to be read from a file, use @filename as contents. When specifying a file, you can also specify the file content type by appending ;type=<mime type> to the file name. You can also post the contents of several files in one field. For example, the field name 'coolfiles' could be used to send three files with different content types, as in the sketch below.
If the content-type is not specified, curl will try to guess from the file extension (it only knows a few), or use the previously specified type (from an earlier file if several files are specified in a list) or else it will use the default type 'application/octet-stream'.
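A sketch of that multi-file 'coolfiles' field, with made-up local file names:
  curl -F "coolfiles=@fil1.gif;type=image/gif,fil2.txt,fil3.html" http://www.example.com/postit.cgi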
Emulate a fill-in form with -F. Let's say you fill in three fields in a form. One field is a file name which to post, one field is your name and one field is a file description. We want to post the file we have written named 'cooltext.txt'. To let curl do the posting of this data instead of your favourite browser, you have to read the HTML source of the form page and find the names of the input fields. In our example, the input field names are 'file', 'yourname' and 'filedescription'.
To send two files in one post you can do it in two ways:
Send multiple files in a single 'field' with a single field name:
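For instance (file names and URL are placeholders):
  curl -F "pictures=@dog.gif,cat.gif" http://www.example.com/upload.cgi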
Send two fields with two field names:
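For instance:
  curl -F "docpicture=@dog.gif" -F "catpicture=@cat.gif" http://www.example.com/upload.cgi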
To send a field value literally without interpreting a leading @ or <, or an embedded ;type=, use --form-string instead of -F. This is recommended when the value is obtained from a user or some other unpredictable source. Under these circumstances, using -F instead of --form-string could allow a user to trick curl into uploading a file.
Referrer
An HTTP request has the option to include information about which address referred it to the actual page. Curl allows you to specify the referrer to be used on the command line. It is especially useful to fool or trick stupid servers or CGI scripts that rely on that information being available or contain certain data.
User Agent
An HTTP request has the option to include information about the browser that generated the request. Curl allows it to be specified on the command line. It is especially useful to fool or trick stupid servers or CGI scripts that only accept certain browsers.
Example:
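Something along these lines (the target URL is a placeholder):
  curl -A 'Mozilla/3.0 (Win95; I)' http://www.example.com/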
Other common strings:
- Mozilla/3.0 (Win95; I) - Netscape Version 3 for Windows 95
- Mozilla/3.04 (Win95; U) - Netscape Version 3 for Windows 95
- Mozilla/2.02 (OS/2; U) - Netscape Version 2 for OS/2
- Mozilla/4.04 [en] (X11; U; AIX 4.2; Nav) - Netscape for AIX
- Mozilla/4.05 [en] (X11; U; Linux 2.0.32 i586) - Netscape for Linux
Note that Internet Explorer tries hard to be compatible in every way:
- Mozilla/4.0 (compatible; MSIE 4.01; Windows 95) - MSIE for W95
Mozilla is not the only possible User-Agent name:
- Konqueror/1.0 - KDE File Manager desktop client
- Lynx/2.7.1 libwww-FM/2.14 - Lynx command line browser
Cookies
Cookies are generally used by web servers to keep state information at the client's side. The server sets cookies by sending a response line in the headers that looks like Set-Cookie: <data> where the data part then typically contains a set of NAME=VALUE pairs (separated by semicolons ; like NAME1=VALUE1; NAME2=VALUE2;). The server can also specify for what path the 'cookie' should be used (by specifying path=value), when the cookie should expire (expire=DATE), for what domain to use it (domain=NAME) and if it should be used on secure connections only (secure).
If you've received a page from a server that contains a header like:
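for instance something like this (the cookie name and value here are made up):
  Set-Cookie: sessionid=boo123; path="/foo";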
it means the server wants that first pair passed on when we get anything in a path beginning with '/foo'.
Example, get a page that wants my name passed in a cookie:
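For example:
  curl -b "name=Daniel" http://www.example.com/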
Curl also has the ability to use previously received cookies in following sessions. If you get cookies from a server and store them in a file in a manner similar to:
.. you can then in a second connect to that (or another) site, use the cookies from the 'headers' file like:
While saving headers to a file is a working way to store cookies, it is however error-prone and not the preferred way to do this. Instead, make curl save the incoming cookies using the well-known netscape cookie format like this:
Note that by specifying -b you enable the 'cookie awareness' and with -L you can make curl follow a location: (which often is used in combination with cookies). So if a site sends cookies and a location, you can use a non-existing file to trigger the cookie awareness like:
The file to read cookies from must be formatted using plain HTTP headers OR as netscape's cookie file. Curl will determine what kind it is based on the file contents. In the above command, curl will parse the header and store the cookies received from www.example.com. curl will send to the server the stored cookies which match the request as it follows the location. The file 'empty.txt' may be a nonexistent file.
To read and write cookies from a netscape cookie file, you can set both -b and -c to use the same file:
Progress Meter
The progress meter exists to show a user that something actually is happening. The different fields in the output have the following meaning:
From left-to-right:
- % - percentage completed of the whole transfer
- Total - total size of the whole expected transfer
- % - percentage completed of the download
- Received - currently downloaded amount of bytes
- % - percentage completed of the upload
- Xferd - currently uploaded amount of bytes
- Average Speed Dload - the average transfer speed of the download
- Average Speed Upload - the average transfer speed of the upload
- Time Total - expected time to complete the operation
- Time Current - time passed since the invoke
- Time Left - expected time left to completion
- Curr.Speed - the average transfer speed the last 5 seconds (the first 5 seconds of a transfer is based on less time of course.)
The -# option will display a totally different progress bar that doesn't need much explanation!
Speed Limit
Curl allows the user to set the transfer speed conditions that must be met to let the transfer keep going. By using the switches -y and -Y you can make curl abort transfers if the transfer speed is below the specified lowest limit for a specified time.
To have curl abort the download if the speed is slower than 3000 bytes per second for 1 minute, run:
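That would be something like:
  curl -Y 3000 -y 60 http://www.example.com/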
This can very well be used in combination with the overall time limit, so that the above operation must be completed in whole within 30 minutes:
Forcing curl not to transfer data faster than a given rate is also possible, which might be useful if you're using a limited bandwidth connection and you don't want your transfer to use all of it (sometimes referred to as 'bandwidth throttle').
Make curl transfer data no faster than 10 kilobytes per second:
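For instance:
  curl --limit-rate 10K http://www.example.com/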
or
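with the rate given in plain bytes:
  curl --limit-rate 10240 http://www.example.com/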
Or prevent curl from uploading data faster than 1 megabyte per second:
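For instance:
  curl -T upload --limit-rate 1M ftp://uploads.example.com/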
When using the --limit-rate option, the transfer rate is regulated on a per-second basis, which will cause the total transfer speed to become lower than the given number. Sometimes of course substantially lower, if your transfer stalls during periods.
Config File
Curl automatically tries to read the .curlrc file (or _curlrc file on Microsoft Windows systems) from the user's home dir on startup.
The config file could be made up with normal command line switches, but you can also specify the long options without the dashes to make it more readable. You can separate the options and the parameter with spaces, or with = or :. Comments can be used within the file. If the first letter on a line is a #-symbol the rest of the line is treated as a comment.
If you want the parameter to contain spaces, you must enclose the entire parameter within double quotes ("). Within those quotes, you specify a quote as \".
NOTE: You must specify options and their arguments on the same line.
Example, set default time out and proxy in a config file:
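A sketch of such a config file (the timeout value and proxy host are placeholders):
  # We want a 30-minute timeout:
  -m 1800
  # ... and we use a proxy for all accesses:
  proxy = proxy.our.domain.com:8080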
Whitespaces ARE significant at the end of lines, but all whitespace leading up to the first characters of each line are ignored.
Prevent curl from reading the default file by using -q as the first command line parameter, like:
Force curl to get and display a local help page in case it is invoked without URL by making a config file similar to:
You can specify another config file to be read by using the -K/--config flag. If you set config file name to - it'll read the config from stdin, which can be handy if you want to hide options from being visible in process tables etc:
Extra Headers
When using curl in your own very special programs, you may end up needing to pass on your own custom headers when getting a web page. You can do this by using the
-H
flag.
Example, send the header X-you-and-me: yes to the server when getting a page:
This can also be useful in case you want curl to send a different text in a header than it normally does. The -H header you specify then replaces the header curl would normally send. If you replace an internal header with an empty one, you prevent that header from being sent. To prevent the Host: header from being used:
FTP and Path Names
Do note that when getting files with a ftp:// URL, the given path is relative to the directory you enter. To get the file README from your home directory at your ftp site, do:
But if you want the README file from the root directory of that very same site, you need to specify the absolute file name:
(I.e. with an extra slash in front of the file name.)
SFTP and SCP and Path Names
With sftp: and scp: URLs, the path name given is the absolute name on the server. To access a file relative to the remote user's home directory, prefix the file with /~/, such as:
FTP and Firewalls
The FTP protocol requires one of the involved parties to open a second connection as soon as data is about to get transferred. There are two ways to do this.
The default way for curl is to issue the PASV command which causes the server to open another port and await another connection performed by the client. This is good if the client is behind a firewall that doesn't allow incoming connections.
If the server, for example, is behind a firewall that doesn't allow connections on ports other than 21 (or if it just doesn't support the PASV command), the other way to do it is to use the PORT command and instruct the server to connect to the client on the given IP number and port (as parameters to the PORT command).
The -P flag to curl supports a few different options. Your machine may have several IP-addresses and/or network interfaces and curl allows you to select which of them to use. Default address can also be used:
Download with PORT but use the IP address of our le0 interface (this does not work on Windows):
Download with PORT but use 192.168.0.10 as our IP address to use:
Network Interface
Get a web page from a server using a specified port for the interface:
or
HTTPS
Secure HTTP requires a TLS library to be installed and used when curl is built. If that is done, curl is capable of retrieving and posting documents using the HTTPS protocol.
Example:
curl is also capable of using client certificates to get/post files from sites that require valid certificates. The only drawback is that the certificate needs to be in PEM-format. PEM is a standard and open format to store certificates with, but it is not used by the most commonly used browsers. If you want curl to use the certificates you use with your (favourite) browser, you may need to download/compile a converter that can convert your browser's formatted certificates to PEM formatted ones.
Example on how to automatically retrieve a document using a certificate with a personal password:
If you neglect to specify the password on the command line, you will be prompted for the correct password before any data can be received.
Many older HTTPS servers have problems with specific SSL or TLS versions, which newer versions of OpenSSL etc use, therefore it is sometimes useful to specify what SSL-version curl should use. Use -3, -2 or -1 to specify that exact SSL version to use (for SSLv3, SSLv2 or TLSv1 respectively):
Otherwise, curl will attempt to use a sensible TLS default version.
Resuming File Transfers
To continue a file transfer where it was previously aborted, curl supports resume on HTTP(S) downloads as well as FTP uploads and downloads.
Continue downloading a document:
Continue uploading a document:
Continue downloading a document from a web server
Time Conditions
HTTP allows a client to specify a time condition for the document it requests. It is
If-Modified-Since
or If-Unmodified-Since
. curl allows you to specify them with the -z
/--time-cond
flag.
For example, you can easily make a download that only gets performed if the remote file is newer than a local copy. It would be made like:
Or you can download a file only if the local file is newer than the remote one. Do this by prepending the date string with a
-
, as in:
You can specify a 'free text' date as condition. Tell curl to only download the file if it was updated since January 12, 2012:
Curl will then accept a wide range of date formats. You always make the date check the other way around by prepending it with a dash (-).
DICT
For fun try
Aliases for 'm' are 'match' and 'find', and aliases for 'd' are 'define' and 'lookup'. For example,
Commands that break the URL description of the RFC (but not the DICT protocol) are
Authentication support is still missing
LDAP
If you have installed the OpenLDAP library, curl can take advantage of it and offer
ldap://
support. On Windows, curl will use WinLDAP from Platform SDK by default.
The default protocol version used by curl is LDAPv3. LDAPv2 will be used as a fallback mechanism if LDAPv3 fails to connect.
LDAP is a complex thing and writing an LDAP query is not an easy task. I do advise you to dig up the syntax description for that elsewhere. One such place might be: RFC 2255, The LDAP URL Format
To show you an example, this is how I can get all people from my local LDAP server that has a certain sub-domain in their email address:
If I want the same info in HTML format, I can get it by not using the
-B
(enforce ASCII) flag.
You can also use authentication when accessing the LDAP catalog:
By default, if a user and password are provided, OpenLDAP/WinLDAP will use basic authentication. On Windows you can control this behavior by providing one of the --basic, --ntlm or --digest options on the curl command line.
On Windows, if no user/password is specified, an auto-negotiation mechanism will be used with the current logon credentials (SSPI/SPNEGO).
Environment Variables
Curl reads and understands the following environment variables:
They should be set for protocol-specific proxies. General proxy should be set with
A comma-separated list of host names that shouldn't go through any proxy is set in NO_PROXY (only an asterisk, * matches all hosts).
If the host name matches one of these strings, or the host is within the domain of one of these strings, transactions with that node will not be proxied. When a domain is used, it needs to start with a period. A user can specify that both www.example.com and foo.example.com should not use a proxy by setting NO_PROXY to .example.com. By including the full name you can exclude specific host names, so to make www.example.com not use a proxy but still have foo.example.com do it, set NO_PROXY to www.example.com.
The usage of the -x/--proxy flag overrides the environment variables.
Netrc
Unix introduced the .netrc concept a long time ago. It is a way for a user to specify name and password for commonly visited FTP sites in a file so that you don't have to type them in each time you visit those sites. You realize this is a big security risk if someone else gets hold of your passwords, so therefore most unix programs won't read this file unless it is only readable by yourself (curl doesn't care though).
Curl supports .netrc files if told to (using the -n/--netrc and --netrc-optional options). This is not restricted to just FTP, so curl can use it for all protocols where authentication is used.
A very simple .netrc file could look something like:
Custom Output
To better allow script programmers to get to know about the progress of curl, the
-w
/--write-out
option was introduced. Using this, you can specify what information from the previous transfer you want to extract.
To display the amount of bytes downloaded together with some text and an ending newline:
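For example, using the size_download write-out variable:
  curl -w 'We downloaded %{size_download} bytes\n' http://www.example.com/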
Kerberos FTP Transfer
Curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need the kerberos package installed and used at curl build time for it to be available.
First, get the krb-ticket the normal way, like with the kinit/kauth tool. Then use curl in way similar to:
There's no use for a password on the
-u
switch, but a blank one will make curl ask for one and you already entered the real password to kinit/kauth.
TELNET
The curl telnet support is basic and very easy to use. Curl passes all data passed to it on stdin to the remote server. Connect to a remote telnet server using a command line similar to:
And enter the data to pass to the server on stdin. The result will be sent to stdout or to the file you specify with -o.
You might want the -N/--no-buffer option to switch off the buffered output for slow connections or similar.
Pass options to the telnet protocol negotiation, by using the -t option. To tell the server we use a vt100 terminal, try something like:
Other interesting options for -t include:
- XDISPLOC=<X display> - Sets the X display location.
- NEW_ENV=<var,val> - Sets an environment variable.
NOTE: The telnet protocol does not specify any way to login with a specified user and password so curl can't do that automatically. To do that, you need to track when the login prompt is received and send the username and password accordingly.
Persistent Connections
Specifying multiple files on a single command line will make curl transfer all of them, one after the other in the specified order.
libcurl will attempt to use persistent connections for the transfers so that the second transfer to the same host can use the same connection that was already initiated and was left open in the previous transfer. This greatly decreases connection time for all but the first transfer and it makes a far better use of the network.
Note that curl cannot use persistent connections for transfers that are used in subsequent curl invocations. Try to stuff as many URLs as possible on the same command line if they are using the same host, as that'll make the transfers faster. If you use an HTTP proxy for file transfers, practically all transfers will be persistent.
Multiple Transfers With A Single Command Line
As is mentioned above, you can download multiple files with one command line by simply adding more URLs. If you want those to get saved to a local file instead of just printed to stdout, you need to add one save option for each URL you specify. Note that this also goes for the
-O
option (but not --remote-name-all
).
For example: get two files and use -O for the first and a custom file name for the second:
You can also upload multiple files in a similar fashion:
IPv6
curl will connect to a server with IPv6 when a host lookup returns an IPv6 address and fall back to IPv4 if the connection fails. The
--ipv4
and --ipv6
options can specify which address to use when both are available. IPv6 addresses can also be specified directly in URLs, enclosed in square brackets:
When this style is used, the -g option must be given to stop curl from interpreting the square brackets as special globbing characters. Link local and site local addresses including a scope identifier, such as fe80::1234%1, may also be used, but the scope portion must be numeric or match an existing network interface on Linux and the percent character must be URL escaped. The previous example in an SFTP URL might look like:
IPv6 addresses provided other than in URLs (e.g. to the --proxy, --interface or --ftp-port options) should not be URL encoded.
Metalink
Curl supports Metalink (both version 3 and 4 (RFC 5854) are supported), a way to list multiple URIs and hashes for a file. Curl will make use of the mirrors listed within for failover if there are errors (such as the file or server not being available). It will also verify the hash of the file after the download completes. The Metalink file itself is downloaded and processed in memory and not stored in the local file system.
Example to use a remote Metalink file:
To use a Metalink file in the local file system, use FILE protocol (
file://
):
Please note that if the FILE protocol is disabled, there is no way to use a local Metalink file at the time of this writing. Also note that if --metalink and --include are used together, --include will be ignored. This is because including headers in the response will break the Metalink parser, and if the headers are included in the file described in the Metalink file, the hash check will fail.
Mailing Lists
For your convenience, we have several open mailing lists to discuss curl, its development and things relevant to this. Get all info at https://curl.se/mail/.
Please direct curl questions, feature requests and trouble reports to one of these mailing lists instead of mailing any individual.
Available lists include:
curl-users
Users of the command line tool. How to use it, what doesn't work, new features, related tools, questions, news, installations, compilations, running, porting etc.
curl-library
Developers using or developing libcurl. Bugs, extensions, improvements.
curl-announce
Low-traffic. Only receives announcements of new public versions. At worst, that makes something like one or two mails per month, but usually only one mail every second month.
curl-and-php
Using the curl functions in PHP. Everything curl with a PHP angle. Or PHP with a curl angle.
curl-and-python
Python hackers using curl with or without the python binding pycurl.
Background
This document assumes that you're familiar with HTML and general networking.
The increasing amount of applications moving to the web has made 'HTTP Scripting' more frequently requested and wanted. To be able to automatically extract information from the web, to fake users, to post or upload data to web servers are all important tasks today.
Curl is a command line tool for doing all sorts of URL manipulations and transfers, but this particular document will focus on how to use it when doing HTTP requests for fun and profit. I will assume that you know how to invoke
curl --help
or curl --manual
to get basic information about it.
Curl is not written to do everything for you. It makes the requests, it gets the data, it sends data and it retrieves the information. You probably need to glue everything together using some kind of script language or repeated manual invokes.
The HTTP Protocol
HTTP is the protocol used to fetch data from web servers. It is a very simple protocol that is built upon TCP/IP. The protocol also allows information to get sent to the server from the client using a few different methods, as will be shown here.
HTTP is plain ASCII text lines being sent by the client to a server to request a particular action, and then the server replies a few text lines before the actual requested content is sent to the client.
The client, curl, sends a HTTP request. The request contains a method (like GET, POST, HEAD etc), a number of request headers and sometimes a request body. The HTTP server responds with a status line (indicating if things went well), response headers and most often also a response body. The 'body' part is the plain data you requested, like the actual HTML or the image etc.
See the Protocol
Using curl's option --verbose (-v as a short option) will display what kind of commands curl sends to the server, as well as a few other informational texts.
--verbose is the single most useful option when it comes to debug or even understand the curl<->server interaction.
Sometimes even --verbose is not enough. Then --trace and --trace-ascii offer even more details as they show everything curl sends and receives. Use it like this:
See the Timing
Many times you may wonder what exactly is taking all the time, or you just want to know the amount of milliseconds between two points in a transfer. For those, and other similar situations, the --trace-time option is what you need. It'll prepend the time to each trace output line:
See the Response
By default curl sends the response to stdout. You need to redirect it somewhere to avoid that, most often that is done with
-o
or -O
.
Spec
The Uniform Resource Locator format is how you specify the address of a particular resource on the Internet. You know these, you've seen URLs like https://curl.se or https://yourbank.com a million times. RFC 3986 is the canonical spec. And yeah, the formal name is not URL, it is URI.
Host
The host name is usually resolved using DNS or your /etc/hosts file to an IP address and that's what curl will communicate with. Alternatively you specify the IP address directly in the URL instead of a name.
For development and other trying out situations, you can point to a different IP address for a host name than what would otherwise be used, by using curl's
--resolve
option:
Port number
Each protocol curl supports operates on a default port number, be it over TCP or in some cases UDP. Normally you don't have to take that into consideration, but at times you run test servers on other ports or similar. Then you can specify the port number in the URL with a colon and a number immediately following the host name. Like when doing HTTP to port 1234:
The port number you specify in the URL is the number that the server uses to offer its services. Sometimes you may use a local proxy, and then you may need to specify that proxy's port number separately for what curl needs to connect to locally. Like when using a HTTP proxy on port 4321:
User name and password
Some services are setup to require HTTP authentication and then you need to provide name and password which is then transferred to the remote site in various ways depending on the exact authentication protocol used.
You can opt to either insert the user and password in the URL or you can provide them separately:
or
You need to pay attention that this kind of HTTP authentication is not what is usually done and requested by user-oriented websites these days. They tend to use forms and cookies instead.
Path part
The path part is just sent off to the server to request that it sends back the associated response. The path is what is to the right side of the slash that follows the host name and possibly port number.
GET
The simplest and most common request/operation made using HTTP is to GET a URL. The URL could itself refer to a web page, an image or a file. The client issues a GET request to the server and receives the document it asked for. If you issue the command line
you get a web page returned in your terminal window. The entire HTML document that that URL holds.
All HTTP replies contain a set of response headers that are normally hidden, use curl's
--include
(-i
) option to display them as well as the rest of the document.
HEAD
You can ask the remote server for ONLY the headers by using the
--head
(-I
) option which will make curl issue a HEAD request. In some special cases servers deny the HEAD method while others still work, which is a particular kind of annoyance.
The HEAD method is defined and made so that the server returns the headers exactly the way it would do for a GET, but without a body. It means that you may see a
Content-Length:
in the response headers, but there must not be an actual body in the HEAD response.
Multiple URLs in a single command line
A single curl command line may involve one or many URLs. The most common case is probably to just use one, but you can specify any amount of URLs. Yes any. No limits. You'll then get requests repeated over and over for all the given URLs.
Example, send two GETs:
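For example (placeholder URLs):
  curl http://url1.example.com/ http://url2.example.com/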
If you use
--data
to POST to the URL, using multiple URLs means that you send that same POST to all the given URLs.
Example, send two POSTs:
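For example:
  curl --data name=curl http://url1.example.com/ http://url2.example.com/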
Multiple HTTP methods in a single command line
Sometimes you need to operate on several URLs in a single command line and do different HTTP methods on each. For this, you'll enjoy the
--next
option. It is basically a separator that separates a bunch of options from the next. All the URLs before --next
will get the same method and will get all the POST data merged into one.
When curl reaches the --next on the command line, it'll sort of reset the method and the POST data and allow a new set.
Perhaps this is best shown with a few examples. To send first a HEAD and then a GET:
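For example:
  curl -I http://example.com/ --next http://example.com/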
To first send a POST and then a GET:
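Perhaps like this (URLs and data are placeholders):
  curl -d score=10 http://example.com/post.cgi --next http://example.com/results.html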
Forms explained
Forms are the general way a website can present a HTML page with fields for the user to enter data in, and then press some kind of 'OK' or 'Submit' button to get that data sent to the server. The server then typically uses the posted data to decide how to act. Like using the entered words to search in a database, or to add the info in a bug tracking system, display the entered address on a map or using the info as a login-prompt verifying that the user is allowed to see what it is about to see.
Of course there has to be some kind of program on the server end to receive the data you send. You cannot just invent something out of the air.
GET
A GET-form uses the method GET, as specified in HTML like:
In your favorite browser, this form will appear with a text box to fill in and a press-button labeled 'OK'. If you fill in '1905' and press the OK button, your browser will then create a new URL to get for you. The URL will get
junk.cgi?birthyear=1905&press=OK
appended to the path part of the previous URL.
If the original form was seen on the page www.example.com/when/birth.html, the second page you'll get will become www.example.com/when/junk.cgi?birthyear=1905&press=OK.
Most search engines work this way.
To make curl do the GET form post for you, just enter the expected created URL:
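Based on the birthyear example above, that would be:
  curl "http://www.example.com/when/junk.cgi?birthyear=1905&press=OK"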
POST
The GET method makes all input field names get displayed in the URL field of your browser. That's generally a good thing when you want to be able to bookmark that page with your given data, but it is an obvious disadvantage if you entered secret information in one of the fields or if there are a large amount of fields creating a very long and unreadable URL.
The HTTP protocol then offers the POST method. This way the client sends the data separated from the URL and thus you won't see any of it in the URL address field.
The form would look very similar to the previous one:
And to use curl to post this form with the same data filled in as before, we could do it like:
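For instance:
  curl --data "birthyear=1905&press=OK" http://www.example.com/when/junk.cgi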
This kind of POST will use the Content-Type application/x-www-form-urlencoded and is the most widely used kind of POST.
The data you send to the server MUST already be properly encoded, curl will not do that for you. For example, if you want the data to contain a space, you need to replace that space with %20 etc. Failing to comply with this will most likely cause your data to be received wrongly and messed up.
Recent curl versions can in fact url-encode POST data for you, like this:
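For example, with --data-urlencode doing the encoding of the space for you:
  curl --data-urlencode "name=I am Daniel" http://www.example.com/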
If you repeat
--data
several times on the command line, curl will concatenate all the given data pieces - and put a &
symbol between each data segment.
File Upload POST
Back in late 1995 they defined an additional way to post data over HTTP. It is documented in the RFC 1867, why this method sometimes is referred to as RFC1867-posting.
This method is mainly designed to better support file uploads. A form that allows a user to upload a file could be written like this in HTML:
This clearly shows that the Content-Type about to be sent is
multipart/form-data
.
To post to a form like this with curl, you enter a command line like:
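A sketch, assuming the form's file field is named 'upload' and its submit button 'press' (made-up names; read the real ones from the form's HTML):
  curl -F upload=@localfilename -F press=OK http://www.example.com/upload.cgi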
Hidden Fields
A very common way for HTML based applications to pass state information between pages is to add hidden fields to the forms. Hidden fields are already filled in, they aren't displayed to the user and they get passed along just as all the other fields.
A similar example form with one visible field, one hidden field and one submit button could look like:
To POST this with curl, you won't have to think about if the fields are hidden or not. To curl they're all the same:
Figure Out What A POST Looks Like
When you're about to fill in a form and send it to a server by using curl instead of a browser, you're of course very interested in sending a POST exactly the way your browser does.
An easy way to get to see this, is to save the HTML page with the form on your local disk, modify the 'method' to a GET, and press the submit button (you could also change the action URL if you want to).
You will then clearly see the data get appended to the URL, separated with a
?
-letter, as GET forms are supposed to.
PUT
Perhaps the best way to upload data to a HTTP server is to use PUT. Then again, this of course requires that someone put a program or script on the server end that knows how to receive a HTTP PUT stream.
Put a file to a HTTP server with curl:
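For example (receive.cgi is a placeholder for whatever receives the PUT):
  curl -T uploadfile http://www.example.com/receive.cgi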
Basic Authentication
HTTP Authentication is the ability to tell the server your username and password so that it can verify that you're allowed to do the request you're doing. The Basic authentication used in HTTP (which is the type curl uses by default) is plain text based, which means it sends username and password only slightly obfuscated, but still fully readable by anyone that sniffs on the network between you and the remote server.
To tell curl to use a user and password for authentication:
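For example:
  curl -u name:password http://www.example.com/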
Other Authentication
The site might require a different authentication method (check the headers returned by the server), and then
--ntlm
, --digest
, --negotiate
or even --anyauth
might be options that suit you.
Proxy Authentication
Sometimes your HTTP access is only available through the use of a HTTP proxy. This seems to be especially common at various companies. A HTTP proxy may require its own user and password to allow the client to get through to the Internet. To specify those with curl, run something like:
If your proxy requires the authentication to be done using the NTLM method, use
--proxy-ntlm
, if it requires Digest use --proxy-digest
.
If you use any one of these user+password options but leave out the password part, curl will prompt for the password interactively.
Hiding credentials
Do note that when a program is run, its parameters might be possible to see when listing the running processes of the system. Thus, other users may be able to watch your passwords if you pass them as plain command line options. There are ways to circumvent this.
It is worth noting that while this is how HTTP Authentication works, very many websites will not use this concept when they provide logins etc. See the Web Login chapter further below for more details on that.
Referer
A HTTP request may include a 'referer' field (yes it is misspelled), which can be used to tell from which URL the client got to this particular resource. Some programs/scripts check the referer field of requests to verify that this wasn't arriving from an external site or an unknown page. While this is a stupid way to check something so easily forged, many scripts still do it. Using curl, you can put anything you want in the referer-field and thus more easily be able to fool the server into serving your request.
Use curl to set the referer field with:
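For example (the referring URL is whatever you want to claim):
  curl -e http://from-page.example.com/ http://www.example.com/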
User Agent
Very similar to the referer field, all HTTP requests may set the User-Agent field. It names what user agent (client) that is being used. Many applications use this information to decide how to display pages. Silly web programmers try to make different pages for users of different browsers to make them look the best possible for their particular browsers. They usually also do different kinds of javascript, vbscript etc.
At times, you will see that getting a page with curl will not return the same page that you see when getting the page with your browser. Then you know it is time to set the User Agent field to fool the server into thinking you're one of those browsers.
To make curl look like Internet Explorer 5 on a Windows 2000 box:
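Something like this, where the quoted string is an approximation of a typical MSIE 5 User-Agent:
  curl -A "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" http://www.example.com/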
Or why not look like you're using Netscape 4.73 on an old Linux box:
Redirects
Location header
When a resource is requested from a server, the reply from the server may include a hint about where the browser should go next to find this page, or a new page keeping newly generated output. The header that tells the browser to redirect is
Location:
.
Curl does not follow Location: headers by default, but will simply display such pages in the same manner it displays all HTTP replies. It does however feature an option that will make it attempt to follow the Location: pointers.
To tell curl to follow a Location:
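For example:
  curl --location http://www.example.com/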
If you use curl to POST to a site that immediately redirects you to another page, you can safely use
--location
(-L
) and --data
/--form
together. curl will only use POST in the first request, and then revert to GET in the following operations.
Other redirects
Browsers typically support at least two other ways of redirects that curl doesn't: first, the HTML may contain a meta refresh tag that asks the browser to load a specific URL after a set number of seconds, or it may use javascript to do it.
Cookie Basics
The way the web browsers do 'client side state control' is by using cookies. Cookies are just names with associated contents. The cookies are sent to the client by the server. The server tells the client for what path and host name it wants the cookie sent back, and it also sends an expiration date and a few more properties.
When a client communicates with a server with a name and path as previously specified in a received cookie, the client sends back the cookies and their contents to the server, unless of course they are expired.
Many applications and servers use this method to connect a series of requests into a single logical session. To be able to use curl in such occasions, we must be able to record and send back cookies the way the web application expects them. The same way browsers deal with them.
Cookie options
The simplest way to send a few cookies to the server when getting a page with curl is to add them on the command line like:
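For example:
  curl --cookie "name=Daniel" http://www.example.com/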
Cookies are sent as common HTTP headers. This is practical as it allows curl to record cookies simply by recording headers. Record cookies with curl by using the
--dump-header
(-D
) option like:
(Take note that the --cookie-jar option described below is a better way to store cookies.)
Curl has a full blown cookie parsing engine built-in that comes in use if you want to reconnect to a server and use cookies that were stored from a previous connection (or hand-crafted manually to fool the server into believing you had a previous connection). To use previously stored cookies, you run curl like:
Curl's 'cookie engine' gets enabled when you use the
--cookie
option. If you only want curl to understand received cookies, use --cookie
with a file that doesn't exist. Example, if you want to let curl understand cookies from a page and follow a location (and thus possibly send back cookies it received), you can invoke it like:
Curl has the ability to read and write cookie files that use the same file format that Netscape and Mozilla once used. It is a convenient way to share cookies between scripts or invokes. The --cookie (-b) switch automatically detects if a given file is such a cookie file and parses it, and by using the --cookie-jar (-c) option you'll make curl write a new cookie file at the end of an operation:
HTTPS is HTTP secure
There are a few ways to do secure HTTP transfers. By far the most common protocol for doing this is what is generally known as HTTPS, HTTP over SSL. SSL encrypts all the data that is sent and received over the network and thus makes it harder for attackers to spy on sensitive information.
SSL (or TLS as the latest version of the standard is called) offers a truckload of advanced features to allow all those encryptions and key infrastructure mechanisms encrypted HTTP requires.
Curl supports encrypted fetches when built to use a TLS library and it can be built to use one out of a fairly large set of libraries -
curl -V
will show which one your curl was built to use (if any!). To get a page from a HTTPS server, simply run curl like:
Certificates
In the HTTPS world, you use certificates to validate that you are the one you claim to be, as an addition to normal passwords. Curl supports client- side certificates. All certificates are locked with a pass phrase, which you need to enter before the certificate can be used by curl. The pass phrase can be specified on the command line or if not, entered interactively when curl queries for it. Use a certificate with curl on a HTTPS server like:
curl also tries to verify that the server is who it claims to be, by verifying the server's certificate against a locally stored CA cert bundle. Failing the verification will cause curl to deny the connection. You must then use
--insecure
(-k
) in case you want to tell curl to ignore that the server can't be verified.
More about server certificate verification and CA cert bundles can be read in the SSLCERTS document.
At times you may end up with your own CA cert store and then you can tell curl to use that to verify the server's certificate:
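For example, pointing curl at your own bundle file:
  curl --cacert ca-bundle.pem https://www.example.com/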
Modify method and headers
Doing fancy stuff, you may need to add or change elements of a single curl request.
For example, you can change the POST request to a PROPFIND and send the data as
Content-Type: text/xml
(instead of the default Content-Type) like this:
You can delete a default header by providing one without content. Like you can ruin the request by chopping off the Host: header:
You can add headers the same way. Your server may want a
Destination:
header, and you can add it:
More on changed methods
It should be noted that curl selects which methods to use on its own depending on what action to ask for.
-d
will do POST, -I
will do HEAD and so on. If you use the --request
/ -X
option you can change the method keyword curl selects, but you will not modify curl's behavior. This means that if you for example use -d 'data' to do a POST, you can modify the method to a PROPFIND
with -X
and curl will still think it sends a POST. You can change the normal GET to a POST method by simply adding -X POST in a command line like:
.. but curl will still think and act as if it sent a GET so it won't send any request body etc.
Some login tricks
While not strictly just HTTP related, it still causes a lot of people problems so here's the executive run-down of how the vast majority of all login forms work and how to login to them using curl.
It can also be noted that to do this properly in an automated fashion, you will most certainly need to script things and do multiple curl invokes etc.
First, servers mostly use cookies to track the logged-in status of the client, so you will need to capture the cookies you receive in the responses. Then, many sites also set a special cookie on the login page (to make sure you got there through their login page) so you should make a habit of first getting the login-form page to capture the cookies set there.
Some web-based login systems feature various amounts of javascript, and sometimes they use such code to set or modify cookie contents. Possibly they do that to prevent programmed logins, like this manual describes how to do. Anyway, if reading the code isn't enough to let you repeat the behavior manually, capturing the HTTP requests done by your browser and analyzing the sent cookies is usually a working method to work out how to shortcut the javascript.
In the actual
<form>
tag for the login, lots of sites fill in random/session or otherwise secretly generated hidden tags and you may need to first capture the HTML code for the login form and extract all the hidden fields to be able to do a proper login POST. Remember that the contents need to be URL encoded when sent in a normal POST.
Some debug tricks
Many times when you run curl on a site, you'll notice that the site doesn't seem to respond the same way to your curl requests as it does to your browser's.
Then you need to start making your curl requests more similar to your browser's requests:
- Use the --trace-ascii option to store fully detailed logs of the requests for easier analyzing and better understanding
- Make sure you check for and use cookies when needed (both reading with --cookie and writing with --cookie-jar)
- Set user-agent (with -A) to one like a recent popular browser does
- Set referer (with -e) like it is set by the browser
- If you use POST, make sure you send all the fields and in the same order as the browser does it.
Check what the browsers do
A very good helper to make sure you do this right, is the web browsers' developers tools that let you view all headers you send and receive (even when using HTTPS).
A more raw approach is to capture the HTTP traffic on the network with tools such as Wireshark or tcpdump and check what headers that were sent and received by the browser. (HTTPS forces you to use
SSLKEYLOGFILE
to do that.)