From Andrew Gierth (andrewg@microlise.co.uk):
AFAIK: If the peer calls close() or exits without having messed with SO_LINGER, then our calls to read() should return 0. It is less clear what happens to write() calls in this case; I would expect EPIPE, not on the next call, but the one after.
If the peer reboots, or sets l_onoff = 1, l_linger = 0 and then closes, then we should get ECONNRESET (eventually) from read(), or EPIPE from write().
I should also point out that when write() returns EPIPE, it also raises the SIGPIPE signal - you never see the EPIPE error unless you handle or ignore the signal.
If the peer remains unreachable, we should get some other error.
I don't think that write() can legitimately return 0. read() should return 0 on receipt of a FIN from the peer, and on all following calls. So yes, you must expect read() to return 0.
As an example, suppose you are receiving a file down a TCP link; you might handle the return from read() like this:
rc = read(sock, buf, sizeof(buf));
if (rc > 0)
{
    write(file, buf, rc);
    /* error checking on file omitted */
}
else if (rc == 0)
{
    close(file);
    close(sock);
    /* file received successfully */
}
else /* rc < 0 */
{
    /* close file and delete it, since data is not complete;
       report error, or whatever */
}
The man page shows it as "struct sockaddr *my_addr". The sockaddr struct, though, is just a placeholder for the structure it really wants. You have to pass different structures depending on what kind of socket you have. For an AF_INET socket, you need the sockaddr_in structure. It has three fields of interest:

sin_family: Set this to AF_INET.
sin_port: The network byte-ordered 16 bit port number.
sin_addr: The host's IP number. This is a struct in_addr, which contains only one field, s_addr, which is a u_long.
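For example, a program about to call bind() or connect() might fill in the structure like this (a minimal sketch; the port number 7500 is just an arbitrary example):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

struct sockaddr_in addr;

memset(&addr, 0, sizeof(addr));           /* zero out the whole struct */
addr.sin_family = AF_INET;
addr.sin_port = htons(7500);              /* port in network byte order */
addr.sin_addr.s_addr = htonl(INADDR_ANY); /* or fill in a peer address */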
Use the getservbyname() routine. This will return a pointer to a servent structure. You are interested in the s_port field, which contains the port number with correct byte ordering (so you don't need to call htons() on it). Here is a sample routine:
/* Take a service name, and a service type, and return a port number.  If the
   service name is not found, it tries it as a decimal number.  The number
   returned is byte ordered for the network. */
int atoport(char *service, char *proto)
{
  int port;
  long int lport;
  struct servent *serv;
  char *errpos;

  /* First try to read it from /etc/services */
  serv = getservbyname(service, proto);
  if (serv != NULL)
    port = serv->s_port;
  else { /* Not in services, maybe a number? */
    lport = strtol(service, &errpos, 0);
    if ( (errpos[0] != 0) || (lport < 1) || (lport > 65535) )
      return -1; /* Invalid port address */
    port = htons(lport);
  }
  return port;
}
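A caller might then use it like this (a minimal sketch; note that the return value is already in network byte order, so it can be assigned straight into sin_port):

struct sockaddr_in addr;
int port;

port = atoport("telnet", "tcp");  /* or, e.g., atoport("23", "tcp") */
if (port == -1)
    ... handle the error ...
addr.sin_port = port;             /* already in network byte order */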
If you are exiting, I have been assured by Andrew that all unixes will close open file descriptors on exit. If you are not exiting, though, you can just close it with a regular close() call.
This question is usually asked by people who try close(), because they have seen that that is what they are supposed to do, and then run netstat and see that their socket is still active. Yes, close() is the correct method. To read about the TIME_WAIT state, and why it is important, refer to 2.7 Please explain the TIME_WAIT state.
From Michael Hunter (mphunter@qnx.com):
shutdown() is useful for delineating when you are done providing a request to a server using TCP. A typical use is to send a request to a server followed by a shutdown(). The server will read your request followed by an EOF (read of 0 on most unix implementations). This tells the server that it has your full request. You then go read blocked on the socket. The server will process your request and send the necessary data back to you followed by a close. When you have finished reading all of the response to your request you will read an EOF, thus signifying that you have the whole response. It should be noted that T/TCP (TCP for Transactions -- see R. Stevens' home page) provides a better method of TCP transaction management.
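For illustration, here is a minimal sketch of that pattern; sock is assumed to be an already-connected TCP socket, and short-write handling and error checking are omitted for brevity:

#include <sys/socket.h>
#include <unistd.h>

void do_transaction(int sock, const char *request, size_t reqlen)
{
    char buf[4096];
    ssize_t n;

    write(sock, request, reqlen); /* send the complete request */
    shutdown(sock, 1);            /* half-close: server's read() sees EOF */

    while ((n = read(sock, buf, sizeof(buf))) > 0)
        ;                         /* process response data here */
    /* n == 0: server closed, so the response is complete */
    close(sock);
}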
Remember that TCP guarantees all data transmitted will be delivered, if at all possible. When you close a socket, the server goes into a TIME_WAIT state, just to be really really sure that all the data has gone through. When a socket is closed, both sides agree by sending messages to each other that they will send no more data. This, it seemed to me, was good enough, and after the handshaking is done, the socket should be closed. The problem is two-fold. First, there is no way to be sure that the last ack was communicated successfully. Second, there may be "wandering duplicates" left on the net that must be dealt with if they are delivered.
Andrew Gierth (andrewg@microlise.co.uk) helped to explain the closing sequence in the following Usenet posting:
Assume that a connection is in ESTABLISHED state, and the client is about to do an orderly release. The client's sequence no. is Sc, and the server's is Ss. The pipe is empty in both directions.
Client                                        Server
======                                        ======
ESTABLISHED                                   ESTABLISHED
(client closes)
ESTABLISHED                                   ESTABLISHED
     <CTL=FIN+ACK><SEQ=Sc><ACK=Ss> ------->>
FIN_WAIT_1
     <<-------- <CTL=ACK><SEQ=Ss><ACK=Sc+1>
FIN_WAIT_2                                    CLOSE_WAIT
     <<-------- <CTL=FIN+ACK><SEQ=Ss><ACK=Sc+1>   (server closes)
                                              LAST_ACK
     <CTL=ACK><SEQ=Sc+1><ACK=Ss+1> ------->>
TIME_WAIT                                     CLOSED
(2*msl elapses...)
CLOSED
Note: the +1 on the sequence numbers is because the FIN counts as one byte of data. (The above diagram is equivalent to fig. 13 from RFC 793).
Now consider what happens if the last of those packets is dropped in the network. The client has done with the connection; it has no more data or control info to send, and never will have. But the server does not know whether the client received all the data correctly; that's what the last ACK segment is for. Now the server may or may not care whether the client got the data, but that is not an issue for TCP; TCP is a reliable protocol, and must distinguish between an orderly connection close where all data is transferred, and a connection abort where data may or may not have been lost.
So, if that last packet is dropped, the server will retransmit it (it is, after all, an unacknowledged segment) and will expect to see a suitable ACK segment in reply. If the client went straight to CLOSED, the only possible response to that retransmit would be a RST, which would indicate to the server that data had been lost, when in fact it had not been.
(Bear in mind that the server's FIN segment may, additionally, contain data.)
DISCLAIMER: This is my interpretation of the RFCs (I have read all the TCP-related ones I could find), but I have not attempted to examine implementation source code or trace actual connections in order to verify it. I am satisfied that the logic is correct, though.
More commentary from Vic:
The second issue was addressed by Richard Stevens (rstevens@noao.edu, author of "Unix Network Programming", see 1.5 Where can I get source code for the book [book title]?). I have put together quotes from some of his postings and email which explain this. I have brought together paragraphs from different postings, and have made as few changes as possible.
From Richard Stevens (rstevens@noao.edu):
If the duration of the TIME_WAIT state were just to handle TCP's full-duplex close, then the time would be much smaller, and it would be some function of the current RTO (retransmission timeout), not the MSL (the packet lifetime).
A couple of points about the TIME_WAIT state.
A wandering duplicate is a packet that appeared to be lost and was retransmitted. But it wasn't really lost ... some router had problems, held on to the packet for a while (order of seconds, could be a minute if the TTL is large enough) and then re-injects the packet back into the network. But by the time it reappears, the application that sent it originally has already retransmitted the data contained in that packet.
Because of these potential problems with TIME_WAIT assassinations, one should not avoid the TIME_WAIT state by setting the SO_LINGER option to send an RST instead of the normal TCP connection termination (FIN/ACK/FIN/ACK). The TIME_WAIT state is there for a reason; it's your friend and it's there to help you :-)
I have a long discussion of just this topic in my just-released "TCP/IP Illustrated, Volume 3". The TIME_WAIT state is indeed one of the most misunderstood features of TCP.
I'm currently rewriting "Unix Network Programming" (see 1.5 Where can I get source code for the book [book title]?) and will include lots more on this topic, as it is often confusing and misunderstood.
An additional note from Andrew:
Closing a socket: if SO_LINGER has not been called on a socket, then close() is not supposed to discard data. This is true on SVR4.2 (and, apparently, on all non-SVR4 systems) but apparently not on SVR4; the use of either shutdown() or SO_LINGER seems to be required to guarantee delivery of all data.
From Andrew Gierth (andrewg@microlise.co.uk):
Because by default, no packets are sent on the TCP connection unless there is data to send or acknowledge.
So, if you are simply waiting for data from the peer, there is no way to tell if the peer has silently gone away, or just isn't ready to send any more data yet. This can be a problem (especially if the peer is a PC, and the user just hits the Big Switch...).
One solution is to use the SO_KEEPALIVE option. This option enables periodic probing of the connection to ensure that the peer is still present. BE WARNED: the default timeout for this option is AT LEAST 2 HOURS. This timeout can often be altered (in a system-dependent fashion) but not normally on a per-connection basis (AFAIK).
RFC1122 specifies that this timeout (if it exists) must be configurable. On the majority of Unix variants, this configuration may only be done globally, affecting all TCP connections which have keepalive enabled. The method of changing the value, moreover, is often difficult and/or poorly documented, and in any case is different for just about every version in existence.
If you must change the value, look for something resembling tcp_keepidle in your kernel configuration or network options configuration.
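Enabling the option on a particular socket is straightforward; a minimal sketch, assuming sock is an open socket:

#include <sys/socket.h>

int on = 1;
if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
               (char *) &on, sizeof(on)) < 0)
    ... handle the error ...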
If you're sending to the peer, though, you have some better guarantees; since sending data implies receiving ACKs from the peer, then you will know after the retransmit timeout whether the peer is still alive. But the retransmit timeout is designed to allow for various contingencies, with the intention that TCP connections are not dropped simply as a result of minor network upsets. So you should still expect a delay of several minutes before getting notification of the failure.
The approach taken by most application protocols currently in use on the Internet (e.g. FTP, SMTP etc.) is to implement read timeouts on the server end; the server simply gives up on the client if no requests are received in a given time period (often of the order of 15 minutes). Protocols where the connection is maintained even if idle for long periods have two choices: SO_KEEPALIVE, or a higher-level keepalive mechanism (such as the application periodically sending a null request to the server).
Using non-blocking I/O means that you have to poll sockets to see if there is data to be read from them. Polling should usually be avoided since it uses more CPU time than other techniques.
Using SIGIO allows your application to do what it does and have the operating system tell it (with a signal) that there is data waiting for it on a socket. The only drawback to this solution is that it can be confusing, and if you are dealing with multiple sockets you will have to do a select() anyway to find out which one(s) is ready to be read.
Using select() is great if your application has to accept data from more than one socket at a time, since it will block until any one of a number of sockets is ready with data. One other advantage to select() is that you can set a time-out value, after which control will be returned to you whether any of the sockets have data for you or not.
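For example, a minimal sketch of waiting up to five seconds for data on a single socket (the descriptor sock and the timeout value are just examples):

#include <sys/types.h>
#include <sys/time.h>
#include <unistd.h>

fd_set readfds;
struct timeval tv;
int rc;

FD_ZERO(&readfds);
FD_SET(sock, &readfds);
tv.tv_sec = 5;                 /* time out after 5 seconds */
tv.tv_usec = 0;

rc = select(sock + 1, &readfds, NULL, NULL, &tv);
if (rc < 0)
    ... handle the error (check for EINTR) ...
else if (rc == 0)
    ... timed out, no data yet ...
else if (FD_ISSET(sock, &readfds))
    ... sock is ready to be read ...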
From Steve Rago (sar@plc.com):
EPROTO means that the protocol encountered an unrecoverable error for that endpoint. EPROTO is one of those catch-all error codes used by STREAMS-based drivers when a better code isn't available.
From Richard Stevens (rstevens@noao.edu):
You can't force it. Period. TCP makes up its own mind as to when it can send data. Now, normally when you call write() on a TCP socket, TCP will indeed send a segment, but there's no guarantee and no way to force this. There are lots of reasons why TCP will not send a segment: a closed window and the Nagle algorithm are two things that come immediately to mind.
(Snipped suggestion from Andrew Gierth to use TCP_NODELAY)
Setting this only disables one of the many tests, the Nagle algorithm. But if the original poster's problem is this, then setting this socket option will help.
A quick glance at tcp_output() shows around 11 tests TCP has to make as to whether to send a segment or not.
Now from Dr. Charles E. Campbell Jr. (cec@gryphon.gsfc.nasa.gov):
As you've surmised, I've never had any problem with disabling Nagle's algorithm. It's basically a buffering method; there's a fixed overhead for all packets, no matter how small. Hence, Nagle's algorithm collects small packets together (no more than .2 sec delay) and thereby reduces the amount of overhead bytes being transferred. This approach works well for rcp, for example: the .2 second delay isn't humanly noticeable, and multiple users have their small packets more efficiently transferred. It helps in university settings where most folks using the network are using standard tools such as rcp and ftp, and programs such as telnet may use it, too.
However, Nagle's algorithm is pure havoc for real-time control and not much better for keystroke interactive applications (control-C, anyone?). It has seemed to me that the types of new programs using sockets that people write usually do have problems with small packet delays. One way to bypass Nagle's algorithm selectively is to use "out-of-band" messaging, but that is limited in its content and has other effects (such as a loss of sequentiality) (by the way, out-of-band is often used for that ctrl-C, too).
More from Vic:
So to sum it all up, if you are having trouble and need to flush the socket, setting the TCP_NODELAY option will usually solve the problem. If it doesn't, you will have to use out-of-band messaging, but according to Andrew, "out-of-band data has its own problems, and I don't think it works well as a solution to buffering delays (haven't tried it though). It is not 'expedited data' in the sense that exists in some other protocols; it is transmitted in-stream, but with a pointer to indicate where it is."
I asked Andrew something to the effect of "What promises does TCP make about when it will get around to writing data to the network?" I thought his reply should be put under this question:
Not many promises, but some.
I'll try and quote chapter and verse on this:
References:
RFC 1122, "Requirements for Internet Hosts" (also STD 3)
RFC 793, "Transmission Control Protocol" (also STD 7)
Data passed to a write() call must be delivered to the peer within a finite time, unless prevented by protocol considerations.
The first of the interesting cases is "window closed" (i.e. there is no buffer space at the receiver; this can delay data indefinitely, but only if the receiving process is not actually reading the data that is available).
Vic asks:
OK, it makes sense that if the client isn't reading, the data isn't going to make it across the connection. I take it this causes the sender to block after the receive queue is filled?
The sender blocks when the socket send buffer is full, so buffers will be full at both ends.
While the window is closed, the sending TCP sends window probe packets. This ensures that when the window finally does open again, the sending TCP detects the fact. [RFC1122, ss 4.2.2.17]
The second interesting case is "Nagle algorithm" (small segments, e.g. keystrokes, are delayed to form larger segments if ACKs are expected from the peer; this is what is disabled with TCP_NODELAY).
Vic asks:
Does this mean that my tcpclient sample should set TCP_NODELAY to ensure that the end-of-line code is indeed put out onto the network when sent?
No. tcpclient.c is doing the right thing as it stands: trying to write as much data as possible in as few calls to write() as is feasible. Since the amount of data is likely to be small relative to the socket send buffer, it is likely (since the connection is idle at that point) that the entire request will require only one call to write(), and that the TCP layer will immediately dispatch the request as a single segment (with the PSH flag, see point 2.2 above).
The Nagle algorithm only has an effect when a second write() call is made while data is still unacknowledged. In the normal case, this data will be left buffered until either: a) there is no unacknowledged data; or b) enough data is available to dispatch a full-sized segment. The delay cannot be indefinite, since condition (a) must become true within the retransmit timeout or the connection dies.
Since this delay has negative consequences for certain applications, generally those where a stream of small requests is being sent without response, e.g. mouse movements, the standards specify that an option must exist to disable it. [RFC1122, ss 4.2.3.4]
Additional note: RFC1122 also says:
When the PUSH flag is not implemented on SEND calls, i.e., when the application/TCP interface uses a pure streaming model, responsibility for aggregating any tiny data fragments to form reasonable sized segments is partially borne by the application layer.
So programs should avoid calls to write() with small data lengths (small relative to the MSS, that is); it's better to build up a request in a buffer and then do one call to sock_write() or equivalent.
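For example, instead of making one write() call per line, assemble the request first (a minimal sketch; the request text is purely illustrative, and short-write handling is left to sock_write() or a similar wrapper):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

char req[512];
int len = 0;

len += sprintf(req + len, "GET /index.html HTTP/1.0\r\n");
len += sprintf(req + len, "Host: www.example.com\r\n\r\n");

write(sock, req, len);    /* one call, normally one segment */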
The other possible sources of delay in the TCP are not really controllable by the program, but they can only delay the data temporarily.
Vic asks:
By temporarily, you mean that the data will go as soon as it can, and I won't get stuck in a position where one side is waiting on a response, and the other side hasn't received the request? (Or at least I won't get stuck forever)
You can only deadlock if you somehow manage to fill up all the buffers in both directions... not easy.
If it is possible to do this, (can't think of a good example though), the solution is to use nonblocking mode, especially for writes. Then you can buffer excess data in the program as necessary.
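Putting a socket into non-blocking mode is usually done with fcntl(); a minimal sketch, assuming sock is an open socket:

#include <fcntl.h>

int flags = fcntl(sock, F_GETFL, 0);
if (flags < 0 || fcntl(sock, F_SETFL, flags | O_NONBLOCK) < 0)
    ... handle the error ...
/* write() will now return -1 with errno set to EWOULDBLOCK (EAGAIN)
   instead of blocking when the send buffer is full */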
There is the Simple Sockets Library by Charles E. Campbell, Jr. PhD. and Terry McRoberts. The file is called ssl.tar.gz, and you can download it from this FAQ's home page. For C++ there is the Socket++ library, which is supposed to be on ftp://ftp.virginia.edu somewhere. There is also C++ Wrappers, but I can't find this package anywhere. The file is called C++_wrappers.tar.gz. I have asked the people where it used to be stored where I can find it now, but I never heard back. From http://www.cs.wustl.edu/~schmidt you should be able to find the ACE toolkit.
I don't have any experience with any of these libraries, so I can't recommend one over the other.
The data that causes select to return is the EOF because the other side has closed the connection. This causes read to return zero. For more information see 2.1 How can I tell when a socket is closed on the other end?
From Richard Stevens (rstevens@noao.edu):
The basic difference is that select()'s fd_set is a bit mask and therefore has some fixed size. It would be possible for the kernel to not limit this size when the kernel is compiled, allowing the application to define FD_SETSIZE to whatever it wants (as the comments in the system header imply today), but it takes more work. 4.4BSD's kernel and the Solaris library function both have this limit. But I see that BSD/OS 2.1 has now been coded to avoid this limit, so it's doable, just a small matter of programming. :-) Someone should file a Solaris bug report on this, and see if it ever gets fixed.
With poll(), however, the user must allocate an array of pollfd structures, and pass the number of entries in this array, so there's no fundamental limit. As Casper notes, fewer systems have poll() than select(), so the latter is more portable. Also, with original implementations (SVR3) you could not set the descriptor to -1 to tell the kernel to ignore an entry in the pollfd structure, which made it hard to remove entries from the array; SVR4 gets around this. Personally, I always use select() and rarely poll(), because I port my code to BSD environments too. Someone could write an implementation of poll() that uses select(), for these environments, but I've never seen one. Both select() and poll() are being standardized by POSIX 1003.1g.
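For comparison, a minimal sketch of the poll() interface (the two descriptors sock1 and sock2 and the 5-second timeout are just examples):

#include <poll.h>

struct pollfd fds[2];
int rc;

fds[0].fd = sock1;           /* first socket to watch */
fds[0].events = POLLIN;
fds[1].fd = sock2;           /* second socket to watch */
fds[1].events = POLLIN;

rc = poll(fds, 2, 5000);     /* timeout in milliseconds */
if (rc > 0) {
    if (fds[0].revents & POLLIN)
        ... sock1 is readable ...
    if (fds[1].revents & POLLIN)
        ... sock2 is readable ...
}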
Anything other than single bytes of data will probably get mangled unless you take care. For integer values you can use htons() and friends, and strings are really just a bunch of single bytes, so those should be OK. Be careful not to send a pointer to a string, though, since the pointer will be meaningless on another machine. If you need to send a struct, you should write sendthisstruct() and readthisstruct() functions for it that do all the work of taking the structure apart on one side, and putting it back together on the other. If you need to send floats, you may have a lot of work ahead of you. You should read RFC 1014, which is about portable ways of getting data from one machine to another (thanks to Andrew Gabriel for pointing this out).
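As an illustration only (the struct and its fields here are hypothetical), such a pair of functions for a struct with two 32-bit integer fields might look like this; each field is packed individually, since padding and byte order differ between machines:

#include <stdint.h>
#include <unistd.h>
#include <netinet/in.h>

struct point {          /* hypothetical struct, for illustration */
    int32_t x;
    int32_t y;
};

void sendthisstruct(int sock, struct point *p)
{
    uint32_t buf[2];

    buf[0] = htonl(p->x);
    buf[1] = htonl(p->y);
    write(sock, buf, sizeof(buf));  /* short-write handling omitted */
}

void readthisstruct(int sock, struct point *p)
{
    uint32_t buf[2];

    read(sock, buf, sizeof(buf));   /* short-read handling omitted */
    p->x = ntohl(buf[0]);
    p->y = ntohl(buf[1]);
}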
First off, be sure you really want to use it in the first place. It will disable the Nagle algorithm (see 2.11 How can I force a socket to send the data in its buffer?), which will cause network traffic to increase, with smaller-than-needed packets wasting bandwidth. Also, from what I have been able to tell, the speed increase is very small, so you should probably do it without TCP_NODELAY first, and only turn it on if there is a problem.
Here is a code example, with a warning about using it from Andrew Gierth:
int flag = 1;
int result = setsockopt(sock,            /* socket affected */
                        IPPROTO_TCP,     /* set option at TCP level */
                        TCP_NODELAY,     /* name of option */
                        (char *) &flag,  /* the cast is historical cruft */
                        sizeof(int));    /* length of option value */
if (result < 0)
    ... handle the error ...
TCP_NODELAY is for a specific purpose: to disable the Nagle buffering algorithm. It should only be set for applications that send frequent small bursts of information without getting an immediate response, where timely delivery of data is required (the canonical example is mouse movements).
It groups together as much data as it can between ACKs from the other end of the connection. I found this really confusing until Andrew Gierth (andrewg@microlise.co.uk) drew the following diagram, and explained:
This diagram is not intended to be complete, just to illustrate the point better...
Case 1: client writes 1 byte per write() call. The program on host B is tcpserver.c from the FAQ examples.
CLIENT                                        SERVER
APP           TCP                        TCP           APP
              [connection setup omitted]
"h" --------> [1 byte]
              ------------------>
                                         -----------> "h"
                                         [ack delayed]
"e" --------> [Nagle alg.                     .
               now in effect]                 .
"l" --------> [ditto]                         .
"l" --------> [ditto]                         .
"o" --------> [ditto]                         .
"\n"--------> [ditto]                         .
                                              .
                                              .
                                         [ack 1 byte]
              <------------------
              [send queued data]
              [5 bytes]
              ------------------>
                                         ------------> "ello\n"
                                         <------------ "HELLO\n"
              [6 bytes, ack 5 bytes]
              <------------------
"HELLO\n" <---
[ack delayed]
      .
      .
      .       [ack 6 bytes]
              ------------------>
Total segments: 5. (If TCP_NODELAY was set, could have been up to 10.)
Time for response: 2*RTT, plus ack delay.
Case 2: client writes all data with one write() call.
CLIENT                                        SERVER
APP           TCP                        TCP           APP
              [connection setup omitted]
"hello\n" --> [6 bytes]
              ------------------>
                                         ------------> "hello\n"
                                         <------------ "HELLO\n"
              [6 bytes, ack 6 bytes]
              <------------------
"HELLO\n" <---
[ack delayed]
      .
      .
      .       [ack 6 bytes]
              ------------------>
Total segments: 3.
Time for response = RTT (therefore minimum possible).
Hope this makes things a bit clearer...
Note that in case 2, you don't want the implementation to gratuitously delay sending the data, since that would add straight onto the response time.
From Andrew Gierth (andrewg@microlise.co.uk):
read() is equivalent to recv() with a flags parameter of 0. Other values for the flags parameter change the behaviour of recv(). Similarly, write() is equivalent to send() with flags == 0.
It is unlikely that send()/recv() would be dropped; perhaps someone with a copy of the POSIX drafts for socket calls can check...
Portability note: non-unix systems may not allow read()/write() on sockets, but recv()/send() are usually OK. This is true on Windows and OS/2, for example.
From Andrew Gierth (andrewg@microlise.co.uk):
In general, the only parameter passed to a signal handler is the signal number that caused it to be invoked. Some systems have optional additional parameters, but they are no use to you in this case.
My advice is to just ignore SIGPIPE as you suggest. That's what I do in just about all of my socket code; errno values are easier to handle than signals (in fact, the first revision of the FAQ failed to mention SIGPIPE in that context; I'd got so used to ignoring it...)
There is one situation where you should not ignore SIGPIPE: if you are going to exec() another program with stdout redirected to a socket. In this case it is probably wise to set SIGPIPE to SIG_DFL before doing the exec().
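Ignoring the signal is a single call, typically made once at program startup:

#include <signal.h>

signal(SIGPIPE, SIG_IGN);   /* write() to a dead peer now returns
                               -1 with errno set to EPIPE instead
                               of killing the process */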
From Andrew Gierth (andrewg@microlise.co.uk):
On systems where sockets are implemented on top of Streams (e.g. all SysV-based systems, presumably including Solaris), the socket() function will actually be opening certain special files in /dev. You will need to create a /dev directory under your fake root and populate it with the required device nodes (only).
Your system documentation may or may not specify exactly which device nodes are required; I can't help you there (sorry).
This isn't really so much an error as an exit condition. It means that the call was interrupted by a signal. Any call that might block should be wrapped in a loop that checks for EINTR, as is done in the example code (see 6. Sample Source Code).
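The usual idiom is a small retry loop; a minimal sketch, assuming sock, buf and len are already set up:

#include <errno.h>
#include <unistd.h>

ssize_t rc;

do {
    rc = read(sock, buf, len);
} while (rc < 0 && errno == EINTR);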
From Richard Stevens (rstevens@noao.edu):
Very simple: with TCP you get SIGPIPE if your end of the connection has received an RST from the other end. What this also means is that if you were using select instead of write, the select would have indicated the socket as being readable, since the RST is there for you to read (read will return an error with errno set to ECONNRESET).
Basically an RST is TCP's response to some packet that it doesn't expect and has no other way of dealing with. A common case is when the peer closes the connection (sending you a FIN) but you ignore it because you're writing and not reading. (You should be using select.) So you write to a connection that has been closed by the other end, and the other end's TCP responds with an RST.
Unlike exceptions in C++, socket exceptions do not indicate that an error has occurred. Socket exceptions usually refer to the notification that out-of-band data has arrived. Out-of-band data (called "urgent data" in TCP) looks to the application like a separate stream of data from the main data stream. This can be useful for separating two different kinds of data. Note that just because it is called "urgent data" does not mean that it will be delivered any faster, or with higher priority, than data in the in-band data stream. Also beware that, unlike the main data stream, the out-of-band data may be lost if your application can't keep up with it.
From Richard Stevens (rstevens@noao.edu):
Some systems set the hostname to the FQDN and others set it to just the unqualified host name. I know the current BIND FAQ recommends the FQDN, but most Solaris systems, for example, tend to use only the unqualified host name.
Regardless, the way around this is to first get the host's name (perhaps an FQDN, perhaps unqualified). Most systems support the Posix way to do this using uname(), but older BSD systems only provide gethostname().
Call gethostbyname() to find your IP address. Then take the IP address and call gethostbyaddr(). The h_name member of the hostent{} should then be your FQDN.
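Putting the whole sequence together, a minimal sketch with abbreviated error checking:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    char name[256];
    struct hostent *he;
    struct in_addr addr;

    if (gethostname(name, sizeof(name)) < 0)
        return 1;
    he = gethostbyname(name);               /* name -> address */
    if (he == NULL)
        return 1;
    memcpy(&addr, he->h_addr_list[0], sizeof(addr));
    he = gethostbyaddr((char *) &addr, sizeof(addr), AF_INET);
    if (he == NULL)                         /* address -> canonical name */
        return 1;
    printf("FQDN: %s\n", he->h_name);
    return 0;
}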