If you are reading a host's address from the command line, you may not know whether you have an aaa.bbb.ccc.ddd style address or a host.domain.com style address. What I do with these is first try to use it as an aaa.bbb.ccc.ddd type address, and if that fails, do a name lookup on it. Here is an example:
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Converts ASCII text to an in_addr struct.  NULL is returned if
   the address cannot be found. */
struct in_addr *atoaddr(char *address) {
  struct hostent *host;
  static struct in_addr saddr;

  /* First try it as aaa.bbb.ccc.ddd. */
  saddr.s_addr = inet_addr(address);
  if (saddr.s_addr != INADDR_NONE) {
    return &saddr;
  }

  /* Failing that, do a name lookup on it. */
  host = gethostbyname(address);
  if (host != NULL) {
    return (struct in_addr *) *host->h_addr_list;
  }
  return NULL;
}
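For example, a client might use atoaddr() like this to fill in a sockaddr_in before connecting (a minimal sketch; the setup_address() name, host name, and port are just examples):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <netinet/in.h>

void setup_address(struct sockaddr_in *sin, char *hostname, unsigned short port) {
  struct in_addr *addr;

  /* Works for both "10.0.0.1" and "host.domain.com" forms. */
  addr = atoaddr(hostname);
  if (addr == NULL) {
    fprintf(stderr, "can't find address for %s\n", hostname);
    exit(1);
  }
  memset(sin, 0, sizeof(*sin));
  sin->sin_family = AF_INET;
  sin->sin_port = htons(port);
  sin->sin_addr = *addr;
}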
If you are running through separate proxies for each service, you shouldn't need to do anything. If you are working through sockd, you will need to "socksify" your application. Details for doing this can be found in the package itself, which is available at:
ftp://ftp.net.com/socks.cstc/socks.cstc.4.2.tar.gz
You can get the SOCKS FAQ at:
ftp://coast.cs.purdue.edu/pub/tools/unix/socks/FAQ
From Andrew Gierth (andrewg@microlise.co.uk):

Once you have done a listen() call on your socket, the kernel is primed to accept connections on it. The usual UNIX implementation of this works by immediately completing the SYN handshake for any incoming valid SYN segments (connection attempts), creating the socket for the new connection, and keeping this new socket on an internal queue ready for the accept() call. So the socket is fully open before the accept is done.

The other factor in this is the 'backlog' parameter for listen(); that defines how many of these completed connections can be queued at one time. If the specified number is exceeded, then new incoming connects are simply ignored (which causes them to be retried).
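For example, a minimal server setup looks like this (a sketch only: the port number 3490 and the backlog of 5 are arbitrary, and error checking is omitted for brevity):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main() {
  struct sockaddr_in sin;
  int sock, newsock;

  sock = socket(AF_INET, SOCK_STREAM, 0);
  memset(&sin, 0, sizeof(sin));
  sin.sin_family = AF_INET;
  sin.sin_addr.s_addr = htonl(INADDR_ANY);
  sin.sin_port = htons(3490);
  bind(sock, (struct sockaddr *) &sin, sizeof(sin));

  /* Up to 5 completed connections may wait on the kernel's queue
     while we are busy elsewhere; further connection attempts are
     ignored, and the clients retry them. */
  listen(sock, 5);

  /* accept() just dequeues a connection that the kernel has
     already fully opened. */
  newsock = accept(sock, NULL, NULL);

  /* ... talk to the client on newsock ... */
  return 0;
}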
From Andrew Gierth (andrewg@microlise.co.uk):

Take a careful look at struct hostent. Notice that almost everything in it is a pointer? All these pointers will refer to statically allocated data. For example, if you do:

struct hostent *host = gethostbyname(hostname);

then (as you should know) a subsequent call to gethostbyname() will overwrite the structure pointed to by 'host'.

But if you do:

struct hostent myhost;
struct hostent *hostptr = gethostbyname(hostname);
if (hostptr) myhost = *hostptr;

to make a copy of the hostent before it gets overwritten, then it still gets clobbered by a subsequent call to gethostbyname(), since although myhost won't get overwritten, all the data it is pointing to will be.

You can get round this by doing a proper 'deep copy' of the hostent structure, but this is tedious. My recommendation would be to extract the needed fields of the hostent and store them in your own way.
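For example, here is a minimal sketch of that extraction approach, keeping just the first address and the official name (the save_host() name and the buffer size are purely illustrative):

#include <string.h>
#include <netdb.h>
#include <netinet/in.h>

static struct in_addr saved_addr;       /* our own copies of the data */
static char saved_name[256];

int save_host(const char *hostname) {
  struct hostent *hp = gethostbyname(hostname);
  if (hp == NULL)
    return -1;

  /* Copy the data itself, not the pointers, so that later calls
     to gethostbyname() cannot clobber it. */
  memcpy(&saved_addr, hp->h_addr_list[0], sizeof(saved_addr));
  strncpy(saved_name, hp->h_name, sizeof(saved_name) - 1);
  saved_name[sizeof(saved_name) - 1] = '\0';
  return 0;
}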
From Richard Stevens (rstevens@noao.edu):

Normally you cannot change this. Solaris does let you do this, on a per-kernel basis, with the ndd tcp_ip_abort_cinterval parameter.

The easiest way to shorten the connect time is with an alarm() around the call to connect(). A harder way is to use select(), after setting the socket nonblocking. Also notice that you can only shorten the connect time; there's normally no way to lengthen it.
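A minimal sketch of the alarm() approach (the timed_connect() name and the 5 second timeout are mine, not part of any standard API; the important detail is installing the SIGALRM handler without SA_RESTART, so that connect() is interrupted rather than restarted):

#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <sys/socket.h>

static void connect_alarm(int signo) {
  /* do nothing -- just interrupt the connect() */
}

int timed_connect(int sock, struct sockaddr *addr, socklen_t len) {
  struct sigaction sa;
  int n;

  sa.sa_handler = connect_alarm;
  sigemptyset(&sa.sa_mask);
  sa.sa_flags = 0;              /* no SA_RESTART */
  sigaction(SIGALRM, &sa, NULL);

  alarm(5);                     /* give up after 5 seconds */
  n = connect(sock, addr, len);
  if (n < 0 && errno == EINTR)
    errno = ETIMEDOUT;          /* the alarm went off: call it a timeout */
  alarm(0);                     /* cancel the alarm */
  return n;
}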
From Andrew Gierth (andrewg@microlise.co.uk):
** Let the system choose your client's port number **
The exception to this is if the server has been written to be picky about which client ports it will allow connections from. Rlogind and rshd are the classic examples. This is usually part of a Unix-specific (and rather weak) authentication scheme; the intent is that the server allows connections only from processes with root privilege. (The weakness in the scheme is that many O/Ss (e.g. MS-DOS) allow anyone to bind any port.)
The rresvport() routine exists to help out clients that are using this scheme. It basically does the equivalent of socket() + bind(), choosing a port number in the range 512..1023.
If the server is not fussy about the client's port number, then don't try to assign it yourself in the client; just let connect() pick it for you, as in the sketch below.
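For illustration, here is a client that lets the system choose its port by simply never calling bind() (a minimal sketch; the connect_to() name is mine, and error handling is abbreviated):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int connect_to(const char *dotted_addr, unsigned short port) {
  struct sockaddr_in sin;
  int sock;

  sock = socket(AF_INET, SOCK_STREAM, 0);
  if (sock < 0)
    return -1;

  memset(&sin, 0, sizeof(sin));
  sin.sin_family = AF_INET;
  sin.sin_addr.s_addr = inet_addr(dotted_addr);
  sin.sin_port = htons(port);

  /* No bind() here: connect() makes the kernel assign a free
     ephemeral port, so reruns don't collide with old connections
     lingering in TIME_WAIT. */
  if (connect(sock, (struct sockaddr *) &sin, sizeof(sin)) < 0) {
    close(sock);
    return -1;
  }
  return sock;
}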
If, in a client, you use the naive scheme of starting at a fixed port number and calling bind() on consecutive values until it works, then you buy yourself a whole lot of trouble:
The problem is if the server end of your connection does an active close (e.g. the client sends a 'QUIT' command to the server, and the server responds by closing the connection). That leaves the client end of the connection in the CLOSED state, and the server end in the TIME_WAIT state. So after the client exits, there is no trace of the connection on the client end.
Now run the client again. It will pick the same port number, since as far as it can see, it's free. But as soon as it calls connect(), the server finds that you are trying to duplicate an existing connection (although one in TIME_WAIT). It is perfectly entitled to refuse to do this, so you get, I suspect, ECONNREFUSED from connect(). (Some systems may sometimes allow the connection anyway, but you can't rely on it.)
This problem is especially dangerous because it doesn't show up unless the client and server are on different machines. (If they are the same machine, then the client won't pick the same port number as before). So you can get bitten well into the development cycle (if you do what I suspect most people do, and test client & server on the same box initially).
Even if your protocol has the client closing first, there are still ways to produce this problem (e.g. kill the server).
The connect() call will only block while it is waiting to establish a connection. When there is no server listening at the other end, the client gets notified that the connection cannot be established, and connect() gives up with the error you see. This is a good thing, since if it were not the case, clients might wait forever for a service which just doesn't exist. Users would think that they were only waiting for the connection to be established, and then after a while give up, muttering something about crummy software under their breath.
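In a client, this shows up as a fast failure from connect(), typically with ECONNREFUSED, which you can report cleanly (a minimal fragment, assuming 'sock' and 'sin' have been set up as usual):

#include <stdio.h>
#include <errno.h>

  /* ... after setting up 'sock' and 'sin' as usual ... */
  if (connect(sock, (struct sockaddr *) &sin, sizeof(sin)) < 0) {
    if (errno == ECONNREFUSED)
      fprintf(stderr, "nothing is listening at that address\n");
    else
      perror("connect");
  }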
This question was asked by Niranjan Perera (perera@mindspring.com).
When the size of the incoming data is unknown, you can either make the buffer as big as the largest possible (or likely) message, or you can re-size the buffer on the fly during your read. When you malloc() a large buffer, most (if not all) variants of UNIX will only allocate address space, not physical pages of RAM. As more and more of the buffer is used, the kernel allocates physical memory. This means that malloc'ing a large buffer will not waste resources unless that memory is used, and so it is perfectly acceptable to ask for a meg of RAM when you expect only a few K.
On the other hand, a more elegant solution that does not depend on the inner workings of the kernel is to use realloc() to expand the buffer as required, in (say) 4K chunks, since 4K is the size of a page of RAM on most systems. I may add something like this to sockhelp.c in the example code one day.
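In the meantime, here is a minimal sketch of that realloc() approach (the read_all() name and the CHUNK size are illustrative; it reads from the socket until the peer closes the connection):

#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

#define CHUNK 4096              /* one page on most systems */

char *read_all(int sock, size_t *total) {
  char *buf = NULL, *tmp;
  size_t size = 0, used = 0;
  ssize_t n;

  for (;;) {
    if (used == size) {         /* out of room: grow by one chunk */
      tmp = realloc(buf, size + CHUNK);
      if (tmp == NULL) {
        free(buf);
        return NULL;
      }
      buf = tmp;
      size += CHUNK;
    }
    n = read(sock, buf + used, size - used);
    if (n < 0) {                /* read error */
      free(buf);
      return NULL;
    }
    if (n == 0)                 /* EOF: the peer closed the connection */
      break;
    used += n;
  }
  *total = used;
  return buf;
}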