Crossfire Mailing List Archive

Re: A few thoughts on client/server in multi-player games



Peter Mardahl <peterm@soda.berkeley.edu> writes:
> I'm no expert on protocols, BUT: sending things as ascii any more
> than necessary is a mistake, I think. Binary is very concise.

Not really.  A typical int has the same length as four ASCII
characters.  That is certainly not much shorter than the average word
length used for an English command, particularly as you would choose
relatively short words for the most common objects and commands.

Add to that the fact that an ASCII command is only as long as it needs
to be, even though different commands vary greatly in length, while
binary commands tend to be fixed in size, and you may very well end up
with ASCII commands using less bandwidth.  Also remember that most
really slow connections use some form of compression, which will very
likely reduce a binary and an ASCII format to pretty much the same
tokens.  Considering that these compression algorithms have been
optimized for text, the ASCII stream may actually fare even better here.
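
To make the size comparison concrete, here is a minimal sketch (the
struct layout and command strings are my own illustration and not
taken from any actual crossfire code) which compares a plausible
fixed-size binary command record with variable-length ASCII commands:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* A plausible fixed-size binary command record -- purely
       illustrative, not taken from any real protocol definition. */
    struct bin_command {
        uint16_t opcode;    /* which command                 */
        uint32_t target;    /* object id the command acts on */
        int16_t  x, y;      /* optional coordinates          */
    };

    int main(void)
    {
        const char *short_cmd = "north\n";           /*  6 bytes on the wire */
        const char *long_cmd  = "apply sword 17\n";  /* 15 bytes, only when needed */

        printf("binary record : %zu bytes (always, padding included)\n",
               sizeof(struct bin_command));
        printf("ascii short   : %zu bytes\n", strlen(short_cmd));
        printf("ascii long    : %zu bytes\n", strlen(long_cmd));
        return 0;
    }

On common compilers the struct above already comes out at 12 bytes
because of padding, longer than the short ASCII command.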

This discussion is moot in any case, as most of the user-visible delays
will happen when transferring pictures and sounds which haven't been
cached, and those would be sent in a binary format either way.


> I think if the coding of the protocol is accessible and
> straightforward, it will still be accessible enough for people on
> other platforms to use.
> 

> For example, you could distribute the core of a protocol, which could
> be portable to any machine whatsoever....  Define a few standard
> structs, etc.  perhaps even some higher level functions for
> interpreting packets.

It is just not worth it.

1.  If you choose a binary standard for the structures to transmit over
the net, you'll have to deal with the fact that ints on different
target machines are 16, 32, or 64 bits wide.  You'll have to deal with
the fact that different machines have different byte order.  You'll
have to compensate for the fact that different compilers pad the same
structure in different ways.  All of these things can be and have been
dealt with on UN*X machines in the past, but even there it is a lot of
hassle.  If you move to other popular machines on which at least the
client should be able to run, like Macs and PCs, you are even worse
off, as there is virtually no developer support for dealing with these
issues.  I can _guarantee_ you today that if you choose a binary
protocol, dozens if not hundreds of programmer hours will be spent
fixing bugs caused by these differences.
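
As a concrete illustration of the kind of code a binary protocol forces
on you (the field layout and function name here are just a sketch of
mine, assuming a 32-bit object id and 16-bit coordinates), every field
has to be packed by hand into a defined byte order so that neither the
host's endianness nor its structure padding leaks onto the wire:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl/htons -- themselves UN*X-specific */

    /* Illustrative only: pack an (id, x, y) triple byte by byte into a
       fixed wire format, independent of host byte order and padding.
       Returns the number of bytes written.                            */
    static size_t pack_move(unsigned char *buf, uint32_t id, int16_t x, int16_t y)
    {
        uint32_t nid = htonl(id);
        uint16_t nx  = htons((uint16_t)x);
        uint16_t ny  = htons((uint16_t)y);

        memcpy(buf + 0, &nid, 4);
        memcpy(buf + 4, &nx,  2);
        memcpy(buf + 6, &ny,  2);
        return 8;   /* never just write(fd, &some_struct, sizeof(some_struct)) */
    }

    int main(void)
    {
        unsigned char buf[8];
        size_t i, n = pack_move(buf, 1234, -3, 7);

        for (i = 0; i < n; i++)
            printf("%02x ", buf[i]);
        printf("\n");
        return 0;
    }

And a matching unpack routine is needed on the other side, for every
single message type the protocol ever grows.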

2.  Don't underestimate the value of a human-readable, understandable
and writable format.  It is a boon during debugging if you can read and
understand the traffic between client and server.  If you can't, you
will very likely end up having to write a tool which does the decoding
for you, and that will of course just be another program which has to
be kept up to date with every protocol change.  It is also very useful
if both the client and server can simply punt on messages which they
don't understand and print them out to the user/programmer.  Finally,
it helps that on occasion users can just type in the odd message which
their client hasn't learned to generate yet (e.g. to test a new option).
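
For instance, the whole reader of a line-based ASCII protocol can look
like the following sketch (the command names and the punting behaviour
are assumptions of mine, not the actual crossfire protocol):

    #include <stdio.h>
    #include <string.h>

    /* Minimal sketch of an ASCII command dispatcher: known commands
       are handled, unknown ones are simply printed so a human can
       see at a glance what the other side tried to say.            */
    static void handle_line(const char *line)
    {
        if (strncmp(line, "say ", 4) == 0)
            printf("chat: %s", line + 4);
        else if (strcmp(line, "north\n") == 0)
            printf("moving north\n");
        else
            fprintf(stderr, "unknown command, ignoring: %s", line);  /* punt */
    }

    int main(void)
    {
        char line[256];

        while (fgets(line, sizeof line, stdin) != NULL)
            handle_line(line);
        return 0;
    }

Testing a brand new command against such a client or server is then as
simple as typing it in by hand.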

3.  If there are several quasi-independent teams of programmers
continuing to extend the protocol (as was the case with crossfire),
referring to things by name rather than by number (as a binary protocol
would) avoids many conflicts.  For example, there are quickly going to
be a dozen different assignments for the first couple of command
numbers you haven't assigned in your protocol.  How is a client or
server to deal with that when connected to a slightly different strain?
And how are the various improvements going to be merged together again
if that involves changing all but one of the clients/servers at the
same time?  This really is the same problem we had with the old map
numbering scheme, just on a different level.  You will agree that going
to a name scheme turned out to be a great improvement.
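
To see why names collide less than numbers, consider this hypothetical
example of two independent patches (the opcodes and command names are
invented for the illustration):

    #include <stdio.h>

    /* Hypothetical: two independent patches to a numeric protocol each
       grab the next free opcode for their own new command.            */
    enum team_a { OP_FIREBALL = 42 };   /* assigned in team A's patch */
    enum team_b { OP_PICKUP   = 42 };   /* assigned in team B's patch */

    int main(void)
    {
        /* A server carrying both patches cannot tell what an opcode 42
           from an older client is supposed to mean, and merging the two
           patches means renumbering one side plus every client with it. */
        printf("OP_FIREBALL == OP_PICKUP: %d\n", OP_FIREBALL == OP_PICKUP);
        return 0;
    }

With named commands, "fireball ..." and "pickup ..." simply coexist,
and a peer which knows neither can print and ignore them as described
above.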

> You gain a lot in simplicity using ascii, but it may lose you a
> factor of two in net performance, a very  big deal on 14.4k.....

Not really, as I explained above.

Please believe me on this point.  As you may have guessed from my  
intensity on this matter, I've written binary protocols for use on the  
net before and have lived to regret it. ** :-) 


	Carl Edman
	
** And not only me.  For another case just consider the otalk/ntalk
incompatibility which plagues the net.  With an ASCII interface that
would never have happened, and the net would have a universal, reliable
way to communicate in real time today.