I recently attended a Linux SA (the S.A. Linux Users Group) meeting where Glen Turner, AARNet, presented on TCP Performance and the Web 100 project.
You can download his slides a little down the page at www.linuxsa.org.au/meetings
Concept 1 : Long Fat Pipes
One of the things he pointed out was that certain Linux (and Windows) TCP default parameters weren't optimal, particularly for long fat pipes.
A classic long fat pipe is a satellite link, which has a large latency, eg. 500 milliseconds, while running at something like 2 Mbps. It is Long, and (relatively) Fat.
For TCP to best utilise this link, it needs to be able to completely fill the long fat pipe. To work out how much data the long fat pipe requires to be filled, you calculate a figure known as the bandwidth delay product or bdp.
The formula for bdp is (speed of the smallest link in the path in bits per second / 8) * path round trip time in seconds, ie. the ping time to the destination.
For example, our 2 Mbps link, with a 500 milliseconds latency works out as
bdp = (2 048 000 / 8) * 0.5
bdp = 128 000 bytes, or roughly 128 KiB.
So, to get TCP to perform at its best over that link, the sending TCP end has to be able to send at least 128 KiB of data at once (spread across a number of packets), and the receiving TCP end has to be able to receive up to 128 KiB of data at once.
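To sanity-check that arithmetic yourself, a one-liner does it (the numbers below are the satellite example from above; substitute your own link speed and ping time):

```shell
# bdp = (bottleneck link speed in bits per second / 8) * RTT in seconds
# 2 Mbps satellite link, 500 ms round trip:
awk 'BEGIN { print (2048000 / 8) * 0.5, "bytes" }'
# prints: 128000 bytes
```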
Onto the next topic.
Concept 2 : TCP host based flow control
Host based flow control serves the purpose of stopping a fast sender from over-running a slow receiver.
Imagine a 3 GHz Pentium 4 sending data via TCP to a poor old 20 MHz 386. The Pentium 4 would easily over-run the 386 with data, unless the 386 had some way of telling the Pentium 4 to slow down. Obviously the 386 is likely to be using a 10 Mbps ethernet card, plugged into an ISA slot. Even if it could handle a 100 Mbps PCI card, the CPU itself is not likely to be able to process all the packets, as well as run the OS and applications.
So, TCP uses a method of flow control called sliding windows.
Basically, in the TCP Acknowledgement messages the 386 sends back to the Pentium 4, when the 386 is receiving data, it also indicates how much data it can handle. This value is called the "window size" in the TCP header.
When you run tcpdump you can see these window advertisements eg.
<source IP> > <dest IP> F 571:571(0) ack 170 win 57600 <nop,nop,timestamp 266484478 69831241> (DF)
There is a default limit to how big this window can be. By default, under the Linux 2.4 kernel it is 64 KiB. Apparently, under Windows XP and earlier Windows OSs, it is around 17 KiB.
Hmm, so a Linux box will only advertise a window of up to 64 KiB to a remote device, saying that it can receive up to 64 KiB at once, yet to best take advantage of a 2 Mbps satellite link, the sender needs to be able to send 128 KiB. Uh-oh. With the default Linux 2.4 parameters, we will only be using around 50% of the very expensive satellite link's capacity.
Even worse with Windows XP, with a maximum TCP receive window size of 17 KiB.
So where do we change it ?
Under Linux, there are two parameters that need to be changed.
The maximum TCP send window size - /proc/sys/net/core/wmem_max, eg.
echo 131072 > /proc/sys/net/core/wmem_max
to set the max TCP send window size to 128 KiB.
The maximum TCP receive window size - /proc/sys/net/core/rmem_max, eg.
echo 131072 > /proc/sys/net/core/rmem_max
to set the max TCP receive window size to 128 KiB.
After doing that, the devices should now get much better, if not optimal performance out of the satellite link.
Great. How many people are using a satellite link? Not many, really.
Where it has started to matter is in LANs, specifically LANs of the Fast Ethernet speed or greater.
When a pipe is really FAT, the LONGness isn't as important anymore in the bandwidth delay product.
According to Glen's presentation, a 100 Mbps LAN has a latency of 5.2 milliseconds or 0.0052 seconds.
Lets calculate the bdp, indicating how much data needs to be pushed into the really fat, but not-very-long LAN pipe to get optimal TCP performance.
bdp = (100 000 000 / 8) * 0.0052
bdp = 65000 bytes
Ok, just under 64 KiB (64 KiB = 65536 bytes). So Linux should work optimally for a 100 Mbps LAN.
Too bad about Windows :-), with its 17KiB default. As Glen said, Windows is optimised for a 10Mbps LAN link. This is probably the main reason why Linux is so much faster transferring files on 100 Mbps LANs.
So what if you go and buy a GigE network for your Linux box? Bandwidth goes up by 10, delay stays pretty constant (since the signal propagation speed doesn't change).
bdp = (1 000 000 000 / 8) * 0.0052
bdp = 650000 bytes.
Hmm, that is around 635 KiB ! Looks like you won't be getting GigE performance with your new GigE hardware, unless you tune your Linux parameters. Rounding up to a power of 2, 2^20 or 1048576 would be the right value to put into rmem_max and wmem_max.
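The same one-liner shows how the LONGness stops mattering as the pipe gets fatter; here is the 10/100/1000 Mbps progression at the 5.2 ms figure from the presentation:

```shell
# Bandwidth delay product at a fixed 5.2 ms RTT for three LAN speeds.
for bps in 10000000 100000000 1000000000; do
    awk -v b="$bps" 'BEGIN { printf "%d bps -> %.0f bytes\n", b, (b / 8) * 0.0052 }'
done
```

The 10 Mbps case comes out at 6500 bytes, which fits comfortably inside even a small default window; the GigE case is the 650 000 byte figure above.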
And those people using Windows, they are around four times worse off.
Keep in mind this GigE calculation is for a LAN. Imagine if you had a GigE to the US, around a 200 millisecond delay.
bdp = (1 000 000 000 / 8 ) * 0.2
bdp = 25 000 000
Around 25 MB !!!
Not many people have GigE's to the US, but some organisations have 100Mbps links.
bdp = (100 000 000 / 8 ) * 0.2
bdp = 2 500 000
Much smaller, at 2.5 MB.
However, Linux would not get anywhere near the link's optimal performance, with default values of 64 KiB. Again, neither would Windows users.
OK so stupid question time now.
echo number > /proc/sys/net/core/rmem_xxx
Does this stay once entered or does it need to be done every boot (i.e. can it be automated if it needs to be done on boot).
Cheers
Good read, very impressive summary, would have been a good presentation to see first hand.
I've never had any experience with satellite links but wondered if they could be better optimised at a tcp level due to their high latency - decent bandwidth properties, no more pondering about that now!
As for windows being optimised for 10Mbps it certainly answers some questions of why it only performs decently on powerful machines and brings up some other questions at the same time. I've never thought much of the windows tcp/ip stack...
Reaper writes...
echo number > /proc/sys/net/core/rmem_xxx
Does this stay once entered or does it need to be done every boot (i.e. can it be automated if it needs to be done on boot).
I think it would need to be done every boot, the same as echo "1" > /proc/sys/net/ipv4/ip_forward for example. (I think the reason being that /proc is a virtual filesystem?)
Would be as simple as throwing it in a startup script though.
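Something like this minimal sketch would do, using the 128 KiB values from earlier (where you hook it in, eg. rc.local, depends on your distro):

```shell
#!/bin/sh
# Re-apply the TCP window size limits at boot; /proc settings are lost
# on reboot because /proc lives in memory, not on disk.  Needs root.
echo 131072 > /proc/sys/net/core/wmem_max
echo 131072 > /proc/sys/net/core/rmem_max
```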
Works like a treat. My BSD (proxy) box has never run better :-)
That's a nice little bit of information. Would be nice if made a sticky.
Thanks for the compliments about the post.
I'm no expert on this, but I'm a bit embarrassed. I was aware of the issue, but never thought about it applying to LANs.
The long fat pipe stuff is described in a classic networking book called TCP/IP Illustrated, Volume 1, by W. Richard Stevens. However, it only talks about satellite links, or high bandwidth links that run nationally, ie. long and fat. It never occurred to me that as links get fatter, the length doesn't matter as much, and now that we have 100 Mbps+ ethernet, we are at the point where tuning TCP even for LAN performance is worth doing.
If you are running a big web hosting complex, with fat and long links, particularly to the US, it would be well and truly worth the time to look into tuning TCP performance.
If you have a file called /etc/sysctl.conf, you can put the values in there, in a slightly modified format, eg. for 512 KiB I would add
--
net.core.rmem_max = 524288
net.core.wmem_max = 524288
--
I wasn't aware of this until the same presentation, I'd just thrown together a script which echoed the values as in my first post, and made it execute on boot.
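For what it's worth, the sysctl(8) tool can do both jobs, re-reading the config file or setting a single value on the fly (needs root):

```shell
sysctl -p                           # re-apply everything in /etc/sysctl.conf
sysctl -w net.core.rmem_max=524288  # or set one value directly
```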
Some other links on the topic :
www.psc.edu/networking/perf_tune.html
- "Enabling High Performance Data Transfers"
and for the Windows users (found using google, can't vouch for whether it is correct, don't use Windows any more)
rdweb.cns.vt.edu/public/.../win2k-tcpip.htm
- "Windows 2000 TCP Performance Tuning Tips"
btw, the TCP window thing is also the reason why if you pause a download within an application, say a browser, and then re-start it, the KB/s value bursts up a bit, before returning to the normal speed.
When you pushed pause, the local TCP still had/has room left in its window, which it advertised to the remote TCP sender. The sender keeps sending until it receives a TCP Ack with a window = 0, a signal to stop sending completely.
When you un-pause it, the local TCP dumps that buffered data into the application, and the application gets a burst of data in a hurry, warping the initial download speed calculations. The local TCP will then send a TCP Ack with a non-zero window, causing the remote TCP sender to start sending again.
Another trick I like to use, relying on that behaviour.
Within a terminal window, most applications will respond to XOFF (Ctrl-S) and XON (Ctrl-Q). These are the human / dumb terminal flow control characters.
Say you are downloading a file using wget or curl, and want to pause it for a moment, because you want to browse the web quickly, or think you might have made a mistake with the file, and want to check it.
Rather than aborting the download, just press ^S. The application will stop, causing TCP to stop, once the advertised window is full. ^Q will start it again.
^S actually corresponds to the -STOP signal, ^Q corresponds to the -CONT signal. You can use kill, or killall to send those signals to any applications, from the command line, or within scripts. Even X windows apps will pause, be aware that they won't update their screen image when you move a window over the top of them, until you start them again using -CONT.
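A quick way to see those signals in action from a shell, using sleep as a stand-in for a real download:

```shell
# Start a long-running process, stop it, check its state, then resume it.
sleep 60 &
pid=$!
kill -STOP "$pid"
ps -o stat= -p "$pid"   # state shows as T (stopped)
kill -CONT "$pid"
ps -o stat= -p "$pid"   # back to S (sleeping)
kill "$pid"
```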
Mr Zippy writes...
If you have a file called /etc/sysctl, you can put the values in there, in a slightly modified format eg. for 512 KiB I would add
--
net.core.rmem_max = 524288
net.core.wmem_max = 524288
Yep. That's what I did for my FreeBSD server :)
Will do the same for my Mandrake client.
Be careful specifying such large amounts for non-Linux OSs.
Apparently Linux only allocates RAM up to that level as the remote device sends data, filling up the local window.
However, other OSs may allocate that amount of RAM as soon as the socket is created. Not really a big deal these days, although pre-allocating 10 MB of RAM for 20 TCP connections could be quite a lot on a box with 32 MB or less. You don't want the box to become RAM starved, without any real benefit.
It is best to calculate your own value, specific to your own connectivity options eg. are you running GigE on your network, how fat is your pipe to the Internet, what is the average delay to the furthest Internet destination.
btw, a reasonable delay figure for the Internet calculation would be 0.3 seconds or 300 milliseconds. That allows for around 200 milliseconds to get to the US, and 100 milliseconds to get across it. If you go to other countries often, measure the delay towards them using ping, and pick the worst average value for your bandwidth delay product calculation.
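If you want to script that measurement, here's a rough sketch; it pulls the average RTT out of ping's summary line and multiplies it out for a 100 Mbps link (the hostname and link speed are placeholders, and the summary line format differs slightly between ping implementations):

```shell
# Average RTT (ms) from ping's "rtt min/avg/max/..." summary line,
# then the bandwidth delay product for a 100 Mbps link.
rtt_ms=$(ping -c 4 example.com | awk -F/ '/^(rtt|round-trip)/ { print $5 }')
awk -v rtt="$rtt_ms" 'BEGIN { printf "bdp = %d bytes\n", (100000000 / 8) * (rtt / 1000) }'
```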
Mr Zippy writes...
Be careful specifying such large amounts for non-Linux OSs.
As long as you set a higher buffer limit in sysctl.conf then it is fine. You will run out of buffer if you don't, with grave consequences :)
Good site here: proj.sunet.se/E2E/tcptune.html
Mr Zippy writes...
^S actually corresponds to the -STOP signal, ^Q corresponds to the -CONT signal.
I don't think this is correct. ^S does not send a signal, it is simply interpreted by the terminal as a flow control character. So ^S will stop the terminal producing output, but it won't pause the program's execution. You can check this in vim, run it and do a ps x | grep vim , you'll see its Status as 'S' because it's sleeping waiting for user input. Now go into Insert mode, press ^S and then 'abcde' and you won't see the output because the terminal is paused. Do a ps x | grep vim and you'll see vim's status is still S. Now either press ^Z to background it, or send it the STOP signal using kill -STOP `pidof vim`. Do the ps again and you'll notice its status is 'T' now that vim itself has been stopped.
If you have a CPU intensive program to run that takes about 30 seconds, you can try it with that as well. Start it going, press ^S after 5 seconds and the terminal will be frozen. I'd expect though, that if you waited about 30 seconds, then press ^Q, the program will already be finished, and it won't have 25 seconds to go when you unpause the terminal.
Reaper writes...
That's a nice little bit of information. Would be nice if made a sticky.
Agreed. :)
^S does not send a signal, it is simply interpreted by the terminal as a flow control character. So ^S will stop the terminal producing output, but it won't pause the program's execution.
With a bit of experimentation, it looks like you're right about the signal part.
However, at least for applications that are using standard input / standard output, it does appear to be pausing application execution, or pauses certain applications, such as curl and wget (the ones I've used this trick on).
Here is an example.
I've run curl, asking it to download openoffice
--
curl "http://public.www.planetmirror.com/pub/openoffice/stable/1.1rc4/OOo_1.1rc4_LinuxIntel_install.tar.gz"
* About to connect() to public.www.planetmirror.com:80
* Connected to chaos.planetmirror.com (203.16.234.19) port 80
> GET /pub/openoffice/stable/1.1rc4/OOo_1.1rc4_LinuxIntel_install.tar.gz HTTP/1.1
User-Agent: Mozilla/5.0 (X11; U; CPM-80 Z80; en-US; rv:1.2b) Gecko/20021029 Phoenix/0.4
Host: public.www.planetmirror.com
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
< HTTP/1.1 200 OK
< Date: Sun, 21 Sep 2003 13:22:43 GMT
< Server: Apache/1.3.27 (Unix) mod_throttle/3.1.2 mod_layout/3.2 PHP/4.3.2
< Last-Modified: Thu, 04 Sep 2003 17:50:00 GMT
< ETag: "1544c-4ab8ac3-3f577b48"
< Accept-Ranges: bytes
< Content-Length: 78351043
< Content-Type: application/octet-stream
% Total % Received % Xferd Average Speed Time Curr.
Dload Upload Total Current Left Speed
0 74.7M 0 85372 0 0 35556 0 0:36:43 0:00:02 0:36:41 52932
--
I then pushed ^S.
curl then not only stops outputting to the screen / terminal, it also stops reading from the network stack, as tcpdump shows my end shrinking the receive window to zero, ie. it is telling the other end to slow down its sending, to the point where it stops. This usually means the application at my end, using TCP, has stopped reading data from the TCP connection.
--
22:53:31.407163 203.103.158.37.35207 > 203.16.234.19.80: . ack 216349 win 58 (DF)
22:53:31.410444 203.16.234.19.80 > 203.103.158.37.35207: . 216349:217801(1452) ack 292 win 1608 (DF)
22:53:31.436478 203.16.234.19.80 > 203.103.158.37.35207: . 217801:219253(1452) ack 292 win 1608 (DF)
22:53:31.463037 203.16.234.19.80 > 203.103.158.37.35207: . 219253:220705(1452) ack 292 win 1608 (DF)
22:53:31.487162 203.103.158.37.35207 > 203.16.234.19.80: . ack 220705 win 24 (DF)
22:53:31.496290 203.16.234.19.80 > 203.103.158.37.35207: . 220705:222157(1452) ack 292 win 1608 (DF)
22:53:31.522657 203.16.234.19.80 > 203.103.158.37.35207: . 222157:223609(1452) ack 292 win 1608 (DF)
22:53:31.577172 203.103.158.37.35207 > 203.16.234.19.80: . ack 223609 win 1 (DF)
22:53:32.153137 203.16.234.19.80 > 203.103.158.37.35207: P 223609:223737(128) ack 292 win 1608 (DF)
22:53:32.153231 203.103.158.37.35207 > 203.16.234.19.80: . ack 223737 win 0 (DF)
22:53:32.691342 203.16.234.19.80 > 203.103.158.37.35207: . ack 292 win 1608 (DF)
--
The curl process under top has a status of "S" or sleeping.
--
10909 mark 9 0 1300 1300 1028 S 0.0 0.3 0:00 curl
--
Sending curl a -STOP does change the status to T
--
10909 mark 9 0 1300 1300 1028 T 0.0 0.3 0:00 curl
--
Sending curl a -CONT returns its status to S, without causing it to restart the download.
I've used the -STOP / -CONT trick to pause GUI-based programs' TCP downloads, and had assumed that -STOP / -CONT was really what was happening behind the scenes when ^S / ^Q were being pressed, as the effect on TCP was the same. That doesn't seem to be the case.
Mr Zippy writes...
I was aware of the issue, but never thought about it applying to LANs
Well, I'm no expert either, but some of the comments seem a bit at odds with other things I remember reading.
On my simple home LAN ping round trips give 0.5 ms when the system is quiet. Do you think the figure of 5ms that you quoted might be more appropriate to a big corporate lan, where packets have to traverse various routers and switches?
I don't think you should be using the 5ms value for a heavily loaded LAN either, because in that case your share of the bandwidth is probably cut by a similar ratio.
But back to my original point - I had thought that the value of TCP window was that you could keep the pipeline full - i.e. have maximum amount of data in transit while the sender is waiting for the ACKs to return. I don't understand how that could work in a simple lan, since there is no transit space for a pipeline full of data - wouldn't it just fill the senders buffers? Perhaps there's a bit of store-and-forward in the switch but I can't see there being much chance of having more than 3 or 4 packets waiting for ACKs.
Put it another way... if it takes 2.5ms for any packet to get from one end of your cable to the other then you cannot get 1MB/sec through anyway. If the 5ms was related to time taken to process the packets then the transfer would also crawl.
Finally, I think the "big difference between Linux and Windows" is generally a non-issue. It was certainly the case around win9x that file transfers were horribly inefficient, but I have noticed no significant difference between XP (or 2k) and Linux on the same machine in terms of network xfer rates.
On my simple home LAN ping round trips give 0.5 ms when the system is quiet. Do you think the figure of 5ms that you quoted might be more appropriate to a big corporate lan, where packets have to traverse various routers and switches?
I'm not sure what you are saying here - you seem to be disputing the 5.2 ms (or 0.0052 second) figure I've used, and then state that that is around what you receive on your home LAN ?
I don't think you should be using the 5ms value for a heavily loaded LAN either, because in that case your share of the bandwidth is probably cut by a similar ratio.
With switched LAN infrastructure, latency isn't usually significantly influenced by the load on the network.
As long as the backplane of the switch(es) isn't congested, there is generally a dedicated 10, 100 or 1000 Mbps pipe between the communicating nodes.
I don't understand how that could work in a simple lan, since there is no transit space for a pipeline full of data
The bandwidth delay product is a "product", meaning that the result is the bandwidth multiplied by the delay.
Increasing either the delay or the bandwidth will change the result, and that is what TCP has to fill to best utilise the network capacity.
Imagine two square water pipes
(a) one 2 meters square in cross-section, and 100 meters long. To best utilise that water pipe, it would have to be filled with 2 x 2 x 100 meters or 400 cubic meters of water.
now imagine
(b) one with a 200 square meter cross-section (say 10 meters by 20 meters), but only 2 meters long. To best utilise that water pipe, you would still need 400 cubic meters of water.
(a) is analogous to a satellite link, while (b) is analogous to a GigE LAN.
Note that the bandwidth delay product includes both the delays introduced by the packet buffers in the devices involved, and the inherent delay in the wire itself, due to its length.
In the case of a GigE LAN, very little delay is added by the wire, as it isn't very long. However, there are small send packet buffers in the GigE card at the source, packet buffers in the GigE switch itself, and small receive packet buffers in the GigE card at the destination. The majority of the 5 or so milliseconds delay on 100 Mbps or 1000Mbps LAN would be contributed by these buffers.
Finally, I think the "big difference between Linux and Windows" is generally a non-issue. It was certainly the case around win9x that file transfers were horribly inefficient, but I have noticed no significant difference between XP (or 2k) and Linux on the same machine in terms of network xfer rates.
Possibly because the bottleneck may not be the network. It could be the PCI interface the network card is plugged into, or the IO throughput of the HDD, or possibly the PCI bus contention between the network card and the HDD IO controller (which may be on the motherboard, but still attached via PCI, just without a physical PCI interface plug).
Using file transfers, at least on a LAN, is not necessarily a useful network performance test, as it is testing *both* the network IO subsystem, and the HDD IO subsystem.
It is impossible to remove all bottlenecks from a system, you can only shift them around. When testing network performance, you need to be sure that your results are not influenced by bottlenecks unrelated to what you are testing, such as HDD IO.
You might say what is the point of testing network IO, if the HDD doesn't deliver enough performance. But that is where you can, via technologies and techniques such as HW RAID5, 64 bit PCI, multiple IO cards and PCI busses, remove HDD IO as the bottleneck from a real world scenario. Once your network has become the bottleneck, that is where TCP tuning becomes very useful.
You might also say that that is unrealistic, but it pretty much describes classical file and print sharing environments, where all the user data is (supposed to be) on the server, and only the OS and possibly the applications come from the workstations' HDDs. High throughput server IO, both HDD and network, then becomes critical.
Oops, just re-read this, noticed I'd mis-placed a decimal point
On my simple home LAN ping round trips give 0.5 ms when the system is quiet.
Possibly you're right about the larger delay being because of a larger network eg. a multi-user LAN.
I can't perform my own test, I don't have access to a 100Mbps LAN at the moment.
I've looked at Glen's presentation slides again, in one place he says <2ms rtt for 100 Mbps, in another place he says 5.2 ms rtt for 100 Mbps.
These figures are dependent on the network topology, utilisation and buffering delays. As I understand it, the purpose of the Web100 project he is involved in is to have the TCP stack adjust things like the TCP receive window size dynamically, based on network characteristics.
allanf writes...
My ping times are 4.7ms on quiet times
So that's why you always win with the railgun...
Mr Zippy writes...
However, at least for applications that are using standard input / standard output, it does appear to be pausing application execution, or pauses certain applications, such as curl and wget (the ones I've used this trick on).
Interesting. I think what is happening is that these programs are doing blocking writes when they write to the terminal. Because we've pressed ^S and the terminal is stopped, when curl tries to write to the terminal to update the download progress, it gets blocked (presumably it doesn't do non-blocking writes). Since it's blocked waiting to finish writing to the terminal, that means it can't be reading data from the socket, so the receive window fills up.
In contrast though, try something like ssh-keygen with a large keylength (eg. 4096 bits). On my Athlon 1700, if I run it using ssh-keygen -f test.key -b 4096 -t rsa it'll take a while to generate the key, before asking me for a passphrase. Just press Enter, Enter, and then remove the key.
Now try again, but this time, as soon as the program is started, press ^S. However, it's still generating the key; it's not blocked since it isn't writing to the terminal. If you wait more time than it actually takes to generate the key, then press ^Q, you'll instantly get the passphrase prompt. A telltale sign here actually, is that even after pressing ^S, gkrellm is still showing me heavy CPU usage, which eventually drops away once ssh-keygen is done generating the key.
Sounds like a handy thing to know though, I must say I've never really "paused" programs like wget and curl in this way. I'd probably just end up stopping them with a ^Z.
PS. Sorry for the very very belated reply... heh.
Sounds like a handy thing to know though, I must say I've never really "paused" programs like wget and curl in this way. I'd probably just end up stopping them with a ^Z.
Heh, tried this trick (ie ^S/^Q aka XON/XOFF) out a while ago, and it worked. Comes from having done some COBOL programming a number of years ago, around 1990/91, on a DG MV4000, with real dumb terminals.
PS. Sorry for the very very belated reply... heh.
No worries, pushed it back to the top of the sticky list :-)
I was wondering if anyone could recommend Receive Window settings.
I'm on the BigPond ADSL 512/128 Unlimited* plan using a D-Link DSL-302G in bridged mode and a Smoothwall Express 2.0 (fixes 1, 2 and 3) box.
Something along the lines of.... (example only)
# My customisations
echo 8388608 > /proc/sys/net/core/rmem_max
echo 8388608 > /proc/sys/net/core/wmem_max
echo 131072 > /proc/sys/net/core/rmem_default
echo 131072 > /proc/sys/net/core/wmem_default
echo "4096 87380 8388608" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 8388608" > /proc/sys/net/ipv4/tcp_wmem
echo "8388608 8388608 8388608" > /proc/sys/net/ipv4/tcp_mem
echo 2500 > /proc/sys/net/core/netdev_max_backlog
echo 0 > /proc/sys/net/ipv4/tcp_ecn
echo 0 > /proc/sys/net/ipv4/ip_no_pmtu_disc
echo 1 > /proc/sys/net/ipv4/tcp_sack
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
echo 1 > /proc/sys/net/ipv4/tcp_timestamps
echo 128 > /proc/sys/net/ipv4/ip_default_ttl
deepspring writes...
echo 0 > /proc/sys/net/ipv4/tcp_ecn
TCP Explicit Congestion Notification is worth switching on, as it can help TCP find out more about the congestion state of the network, allowing TCP to better adjust.
It can encounter problems with broken firewalls; however, patches to support it have been around for at least 3 years now. The only web site I've had problems with in the last few years was www.news.com.au, and most recently the IT section of that web site. I've just noticed over the last week that that has been fixed as well (maybe they realised that running a 3 year old firewall was not a good idea?)
echo 128 > /proc/sys/net/ipv4/ip_default_ttl
This won't really be of any benefit and will end up probably creating more unnecessary traffic on the Internet rather than less.
The TTL is there to prevent packets endlessly looping when there happens to be a forwarding loop. For example, imagine two routers directly attached to each other, routers A and B. For a particular destination, Router A lists router B as its next hop. For the same destination, Router B lists router A as its next hop. Packets that match this destination will then be forwarded back and forth between the two routers. As each router forwards the packet, the TTL field is reduced by 1, until it reaches zero and the packet is dropped.
The initial value of the TTL field has to be large enough that it doesn't prevent the packet reaching its actual destination, yet small enough so that it prevents packets being caught in forwarding loops for too long. The current default initial TTL field value is 64, and doesn't currently benefit from being set to any larger value.
Even worse with Windows XP, with a maximum TCP receive window size of 17 KiB.
Windows 2000 uses a default value of 16 KiB.
Windows XP uses a default value of 32 KiB.
For Windows XP
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
create a new DWORD called TcpWindowSize
and use a decimal value of 131072 (optimised for a 100 Mbps LAN)
Then try video streaming (eg: google videos) and you will notice a huge difference ;)
There's a couple of neat TCP stack tweakers around for Windows at least:
TCPOptimizer - see www.speedguide.net/downloads.php
DRTCP - see www.dslreports.com/tools
- these guys have some handy test stuff too.
Over potentially lossy links like satellite, packet loss handling can become critical to overall throughput. I recall some special apps that would spoof the handling over the satellite link & provide std TCP interfacing at each end.
Now either press ^Z to background it, or send it the STOP signal using kill -STOP `pidof vim`.
I thought I'd just point something out here. You'll notice that ^Z already "STOP"s the process. It does not background it. If you want to background it, then you'd have to run "bg" after this ^Z. You can then bring it to the foreground with "fg".
Also some shells don't support job control, and therefore wouldn't accept the "STOP" via ^Z. An example of this is the original Bourne Shell. Don't mistake this for bash (the Bourne Again SHell). You can find it as "/sbin/sh" in any version of Solaris, and I'm sure other Unices.
Anyways I know this is off topic, I'm just pedantic like that...