
[cobalt-users] errors found in cache.log ..... Cobalt CacheRaQ 2



Hello,

My name is Kamal and I work at Nettlinx, an Internet service provider. We
recently bought two CacheRaQ 2 units for our setup. Before that we were using
Cisco Cache Engines, but we were facing a lot of problems with them, so we
replaced the Cisco cache engines with the Cobalt CacheRaQ 2 boxes. The
installation went very smoothly, the boxes are working well, and we are
getting bandwidth savings of 50%.

But we are getting errors in the cache.log file, and these errors are making
me worry. The boxes were installed at our site five days ago, and in those
five days they have asked for a reboot three times, saying that some of the
services are not started and that the Cobalt RaQ 2 should be rebooted.

The major errors are:

1) 2000/01/05 10:32:56| WARNING! Your cache is running out of filedescriptors
   2000/01/05 10:33:12| WARNING! Your cache is running out of filedescriptors

2) 2000/01/05 12:02:11| urlParse: URI has whitespace: http://ads.123india.com:80/cgi-bin/adserver/impresscooknew.pl.fcgi?SpaceID=C3&Mode=Burst&Samosa=Millennium : Society and Culture}

3) 2000/01/05 11:04:56| clientReadRequest: FD 288 Invalid Request
   2000/01/05 11:05:03| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:05:03| clientReadRequest: FD 537 Invalid Request
   2000/01/05 11:05:04| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:05:04| clientReadRequest: FD 160 Invalid Request
   2000/01/05 11:05:10| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:05:10| clientReadRequest: FD 814 Invalid Request
   2000/01/05 11:05:10| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:05:10| clientReadRequest: FD 351 Invalid Request
   2000/01/05 11:06:10| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:06:10| clientReadRequest: FD 37 Invalid Request
   2000/01/05 11:06:10| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:06:10| clientReadRequest: FD 37 Invalid Request
   2000/01/05 11:06:13| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:06:13| clientReadRequest: FD 307 Invalid Request
   2000/01/05 11:06:14| parseHttpRequest: Unsupported method 'PROPFIND'
   2000/01/05 11:06:14| clientReadRequest: FD 57 Invalid Request

4) 2000/01/05 15:34:40| parseHttpRequest: Unsupported method 'OPTIONS'
    2000/01/05 15:34:40| clientReadRequest: FD 546 Invalid Request

5) 2000/01/05 12:18:23| Starting Squid Cache version 2.1.PATCH2 for mips-pc-linux-gnu...
   2000/01/05 12:18:23| Process ID 4498
   2000/01/05 12:18:23| With 1024 file descriptors available
   2000/01/05 12:18:23| helperOpenServers: Starting 5 'dnsserver' processes
   2000/01/05 12:18:26| Swap maxSize 9543680 KB, estimated 867607 objects
   2000/01/05 12:18:26| Target number of buckets: 17352
   2000/01/05 12:18:26| Using 32768 Store buckets, replacement runs every 2 seconds
   2000/01/05 12:18:26| Max Mem  size: 65536 KB
   2000/01/05 12:18:26| Max Swap size: 9543680 KB
   2000/01/05 12:18:26| Store logging disabled
   2000/01/05 12:18:26| Rebuilding storage in Cache Dir #0 (DIRTY)
   2000/01/05 12:18:26| Loaded Icons.
   2000/01/05 12:18:29| commBind: Cannot bind socket FD 35 to *:3128: (125) Address already in use
   FATAL: Cannot open HTTP Port
   Squid Cache (Version 2.1.PATCH2): Terminated abnormally

" This is happend contionusly 3 times after that"

6) 2000/01/05 12:30:05| safeunlink: Couldn't delete /home/squid2/cache/00/1B/00001B6F. Unknown

7) 2000/01/05 13:28:56| WARNING: Forwarding loop detected for:
   GET / HTTP/1.0
   Accept: */*
   Accept-Language: en-us
   Accept-Encoding: gzip, deflate
   User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows 95)
   Host: 202.53.82.19
   Via: 1.1 cobalt:3128 (Squid/2.1.PATCH2)
   X-Forwarded-For: 203.197.21.134
   Cache-Control: max-age=172800
   Connection: keep-alive

" The above 7 errors ars are coming " pls help in this issue to solve the
problems .
pls give some solutions .

and i also gone threw some of squid documentation i found some solutions
.but for that also i require some help because in cache raq2 all are  menu
driven but if we want to do some kernel settings can give me the procedure .

I am also including the relevant Squid documentation.

Solutions according to the Squid docs:

1) Running out of file descriptors

If you see the "Too many open files" error message, you are most likely
running out of file descriptors. This may be due to running Squid on an
operating system with a low file-descriptor limit. This limit is often
configurable in the kernel or with other system tuning tools. There are two
ways to run out of file descriptors: first, you can hit the per-process
limit on file descriptors; second, you can hit the system limit on total
file descriptors for all processes.
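The two limits can be checked independently; a minimal sketch for a Linux
shell (the /proc path is the 2.2.x one described below — 2.0.35-2.1.x
kernels put it under /proc/sys/kernel/ instead):

```shell
# Per-process soft limit for the current shell (and children it starts):
ulimit -n

# System-wide limit on open file handles (2.2.x kernel path;
# on 2.0.35-2.1.x it is /proc/sys/kernel/file-max):
cat /proc/sys/fs/file-max 2>/dev/null
```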

Linux

Start with Dancer's Mini 'Adding File-descriptors to Linux for Squid' HOWTO,
but realize that this information is specific to the Linux 2.0.36 kernel.

You might also want to have a look at the file-handle patch by Michael O'Reilly.

If your kernel version is 2.2.x or greater, you can read and write the
maximum number of file handles and/or inodes simply by accessing the special
files:

        /proc/sys/fs/file-max
        /proc/sys/fs/inode-max

So, to increase your file descriptor limit:

        echo 3072 > /proc/sys/fs/file-max

If your kernel version is between 2.0.35 and 2.1.x (?), you can read and
write the maximum number of file handles and/or inodes simply by accessing
the special files:

        /proc/sys/kernel/file-max
        /proc/sys/kernel/inode-max

While this does increase the current number of file descriptors, Squid's
configure script probably won't figure out the new value unless you also
update the include files, specifically the value of OPEN_MAX in
/usr/include/linux/limits.h.
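Putting the 2.2.x pieces together, the raised limit can be applied and then
verified by re-reading the special files; a minimal sketch (the 3072 value
mirrors the FAQ's example, and the inode-max figure of roughly four times
file-max is an assumption — tune both for your box, and note this needs root):

```shell
# Raise the system-wide limits on a 2.2.x kernel, then confirm
# the kernel accepted the new values.
echo 3072 > /proc/sys/fs/file-max
echo 12288 > /proc/sys/fs/inode-max   # assumed ~4x file-max

cat /proc/sys/fs/file-max
cat /proc/sys/fs/inode-max
```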

"2nd problem solution "


2) I get a lot of ``URI has whitespace'' error messages in my cache log, what
should I do?

Whitespace characters (space, tab, newline, carriage return) are not allowed
in URIs and URLs. Unfortunately, a number of Web services generate URLs with
whitespace. Of course your favorite browser silently accommodates these bad
URLs. The servers (or people) that generate these URLs are in violation of
Internet standards. The whitespace characters should be encoded.

If you want Squid to accept URL's with whitespace, you have to decide how to
handle them. There are four choices that you can set with the uri_whitespace
option:

   1. DENY: The request is denied with an ``Invalid Request'' message. This
      is the default.
   2. ALLOW: The request is allowed and the URL remains unchanged.
   3. ENCODE: The whitespace characters are encoded according to RFC 1738.
      This can be considered a violation of the HTTP specification.
   4. CHOP: The URL is chopped at the first whitespace character and then
      processed normally. This can also be considered a violation of HTTP.
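If you choose one of the relaxed behaviours, the setting goes in squid.conf;
a minimal sketch, assuming the lowercase keywords used by Squid's
uri_whitespace directive (check the squid.conf.default shipped with your
2.1.PATCH2 build for the exact accepted values):

```
# squid.conf -- how to treat URIs that contain whitespace.
# The default is "deny"; "encode" is usually the safest relaxation.
uri_whitespace encode
```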

Another solution, from the same docs:

Mini 'Adding File-descriptors to Linux for Squid' HOWTO

      site:           http://www2.simegen.com/~dancer/minihowto.html
      another site:   squid.nlanr.net (see the FAQs and troubleshooting section)



          I am going to keep this simple, and you are going to help
yourself. You will need the following things:

          An ability to reconfigure, recompile, and reinstall Squid from source.
          An ability to do the same with the Linux kernel.
          The source for both tasks.
          The tools for both tasks.

          Get the picture? If you can't do these things, find someone who
          will teach you, or someone who can do it for you. Right now, I'm
          only going to cover 2.0.36. While large numbers of file descriptors
          are available (and more practical) with Linux 2.2.x, some versions
          of that kernel are highly unstable, and the exact sequence of steps
          is still being debated from the anecdotal information available.
          Watch for more information on this soon.

          You need to do these things:

   1. Get a patched Linux 2.0.36, like this one. This is a gzipped tar
      archive. It is the kernel source code, and you will have to build and
      install it. If I've lost you already, stop now, before it's too late.
      Find someone else to do this for you.

   2. Worship the name of Oskar Pearson, from whom the patches flow. His
      work, not mine.

   3. Configure it to your taste with drivers and options. See that it is
      configured for 3000 file descriptors. Remember that number. If you
      change it, you will need to do some arithmetic and code-reading on
      your own.

   4. Build it.

   5. Install it.

   6. Thank Linus Torvalds that it exists.

   7. Reboot.

   8. Edit /usr/include/gnu/types.h and change __FD_SETSIZE to 3072 (this is
      where the arithmetic comes in. I don't recall how this number is
      generated from the 3000, but I can reasonably bet you that for 4000 it
      would not be 4072. Got it? If you dick with these numbers without
      understanding the arithmetic (which entails being able to find it) you
      are buying a heap of trouble).

   9. How many file descriptors do you have set? Round that up to the next
      power of 2 and then add a couple of powers of two: 3000 -> 4096 ->
      16384. Let's call that number $X.

  10. echo $X > /proc/sys/file-max    # To increase the system-wide
      file-descriptor limit.

  11. Increase your shell's current working resource limit: ulimit -n 3000
      (look it up in the manpages).

  12. Reconfigure Squid. If it doesn't see the expanded number, go back and
      figure out which of these steps you didn't get right.

  13. Recompile Squid.

  14. Reinstall Squid.

  15. Modify your Squid startup script to do the echo and the ulimit
      mentioned earlier, before starting Squid.

  16. Start it up. Check your cache log. All should be well.

  17. If not, then you screwed up. Fix it.
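Steps 9-11 and 15 can be collected into a fragment of the Squid startup
script; a minimal sketch, assuming the 3000-descriptor kernel build from
step 3 (the /proc path is the one the HOWTO gives; the privileged writes
are left commented out here because they need root and the right kernel):

```shell
#!/bin/sh
# Raise file-descriptor limits before starting Squid.

FDS=3000                 # descriptors the kernel was built with (step 3)

# Step 9: round up to the next power of two, then add two more powers.
X=1
while [ "$X" -lt "$FDS" ]; do
    X=$((X * 2))         # 3000 -> 4096
done
X=$((X * 4))             # 4096 -> 16384

echo "system-wide limit would be: $X"

# Step 10: system-wide limit (uncomment on the real box):
# echo $X > /proc/sys/file-max

# Step 11: per-process limit, inherited by squid when it starts:
# ulimit -n $FDS

# ...then exec the normal squid startup here.
```

On a 2.2.x kernel the system-wide path is /proc/sys/fs/file-max instead, as
in the FAQ excerpt earlier in this message.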

"The above are given in squid docs "


Please help; I am waiting for your reply.

From,
Kamal
Systems Administrator
Nettlinx Limited

Voice: +90-040-3232200