Re: [cobalt-users] Newbie Questions (put away those paddles!)
- Subject: Re: [cobalt-users] Newbie Questions (put away those paddles!)
- From: Kris Dahl <krislists@xxxxxxxxxxxxx>
- Date: Wed Jul 19 13:41:50 2000
on 7/10/00 9:24 PM, Paul Sherrard at psherrard@xxxxxxxxxxxx wrote:
>
>> POP, IMAP or web-based? POP users tend to use less space than
>> IMAP and web-based and
>> POP uses *a lot* less system resources than the latter two. Tell
>> us a little about
>> your typical mail user. If you're going to use a RaQ3 as a mail
>
> POP3-based... mostly business users, none with incredibly high demands. Our
> customers do want up to 200 email addresses for each domain, however. Can a
> single RaQ3 (able to support ~200 sites) handle the daily requirements of
> 40,000 mail users? I do have other options (I have some nice Dual PIII 600s
> with a lot more to 'em than the RaQs).
The main problem I see with using Cobalt gear is that you just don't get the
availability options. And I am very skeptical of the software RAID in the
new RaQ4s.
If I were you, I'd purchase a couple of really beefy servers to use for
email, and only email. These should have RAID 5 at a minimum, plus plenty
of memory, disk space, etc. I see absolutely no advantage, and a whole lot
of disadvantages, in spreading mail across 400 servers rather than running
it on a couple of much more powerful ones. Obviously, administration would
be much easier with mail hosted on a few servers.
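To put rough numbers on the consolidation argument, here is a quick
back-of-the-envelope in Python. Every constant in it is an assumption on my
part (mailbox quota, traffic, peak factor), not a measurement -- plug in
your own figures:

# Rough sizing for 40,000 POP users on a couple of consolidated boxes.
# All constants below are assumptions, not measurements.
USERS = 40000
SERVERS = 2                # "a couple of really beefy servers"
QUOTA_MB = 10              # assumed average mailbox size
MSGS_PER_USER_DAY = 30     # assumed average traffic
PEAK_FACTOR = 5            # assumed peak-to-average ratio

print("users per server: %d" % (USERS // SERVERS))
print("total mail store: %.0f GB" % (USERS * QUOTA_MB / 1024.0))
avg = USERS * MSGS_PER_USER_DAY / 86400.0
print("deliveries/sec: %.1f avg, ~%.0f peak" % (avg, avg * PEAK_FACTOR))

Even at a modest 10MB quota that is roughly 400GB of spool, which says more
about the disk array you need than about the CPU -- hence RAID 5 and plenty
of disk.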
>> What will these 400 servers be used for and how did you decide
>> 400 was the right
>> number? RaQs are good basic web hosting appliances, are
>
> Well, I've got 11 hardware racks to fill for a server farm, and the small
> footprint of the RaQ,
> along with its GUI (ease of use for the customer, ease of implementation for
> us) made it a standout candidate. We've purchased a bundle of servers for
> testing so far, and are definitely leaning in this direction. They'll be
> used for both shared and dedicated webhosting, with a focus on a 1-client
> per server setup for businesses, and as many shared individual clients as
> can be put on a machine while maintaining high levels of performance. We're
> looking at a large number of customers, so 400 was our first "ballpark"
> figure. I imagine we'll have to expand as we go.
Well, as far as web servers are concerned, they are pretty decent machines.
However, if I were doing it, I would offer the RaQs as dedicated servers but
purchase different machines for the shared hosting. If one dedicated server
fails (say, a hard drive), no problem--restore from tape to a spare. Sure,
the client isn't going to be happy about it, but it happens. If one shared
server fails, you have to do the same thing, but instead of one unhappy
client you have 200.
You'd be amazed, BTW, at what density you can get on something like a Dual
PIII with a gig of memory and four 30GB UW SCSI 10K RPM drives (RAID 5 with
one hot spare). Seriously, I am aware of a large provider that has
approximately 1,000-1,500 users on a similarly configured system while
maintaining very reasonable performance. That is more than five times the
density of a RaQ3, in a 2U case, at well under five times the price--call it
$10,000 apiece. That works out to roughly 2-4 times the density per U, at
about three quarters the price, with far less chance of failure than five
RaQ3s. Hell, it can even survive two disk failures, as long as the hot spare
finishes rebuilding before the second one goes.
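If you want to sanity-check those ratios, the arithmetic is simple. The
RaQ3 price below is my own assumption; the rest are the figures above:

# Density/cost comparison: RaQ3 vs. the 2U box described above.
raq_users, raq_u, raq_price = 200, 1, 2700     # RaQ3 price is assumed
box_users, box_u, box_price = 1250, 2, 10000   # midpoint of 1,000-1,500

print("users/U: RaQ3 %d, 2U box %d"
      % (raq_users // raq_u, box_users // box_u))
print("$/user:  RaQ3 $%.2f, 2U box $%.2f"
      % (raq_price / raq_users, box_price / box_users))
print("one 2U box $%d vs. five RaQ3s $%d" % (box_price, 5 * raq_price))

That gives 625 users per U against 200, and $10,000 against $13,500 for the
equivalent five RaQ3s -- the "about three quarters the price" above.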
Additionally, you can off-load the database applications to one or two
dedicated database servers.
Obviously you'd need to invest some time in developing tools to help
automate administration--which you are going to need to do regardless of
what equipment you choose.
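By "tools" I mean even something as simple as the sketch below: a script
that pushes the same change to every box. The hostnames and the adduser
call are placeholders, and it assumes key-based ssh from your admin station:

# Minimal fleet-administration sketch: run one command on every host.
import subprocess

HOSTS = ["web1.example.com", "web2.example.com"]  # your fleet here

def run_everywhere(*cmd):
    # Stop on the first failure so a bad box doesn't go unnoticed.
    for host in HOSTS:
        subprocess.run(["ssh", host] + list(cmd), check=True)

# e.g. create the same account on every shared box
run_everywhere("adduser", "newclient")

The same loop works for pushing config files, restarting daemons, or
gathering disk usage.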
> I've still got all my other RedHat boxes and an NT box or two also being
> used for webhosting if need be. Some clients may also have database
> demands, and I was wondering how the RaQs "stacked up", as it were. We're
> running Oracle internally, but I don't think any of our customers have that
> sort of death wish, so Postgres should be fine for them.
A very cost-effective way of offering database connectivity is to have a
single dedicated server (or a couple, depending on your needs). Run Oracle
if you want, or even MySQL. Just make sure the machine has RAID 5 or 50,
hot spares, lots of memory, and plenty of bandwidth to the switches.
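From a client's point of view, a central database server just means the
connection points at the shared box instead of localhost. A sketch using
the MySQL-python driver -- the hostname, credentials, and table are made up
for illustration:

# Connect from a web box to the central database server.
import MySQLdb  # the MySQL-python driver

conn = MySQLdb.connect(host="db1.example.com",  # central DB box, not localhost
                       user="client42", passwd="secret", db="client42")
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM orders")  # hypothetical client table
print(cur.fetchone()[0])
conn.close()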
Another logistical problem with getting 400 1U machines is the number of
switches you're going to need. Say you get an HP 4000M chassis with 3-4
extra 8-port cards and an extra power supply, which should fill it up. That
will give you at most 64 ports per switch, at 4U per switch. You'll need
about 6-7 of them, which is 24-28U, or more than half a standard-height
rack. You'll be spending about $90 per port ($35,000/400--and that is
pretty good) in initial outlay, plus $700+ a month just for the rack space
to hold them.
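Spelled out (prices are the rough figures above, not vendor quotes):

# Back-of-envelope switch math for 400 1U machines.
import math

machines = 400
ports_per_switch = 64    # HP 4000M filled out with 8-port cards
u_per_switch = 4
total_outlay = 35000.0   # rough figure for all the switching gear

switches = math.ceil(machines / ports_per_switch)
print("switches: %d, rack units: %d" % (switches, switches * u_per_switch))
print("cost per port: $%.2f" % (total_outlay / machines))

That comes out to 7 switches, 28U, and $87.50 a port.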
Although a Catalyst 4000 with a Supervisor engine may be more cost-effective,
you still need some serious switching hardware for these machines.
Anyway, all that is just from my experience. I'd rather go with higher-end
equipment and develop my own management tools.
-k