How to check resources and processes

You can find the process ID (PID) of the parent process by running something like ps -aux | grep pattern. If you are lucky there will only be one match, but if there are also child processes, it gets a bit trickier.

In general, modern versions of ps let you specify exactly which fields to output with the -o flag, so here's how I'd approach that for a process called "httpsd":

$ ps -o user,pid,ppid,command -ax | grep httpsd
root 47248 1 /usr/local/apache/bin/httpsd
www 47249 47248 /usr/local/apache/bin/httpsd
www 47250 47248 /usr/local/apache/bin/httpsd
www 47251 47248 /usr/local/apache/bin/httpsd
www 47252 47248 /usr/local/apache/bin/httpsd
www 47253 47248 /usr/local/apache/bin/httpsd
www 92713 47248 /usr/local/apache/bin/httpsd

—————
In this output you can see that the first column shows the user ID, the second shows the PID, and the third the PPID. Notice that the parent process not only has a PPID of 1, but is also the only one running as root. This makes it easy to mask that one out and identify all the processes spawned by the "httpsd" program:

ps -o user,pid,ppid,command -ax | grep httpsd | grep -v root

But that’s not exactly what you’re asking. Instead, you’re asking me how to identify processes based on their PPID. To do that, let’s drop the PID of the parent into a variable first:

pid=$(ps -o user,pid,ppid,command -ax | grep httpsd | \
grep root | awk '{print $2}')

Now I have a variable “pid” that has the process ID of the parent application. To find all the child apps, I just search for those that have that particular PID as their PPID.

That’s actually just a slightly more complex awk script needed, as demonstrated in this snippet:

for child in $(ps -o pid,ppid -ax | \
awk “{ if ( \$2 == $pid ) { print \$1 }}”)
do
echo “Killing child process $child because ppid = $pid”
kill $child
done
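On systems with procps, the same parent/child lookup can be done without awk at all: pgrep -P prints the PIDs of a given parent's direct children. A minimal sketch (the sleep here is just a stand-in child process for the demo):

```shell
# Start a throwaway background job so this shell has a known child.
sleep 60 &
child=$!

# pgrep -P <ppid> lists the PIDs of <ppid>'s direct children,
# so this prints $child along with any other children of the shell.
pgrep -P $$

# pkill -P <ppid> would signal all of them in one step.
kill "$child"
```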

Thanks to [http://www.askdavetaylor.com/how_do_i_find_all_child_processes_in_unix.html|Ask Dave]

—————–

lsof

The lsof tool lists all the open files on a Linux system. Remember that in true Unix spirit, almost everything is a file: you access your hardware through files located in /dev, information about the CPU, memory, and other devices lives in files under /proc, and network connections, a.k.a. sockets, are also sometimes represented as files.

lsof becomes really handy when you want to know what files a process has currently opened, or which processes are currently acting on a certain file:

$ lsof
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
init 1 root cwd DIR 8,1 4096 2 /
init 1 root rtd DIR 8,1 4096 2 /
init 1 root txt REG 8,1 533224 1658100 /sbin/init
init 1 root 10u FIFO 0,14 2941 /dev/initctl
migration 2 root cwd DIR 8,1 4096 2 /
migration 2 root rtd DIR 8,1 4096 2 /

lsof lists the running command, its process ID, the user to whom the process belongs, file descriptor of the opened file, type of the file opened, major and minor device numbers of the file, size of the file, node number of its inode, and the name of the file opened or the mount point of the device being acted on.

To list files opened by processes belonging to a particular user, use:

$ lsof -u user

To see a list of files opened by a particular process, use:

$ lsof -p pid

Sometimes, you are unable to unmount a particular device because the system reports it as busy, even though you think it is not used by any process. To see which processes are still using it, use:

$ lsof /dev/mount-point

This will give you the list of processes using the device. Kill them, and you are ready to unmount the device.

————-
Kill Command
Kill a Process

This section is rated “R” for violence (just kidding).

When you need to terminate a process, you use the kill
command. The kill command sends a termination signal to the
process or group specified. The default is to send a TERM (15)
signal to the process(es). However, a process can be
programmed to trap signals and perform specific functions, or
ignore the signal entirely.

Below we see the xscreensaver process (2609) which we want to
terminate.

pgrep xscreen

2609

kill 2609

When you have a stubborn process which traps your kill
command and refuses to terminate, use the -9 (KILL) signal. The
-9 signal cannot be trapped or ignored by a process. If the
xscreensaver process did not terminate after the last kill
command, we would then enter:

kill -9 2609
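A common pattern is to try TERM first and only escalate to KILL if the process is still alive a moment later. A small sketch (PID 2609 is just the example from the text; kill -0 merely tests whether the process still exists):

```shell
pid=2609                              # example PID; substitute your own

kill "$pid" 2>/dev/null               # polite: send SIGTERM (15)
sleep 2                               # give it a moment to clean up

if kill -0 "$pid" 2>/dev/null; then   # still alive?
    kill -9 "$pid"                    # forceful: SIGKILL cannot be trapped
fi
```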

To list all processes owned by a particular user:

ps -u username

If Apache is using a lot of resources, you can increase MaxClients in the httpd.conf file.

Optimize and Tweak High-Traffic Servers

Focus: Linux, Apache 1.3+, [PHP], [MySQL]
Notes: Use at your own risk. If this has any errors, please let me know and I will correct them.

Summary
If you are reaching the limits of your server running Apache serving a lot of dynamic content, you can either spend thousands on new equipment or reduce bloat to increase your server capacity by anywhere from 2 to 10 times. This article concentrates on important and poorly-documented ways of increasing capacity without additional hardware.

Problems
There are a few common things that can cause server load problems, and a thousand uncommon. Let’s focus on the common:
Drive Swapping – too many processes (or runaway processes) using too much RAM
CPU – poorly optimized DB queries, poorly optimized code, runaway processes
Network – hardware limits, moron attacks

Solutions: The Obvious
Briefly, and for completeness, here are the most obvious solutions:

Use "top" and "ps aux" to check for processes that are using too much CPU or RAM.
Use “netstat -anp | sort -u” to check for network problems.

Solutions: Apache’s RAM Usage
First and most obvious, Apache processes use a ton of RAM. This minor issue becomes a major issue when you realize that after each process has done its job, the bloated process sits and spoon-feeds data to the client instead of moving on to bigger and better things. This is further compounded by a bit of essential info that should really be more common knowledge:

If you serve 100% static files with Apache, each httpd process will use around 2-3 megs of RAM.
If you serve 99% static files & 1% dynamic files with Apache, each httpd process will use from 3-20 megs of RAM (depending on your MOST complex dynamic page).

This occurs because a process grows to accommodate whatever it is serving, and NEVER decreases again unless that process happens to die. Quickly, unless you have very few dynamic pages and major traffic fluctuation, most of your httpd processes will take up an amount of RAM equal to the largest dynamic script on your system. A smart web server would deal with this automatically. As it is, you have a few options to manually improve RAM usage.

Reduce wasted processes by tweaking KeepAlive
This is a tradeoff. KeepAliveTimeout is the amount of time a process sits around doing nothing but taking up space. Those seconds add up in a HUGE way. But using KeepAlive can increase speed for both you and the client – disable KeepAlive and the serving of static files like images can be a lot slower. I think it’s best to have KeepAlive on, and KeepAliveTimeout very low (like 1-2 seconds).
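In httpd.conf, that advice looks roughly like the following (the timeout value is illustrative; tune it for your own traffic, and note MaxKeepAliveRequests is shown at its usual default):

```apache
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100
```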

Limit total processes with MaxClients
If you use Apache to serve dynamic content, your simultaneous connections are severely limited. Exceed a certain number, and your system begins cannibalistic swapping, getting slower and slower until it dies. IMHO, a web server should automatically take steps to prevent this, but instead they seem to assume you have unlimited resources. Use trial & error to figure out how many Apache processes your server can handle, and set this value in MaxClients. Note: the Apache docs on this are misleading – if this limit is reached, clients are not “locked out”, they are simply queued, and their access slows. Based on the value of MaxClients, you can estimate the values you need for StartServers, MinSpareServers, & MaxSpareServers.
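As a rough sketch of the related Apache 1.3/prefork directives (all numbers illustrative; derive MaxClients from free RAM divided by your per-process footprint, found by trial and error as described above):

```apache
# e.g. ~500 MB available / ~10 MB per httpd process => ~50 clients
MaxClients       50
StartServers      5
MinSpareServers   5
MaxSpareServers  10
```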

Force processes to reset with MaxRequestsPerChild
Forcing your processes to die after a while makes them start over with low RAM usage, and this can reduce total memory usage in many situations. The less dynamic content you have, the more useful this will be. This is a game of catch-up, with your dynamic files constantly increasing total RAM usage, and restarting processes constantly reducing it. Experiment with MaxRequestsPerChild – even values as low as 20 may work well. But don't set it too low, because creating new processes does have overhead. You can figure out the best settings under load by examining "ps axu --sort:rss". A word of warning: using this is a bit like using heroin. The results can be impressive, but are NOT consistent – if the only way you can keep your server running is by tweaking this, you will eventually run into trouble. That being said, by tweaking MaxRequestsPerChild you may be able to increase MaxClients as much as 50%.
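With GNU procps, the memory-sorted process listing mentioned above can also be written with the = form of the sort option, descending so the fattest processes come first:

```shell
# Show the 5 processes using the most resident memory (RSS).
# --sort=-rss sorts descending by RSS; the first line is the header.
ps aux --sort=-rss | head -6
```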

Apache Further Tweaking
For mixed-purpose sites (say image galleries, download sites, etc.), you can often improve performance by running two different Apache daemons on the same server. For example, we recently compiled an Apache to serve up just images (gifs, jpegs, pngs, etc.) for a site that has thousands of stock photos. We put both the main Apache and the image Apache on the same server and noticed a drop in load and RAM usage. Considering a page had about 20-50 image calls, they were all off-loaded to the stripped-down Apache, which could run 3x more servers with the same RAM usage as the regular Apache on the server.

Finally, think outside the box: replace or supplement Apache

Use a 2nd server
You can use a tiny, lightning-fast server to handle static documents & images, and pass any more complicated requests on to Apache on the same machine. This way Apache won't tie up its multi-megabyte processes serving simple streams of bytes. Apache then only gets used when, for example, a PHP script needs to be executed. Good options for this are:

TUX / “Red Hat Content Accelerator” – http://www.redhat.com/docs/manuals/tux/
kHTTPd – http://www.fenrus.demon.nl/
thttpd – http://www.acme.com/software/thttpd/

Try lingerd
Lingerd takes over the job of feeding bytes to the client after Apache has fetched the document, but requires kernel modification. Sounds pretty good, haven’t tried it. lingerd – http://www.iagora.com/about/software/lingerd/

Use a proxy cache
A proxy cache can keep a duplicate copy of everything it gets from Apache, and serve the copy instead of bothering Apache with it. This has the benefit of also being able to cache dynamically generated pages, but it does add a bit of bloat.

Replace Apache completely
If you don’t need all the features of Apache, simply replace it with something more scalable. Currently, the best options appear to be servers that use a non-blocking I/O technology and connect to all clients with the same process. That’s right – only ONE process. The best include:

thttpd – http://www.acme.com/software/thttpd/
Caudium – http://caudium.net/index.html
Roxen – http://www.roxen.com/products/webserver/
Zeus ($$) – http://www.zeus.co.uk

Solutions: PHP’s CPU & RAM Usage
Compiling PHP scripts is usually more expensive than running them. So why not use a simple tool that keeps them precompiled? I highly recommend Turck MMCache. Alternatives include PHP Accelerator, APC, & Zend Accelerator. You will see a speed increase of 2x-10x, simple as that. I have no stats on the RAM improvement at this time.

Solutions: Optimize Database Queries
This is covered in detail everywhere, so just keep a few important notes in mind: one bad query statement running often can bring your site to its knees. Two or three bad query statements don't perform much differently than one. In other words, if you optimize one query you may not see any server-wide speed improvement; if you find and optimize ALL your bad queries, you may suddenly see a 5x server speed improvement. The log-slow-queries feature of MySQL can be very helpful.

How to log slow queries:

# vi /etc/rc.d/init.d/mysqld

Find this line:
SAFE_MYSQLD_OPTIONS="--defaults-file=/etc/my.cnf"

change it to:
SAFE_MYSQLD_OPTIONS="--defaults-file=/etc/my.cnf --log-slow-queries=/var/log/slow-queries.log"

As you can see, we added the option of logging all slow queries to /var/log/slow-queries.log.
Save the file and quit vi (ZZ).

touch /var/log/slow-queries.log
chmod 644 /var/log/slow-queries.log

Restart MySQL:
service mysqld restart
mysqld will log all slow queries to this file.
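The same option can instead be set in /etc/my.cnf, which survives init-script changes across package upgrades. A sketch, with an illustrative long_query_time threshold; note that later MySQL versions (5.1+) renamed this option to slow_query_log / slow_query_log_file, so check your version's documentation:

```ini
[mysqld]
log-slow-queries = /var/log/slow-queries.log
long_query_time  = 2
```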

References
These sites contain additional, better-known optimization methods.

Tuning Apache and PHP for Speed on Unix – http://php.weblogs.com/tuning_apache_unix
Getting maximum performance from MySQL – http://www.f3n.de/doku/mysql/manual_10.html
System Tuning Info for Linux Servers – http://people.redhat.com/alikins/system_tuning.html
mod_perl Performance Tuning (applies outside perl) – http://perl.apache.org/docs/1.0/guide/performance.html

Once again, if this has any errors or important omissions, please let me know and I will correct them.
If you experience a capacity increase on your server after trying the optimizations, let me know!

Found from http://www.crucialp.com/resources/tutorials/server-administration/optimize-tweak-high-traffic-servers-apache-load.php

———————————————————

netstat -plant (list all TCP sockets numerically, with the owning PID/program name)

Chown Command Explained

chown's most common usages:

chown user1 file-or-folder: this will make user1 the owner of the file or folder

chown user1:user1 file-or-folder: this will set both the owner and the group of the file or folder to user1

chown -R user1 folder: this will make user1 the owner of the folder and everything in the folder
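A quick self-contained demo of the owner:group form (chowning a file to your own user and primary group needs no root; stat shows the result, using GNU coreutils format flags):

```shell
touch demo.txt

# Set owner and group explicitly; chowning to yourself works unprivileged.
chown "$(id -un):$(id -gn)" demo.txt

# Verify: %U is the owning user, %G the owning group (GNU stat).
stat -c '%U:%G' demo.txt

rm demo.txt
```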

SMTP ERROR CODES

211 System status, or system help reply
214 Help message
220 Service ready
221 Service closing transmission channel
250 Requested mail action okay, completed
251 User not local; will forward to &lt;forward-path&gt;
354 Start mail input; end with &lt;CRLF&gt;.&lt;CRLF&gt;
421 Service not available, closing transmission channel
450 Requested mail action not taken: mailbox unavailable
451 Requested action aborted: local error in processing
452 Requested action not taken: insufficient system storage
500 Syntax error, command unrecognized
501 Syntax error in parameters or arguments
502 Command not implemented
503 Bad sequence of commands
504 Command parameter not implemented
550 Requested action not taken: mailbox unavailable [E.g., mailbox not found, no access]
551 User not local; please try &lt;forward-path&gt;
552 Requested mail action aborted: exceeded storage allocation
553 Requested action not taken: mailbox name not allowed [E.g., mailbox syntax incorrect]
554 Transaction failed

Installing coldfusion with Apache

Compiling and Installing the Apache module for CFMX from source code
This example was tested on a server running Red Hat Linux 8 and Apache 2.0.46 built from source.

Install updater 3 or greater
The source code for the ColdFusion Apache module was included in CFMX Updater 3, so step one is to install Updater 3 or higher from Macromedia. In this particular case CFMX was already installed using the standalone server option, so if you are installing from scratch, choose that option.

Extract the module source code
The source is located in the coldfusionmx/runtime/lib/ directory inside the wsconfig.jar file. Jar files are Java Archive files, and they use the same compression as zip files, so you can treat them like zip files.

cp /opt/coldfusionmx/runtime/lib/wsconfig.jar .
unzip wsconfig.jar

This creates several directories; the source for the Apache module resides in connectors/src/. Unzip the file ApacheModule.zip:
cd connectors/src
unzip ApacheModule.zip

Compile the Apache Module
In the src directory is a file called ApacheBuildInstructions.txt; read it, as it is the basis of the instructions for this step.

cat ApacheBuildInstructions.txt

We have crafted a build script that does most of the work for you; you just need to make sure that the paths in it are correct:
#!/bin/bash
#CFMX path eg: /opt/coldfusionmx
export CFMX=/opt/coldfusionmx

#apache path eg: /usr/local/apache2
export APACHE_PATH=/usr/local/apache2

#apache bin path eg: $APACHE_PATH/bin
export APACHE_BIN=$APACHE_PATH/bin

#CFMX connector path eg $CFMX/runtime/lib/wsconfig/1
export CFMX_CONNECTOR=$CFMX/runtime/lib/wsconfig/1

#stop apache
$APACHE_BIN/apachectl stop

${APACHE_BIN}/apxs -c -Wc,-w -n jrun20 -S LIBEXECDIR=${CFMX_CONNECTOR} mod_jrun20.c \
jrun_maptable_impl.c jrun_property.c jrun_session.c platform.c \
jrun_mutex.c jrun_proxy.c jrun_ssl.c

${APACHE_BIN}/apxs -i -n jrun20 -S LIBEXECDIR=${CFMX_CONNECTOR} mod_jrun20.la

strip $CFMX_CONNECTOR/mod_jrun20.so

Before you run this script (note: you can also just type it in by hand) make sure that the directory for the CFMX_CONNECTOR exists (runtime/lib/wsconfig/1). You will probably need to make this directory:
mkdir /opt/coldfusionmx/runtime/lib/wsconfig/1

If the directory already exists, create a directory called 2 instead of 1, and update the CFMX_CONNECTOR variable in the script. Now save the script above in a file; we assume you called it build.sh. You need to mark it as executable with chmod, and then run it:
chmod u+x build.sh
./build.sh

Now you have built the mod_jrun20.so file, and it resides in your CFMX_CONNECTOR directory.

Configure ColdFusion MX to work with Apache

First stop ColdFusion MX:

service coldfusionmx stop

Now edit the file /opt/coldfusionmx/runtime/servers/default/SERVER-INF/jrun.xml; it is a good idea to keep a backup of this file before you edit it. Around line 350 you will find the two service definitions you need to change: the built-in JRun web server, which listens on port 8500 and should be deactivated (set its deactivated attribute to true), and the proxy service, which listens on port 51010 and should be enabled (set its deactivated attribute to false) so that the Apache connector can reach it. The neighbouring handler-thread and timeout values (10, 10, 300, 1000) can be left at their defaults.

The cacheRealPath attribute may be left set to true if you're only running one web site on the Apache server, but if you are running multiple sites you will want to set it to false.

Configure apache for ColdFusion
The Apache httpd.conf file needs to be told to load the module, and also that index.cfm should be used as a directory index. To accomplish this, I like to create a directory called conf.d in my Apache directory (/usr/local/apache2/conf.d/) and then create a file called coldfusion.conf. If I have other modules such as PHP, I create a php.conf file. This lets you easily edit module-specific settings. To tell httpd.conf about my conf.d directory, I add the following line to the httpd.conf file:

Include conf.d/*.conf

You will want to make sure that your httpd.conf file does not already have this line in it; to search your file, run:

grep conf.d httpd.conf

It will output nothing if it does not find the string conf.d in httpd.conf.
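The check and the edit can also be combined into one idempotent command that only appends the Include line when it is missing, so running it twice does no harm:

```shell
# Append the Include line only if httpd.conf does not already contain it.
grep -q 'Include conf.d/\*\.conf' httpd.conf || \
    echo 'Include conf.d/*.conf' >> httpd.conf
```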

Now let's create conf.d/coldfusion.conf with the following contents:

LoadModule jrun_module "/opt/coldfusionmx/runtime/lib/wsconfig/1/mod_jrun20.so"

JRunConfig Verbose false
JRunConfig Apialloc false
JRunConfig Ssl false
JRunConfig Ignoresuffixmap false
JRunConfig Serverstore "/opt/coldfusionmx/runtime/lib/wsconfig/1/jrunserver.store"
JRunConfig Bootstrap 127.0.0.1:51010
#JRunConfig Errorurl optionally redirect to this URL on errors
AddHandler jrun-handler .cfm .cfc .cfml .jsp .jws

DirectoryIndex index.cfm

Start ColdFusion and Apache

service coldfusionmx start

/usr/local/apache2/bin/apachectl start

Password protect web folders on linux- htaccess

Create a file called .htaccess in the folder you want to protect and type in the following:

AuthUserFile /usr/local/apache/htdocs/admin/.htpasswd
AuthName “Authorization Required”
AuthType Basic
require valid-user

The htpasswd file can be created using the following method (from SSH; shell access is needed):

htpasswd -c /home/user/.htpasswd admin

The above will create a password file with the user admin; the password will be encrypted.

In summary, the .htaccess file goes into the folder you wish to protect, and the .htpasswd file goes into a non-public folder.
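If htpasswd is not installed on the box, openssl can generate a compatible Apache MD5 ($apr1$) hash for a basic-auth entry. A sketch only; the user "admin" and password "secret" are placeholders:

```shell
# Generate an Apache-style MD5 password hash and write a .htpasswd entry.
hash=$(openssl passwd -apr1 secret)
printf 'admin:%s\n' "$hash" > .htpasswd
cat .htpasswd
```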

——-

# mod_rewrite
<Directory "/homedir/htdocs">
Options +FollowSymLinks
AllowOverride All
</Directory>
