Basic I/O Monitoring on Linux
This is my fourth week at Pythian and in Canada, and I'm starting to get back to my normal life cycle: my personal things are getting sorted and my working environment is set. Here at Pythian I'm in a team of four people together with Christo, Joe, and Virgil. (I should write another post about starting at Pythian; I will one day.) Yesterday, I asked Christo to show me how he monitors I/O on Linux. I needed to collect statistics on a large Oracle table on a production box, and wanted to keep an eye on the impact. So we grabbed Joe as well and all three of us sat around my PC. While we were discussing, Paul was around and showed some interest in the topic as well (otherwise, why would all three of us be involved?). Anyway, Dave and Paul thought that this would make a nice case for a blog post. So here we are... Indeed, while the technique we discuss here is basic, it gives a good overview and is very easy to use. So let's get focused.

We will use the iostat utility. In case you need to know where to find more about it: right, the man pages. We will use the following form of the command: [code light="true"]iostat -x [-d] <interval>[/code]
- -x option displays extended statistics. You definitely want it.
- -d is optional. It limits the report to device statistics, dropping the CPU utilization section to avoid cluttering the output. If you leave it out, you will get the following couple of lines in addition: [code light="true"]
avg-cpu:  %user   %nice    %sys %iowait   %idle
           6.79    0.00    3.79   16.97   72.46
[/code]
- <interval> is the number of seconds iostat waits between each report. Without a specified interval, iostat displays statistics accumulated since the system was booted and then exits, which is not useful in our case. Specifying the number of seconds causes iostat to print periodic reports in which IO statistics are averaged over the time period since the previous report. E.g., specifying 5 makes iostat dump 5 seconds of average IO characteristics every 5 seconds until it's stopped (there is a sample run right after this list).
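To make the columns below concrete, here is roughly what a run can look like. This is an illustrative sample, not output from the box in question: the device name and most of the numbers are invented (the read/write rates and service time simply match the figures used in the calculations below), and the exact header layout depends on your sysstat version:

[code light="true"]
$ iostat -x -d 5
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.50   2.10 10.18  9.78  134.13  178.84    67.07    89.42    15.68     0.21   10.52   8.88  17.72
[/code]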
The columns of the extended report mean the following:

- r/s and w/s: respectively, the number of read and write requests issued by processes to the OS for a device.
- rsec/s and wsec/s: sectors read/written per second (each sector is 512 bytes).
- rkB/s and wkB/s: kilobytes read/written per second.
- avgrq-sz: average sectors per request (for both reads and writes). Do the math: (rsec/s + wsec/s) / (r/s + w/s) = (134.13 + 178.84) / (10.18 + 9.78) = 15.6798597. If you want it in kilobytes, divide by 2. If you want it separately for reads and writes, do your own math using rkB/s and wkB/s.
- avgqu-sz: average queue length for this device.
- await: average response time (ms) of IO requests to a device. The name is a bit confusing, as this is the total response time including the wait time in the requests queue (let's call it qutim) and the service time the device spent working on the requests (see the next column, svctm). So the formula is await = qutim + svctm.
- svctm: average time (ms) a device was servicing requests. This is a component of the total response time of IO requests.
- %util: this is a pretty confusing value. The man page defines it as "Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%." A bit difficult to digest. Perhaps it's better to think of it as the percentage of time the device was servicing requests as opposed to being idle. To understand it better, here is the formula: utilization = ((read requests + write requests) * service time in ms / 1000 ms) * 100%, or %util = (r/s + w/s) * svctm / 10 = (10.18 + 9.78) * 8.88 / 10 = 17.72448 (both this and the avgrq-sz calculation are sketched right after this list).
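If you would rather have the shell do the arithmetic, here is a minimal sketch of the two calculations above, using the illustrative sample values from the output shown earlier (r/s = 10.18, w/s = 9.78, rsec/s = 134.13, wsec/s = 178.84, svctm = 8.88):

[code light="true"]
# avgrq-sz = (rsec/s + wsec/s) / (r/s + w/s), in 512-byte sectors
awk 'BEGIN { printf "avgrq-sz = %.2f sectors\n", (134.13 + 178.84) / (10.18 + 9.78) }'

# %util = (r/s + w/s) * svctm / 10   (svctm is in ms, so /1000 * 100% = /10)
awk 'BEGIN { printf "utilization = %.2f%%\n", (10.18 + 9.78) * 8.88 / 10 }'
[/code]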
qutim, the average time a request spends waiting in the queue, is not reported directly; it is simply await - svctm. If it's insignificant compared to svctm, the IO device is not saturated. When it becomes comparable to svctm and goes above it, requests are queued longer and a major part of the response time is actually time spent waiting in the queue. In other words, the figure in the await column should be as close to the one in the svctm column as possible. If await goes much above svctm, watch out! The IO device is probably overloaded. A rough way to keep an eye on this is sketched below.
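Here is a hedged sketch that derives the queue time from each device line as iostat runs. It assumes the column layout shown in the sample output above (await in field 12, svctm in field 13), so check the header of your sysstat version before relying on it:

[code light="true"]
# Print await, svctm and the derived queue time (await - svctm) for each device line
iostat -x -d 5 | awk '/^(sd|hd|dm-)/ { printf "%-8s await=%6.2f svctm=%6.2f qutim=%6.2f\n", $1, $12, $13, $12 - $13 }'
[/code]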
There is much to say about IO monitoring and interpreting the results; perhaps this is only the first of a series of posts about IO statistics. At Pythian we often come across different environments with specific characteristics and various requirements that our clients have. So stay tuned: more to come.
Update 12-Feb-2007: You might also find the Oracle Disk IO Basics session of Pythian Goodies useful.