VPS disk performance, Digital Ocean 2048MB, 2 core droplet testing

Following yesterday's published VPS performance data, and after reading some of the feedback I received on testing virtualised servers, I wanted to add Digital Ocean's 2-core, 2048MB droplet to the comparison to see what impact the increased specification has on the results. I chose this spec because it is offered at $20, the same price as Linode's base-spec machine, which benchmarked so well yesterday.

The test details are below, but to save you scrolling through all the data to get to them, I've put the summary table at the top.

Test | Digital Ocean 512 | Linode | Digital Ocean 512 (3/1/14) | Digital Ocean 2048 (3/1/14)
Writing speed (MB/s) | 269 | 1200 | 426 | 291
Unbuffered read (MB/s) | 335 | 533 | 874 | 650
Buffered read (MB/s) | 383 | 986 | 545 | 4983
Bonnie read (MB/s) | 328 | 670 | 622 | 692
Bonnie write (MB/s) | 154 @ 44% | 558 @ 99% | 371 @ 64% | 322 @ 76%
Bonnie update (MB/s) | 155 @ 61% | 424 @ 61% | 335 @ 65% | 229 @ 52%
Bonnie random seek | 5756 | +++++ | 13361 | 9613
IO seek (no cache) | n/a | n/a | 8524 IOPS, 33.3 MB/s | 8718 IOPS, 34.1 MB/s
IO seek (cached) | n/a | n/a | 214825 IOPS, 839.2 MB/s | 221504 IOPS, 865.2 MB/s
IO reads (sequential) | n/a | n/a | 2805 IOPS, 701.2 MB/s | 2691 IOPS, 672.7 MB/s

FIO, queue depth = 1
Completion latency (usec) | 233 | 114 | 115 | 106
Bandwidth (MB/s) | 16.2 | 31.57 | 32.01 | 35.71
IO requests complete @ 250usec | 95.00% | 99.97% | 99.54% | 99.57%
CPU utilisation | 52% | 27% | 48.85% | 39.72%

FIO, queue depth = 8
Completion latency (usec) | 476 | 162 | 324 | 235.69
Bandwidth (MB/s) | 61.0 | 173.40 | 103.89 | 124.26
IO requests complete @ 250usec | 15.26% | 92.5% | 58.67% | 70.31%
CPU utilisation | 63% | 80% | 78.19% | 69.94%

It's pretty obvious from these results that both of these droplets are much more competitive with Linode's numbers from yesterday. However, I still noticed a higher standard deviation on Digital Ocean's servers, which holds back the final results. Hopefully this is something DO can tweak and improve.

Misc CPU results, for comparison with the previous data

Sysbench CPU, 1 thread = 38.1972s
Sysbench CPU, 2 threads = 20.3343s
Sysbench Memory = 1679.95 MB/sec

Writing Speed

root@testing:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.56429 s, 301 MB/s
root@testing:~#
root@testing:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.66344 s, 293 MB/s
root@testing:~#
root@testing:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.83381 s, 280 MB/s
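The 291 MB/s write figure in the summary table appears to be the mean of the three runs above; a quick shell loop reproduces it:

```shell
# Average the three dd write runs reported above (301, 293 and 280 MB/s).
# Integer arithmetic is close enough here: 874 / 3 = 291.
total=0; n=0
for speed in 301 293 280; do
  total=$((total + speed))
  n=$((n + 1))
done
echo "mean write speed: $((total / n)) MB/s"
```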

Unbuffered reading speed

root@testing:~# echo 3 > /proc/sys/vm/drop_caches
root@testing:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.50434 s, 714 MB/s
root@testing:~# echo 3 > /proc/sys/vm/drop_caches
root@testing:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.652 s, 650 MB/s
root@testing:~# echo 3 > /proc/sys/vm/drop_caches
root@testing:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.83096 s, 586 MB/s
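The 650 MB/s unbuffered-read figure in the summary table likewise matches the mean of the three runs above, which awk can compute directly:

```shell
# Mean of the three unbuffered read runs above (714, 650 and 586 MB/s)
echo '714 650 586' |
  awk '{ for (i = 1; i <= NF; i++) sum += $i; printf "mean read speed: %.0f MB/s\n", sum / NF }'
```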

Buffered reading speed

root@testing:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.235686 s, 4.6 GB/s
root@testing:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.213416 s, 5.0 GB/s
root@testing:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.223325 s, 4.8 GB/s

Bonnie++ benchmark

root@testing:~# bonnie++ -d /tmp -r 4096 -u root
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
testing          8G   598  99 330718  76 234768  52  2201  98 709220  81  9613 265
Latency             26201us     783ms     528ms   32834us     135ms    4731us
Version  1.96       ------Sequential Create------ --------Random Create--------
testing             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16 24115  83 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               669us     445us     735us     209us      26us     306us
1.96,1.96,testing,1,1391452904,8G,,598,99,330718,76,234768,52,2201,98,709220,81,9613,265,16,,,,,24115,83,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26201us,783ms,528ms,32834us,135ms,4731us,669us,445us,735us,209us,26us,306us
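That final comma-separated line is bonnie++'s machine-readable summary, handy if you want to graph runs against each other. Reading the positions off the line itself, fields 10, 16 and 18 hold block write (K/sec), block read (K/sec) and random seeks/sec, so awk can pull out the headline numbers:

```shell
# Extract block write, block read and random seeks from the bonnie++ 1.96
# CSV line (trimmed after field 19; the fields we need all come earlier).
csv='1.96,1.96,testing,1,1391452904,8G,,598,99,330718,76,234768,52,2201,98,709220,81,9613,265'
write_k=$(echo "$csv" | awk -F, '{print $10}')
read_k=$(echo "$csv" | awk -F, '{print $16}')
seeks=$(echo "$csv" | awk -F, '{print $18}')
echo "write: $((write_k / 1024)) MB/s, read: $((read_k / 1024)) MB/s, seeks: $seeks/sec"
```

These work out to 322 MB/s write, 692 MB/s read and 9613 seeks/sec, the same figures as the summary table.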

I/O latency (ioping)

root@testing:~# ioping /tmp -c 10
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=1 time=0.1 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=2 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=3 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=4 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=5 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=6 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=7 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=8 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=9 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=10 time=0.3 ms
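The statistics footer that ioping normally prints after `-c 10` isn't shown above, but averaging the ten round-trips by hand gives the typical latency:

```shell
# Average the ten ioping round-trip times listed above (in ms)
echo '0.1 0.3 0.3 0.3 0.2 0.2 0.3 0.2 0.2 0.3' |
  awk '{ for (i = 1; i <= NF; i++) sum += $i; printf "avg latency: %.2f ms\n", sum / NF }'
```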

I/O Seek test (no cache)

root@testing:~#  ioping /tmp -RD
--- /tmp (ext4 /dev/disk/by-label/DOROOT) ioping statistics ---
7211 requests completed in 3000.1 ms, 8718 iops, 34.1 mb/s
min/avg/max/mdev = 0.1/0.1/0.7/0.0 ms

I/O reads (cached)

root@testing:~# ioping /tmp -RC
--- /tmp (ext4 /dev/disk/by-label/DOROOT) ioping statistics ---
29435 requests completed in 3000.0 ms, 221504 iops, 865.2 mb/s
min/avg/max/mdev = 0.0/0.0/0.2/0.0 ms

I/O reads (sequential)

root@testing:~# ioping /tmp -RL
--- /tmp (ext4 /dev/disk/by-label/DOROOT) ioping statistics ---
5781 requests completed in 3000.2 ms, 2691 iops, 672.7 mb/s
min/avg/max/mdev = 0.3/0.4/6.3/0.1 ms

FIO random read test (queue depth = 1)

root@testing:~# fio random-read-test
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 128MB)
Jobs: 1 (f=1): [r] [-.-% done] [37019K/0K /s] [9037 /0  iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=1236
  read : io=131072KB, bw=36348KB/s, iops=9087 , runt=  3606msec
clat (usec): min=64 , max=3525 , avg=106.37, stdev=44.56
 lat (usec): min=64 , max=3525 , avg=106.64, stdev=44.59
bw (KB/s) : min=33616, max=39480, per=100.83%, avg=36651.43, stdev=1860.41
  cpu          : usr=8.43%, sys=31.29%, ctx=32578, majf=0, minf=24
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued r/w/d: total=32768/0/0, short=0/0/0
 lat (usec): 100=33.86%, 250=65.71%, 500=0.33%, 750=0.06%, 1000=0.02%
 lat (msec): 2=0.02%, 4=0.01%

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=36348KB/s, minb=37220KB/s, maxb=37220KB/s, mint=3606msec, maxt=3606msec

Disk stats (read/write):
  vda: ios=32588/0, merge=0/0, ticks=2332/0, in_queue=2312, util=62.96%
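The `random-read-test` job file itself isn't shown. A minimal reconstruction from the parameters fio echoes in its banner above (randread, 4k blocks, sync engine, iodepth 1, a 128MB file) would look like the following; the `/tmp` directory is my assumption, not confirmed by the post:

```shell
# Hypothetical reconstruction of the "random-read-test" job file, built from
# the parameters fio reports in its banner above. Not the author's actual file.
cat > random-read-test <<'EOF'
[random-read]
rw=randread
size=128m
directory=/tmp
ioengine=sync
iodepth=1
bs=4k
EOF
```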

FIO random read test (queue depth = 8)

root@testing:~# fio random-read-test-aio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=8
fio 1.59
Starting 1 process
Jobs: 1 (f=1)
random-read: (groupid=0, jobs=1): err= 0: pid=1264
  read : io=131072KB, bw=125789KB/s, iops=31447 , runt=  1042msec
slat (usec): min=3 , max=3317 , avg=15.28, stdev=26.24
clat (usec): min=46 , max=3807 , avg=235.69, stdev=114.89
 lat (usec): min=71 , max=3815 , avg=251.92, stdev=116.22
bw (KB/s) : min=103728, max=150768, per=101.16%, avg=127248.00, stdev=33262.30
  cpu          : usr=14.99%, sys=54.95%, ctx=2477, majf=0, minf=29
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued r/w/d: total=32768/0/0, short=0/0/0
 lat (usec): 50=0.01%, 100=0.75%, 250=69.55%, 500=26.38%, 750=3.02%
 lat (usec): 1000=0.19%
 lat (msec): 2=0.08%, 4=0.02%

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=125788KB/s, minb=128807KB/s, maxb=128807KB/s, mint=1042msec, maxt=1042msec

Disk stats (read/write):
  vda: ios=29013/0, merge=0/0, ticks=3700/0, in_queue=3704, util=75.60%
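The `random-read-test-aio` job file can be reconstructed from fio's banner in the same speculative way; `direct=1` here is my assumption, since libaio only submits requests truly asynchronously with O_DIRECT, which a queue-depth-8 test normally relies on:

```shell
# Hypothetical "random-read-test-aio" job file, per fio's banner above.
# direct=1 is an assumption: libaio needs O_DIRECT to keep eight
# requests genuinely in flight. Not the author's actual file.
cat > random-read-test-aio <<'EOF'
[random-read]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=8
direct=1
bs=4k
EOF
```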

