GlusterFS multi-node read/write performance test

Test environment:

glusterfs server side:
  GlusterFS version: glusterfs 3.6.1 built on Nov 7 2014 15:15:50
  OS: CentOS release 6.5 (Final)
  Kernel: Linux version 3.4.45
  Hardware: CPU: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz; RAM: 32GB; disks: 4 x 128G SSD and 11 x 2T SATA

client side:
  GlusterFS version: glusterfs 3.6.1 built on Nov 7 2014 15:15:50
  OS: CentOS release 6.5 (Final)
  Kernel: Linux version 3.4.45
  Hardware: CPU: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz; RAM: 8GB

Network environment:

gluster side: each server's network is configured as a bridge, with a single cable to the switch. data-node3 is 10.3.211.3, data-node4 is 10.3.211.4, data-node5 is 10.3.211.5, and data-node6 is 10.3.211.6.

client side: eth0 has the fixed IP 192.168.10.200; eth1, eth2 and eth3 are bonded into bond0 with IP 10.3.211.11. The four ports are connected to the switch with four cables.

System nodes:

  IP              Alias        Brick
  10.3.211.3      data-node3   /brick/brick1/
  10.3.211.4      data-node4   /brick/brick1/
  10.3.211.5      data-node5   /brick/brick1/
  10.3.211.6      data-node6   /brick/brick1/
  192.168.10.200  data-client  -

Test purpose:

Measure the read/write performance of GlusterFS stripe volumes built from different numbers of nodes, mounted on the client whose three data NICs form bond0.

Test tools: dd, fio

Test method: build stripe volumes with 2, 3 and 4 nodes, mount each on the client, then measure throughput with dd and IOPS with fio.

Test steps:

1. Add the servers to the trusted pool:

   gluster peer probe data-node3
   gluster peer probe data-node4
   gluster peer probe data-node5
   gluster peer probe data-node6

2. Create the stripe volumes.

   Two nodes:
   gluster volume create test stripe 2 data-node4:/brick/brick1/testw data-node3:/brick/brick1/testw

   Three nodes:
   gluster volume create test1 stripe 3 data-node4:/brick/brick1/test data-node3:/brick/brick1/test data-node5:/brick/brick1/test

   Four nodes:
   gluster volume create test2 stripe 4 data-node4:/brick/brick1/teste data-node3:/brick/brick1/teste data-node5:/brick/brick1/teste data-node6:/brick/brick1/teste

3. Start the stripe volumes:

   gluster volume start test
   gluster volume start test1
   gluster volume start test2

4. Mount a volume on the client:

   mount.glusterfs data-node3:/test /mnt

5. Measure throughput with dd:

   dd if=/dev/zero of=/mnt/dd.img bs=1M count=60000

6. Measure IOPS with fio, using the test script below.

Test script:

#!/bin/bash
# For each block size, run a random-read and then a random-write fio job,
# cleaning up the test files and remounting the volume between runs.
# Adjust the volume name (test / test1 / test2) for the node count under test.

cd /mnt
sleep 10
rm -rf test*
sleep 30
cd /
sleep 10

for bs in 4k 8k 16k 32k 64k; do
    for rw in randread randwrite; do
        /usr/local/bin/fio --name=/mnt/test --direct=0 --iodepth=96 \
            --rw=$rw --ioengine=posixaio --bs=$bs --size=16G \
            --numjobs=20 --runtime=3600 --group_reporting \
            >> /fiotest/fio.log   # append redirection assumed; the operator was lost in the source
        sleep 100
        cd /mnt
        sleep 60
        rm -rf test*
        sleep 300
        cd /
        sleep 60
        umount /mnt    # the source script reads "umount testpool"; /mnt is the mount point used here
        sleep 100
        mount -t glusterfs data-node3:/test /mnt/
        sleep 60
    done
done
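The tables in the results section are transcribed from the aggregate lines of /fiotest/fio.log. Below is a minimal sketch for pulling those numbers out of the log; it assumes fio's classic 2.x-era text output (summary lines of the form "read : io=..., bw=..., iops=..., runt=..."), which is not stated in the source, and reuses the log path from the script above:

#!/bin/bash
# Minimal sketch: extract aggregate bandwidth and IOPS from the fio log.
# Assumes classic fio 2.x text output, e.g.
#   read : io=16384MB, bw=12483KB/s, iops=195, runt=3600012msec
grep -E '(read|write) *:' /fiotest/fio.log \
    | grep -oE '(bw|iops)=[^,]+' \
    | paste - -    # pair each bw value with its iops value on one line

Each output line then corresponds to one fio run, in the order the script executed them.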
Test results:

dd throughput (dd if=/dev/zero of=/mnt/dd.img bs=1M count=60000):

  Nodes        2          3         4
  Throughput   61.5MB/s   135MB/s   335MB/s

fio results:

1) Random read/write IOPS:

  Nodes                      2                   3                   4
  Type       Block size   IOPS  Throughput    IOPS  Throughput    IOPS  Throughput
  randread   4k           280   1122.3KB/s    360   1442.7KB/s    722   2888.1KB/s
  randread   8k           218   1750.9KB/s    353   2825.9KB/s    600   4802.6KB/s
  randread   16k          210   3369.5KB/s    363   5818.6KB/s    559   8948.4KB/s
  randread   32k          193   6193.7KB/s    350   11221KB/s     549   17581KB/s
  randread   64k          195   12483KB/s     355   22782KB/s     565   36204KB/s
  randwrite  4k           414   1459.1KB/s    674   2698.5KB/s    1328  5314.7KB/s
  randwrite  8k           556   4451.8KB/s    1104  8835.4KB/s    2066  16532KB/s
  randwrite  16k          389   6234.7KB/s    746   11945KB/s     1303  20851KB/s
  randwrite  32k          278   8920.7KB/s    439   1406… (source truncated here)
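As a quick sanity check on the transcription (our check, not part of the source), fio's reported throughput should be close to IOPS x block size; for example:

#!/bin/bash
# Sanity check: throughput ~= IOPS * block size.
# Example values taken from the randread row above (bs=64k, 4 nodes).
iops=565
bs_kb=64
echo "expected ~$(( iops * bs_kb ))KB/s"   # prints 36160KB/s; the table reports 36204KB/s

Most rows in the tables agree with this identity to within a few percent.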