樽中酒不空
Category: Cloud Computing
2015-08-03 11:20:59
1. Introduction
This document describes in detail how to deploy a small Swift cluster on two PCs and gives some simple usage examples. It assumes the environment listed in section 2.1 below.
Before reading this document, you may first read 《swift all in one安装部署流程》 to learn the basics of single-node Swift deployment.
2. Installation and Deployment
2.1 Preparing the Environment
|                  | PC 1                         | PC 2                         |
| Machine type     | physical PC                  | physical PC                  |
| Operating system | ubuntu-12.04-desktop, 64-bit | ubuntu-12.04-desktop, 64-bit |
| User             | root                         | root                         |
| Database         | sqlite3                      | sqlite3                      |
| IP address       | 192.168.3.52 (LAN)           | 192.168.3.53 (LAN)           |
| Proxy server     | yes                          | yes                          |
| Storage server   | yes                          | yes                          |
| Auth             | tempauth                     | tempauth                     |
| Token cache      | memcached                    | memcached                    |
2.2 Version Notes
This document is based on swift 1.7.6 and python-swiftclient 1.2.0.
Please make sure the Swift version you install matches the version used here. If you run into problems, consult the update documentation on the official site.
2.3 Installing the Software Environment
First, install the software environment Swift needs on pc1 and pc2 (make sure the machines can reach the internet): for example, sqlite3 serves as the local database and memcached as the token cache. ubuntu-12.04 already ships with rsync, so it does not need to be installed separately.
# apt-get install python-software-properties
# add-apt-repository ppa:swift-core/release
# apt-get update
# apt-get install curl gcc git-core memcached python-coverage python-dev python-nose python-setuptools python-simplejson python-xattr sqlite3 xfsprogs python-eventlet python-greenlet python-pastedeploy python-netifaces python-pip
# pip install mock
Perform the following steps on pc1 and pc2 to install the Swift services:
1. Create a swift directory under the home directory (of the root user), then create a bin directory inside it to hold the Swift helper scripts we will write by hand.
# mkdir ~/swift
# mkdir -p ~/swift/bin
2. Go to ~/swift and clone the swift and python-swiftclient source code from git. You can also use previously downloaded copies of the swift 1.7.6 and python-swiftclient 1.2.0 code; just place the code directories under ~/swift.
# cd ~/swift
# git clone https://github.com/openstack/swift.git
# git clone https://github.com/openstack/python-swiftclient.git
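If you clone the current upstream code you will get a newer version than this document targets. A minimal sketch for switching to the matching releases, assuming the upstream repositories carry release tags named 1.7.6 and 1.2.0:
# cd ~/swift/swift && git checkout 1.7.6 && cd ~/swift
# cd ~/swift/python-swiftclient && git checkout 1.2.0 && cd ~/swift
You can then rename (or symlink) the two directories to the names used below, swift_1.7.6 and python-swiftclient_1.2.0.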
3. Install swift and python-swiftclient from this code in development mode (assuming the swift code is in ~/swift/swift_1.7.6 and the python-swiftclient code is in ~/swift/python-swiftclient_1.2.0). Both end up installed into Python's dist-packages.
# cd ~/swift/swift_1.7.6
# python setup.py develop
# cd ~/swift/python-swiftclient_1.2.0
# python setup.py develop
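To confirm that both develop installs worked, a quick sanity check of my own (assuming the system Python 2.7 that ships with ubuntu-12.04) is to import the two packages and print the client version:
# python -c "import swift, swiftclient; print 'ok'"
# swift --version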
4. During installation, setup.py automatically checks the required dependencies and downloads and installs any that are missing. The file ~/swift/swift_1.7.6/tools/pip-requires (contents shown below) records the dependencies swift needs; setup.py checks dependencies against this file.
eventlet>=0.9.15
greenlet>=0.3.1
netifaces>=0.6
pastedeploy>=1.3.3
simplejson>=2.0.9
xattr>=0.4
python-swiftclient
5. Similarly, the file ~/swift/python-swiftclient_1.2.0/tools/pip-requires (contents shown below) records the dependencies of python-swiftclient.
simplejson
6. Edit ~/.bashrc and append the following lines. (This file holds the environment of the current user's bash shell; the additions record the path of the Swift test configuration file and put our script directory on the PATH.)
export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf
export PATH=${PATH}:~/swift/bin
7. Run the following command so the changes take effect; every new shell will pick them up automatically from now on.
# . ~/.bashrc
8. Create the /var/run/swift directory and set its ownership. Swift needs this directory at runtime to hold the pid files of its service processes.
# mkdir -p /var/run/swift
# chown root:root /var/run/swift
9. /var/run/swift disappears when the operating system shuts down, so it has to be recreated at every boot. Edit /etc/rc.local and add the following lines before exit 0 to create the directory automatically.
mkdir -p /var/run/swift
chown root:root /var/run/swift
Swift can run on any modern filesystem that supports extended attributes, and the Swift project officially recommends XFS. In the project's own testing, XFS delivered the best performance for Swift's use cases and passed full stability testing.
On each PC we can either use a dedicated partition for storage or use a loopback device for storage. Because of the limits of our test environment, this document uses a loopback device; if you want to use a separate partition, refer to the official documentation. We create a loopback device on each PC to serve as the data storage space of that Swift node. Perform the following on pc1 and pc2:
1. Choose a location and create the storage directory.
# mkdir /srv
2. Create an XFS-formatted loopback device in the storage directory, i.e. the file /srv/swift-disk.
# dd if=/dev/zero of=/srv/swift-disk bs=1024 count=0 seek=50000000
# mkfs.xfs -f -i size=1024 /srv/swift-disk
3. Edit /etc/fstab and append the following line:
/srv/swift-disk /srv/node/sdb1 xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0
4. Create the mount point for the loopback device and mount it.
# mkdir -p /srv/node/sdb1
# mount /srv/node/sdb1
5. Change the ownership of the mount point.
# chown -R root:root /srv/node
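Before moving on, it is worth confirming that the loopback device really is mounted as xfs (this check is my own addition, not part of the original procedure); the sparse file created by dd should show up as a roughly 50 GB filesystem:
# mount | grep sdb1
# df -h /srv/node/sdb1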
Create the Swift configuration directory on pc1 and pc2.
# mkdir -p /etc/swift
# chown -R root:root /etc/swift/
On pc1, create the configuration file /etc/swift/swift.conf with the content below, then copy it to /etc/swift on pc2 (a sample copy command follows the file contents). This file records the hash suffix Swift uses in its consistent-hashing calculations; every node in the cluster must hold this file, and it must be identical everywhere.
[swift-hash]
# random unique string that can never change (do not lose)
swift_hash_path_suffix = jtangfs
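One straightforward way to copy the file is scp; a minimal sketch, assuming root ssh login from pc1 to pc2 is permitted:
# scp /etc/swift/swift.conf root@192.168.3.53:/etc/swift/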
rsync is a data mirroring and backup tool for Unix-like systems. Replication of Swift object copies is push based: object replication uses rsync to sync files to peer nodes, while account and container replication pushes missing database records over HTTP or rsync. Create the rsync configuration file /etc/rsyncd.conf on pc1 and pc2 with the following content. (The example below is for pc1; address is the IP address the rsync daemon listens on for replication pushes from its peers, and an internal, non-public address is again recommended.)
uid = root
gid = root
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.3.52

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
To have rsync start at boot, edit /etc/default/rsync on pc1 and pc2, set the RSYNC_ENABLE parameter to true, and then restart the rsync service.
# perl -pi -e 's/RSYNC_ENABLE=false/RSYNC_ENABLE=true/' /etc/default/rsync
# service rsync restart
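A quick way to confirm the rsync daemon is up and exporting the expected modules (my own check, not part of the original steps) is to ask it for its module list from either node; it should print account, container and object:
# rsync 192.168.3.52::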
Perform the following on pc1 and pc2 to configure the account, container and object storage services on each node. The examples below are for pc1.
1. Configure the account storage service by creating /etc/swift/account-server.conf with the following content. (Here devices is the parent directory of where devices are mounted, default /srv/node; log_facility is the syslog facility label used by the dedicated-log setup described later.)
[DEFAULT]
devices = /srv/node
mount_check = false
bind_ip = 192.168.3.52
bind_port = 6002
workers = 4
user = root
log_facility = LOG_LOCAL4

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]
2. Configure the container storage service by creating /etc/swift/container-server.conf with the following content:
[DEFAULT]
devices = /srv/node
mount_check = false
bind_ip = 192.168.3.52
bind_port = 6001
workers = 4
user = root
log_facility = LOG_LOCAL3

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]
3. Configure the object storage service by creating /etc/swift/object-server.conf with the following content:
[DEFAULT]
devices = /srv/node
mount_check = false
bind_ip = 192.168.3.52
bind_port = 6000
workers = 4
user = root
log_facility = LOG_LOCAL2

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]
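pc2 needs the same three files with bind_ip = 192.168.3.53. Instead of retyping them, a minimal sketch (my own shortcut, assuming the pc1 files are in place and root ssh login to pc2 works) is to copy them over and rewrite the address:
# scp /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf root@192.168.3.53:/etc/swift/
# ssh root@192.168.3.53 "perl -pi -e 's/192.168.3.52/192.168.3.53/' /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf"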
Swift writes its log output to /var/log/syslog by default. If you want to tailor rsyslog and produce dedicated Swift log files, perform the following on pc1 and pc2.
1. Create the log configuration file /etc/rsyslog.d/10-swift.conf with the following content (log configuration for the account, container and object services):
# uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.*   /var/log/swift/all.log

# uncomment the following to have hourly proxy logs for stats processing
$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog

local2.*;local2.!notice /var/log/swift/object.log
local2.notice           /var/log/swift/object.error
local2.*                ~

local3.*;local3.!notice /var/log/swift/container.log
local3.notice           /var/log/swift/container.error
local3.*                ~

local4.*;local4.!notice /var/log/swift/account.log
local4.notice           /var/log/swift/account.error
local4.*                ~
2. Edit /etc/rsyslog.conf and set the $PrivDropToGroup parameter to adm.
$PrivDropToGroup adm
3. Create the /var/log/swift directory to hold the dedicated logs. The 10-swift.conf file above also produces hourly stats logs for the Swift proxy server, so create /var/log/swift/hourly as well.
# mkdir -p /var/log/swift/hourly
4. Change the permissions of the Swift log directory.
# chown -R syslog.adm /var/log/swift
# chmod -R g+w /var/log/swift
5. Restart the rsyslog service.
# service rsyslog restart
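To check that the new rules are picked up (a quick test of my own, not from the original procedure), send a message to one of the local facilities and look for it in the corresponding file:
# logger -p local2.info "swift logging test"
# tail -n 1 /var/log/swift/object.log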
The proxy server uses memcached to cache users' tokens. Adjust the memcached configuration file /etc/memcached.conf to your needs. By default memcached listens only on 127.0.0.1, which the other node cannot reach; since both proxies use both memcached instances, change the listen address to the LAN address, and for safety keep it an internal, non-public IP. Set it to 192.168.3.52 on pc1 and 192.168.3.53 on pc2, then restart the memcached service. The commands below are for pc1:
# perl -pi -e "s/-l 127.0.0.1/-l 192.168.3.52/" /etc/memcached.conf
# service memcached restart
Because each PC runs all of the Swift services, acting as both a proxy server and a storage server, the swift.conf created while configuring the storage servers already covers the proxy, so nothing more is needed here. If, however, the proxy server and the storage servers were separate machines, the swift.conf file above would also have to be copied to /etc/swift on the proxy server.
2.6.3 Configuring the proxy server
Create the proxy server configuration file /etc/swift/proxy-server.conf on pc1 and pc2 with the following content:
[DEFAULT]
bind_port = 8080
user = root
workers = 8
log_facility = LOG_LOCAL1

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
# prefix for account names and tokens; do not append "_" here
# e.g. x-storage-url becomes http://192.168.3.52:8080/v1/auth_test
# e.g. x-auth-token becomes auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1
reseller_prefix = auth
# token lifetime, in seconds
token_life = 86400

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.3.52:11211,192.168.3.53:11211

[filter:proxy-logging]
use = egg:swift#proxy_logging
Among these parameters: memcache_servers lists the memcached endpoints and may name a whole cluster, separated by commas; workers is the number of worker processes, and 2-4 times the number of CPU cores is recommended; in "user_admin_admin = admin .admin .reseller_admin", user_ is a fixed prefix, the first "admin" is the account name, the second "admin" is the user name, the third "admin" is the password, ".admin" is a role, and ".reseller_admin" marks the super-administrator role, which may operate on any account.
A further, optional item can be appended after "user_admin_admin = admin .admin .reseller_admin": an explicit storage URL to hand out for that user (see the sketch of the general user-line format below).
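As an illustration of the user-line format described above (the account, user names and keys here are invented for the example), two extra users could be added to the [filter:tempauth] section like this; restart the proxy afterwards for the change to take effect:
[filter:tempauth]
use = egg:swift#tempauth
# hypothetical additional users, following user_<account>_<user> = <key> [.role ...]
user_demo_alice = alicesecret .admin
user_demo_bob = bobsecret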
Swift supports both HTTP and HTTPS. This document uses HTTP; to use HTTPS you must configure SSL, for which see the reference links.
2.6.4 Configuring dedicated logs (optional)
As explained above, this step sets up a dedicated log for the proxy server; perform the following on pc1 and pc2.
1. Edit the log configuration file /etc/rsyslog.d/10-swift.conf and add the proxy log configuration:
# uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.*   /var/log/swift/all.log

# uncomment the following to have hourly proxy logs for stats processing
$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog

local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice           /var/log/swift/proxy.error
local1.*                ~

local2.*;local2.!notice /var/log/swift/object.log
local2.notice           /var/log/swift/object.error
local2.*                ~

local3.*;local3.!notice /var/log/swift/container.log
local3.notice           /var/log/swift/container.error
local3.*                ~

local4.*;local4.!notice /var/log/swift/account.log
local4.notice           /var/log/swift/account.error
local4.*                ~
2. Restart the rsyslog service.
# service rsyslog restart
There are three rings: the account ring, the container ring and the object ring. The rings must be identical across the whole cluster, so we build the ring files on one PC and copy them to the others; here we build them on pc1 and then copy them to pc2.
First, create the three rings with the commands below. Here 18 is the partition power, i.e. the ring is divided into 2^18 partitions; 2 is the number of object replicas; and 1 is min_part_hours, the minimum number of hours before a given partition may be moved again.
# cd /etc/swift
# swift-ring-builder account.builder create 18 2 1
# swift-ring-builder container.builder create 18 2 1
# swift-ring-builder object.builder create 18 2 1
Then add the storage devices to the three rings. z1 and z2 denote zone 1 and zone 2; sdb1 is the storage space Swift uses, i.e. the loopback device mounted earlier; 100 is the device weight.
# cd /etc/swift
# swift-ring-builder account.builder add z1-192.168.3.52:6002/sdb1 100
# swift-ring-builder container.builder add z1-192.168.3.52:6001/sdb1 100
# swift-ring-builder object.builder add z1-192.168.3.52:6000/sdb1 100

# swift-ring-builder account.builder add z2-192.168.3.53:6002/sdb1 100
# swift-ring-builder container.builder add z2-192.168.3.53:6001/sdb1 100
# swift-ring-builder object.builder add z2-192.168.3.53:6000/sdb1 100
After the builder files are created, you can display what was just added with the commands below and verify the input. If you spot a mistake, then, taking the account ring as an example, use swift-ring-builder account.builder's remove command to delete the device you added and then add it again.
# cd /etc/swift
# swift-ring-builder account.builder
# swift-ring-builder container.builder
# swift-ring-builder object.builder
After the devices are added, the last step of building the rings is to rebalance them, which takes a little while. On success, three files appear in the current directory: account.ring.gz, container.ring.gz and object.ring.gz. These are the ring files used by every node (proxy servers and storage servers alike), so they must be copied to /etc/swift on pc2 (a sample copy command follows the rebalance commands).
# cd /etc/swift
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance
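A minimal sketch for distributing the ring files to pc2, assuming root ssh access as in the earlier swift.conf copy:
# cd /etc/swift
# scp account.ring.gz container.ring.gz object.ring.gz root@192.168.3.53:/etc/swift/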
Since we deploy everything as the root user, make sure /etc/swift on every node is owned by root.
# chown -R root:root /etc/swift
For convenience, we can create the following Swift helper scripts on pc1 and pc2.
1. Create the script file ~/swift/bin/remakerings with the following content; running it rebuilds the rings in one step. Adjust the content to your actual environment.
#!/bin/bash

cd /etc/swift

rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz

swift-ring-builder account.builder create 18 2 1
swift-ring-builder container.builder create 18 2 1
swift-ring-builder object.builder create 18 2 1

swift-ring-builder account.builder add z1-192.168.3.52:6002/sdb1 100
swift-ring-builder container.builder add z1-192.168.3.52:6001/sdb1 100
swift-ring-builder object.builder add z1-192.168.3.52:6000/sdb1 100

swift-ring-builder account.builder add z2-192.168.3.53:6002/sdb1 100
swift-ring-builder container.builder add z2-192.168.3.53:6001/sdb1 100
swift-ring-builder object.builder add z2-192.168.3.53:6000/sdb1 100

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
2. Create the script file ~/swift/bin/resetswift with the following content; running it wipes Swift's object data and logs in one step, resetting the node. Note: if you use a dedicated partition for storage, adapt it accordingly, e.g. replace /srv/swift-disk with /dev/sdb1; if you did not set up dedicated logs with rsyslog, remove the "find /var/log/swift... " and "sudo service rsyslog restart" lines.
#!/bin/bash

swift-init all stop
find /var/log/swift -type f -exec rm -f {} \;
sudo umount /srv/node/sdb1
sudo mkfs.xfs -f -i size=1024 /srv/swift-disk
sudo mount /srv/node/sdb1
sudo chown root:root /srv/node/sdb1
sudo rm -f /var/log/debug /var/log/messages /var/log/rsyncd.log /var/log/syslog
sudo service rsyslog restart
sudo service rsync restart
sudo service memcached restart
3. Create the script file ~/swift/bin/startmain with the following content; running it starts Swift's main services (proxy-server, account-server, container-server and object-server) in one step.
#!/bin/bash

swift-init main start
4. Create the script file ~/swift/bin/stopmain with the following content; running it stops Swift's main services (proxy-server, account-server, container-server and object-server) in one step.
#!/bin/bash

swift-init main stop
5. Create the script file ~/swift/bin/startall with the following content; running it starts all Swift services in one step: proxy-server, account-server, account-replicator, account-auditor, container-server, container-replicator, container-updater, container-auditor, object-server, object-replicator, object-updater and object-auditor.
#!/bin/bash

swift-init proxy start
swift-init account-server start
swift-init account-replicator start
swift-init account-auditor start
swift-init container-server start
swift-init container-replicator start
swift-init container-updater start
swift-init container-auditor start
swift-init object-server start
swift-init object-replicator start
swift-init object-updater start
swift-init object-auditor start
6. Create the script file ~/swift/bin/stopall with the following content; running it stops all Swift services in one step: proxy-server, account-server, account-replicator, account-auditor, container-server, container-replicator, container-updater, container-auditor, object-server, object-replicator, object-updater and object-auditor.
#!/bin/bash

swift-init proxy stop
swift-init account-server stop
swift-init account-replicator stop
swift-init account-auditor stop
swift-init container-server stop
swift-init container-replicator stop
swift-init container-updater stop
swift-init container-auditor stop
swift-init object-server stop
swift-init object-replicator stop
swift-init object-updater stop
swift-init object-auditor stop
7. After creating the scripts, make them executable.
# chmod +x ~/swift/bin/*
Since Swift was installed in development mode, we can run the unit tests. If you see "unable to read test config /etc/swift/test.conf - file not found", it can be ignored, or copy the sample configuration ~/swift/swift_1.7.6/test/sample.conf to that path by hand.
# cd ~/swift/swift_1.7.6
# ./.unittests
We now start the proxy server and storage server services on pc1 and pc2. For convenience we use the scripts created above (in ~/swift/bin) to run Swift: the startmain script starts Swift's main services, and the startall script starts all Swift services. A warning such as "unable to increase file descriptor limit. running as non-root?" is normal and can be ignored.
# startmain        (or: # startall)
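With the services up, a quick liveness check of each proxy (my own addition, using the healthcheck filter configured in proxy-server.conf) is to request its /healthcheck endpoint; each call should return OK:
# curl http://192.168.3.52:8080/healthcheck
# curl http://192.168.3.53:8080/healthcheck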
Likewise, the stopmain script stops Swift's main services and the stopall script stops all Swift services.
# stopmain         (or: # stopall)
After the installation is complete, you can view the swift client's help text with the swift --help command.
# swift --help
usage: swift <command> [options] [args]

commands:
  stat [container] [object]
      displays information for the account, container, or object depending on the args given (if any).
  list [options] [container]
      lists the containers for the account or the objects for a container. -p or --prefix is an option that will only list items beginning with that prefix. -d or --delimiter is an option (for container listings only) that will roll up items with the given delimiter (see cloud files general documentation for what this means).
  upload [options] container file_or_directory [file_or_directory] [...]
      uploads to the given container the files and directories specified by the remaining args. -c or --changed is an option that will only upload files that have changed since the last upload. -s and --leave-segments are options as well (see --help for more).
  post [options] [container] [object]
      updates meta information for the account, container, or object depending on the args given. if the container is not found, it will be created automatically; but this is not true for accounts and objects. containers also allow the -r (or --read-acl) and -w (or --write-acl) options. the -m or --meta option is allowed on all and used to define the user meta data items to set in the form name:value. this option can be repeated. example: post -m color:blue -m size:large
  download --all or download container [options] [object] [object] ...
      downloads everything in the account (with --all), or everything in a container, or a list of objects depending on the args given. for a single object download, you may use the -o [--output] option to redirect the output to a specific file, or to stdout if "-" is given.
  delete [options] --all or delete container [options] [object] [object] ...
      deletes everything in the account (with --all), or everything in a container, or a list of objects depending on the args given. segments of manifest objects will be deleted as well, unless you specify the --leave-segments option.

example: swift -A <auth-url> -U user -K key stat

options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -s, --snet            use servicenet internal network
  -v, --verbose         print more info
  -q, --quiet           suppress status output
  -A AUTH, --auth=AUTH  url for obtaining an auth token
  -V AUTH_VERSION, --auth-version=AUTH_VERSION
                        specify a version for authentication. defaults to 1.0.
  -U USER, --user=USER  user name for obtaining an auth token.
  -K KEY, --key=KEY     key for obtaining an auth token.
  --os-username=        openstack username. defaults to env[OS_USERNAME].
  --os-password=        openstack password. defaults to env[OS_PASSWORD].
  --os-tenant-id=       openstack tenant id. defaults to env[OS_TENANT_ID].
  --os-tenant-name=     openstack tenant name. defaults to env[OS_TENANT_NAME].
  --os-auth-url=        openstack auth url. defaults to env[OS_AUTH_URL].
  --os-auth-token=      openstack token. defaults to env[OS_AUTH_TOKEN].
  --os-storage-url=     openstack storage url. defaults to env[OS_STORAGE_URL].
  --os-region-name=     openstack region name. defaults to env[OS_REGION_NAME].
  --os-service-type=    openstack service type. defaults to env[OS_SERVICE_TYPE].
  --os-endpoint-type=   openstack endpoint type. defaults to env[OS_ENDPOINT_TYPE].
  --insecure            allow swiftclient to access insecure keystone server. the keystone's certificate will not be verified.
3. Testing
Once the Swift services on pc1 and pc2 are running, we perform operations alternately on pc1 and pc2, using the storage-url returned by either machine at random. If authentication and the storage operations all succeed, the Swift cluster has been deployed successfully.
3.1 Testing with curl
We first try a few simple commands with curl.
1. On pc1, authenticate the tester user against 192.168.3.52 to obtain a token and a storage-url.
# curl -k -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://192.168.3.52:8080/auth/v1.0
* about to connect() to 192.168.3.52 port 8080 (#0)
*   trying 192.168.3.52... connected
> get /auth/v1.0 http/1.1
> user-agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 openssl/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> host: 192.168.3.52:8080
> accept: */*
> x-storage-user: test:tester
> x-storage-pass: testing
>
< http/1.1 200 ok
< x-storage-url: http://192.168.3.52:8080/v1/auth_test
< x-auth-token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1
< content-type: text/html; charset=utf-8
< x-storage-token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1
< content-length: 0
< date: wed, 20 mar 2013 06:13:15 gmt
<
* connection #0 to host 192.168.3.52 left intact
* closing connection #0
2. On pc1, query 192.168.3.52 for the status of the test account.
# curl -k -v -H 'X-Auth-Token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1' http://192.168.3.52:8080/v1/auth_test
* about to connect() to 192.168.3.52 port 8080 (#0)
*   trying 192.168.3.52... connected
> get /v1/auth_test http/1.1
> user-agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 openssl/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> host: 192.168.3.52:8080
> accept: */*
> x-auth-token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1
>
< http/1.1 204 no content
< content-length: 0
< accept-ranges: bytes
< x-timestamp: 1363760036.52552
< x-account-bytes-used: 0
< x-account-container-count: 0
< content-type: text/html; charset=utf-8
< x-account-object-count: 0
< date: wed, 20 mar 2013 06:13:56 gmt
<
* connection #0 to host 192.168.3.52 left intact
* closing connection #0
3. On pc1, query 192.168.3.53 for the status of the test account.
# curl -k -v -H 'X-Auth-Token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1' http://192.168.3.53:8080/v1/auth_test
* about to connect() to 192.168.3.53 port 8080 (#0)
*   trying 192.168.3.53... connected
> get /v1/auth_test http/1.1
> user-agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 openssl/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> host: 192.168.3.53:8080
> accept: */*
> x-auth-token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1
>
< http/1.1 204 no content
< content-length: 0
< accept-ranges: bytes
< x-timestamp: 1363760036.52552
< x-account-bytes-used: 0
< x-account-container-count: 0
< content-type: text/html; charset=utf-8
< x-account-object-count: 0
< date: wed, 20 mar 2013 06:15:19 gmt
<
* connection #0 to host 192.168.3.53 left intact
* closing connection #0
4. On pc2, query 192.168.3.52 for the status of the test account.
# curl -k -v -H 'X-Auth-Token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1' http://192.168.3.52:8080/v1/auth_test
* about to connect() to 192.168.3.52 port 8080 (#0)
*   trying 192.168.3.52... connected
> get /v1/auth_test http/1.1
> user-agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 openssl/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> host: 192.168.3.52:8080
> accept: */*
> x-auth-token: auth_tk440e9bd9a9cb46d6be07a5b6a585f7d1
>
< http/1.1 204 no content
< content-length: 0
< accept-ranges: bytes
< x-timestamp: 1363760036.52552
< x-account-bytes-used: 0
< x-account-container-count: 0
< content-type: text/html; charset=utf-8
< x-account-object-count: 0
< date: wed, 20 mar 2013 06:17:01 gmt
<
* connection #0 to host 192.168.3.52 left intact
* closing connection #0
Everything above behaves as expected, so the curl tests pass.
3.2 Testing with the swift client
Next, we test with the swift command-line client.
1. On pc1, query 192.168.3.53 for the status of the test account.
# swift -A http://192.168.3.53:8080/auth/v1.0 -U test:tester -K testing stat
   account: auth_test
containers: 0
   objects: 0
     bytes: 0
accept-ranges: bytes
x-timestamp: 1363760036.52552
content-type: text/plain; charset=utf-8
2. On pc1, via 192.168.3.52, create a container named myfiles under the test account.
# swift -A http://192.168.3.52:8080/auth/v1.0 -U test:tester -K testing post myfiles
3. On pc1, via 192.168.3.53, list the containers under the test account.
# swift -A http://192.168.3.53:8080/auth/v1.0 -U test:tester -K testing list
myfiles
4. On pc2, via 192.168.3.52, list the containers under the test account.
# swift -A http://192.168.3.52:8080/auth/v1.0 -U test:tester -K testing list
myfiles
5. On pc2, via 192.168.3.53, upload a file to the container we just created. After the upload, the Swift server uses the full path as the object name.
# swift -A http://192.168.3.53:8080/auth/v1.0 -U test:tester -K testing upload myfiles ~/file
root/file
6. On pc1, via 192.168.3.52, list the objects in the container we just created.
# swift -A http://192.168.3.52:8080/auth/v1.0 -U test:tester -K testing list myfiles
root/file
7. On pc1, via 192.168.3.52, download the file we just uploaded. The object name given must be the full path: the file uploaded as ~/file was recorded as root/file, so the download must also name root/file, not file. The file ends up under the ~/root directory as ~/root/file. Extra command-line options control the download location; see swift --help and the example after the output below.
# swift -A http://192.168.3.52:8080/auth/v1.0 -U test:tester -K testing download myfiles root/file
root/file [headers 0.041s, total 0.065s, 0.000s mb/s]
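If you would rather choose the output path yourself, the download command's -o option (listed in the help above) can redirect a single object to a file of your choice; a small example of my own:
# swift -A http://192.168.3.52:8080/auth/v1.0 -U test:tester -K testing download myfiles root/file -o /tmp/file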
Everything above behaves as expected, so the swift client tests pass.
4. Reference Links
http://docs.openstack.org/developer/swift/howto_installmultinode.html
http://docs.openstack.org/developer/swift/development_saio.html
http://docs.openstack.org/developer/swift/deployment_guide.html
http://blog.csdn.net/zzcase/article/details/6578520
http://www.cnblogs.com/bob-fd/archive/2012/07/25/2608413.html
http://blog.csdn.net/zoushidexing/article/details/7860226