Choosing a PT Client on FreeBSD

The clients allowed on HDChina and HDBits are Azureus, BitTornado, KTorrent, rtorrent, Transmission, and uTorrent. uTorrent needs Wine, and Azureus (Vuze) and KTorrent need X, so those are out. HDChina does not recognize BitTornado 0.3.18, and HDBits does not recognize Transmission 1.61. In rtorrent, every task that has not finished downloading must be re-hashed after a restart; Transmission has no such problem and is also faster than rtorrent. So, uTorrent + Samba after all?

FreeBSD with VMware Tools Fails to Power Off

  After VMware Tools is installed on FreeBSD, the guest can be shut down from VI, but the system stops at "The operating system has halted. Please press any key to reboot." and never actually powers off. The fix:
ee /usr/local/etc/rc.d/vmware-tools.sh
  Search for vmware_start_guestd() and you will find
vmware_start_guestd() {
cd "$vmdb_answer_SBINDIR" && "$vmdb_answer_SBINDIR"/vmware-guestd \
--background "$GUESTD_PID_FILE"
}

  Add the parameter --halt-command "/sbin/shutdown -p now" to the vmware-guestd command, so it becomes
vmware_start_guestd() {
cd "$vmdb_answer_SBINDIR" && "$vmdb_answer_SBINDIR"/vmware-guestd \
--background "$GUESTD_PID_FILE" --halt-command "/sbin/shutdown -p now"
}

  Save and exit, then run /usr/local/etc/rc.d/vmware-tools.sh restart to restart VMware Tools.
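
  To verify the change took effect (a quick check of my own, not from the original note), confirm the running daemon carries the new option, then try "Shut Down Guest" from VI again; the VM should now power off completely instead of waiting at the halt prompt:

ps ax | grep vmware-guestd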

Growing a ZFS Pool on FreeBSD by Online Disk Replacement

Before the replacement:
test# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfspool  9.94G  1.96G  7.98G    19%  ONLINE  -

Replace da1 with da2 (both are devices under /dev):
test# zpool replace zfspool da1 da2

The replacement begins:
test# zpool status
  pool: zfspool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 15.24% done, 0h4m to go
config:

        NAME           STATE     READ WRITE CKSUM
        zfspool        ONLINE       0     0     0
          replacing    ONLINE       0     0     0
            da1        ONLINE       0     0     0
            da2        ONLINE       0     0     0

errors: No known data errors

The replacement is complete:
test# zpool status
  pool: zfspool
 state: ONLINE
 scrub: resilver completed with 0 errors on Sat May 9 16:49:35 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          da2       ONLINE       0     0     0

errors: No known data errors

The capacity has grown:
test# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfspool  17.9G  1.96G  16.0G    10%  ONLINE  -

Applications ran uninterrupted through the whole process.
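
A caveat for other pool layouts (my own note, not part of the test above): in a mirror or raidz vdev, every member must be replaced with a larger disk before the extra space appears, and newer ZFS releases only grow the pool automatically when the autoexpand property is on; zpool online -e forces expansion if the new size is not picked up. A sketch with hypothetical disks da3 and da4 replacing a two-way mirror:

test# zpool set autoexpand=on zfspool
test# zpool replace zfspool da1 da3
test# zpool replace zfspool da2 da4
test# zpool online -e zfspool da3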

Source-Based Policy Routing with PF

  With FreeBSD acting as a server, the goal is that a connection request coming in on one NIC is answered back out through that same NIC. Clients can then choose their own line, and the server does not need to collect routing tables.

ee /etc/rc.conf

pf_enable="YES"              # enable PF
pf_rules="/etc/pf.conf"
defaultrouter="192.168.1.1"  # default route for connections the host itself initiates

ee /etc/pf.conf

if_cernet="em0"
if_ct="em1"
gw_cernet="192.168.1.1"
gw_ct="192.168.0.1"
block all
pass quick on lo0 all
pass in quick on $if_cernet reply-to ( $if_cernet $gw_cernet ) proto {tcp,udp,icmp} to any keep state
pass in quick on $if_ct reply-to ( $if_ct $gw_ct ) proto {tcp,udp,icmp} to any keep state
pass out keep state
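
  To syntax-check and load the rule set without rebooting (standard pfctl usage):

pfctl -nf /etc/pf.conf
pfctl -f /etc/pf.conf
pfctl -e

-n parses the file without loading it, -f loads it, and -e enables PF if it is not already running.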

Performance of iSCSI and NFS Datastores on VMware ESXi

  The iSCSI target and NFS server are provided by a FreeBSD server VM under VMware ESXi 3.5 on a RAID 10 host (4 × 2.5″ 10K rpm 146 GB). Another ESXi 3.5 host on RAID 1 (2 × 3.5″ 15K rpm 146 GB) mounts the iSCSI and NFS datastores, and each is then added as a virtual disk to a FreeBSD test VM. Everything is measured with /usr/local/bin/iozone -i 0 -i 1 -i 2 -r 1024 -s 1G -t 2 -C; the flags are broken down below, and the results follow.
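  For reference, what each iozone option selects (standard iozone flags):
-i 0: test 0, sequential write and rewrite
-i 1: test 1, sequential read and re-read
-i 2: test 2, random read and random write
-r 1024: 1024 KB record size
-s 1G: 1 GB file per process
-t 2: throughput mode with 2 processes
-C: also print each child's throughput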
  iSCSI test:
Initial write = 5443.42 KB/sec
Rewrite = 4840.85 KB/sec
Read = 19823.13 KB/sec
Re-read = 19298.97 KB/sec
Random read = 44114.65 KB/sec
Random write = 4024.72 KB/sec

  NFS test:
Initial write = 952.76 KB/sec
Rewrite = 975.36 KB/sec
Read = 14782.20 KB/sec
Re-read = 16085.16 KB/sec
Random read = 41878.42 KB/sec
Random write = 794.31 KB/sec

  NFS uses only about half as much CPU as iSCSI, on both the server and the test machine: roughly 15% with iSCSI (with one stretch above 30%), versus about 8% with NFS. Both hosts have 2 × Intel E5405 CPUs, with 2 cores assigned to each VM.

  Test machine mounting NFS directly:
Initial write = 2361.99 KB/sec
Rewrite = 2130.92 KB/sec
Read = 17595.85 KB/sec
Re-read = 18904.29 KB/sec
Random read = 13139.79 KB/sec
Random write = 2001.82 KB/sec

  Test machine, local disk:
Initial write = 8233.32 KB/sec
Rewrite = 12511.68 KB/sec
Read = 34969.73 KB/sec
Re-read = 34179.26 KB/sec
Random read = 82272.52 KB/sec
Random write = 4620.50 KB/sec

  Server machine, local disk:
Initial write = 6236.64 KB/sec
Rewrite = 9016.30 KB/sec
Read = 47051.42 KB/sec
Re-read = 47444.12 KB/sec
Random read = 27243.86 KB/sec
Random write = 3251.88 KB/sec

Configuring SNMP on FreeBSD

cd /usr/ports/net-mgmt/net-snmp
make install clean
ee /usr/local/etc/snmp/snmpd.conf
# read-only community "abc", reachable only from the 192.168.1.0/24 network
rocommunity abc 192.168.1.0/24
ee /etc/rc.conf
Add snmpd_enable="YES"
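
Then start the daemon and check that it answers (standard net-snmp tools; the rc.d script path may differ by net-snmp version, and the target address 192.168.1.10 is just an example):

/usr/local/etc/rc.d/snmpd start
snmpwalk -v 2c -c abc 192.168.1.10 system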