Overview of Eucalyptus install for Rocks V
---------------------------------------------------------------------------
1.) Satisfy pre-requisites for Eucalyptus installation
- Rocks version V with Java and Xen
2.) Prepare Rocks for Eucalyptus installation
- Install and enable Eucalyptus roll
3.) Installation of Eucalyptus on nodes
- Reboot or add new nodes using Eucalyptus Roll
4.) Installation of Eucalyptus on the Front-end
5.) Bootstrapping Eucalyptus
- Install and Register a sample VM image
- Enable user sign-up
6.) Using Eucalyptus for the first time
- Generate keys
- Get EC2 tools
- Set up environment
7.) EC2 tools quick-start
8.) User sign-up
---------------------------------------------------------------------------
1 - Satisfy pre-requisites for Eucalyptus installation
Eucalyptus 1.0 minimally requires a freshly installed Rocks V
front-end system that has been configured to include the Java and Xen
Rocks Rolls.
It is not necessary to have any compute nodes
configured, but if compute nodes are configured they can be
re-targeted as Eucalyptus nodes.
Once the front-end is up and
running, we need to download the Eucalyptus Rocks Roll ISO image and
place it on the front end.
Choose either the 64-bit or 32-bit roll from:
wget http://eucalyptus.cs.ucsb.edu/downloads/eucalyptus-5.0-0.x86_64.disk1.iso
/ibrixfs/hocks/Euca:
[root@rocks-131 Euca]# wget http://eucalyptus.cs.ucsb.edu/downloads/eucalyptus-5.0-0.x86_64.disk1.iso
--16:34:55--  http://eucalyptus.cs.ucsb.edu/downloads/eucalyptus-5.0-0.x86_64.disk1.iso
              => `eucalyptus-5.0-0.x86_64.disk1.iso'
Resolving eucalyptus.cs.ucsb.edu... 128.111.45.35
Connecting to eucalyptus.cs.ucsb.edu|128.111.45.35|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 81,147,904 (77M) [application/x-iso9660-image]
16:35:07 (6.44 MB/s) - `eucalyptus-5.0-0.x86_64.disk1.iso' saved [47904]
Version 1.0
[root@rocks-133 hocks]# wget http://eucalyptus.cs.ucsb.edu/downloads/eucalyptus-5.0-0.x86_64.disk1.iso
--12:16:01--  http://eucalyptus.cs.ucsb.edu/downloads/eucalyptus-5.0-0.x86_64.disk1.iso
Resolving eucalyptus.cs.ucsb.edu... 128.111.45.35
Connecting to eucalyptus.cs.ucsb.edu|128.111.45.35|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 81,147,904 (77M) [application/x-iso9660-image]
Saving to: `eucalyptus-5.0-0.x86_64.disk1.iso'
12:16:03 (35.0 MB/s) - `eucalyptus-5.0-0.x86_64.disk1.iso' saved [47904]
Version 1.1:
[root@rocks-133 hocks]# wget http://eucalyptus.cs.ucsb.edu/downloads/5
--13:12:58--  http://eucalyptus.cs.ucsb.edu/downloads/5
Resolving eucalyptus.cs.ucsb.edu... 128.111.45.35
Connecting to eucalyptus.cs.ucsb.edu|128.111.45.35|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: M) [application/x-iso9660-image]
Saving to: `eucalyptus-5.0-1.x86_64.disk1.iso'
13:13:01 (25.5 MB/s) - `eucalyptus-5.0-1.x86_64.disk1.iso' saved [67360]
Version 1.2:
[root@rocks-133 hocks]# wget http://eucalyptus.cs.ucsb.edu/downloads/17
--10:57:03--  http://eucalyptus.cs.ucsb.edu/downloads/17
10:57:06 (29.7 MB/s) - `eucalyptus-1.2-rocks.x86_64.iso' saved [18912]
2 - Prepare Rocks for Eucalyptus installation
In this phase, we install the Eucalyptus Roll and enable the ability
to boot nodes with Eucalyptus installed and running.
As root on the front-end, we perform the following steps:
rocks add roll clean=1 /path/to/eucalyptus-5.0-0.*.disk1.iso
(warning: don't use a ../ path!)
[root@rocks-131 Euca]# rocks add roll clean=1 /ibrixfs/hocks/Euca/eucalyptus-5.0-0.x86_64.disk1.iso
157730 blocks
Copying roll from media (directory "/mnt/cdrom") into mirror
Cleaning old "eucalyptus" (5.0,x86_64)
Copying "eucalyptus" (5.0,x86_64) roll...
[root@rocks-133 hocks]# rocks add roll clean=1 /home/hocks/eucalyptus-5.0-0.x86_64.disk1.iso
Copying eucalyptus to Rolls.....157724 blocks
Enable roll
rocks enable roll eucalyptus
Roll distribution
RPMS copied to /home/install/rocks-dist/lan/x86_64/RedHat/RPMS linked to
/home/install/rolls/eucalyptus/5.0/x86_64/RedHat/RPMS/
[root@rocks-133 hocks]# cd /home/install && rocks-dist dist
Installing XML Kickstart profiles
installing "hpc" profiles...
installing "eucalyptus" profiles...
installing "kernel" profiles...
installing "web-server" profiles...
installing "base" profiles...
installing "bio" profiles...
installing "java" profiles...
installing "sge" profiles...
installing "area51" profiles...
installing "xen" profiles...
installing "ganglia" profiles...
installing "os" profiles...
installing "site" profiles...
Applying stage2.img
Applying updates.img
Installing XML Kickstart profiles
installing "kernel" profiles...
installing "base" profiles...
Creating repository
Linking boot stages from lan
Building Roll Links
List Rolls
rocks list roll
The output of the last command should include a line indicating that
the eucalyptus roll is installed and enabled.
[root@rocks-133 install]# rocks list roll
NAME        VERSION ARCH   ENABLED
hpc:        5.0     x86_64 yes
kernel:     5.0     x86_64 yes
base:       5.0     x86_64 yes
bio:        5.0     x86_64 yes
java:       5.0     x86_64 yes
sge:        5.0     x86_64 yes
area51:     5.0     x86_64 yes
xen:        5.0     x86_64 yes
ganglia:    5.0     x86_64 yes
os:         5.0     x86_64 yes
web-server: 5.0     x86_64 yes
eucalyptus: 5.0     x86_64 yes
3 - Installation of Eucalyptus on nodes
With the rocks roll now installed and enabled on the front-end, we can
now start booting or rebooting nodes and instructing the system to
rebuild them with eucalyptus installed and running, in typical rocks
fashion.
OPTION 1: If a node has never been added to the system before, we run
the following command on the front-end:
insert-ethers
At the prompt, select 'VM Container' and wait for the next screen.  At
this point, the front-end is waiting for nodes to boot.  Boot your
nodes, and wait for their MAC addresses to appear in the insert-ethers
window.
OPTION 2: If a node is already registered in your system and you wish to
re-target it to include Eucalyptus, perform the following command:
[root@rocks-133 ~]# rocks list host
HOST              MEMBERSHIP   CPUS RACK RANK COMMENT
rocks-133:        Frontend
vm-container-0-0: VM Container 4
vm-container-0-1: VM Container 4
vm-container-0-2: VM Container 4
vm-container-0-3: VM Container 4
vm-container-0-4: VM Container 4
vm-container-0-5: VM Container 4
vm-container-0-6: VM Container 4
vm-container-0-7: VM Container 4
rocks set host pxeboot vm-container-0-0 action=install
rocks set host pxeboot vm-container-0-1 action=install
... (repeat this for each node you want to use with Eucalyptus)
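The per-node commands above can also be scripted.  A hedged convenience
(assuming eight vm-containers numbered 0-7, the same loop form used
later in this document):
for i in 0 1 2 3 4 5 6 7 ; do
    rocks set host pxeboot vm-container-0-$i action=install
done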
When done, reboot your vm-containers:
ssh vm-container-0-1 /boot/kickstart/cluster-kickstart-pxe
vm-container-0-1:
Rocks GRUB: Setting boot action to 'reinstall': [  OK  ]
Shutting down kernel logger: [  OK  ]
Shutting down system logger: [  OK  ]
---> While the vm-containers boot, SGE still shows the nodes as active:
[root@rocks-133 ~]# qstat -f
queuename                 qtype used/tot. load_avg arch          states
----------------------------------------------------------------------------
all.q@compute-1-8-0.local
lx26-amd64
----------------------------------------------------------------------------
all.q@compute-1-8-1.local
----------------------------------------------------------------------------
all.q@compute-1-8-2.local
lx26-amd64
----------------------------------------------------------------------------
all.q@compute-1-8-3.local
lx26-amd64
Set up rocks compute nodes:
[root@rocks-133 .euca]# rocks add host vm vm-container-0-0 membership=compute name=compute-1-8-0
added VM on node "vm-container-0-0" slice "0" with vm_name "compute-1-8-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-1 membership=compute name=compute-1-9-0
added VM on node "vm-container-0-1" slice "0" with vm_name "compute-1-9-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-2 membership=compute name=compute-1-10-0
added VM on node "vm-container-0-2" slice "0" with vm_name "compute-1-10-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-3 membership=compute name=compute-1-11-0
added VM on node "vm-container-0-3" slice "0" with vm_name "compute-1-11-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-4 membership=compute name=compute-1-12-0
added VM on node "vm-container-0-4" slice "0" with vm_name "compute-1-12-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-5 membership=compute name=compute-1-13-0
added VM on node "vm-container-0-5" slice "0" with vm_name "compute-1-13-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-6 membership=compute name=compute-1-14-0
added VM on node "vm-container-0-6" slice "0" with vm_name "compute-1-14-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-7 membership=compute name=compute-1-15-0
added VM on node "vm-container-0-7" slice "0" with vm_name "compute-1-15-0"
[root@rocks-133 .euca]# rocks add host vm vm-container-0-0 membership=compute name=compute-1-8-1
added VM on node "vm-container-0-0" slice "0" with vm_name "compute-1-8-1"
[root@rocks-133 .euca]# for i in 8 9 10 11 12 13 14 15; do rocks set host cpus compute-1-$i-0 cpus=4; done
With either OPTION, the Eucalyptus Roll is being installed on your
nodes as they boot.
When the installation process is complete, your
nodes will reboot (for the second time).
At this point, the nodes
are fully configured with the Eucalyptus node controller software.
4 - Installation of Eucalyptus on the Front-end
You may start this step as soon as all nodes appear in the output
of the following command (i.e., you do not have to wait for all of them
to reboot for the second time):
rocks list host
Now that the nodes are enabled, you can install the front-end
software on the Rocks front-end.
This is done with a single command:
kroll eucalyptus | sh
sh: line 1: syntax error near unexpected token `('
sh: line 1: `error - cannot find distribution (/home/install/rocks-dist/lan/x86_64/build)'
--> automountd auto.home : add install
compute-1-8-0:
[root@rocks-133 install]# kroll eucalyptus | sh
Preparing...
########################################### [100%]
########################################### [100%]
Preparing...
########################################### [100%]
1:euca-vde
########################################### [100%]
Preparing...
########################################### [100%]
1:eucalyptus
########################################### [100%]
parsed config file /opt/eucalyptus-1.0/etc/eucalyptus/eucalyptus.conf
Buildfile: cloud-ant.xml
cluster-add:
BUILD SUCCESSFUL
Total time: 3 seconds
Starting Eucalyptus services: done.
When this is complete, the Eucalyptus system is fully installed and is
ready to use once some first-time administrative tasks are complete.
00:00:57 socat UDP4-LISTEN:1978,fork,reuseaddr EXEC:"vde_plug
/opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl"
00:15:31 socat EXEC:"vde_plug
/opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl" UDP4:vm-container-0-3:1978,reuseaddr
00:25:20 socat EXEC:"vde_plug
/opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl" UDP4:vm-container-0-2:1978,reuseaddr
00:25:49 socat EXEC:"vde_plug
/opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl" UDP4:vm-container-0-6:1978,reuseaddr
00:24:02 socat EXEC:"vde_plug
/opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl" UDP4:vm-container-0-7:1978,reuseaddr
00:26:20 socat EXEC:"vde_plug
/opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl" UDP4:vm-container-0-1:1978,reuseaddr
This is where things are starting to look odd.
Eucalyptus 1.0 moved
from using a 'socat' network connector to a 'vde_cryptcab' network
connector, and so there should never be any socat processes running on a
eucalyptus 1.0 installation.
This implies that while the 1.0 software
was successfully deployed on the nodes, you are still running the pre-1.0
software on the front-end.
This may mean that the kroll operation on the front-end did not
complete successfully.
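A quick way to tell which connector is actually running is to grep the
process table on the front-end (a hedged check; the process names come
from the listings above):
ps auxww | egrep 'socat|vde_cryptcab' | grep -v grep
On a correctly deployed 1.0 front-end only the vde_* processes should
appear; any socat lines indicate leftover pre-1.0 software.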
5 - Bootstrapping Eucalyptus
Before using Eucalyptus, we need to install at least one runnable VM
image that users can select.
We have provided a small, simple version
of linux that you can use to test.
Download the image and place it on
the front-end:
[root@rocks-133 hocks]# cd /home/hocks
[root@rocks-133 hocks]# wget http://eucalyptus.cs.ucsb.edu/downloads/euca-ttylinux.tgz
--14:37:52--  http://eucalyptus.cs.ucsb.edu/downloads/euca-ttylinux.tgz
Resolving eucalyptus.cs.ucsb.edu... 128.111.45.35
Connecting to eucalyptus.cs.ucsb.edu|128.111.45.35|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4,274,264 (4.1M) [application/x-gzip]
Saving to: `euca-ttylinux.tgz'
100%[================================================================================>] 4,274,264
14:37:52 (17.4 MB/s) - `euca-ttylinux.tgz' saved [4264]
[root@rocks-133 hocks]# tar zxvf euca-ttylinux.tgz
ttylinux/vmlinuz-2.6.16.33-xen
ttylinux/ttylinux.img
Untar the image, cd to the directory 'ttylinux', and perform the
following command to add and register the image with eucalyptus:
******* .img file
In Linux, a .img file contains raw disk contents, capable of being mounted into the
filesystem just like a removable disk. User tophandycwby explained the process of
creating a .img using the loop device driver, dd, and losetup.
on the vm-container: /xen ---> a disk image with multiple partitions is not supported
**********
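A minimal sketch of that dd/losetup process (the sizes and file names
below are illustrative assumptions, not values from this install):
dd if=/dev/zero of=disk.img bs=1M count=1024   # 1 GB file of zeros
losetup /dev/loop0 disk.img                    # attach it to a loop device
mke2fs /dev/loop0                              # create an ext2 filesystem
mkdir -p /mnt/img
mount /dev/loop0 /mnt/img                      # mount like a removable disk
umount /mnt/img
losetup -d /dev/loop0                          # detach when finished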
[root@rocks-133 hocks]# cd ttylinux
[root@rocks-133 ttylinux]# /opt/eucalyptus-1.0/usr/sbin/euca add_image --disk-image ttylinux.img \
--kernel-image vmlinuz-2.6.16.33-xen \
--image-name ttylinux
parsed config file /etc/default/eucalyptus
exporting EUCALYPTUS=/opt/eucalyptus-1.0
copying image files...
added image ttylinux
Buildfile: cloud-ant.xml
register-image:
emi-0D05022C
BUILD SUCCESSFUL
Total time: 3 seconds
Add Compute Image
Note that the ttylinux image that we've just installed is a typical
xen image, which includes a disk image (ttylinux.img) and a kernel
(vmlinuz-2.6.16.33-xen).
At this point, you may add additional xen
images that you already have installed or available.
The 'euca'
command supports other image addition features, including the
installation of a ramdisk if your image requires one (see the 'euca
--man' output for more information).
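As a hedged illustration (the image names below are placeholders, not
files from this install), an image that needs a ramdisk is registered
with the same --ramdisk-image flag used for the compute image later in
this document:
/opt/eucalyptus-1.0/usr/sbin/euca add_image --disk-image myimage.img \
    --kernel-image vmlinuz-myimage \
    --ramdisk-image initrd-myimage \
    --image-name myimage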
Single Disk Example
Create a new XML node file that will replace the current partition.xml XML node file:
# cd /home/install/site-profiles/5.0/nodes/
# cp skeleton.xml replace-partition.xml
Inside replace-partition.xml, add the following section right after the
changelog section:
echo "clearpart --all --initlabel --drives=hda
part / --size 120000 --ondisk hda
" > /tmp/user_partition_info
The above example uses a bash script to populate /tmp/user_partition_info,
setting up a single root partition of 120000 MB on hda.
Then apply this configuration to the distribution by executing:
# cd /home/install
# rocks-dist dist
To reformat compute node compute-1-8-0 to your specification above, you'll need to first remove the partition
info for compute-1-8-0 from the database:
# rocks remove host partition compute-1-8-0
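Then reinstall the node so the new partitioning takes effect.  A hedged
sketch, reusing the pxeboot/kickstart pattern applied to the
vm-containers elsewhere in this document (adjust if compute-1-8-0 is
running as a VM):
# rocks set host pxeboot compute-1-8-0 action=install
# ssh compute-1-8-0 /boot/kickstart/cluster-kickstart-pxe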
Create compute-small roll:
[root@rocks-133 install]# rocks list roll
VERSION ARCH
x86_64 yes
x86_64 yes
x86_64 yes
x86_64 yes
x86_64 yes
x86_64 yes
x86_64 yes
web-server: 5.0
x86_64 yes
eucalyptus: 5.0
x86_64 yes
rocks disable roll area51
rocks disable roll bio
rocks disable roll web-server
rocks-dist dist
Create compute-node image file:
/etc/xen/rocks/compute-1-8-0
script to create the compute node image:
cd /state/partition1/xen/disks/
mkdir /mnt/rocksimg
mkdir /mnt/ec2img
# mount partition 1 of the existing Xen disk image
lomount -diskimage compute-1-8-0.hda -partition 1 /mnt/rocksimg/
# create a sparse file of the desired size for the new single-partition image
dd if=/dev/zero of=compute-1-8-0.img bs=1 count=1 seek=
mke2fs compute-1-8-0.img
mount -o loop compute-1-8-0.img /mnt/ec2img/
# copy the filesystem contents over, preserving permissions
cd /mnt/rocksimg/
tar cf - * | (cd ../ec2img/; tar xvfBp -)
cd ../ec2img/
vi etc/fstab
# replace the LABEL=/ root entry (the new image has no filesystem label)
cd /state/partition1/xen/disks/
file compute-1-8-0.img
compute-1-8-0.img: Linux rev 1.0 ext2 filesystem data (large files)
Copy images:
scp vm-container-0-0:/state/partition1/xen/disks/compute-1-8-0.img /home/hocks/complinux/
scp vm-container-0-0:/state/partition1/xen/kernels/initrd-compute-1-8-0 /home/hocks/complinux/
scp vm-container-0-0:/state/partition1/xen/kernels/vmlinuz-compute-1-8-0 /home/hocks/complinux/
Add to Eucalyptus
[root@rocks-133 complinux]# /opt/eucalyptus-1.1/usr/sbin/euca add_image --disk-image compute-1-8-0.img \
--kernel-image vmlinuz-compute-1-8-0 --ramdisk-image initrd-compute-1-8-0 --image-name compute
parsed config file /etc/default/eucalyptus
exporting EUCALYPTUS=/opt/eucalyptus-1.1
copying image files...
Buildfile: cloud-ant.xml
register-image:
register-image:
log4j:WARN Continuable parsing error 34 and column 23
log4j:WARN The content of element type "log4j:configuration" must match
"(renderer*,appender*,plugin*,(category|logger)*,root?,(categoryFactory|loggerFactory)?)".
Got response from http://localhost:8773/services/Eucalyptus:
emi-0B4C020D
BUILD SUCCESSFUL
Total time: 4 seconds
added image compute
start instance
[root@rocks-133 complinux]# ec2-run-instances emi-0B4C020D -k mykey -n 1
RESERVATION  r-083C023B
i-4BA30834  emi-0B4C020D  0.0.0.0  0.0.0.0  pending  mykey  2008-08-07T00:00:43+0000  vmlinuz-compute-1-8-0  initrd-compute-1-8-0
[root@rocks-133 complinux]# ec2-describe-instances
RESERVATION  r-083C023B
i-4BA30834  emi-0B4C020D  0.0.0.0  192.168.3.2  pending  mykey  2008-08-07T00:00:43+0000  vmlinuz-compute-1-8-0  initrd-compute-1-8-0
Once an image is registered, we need to log in to the Eucalyptus
administrative web-site to allow users to start signing up to use the
system.  Direct your browser at the following location:
https://rocks-133.sdsc.edu:8443/
Your browser will flag the Web site as one using an untrusted
(self-signed) certificate.
Accept it.
You will be presented with a login screen, at which point you can log
in using the username 'admin' and password 'admin'.
The first time
you log in, the system will require you to change your password and
enter a valid administrator email address.
This address will be used
whenever a new user requests an account on your Eucalyptus system.
6 - Using Eucalyptus for the first time
Now that you've bootstrapped eucalyptus, the system is ready for users
to sign up and start using your cloud (see the section on User Signup
below for more information on that).
As administrator, you may
interact with the system in precisely the same manner as your users,
with the exception that your keys will allow you to inspect and
terminate all instances running on the system, regardless of which
user owns them.
The first step is to generate your eucalyptus keys.
Once you have logged in as user 'admin' to the Eucalyptus web-page:
https://rocks-133.sdsc.edu:8443/
*** after installing a new version, log in as admin/admin and change the password
You will see a button entitled 'Generate Certificate'.
Click this to
download your admin key-pair.
Copy the zip-file to the front-end, then unzip the keys using the
following commands:
pumuckl/mkcd>scp euca2-admin-x509.zip hocks@rocks-133:/home/hocks/euca2-admin-x509.zip
The zip-file contains two files with the .pem extension; these are your
public and private keys.
Place these keys in a secure location:
[root@rocks-133 hocks]# mkdir .euca
[root@rocks-133 hocks]# cd .euca
[root@rocks-133 .euca]# unzip ../euca2-admin-x509.zip
Archive:  ../euca2-admin-x509.zip
NOTE: Remember to set the EC2_CERT, EC2_PRIVATE_KEY, and EC2_URL environment variables correctly.
inflating: euca2-admin-f5faf678-pk.pem
inflating: euca2-admin-f5faf678-cert.pem
/usr/local/.euca
75170 -rw------- 1 root root 1285 May 28 14:47 euca2-admin-f5faf678-cert.pem
75171 -rw------- 1 root root 1679 May 28 14:47 euca2-admin-f5faf678-pk.pem
Next, we need to download the EC2 command-line tools from Amazon:
[root@rocks-133 local]# wget /ec2-downloads/ec2-api-tools.zip
--14:55:47--  /ec2-downloads/ec2-api-tools.zip
Resolving ... 207.171.191.241
Connecting to |207.171.191.241|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7,510,750 (7.2M) [application/x-zip-compressed]
Saving to: `ec2-api-tools.zip'
100%[================================================================================>] 7,510,750
14:55:52 (1.84 MB/s) - `ec2-api-tools.zip' saved [0750]
[root@rocks-133 local]# unzip ec2-api-tools.zip
ec2-api-tools.zip
creating: ec2-api-tools-1.3-19403/
creating: ec2-api-tools-1.3-19403/bin/
Once you have unzipped the tools, we must set up your environment by
setting the following environment variables:
/etc/profile.d/euca.sh
export EC2_HOME=/usr/local/ec2-api-tools-1.3-19403
export PATH=$PATH:$EC2_HOME/bin
export EC2_URL=http://rocks-133.sdsc.edu:8773/services/Eucalyptus
export EC2_PRIVATE_KEY=/usr/local/.euca/euca2-admin-*-pk.pem
export EC2_CERT=/usr/local/.euca/euca2-admin-*-cert.pem
[root@rocks-133 profile.d]# . ./euca.sh
[root@rocks-133 profile.d]# env|grep EC
EC2_HOME=/usr/local/ec2-api-tools-1.3-19403
EC2_URL=http://rocks-133.sdsc.edu:8773/services/Eucalyptus
EC2_PRIVATE_KEY=/usr/local/.euca/euca2-admin-f5faf678-pk.pem
EC2_CERT=/usr/local/.euca/euca2-admin-f5faf678-cert.pem
[root@rocks-133 profile.d]# export PATH=$PATH:$EC2_HOME/bin
Now, we are ready to start using the tools.
To test if your cloud is up
and running, execute the following EC2 command:
[root@rocks-133 bin]# ec2-describe-availability-zones
AVAILABILITYZONE | rocks-133 | UP | 031/032 small | host=rocks-133.sdsc.edu
In the output of the above command, you should see your front-end
hostname displayed along with the status of 'UP' and a short
description of how many 'small' instance types your cloud can execute
(002/002 means 2 available out of 2 total).
EC2 log and daemon
# ssh vc0-0
[root@vm-container-0-0 ~]# ps aux | grep euca
... httpd -f /opt/eucalyptus-1.0/etc/eucalyptus/httpd-nc.conf
... vde_switch -n 255 ...
... vde_cryptcab...
... vde_plug2tap
These processes run in the vm-containers:
/opt/eucalyptus-1.0/axis2c-bin-1.4.0/bin/axis2_http_server -p 8774 -f /opt/eucalyptus-1.0/var/log/eucalyptus/axis2c-8774.log
run-time logs are in
/opt/eucalyptus-1.0/var/log/eucalyptus/
/opt/eucalyptus-1.0/var/eucalyptus/log/debug.log
Processes running
rocks-133:
[root@rocks-133 ~]# cd /opt/eucalyptus-1.0/var/log/eucalyptus/
vde_switch -n 255 -p /opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.switch.pid
-F --daemon --sock /opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl
--mgmt /opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.mgmt -t clceuca0
vde_plug /opt/eucalyptus-1.0/var/run/eucalyptus/net/clceuca0.ctl
/opt/eucalyptus-1.0/axis2c-bin-1.4.0/bin/axis2_http_server -p 8774
-f /opt/eucalyptus-1.0/var/log/eucalyptus/axis2c-8774.log
7 - EC2 tools quick-start
Now we can begin running VM instances on your Eucalyptus cloud.  Using
the EC2 command-line tools, we can look at installed images, execute
instances of those images, describe the running instances and
terminate them when we're finished using them.
The following EC2
commands are used to control your instances:
[root@rocks-133 bin]# ec2-describe-images
emi-0D05022C  eucalyptus  vmlinuz-2.6.16.33-xen  no-ramdisk
[root@rocks-133 .euca]# ec2-describe-instances
(will be empty until you start an instance)
[root@rocks-133 bin]# ec2-add-keypair cloud > cloud.private
KEYPAIR cloud 05:6b:1a:da:f8:43:0e:1c:fa:5a:6f:f5:8c:5c:5e:3c:fa:6b:09:bb
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAh1uAYNw7wdg9zCWs4kyszc7kHyYPKnYtv4yjHjqrdq4lZbWx
nlg3/ajHJnyS3oq50FUg9TV9CfS2IouHtwGu4AhL68AA1X2F8SjcOjYWlU6l0JKb
Qwt8PY9s9F7Ia2kHDSPafNWSGhTwl2ywNeltOJD1GkdijSkpu0tq0NflLKuzGI/m
PjZdajmUkw53m4A+TZQ1z5JS+VOWLGFQWvsqYo2pZjiPDjFsAA0yAL9zwRuWP8Pm
A0q2RfJ0OvissF9CtGiBT6YT+2Vz5vLogf+v3tYFvZQbhA4YLDgOCUH56zypOgvm
qIu/ZtlKs8kp7/gBvE8v2HbemYtOX***UU0MLuwIDAQABAoIBAE+LlGxXwL5wSBDa
6ziqersQJLuxcCQyAzyYd5viOrWqLcwR9OnrSixFrZOOjfk+aWhnPtEbt/nL1+WB
PiZsVMrP1V4cHeFYvQg2TQXgl3DzITrrjVbfPwomaY9KzFljBYPRWCsYj53IRIOY
mg10un0NoxzoaqAuWpn4/jLxTXrsKaot/Ra94RE0IbQK+0qWTAreeq0yBQl9V4b1
T59CUMIbdEaTfmBoMCGN4Wrezqwp3MPsh0bVLZwbG0jgM6Ysj6sYAUJh6RD6dlnv
ARcyWipw/o+gUtMJCWnGeIqMmk9BmGxqyKaAikQAtwD0f51OakaAErM/0SZUKtsB
2lCuk/ECgYEAvLiP6AOKDrLqXsyYUM0VPDhAedRBsaVD6bLc1/8oMD6ic7Dqfz2f
SKGS2a0R60thIMXZiQrwUH+WoQ4s3KffhJhX9XjWHhw0YlIin7LVqAnrCjxIKMwQ
yXLAR2lIY85zKdBAGjVHB2nKWjXgP9xyl4xKWQ/M9dDF6dgu4fKh4GUCgYEAt5zE
v0ci49pN3+iztOTfw5TCNZid3bL2sBNINIeTHP538w11RbjWaXFXgjX6VbKLU3IC
pKDZkFHptuLAsgosMRPI49Qh67kDy1IRw2CaUJjZNxWEdRUnYS8sK+raCMlJ+MXc
g+WnfK5TV8z4xSkVZ0iOm9EWrVVHSt0CG7zeqZ8CgYBD53C4PdXGFjBobdt8b15t
rZvdejctEVcPVrFJ8uBmA5N2ZzjpEaYnfyOUuUZSUGwhW687NTlk7ZOoXa5csval
Ah/cDl+Us/dRTVZx+eoQrYjpxOj97Pc5VNXEnChU6Src57a492SYUUNjFDGdKNf+
mZcC1sGbzUP5MTUlTaVbVQKBgCHL4k8O6fYktZbUP1e5lRJr7D9vQweOrGeGdRDu
L37zu+JqBL77ocOw0Bmwk854WbrXTnM9BC7TVQCLxko/Ixk5eg2tezznRjKDfa+H
tX/GUp0YAdSHO0NhKnE+/jkFy+7VhJxmhiil8cNEgDnSMRVcvpshplnrS38VJREz
94wjAoGAUyZIUxMS3eNYwiidgFr+RBcqbRdQXPX5tWkefh2Q/Ovckplpwq4k15tZ
dw5c6c2xD0kyx1xl1U0f0ssa+bnPfjuu3OvFV6L0XNK+KB1D3eqTghINyt22e6RI
MbRy46d/F9LIzI1WI/Wl2V97uLktunm6/Qb7osqFBFnKHEiEONs=
-----END RSA PRIVATE KEY-----
[root@rocks-133 .euca]# chmod 0600 cloud.private
$ ec2-run-instances
[root@rocks-133 .euca]# ec2-run-instances emi-0D05022C -k cloud -n 1
$ ec2-describe-instances
(should now show the instance)
-bash-3.1$ ec2-describe-instances
RESERVATION
i-cmrvfrux  emi-0D05022C  0.0.0.0  192.168.3.2  2008-06-20T18:03:28+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
RESERVATION  r-083D023C
i-604F09A1  emi-0D05022C  0.0.0.0  192.168.3.4  running  eva  T21:52:55+0000  vmlinuz-2.6.16.33-xen
Instance files on the vm-container:
/usr/local/eucalyptus/instances/admin/i-604F09A1/
Log files: /opt/eucalyptus-1.1/var/log/eucalyptus/nc.log
Once the instance is shown as 'Running', it will also show two IP addresses
assigned to it.
You may log into it with the SSH key that you created:
$ ssh -i keyname.private root@one-of-the-ip-addresses
ssh -i /home/hocks/.euca/eva root@192.168.3.2
ssh: connect to host 192.168.3.2 port 22: No route to host
ssh -i ~root/mykey.private root@192.168.3.2
Alternatively, you can log into the 'ttylinux' instance that we provided
with login 'root' and password 'root' from the vm-container.
[root@vm-container-0-0 ~]# xm list
Name            ID Mem(MiB) VCPUs State Time(s)
compute-1-8-0
i-5F9D09A5
i-5F9E09A6
i-5F9F09A7
[root@vm-container-0-0 ~]# /usr/sbin/xm console i-5F9D09A5
$ ec2-terminate-instances
$ ec2-delete-keypair myroot
8 - User Sign-up
Instructions in sections 6 and 7 apply to regular users, as well.  Users
interested in joining the cloud should be directed to the front-end:
https://your.front.end.hostname:8443/
As soon as the administrator logs in for the first time, the login box
will have an "Apply for account" link underneath it.
After a user fills
out the form, an email is sent to the administrator, containing two URLs,
one for accepting and one for rejecting the user.
Note that there is no authentication performed on the people who fill
out the form.
It is up to the administrator to perform this authentication!
The only "guarantee" the administrator has is that the account will not
be active unless the person who requested the account (and, hence, knows
the password) can read email at the submitted address.
Therefore, if the
administrator is willing to give the account to the person behind the
email address, it is safe to approve the account.
Otherwise, the
administrator may use the additional information submitted (such as
the telephone number, project PI, etc.) to make the decision.
Accepting or rejecting a signup request causes an email message to be
sent to the user who made the request.
In the case of an acceptance
notification, the user will see a link for activating the account.
Before activating the account, the user will have to log in with the
username and password that they chose at signup.
In Rocks V parlance -
A vm-container is a physical machine running Dom0. It can host virtual machines.
For Rocks V, the vm-container appliance is the only thing that can host Xen DomU
a compute appliance is exactly that. It can be installed as a physical machine or
as a virtual machine.
That is it can run as DomU guest on vm-container physical
machine OR it can run on the raw hardware (e.g. what is done in all previous
versions of Rocks).
a frontend can only run on physical hardware and cannot host virtual machines
(that will change in 5.1).
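To see which appliance types a given frontend knows about, the standard
Rocks command below can be used (shown as an illustration; the output
depends on the installed rolls):
# rocks list appliance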
Update Euca
Version 1.0
rocks-131: /ibrixfs/hocks/Euca
wget http://eucalyptus.cs.ucsb.edu/downloads/2
mv 2 eucalyptus-5.0-0.x86_64.disk1.iso
rocks-133:
rocks add roll clean=1 /home/hocks/eucalyptus-5.0-0.x86_64.disk1.iso
rocks enable roll eucalyptus
cd /home/install && rocks-dist dist
for i in 0 1 2 3 4 5 6 7 ; do
rocks set host pxeboot vm-container-0-$i action=install
done
[root@rocks-133 install]# ssh-agent $SHELL
[root@rocks-133 install]# ssh-add
Enter passphrase for /root/.ssh/id_rsa:
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
for i in 0 1 2 3 4 5 6 7 ; do ssh vc0-$i /boot/kickstart/cluster-kickstart-pxe ; done
Warning: Permanently added 'vc0-2' (RSA) to the list of known hosts.
ssh: connect to host vc0-3 port 22: No route to host
Warning: Permanently added 'vc0-4' (RSA) to the list of known hosts.
Warning: Permanently added 'vc0-5' (RSA) to the list of known hosts.
Warning: Permanently added 'vc0-6' (RSA) to the list of known hosts.
Warning: Permanently added 'vc0-7' (RSA) to the list of known hosts.
AVAILABILITYZONE | rocks-133 | UP | 028/028 small | host=rocks-133.sdsc.edu
AVAILABILITYZONE | META | Restricts RunInstances to allocating from ANY single cluster.
AVAILABILITYZONE | META | Greedy allocation starting with the emptiest cluster.
Version 1.1
[root@rocks-133 hocks]# wget http://eucalyptus.cs.ucsb.edu/downloads/5
downloaded as: eucalyptus-5.0-1.x86_64.disk1.iso
[root@rocks-133 hocks]# rocks add roll clean=1 /home/hocks/eucalyptus-5.0-1.x86_64.disk1.iso
Cleaning eucalyptus from the Rolls Directory
Copying eucalyptus to Rolls.....164012 blocks
[root@rocks-133 hocks]# rocks enable roll eucalyptus
[root@rocks-133 hocks]# cd /home/install && rocks-dist dist
Cleaning distribution
Resolving versions (base files)
Creating repository
Linking boot stages from lan
Building Roll Links
[root@rocks-133 install]# for i in 0 1 2 3 4 5 6 7 ; do
> rocks set host pxeboot vm-container-0-$i action=install
> done
[root@rocks-133 install]# for i in 0 1 2 3 4 5 6 7 ; do ssh vm-container-0-$i /boot/kickstart/cluster-kickstart-pxe ; done
[root@rocks-133 install]# kroll eucalyptus > build.sh
[root@rocks-133 install]# sh ./build.sh
Preparing...
########################################### [100%]
########################################### [100%]
Preparing...
########################################### [100%]
1:euca-vde
########################################### [100%]
Preparing...
########################################### [100%]
1:euca-httpd
########################################### [100%]
Preparing...
########################################### [100%]
installing package eucalyptus-1.1-1 needs 6MB on the / filesystem
/root/RCS/install.log,v
/root/install.log
revision 1.210
RCS file: /root/RCS/install.log,v
/root/RCS/install.log,v
/root/install.log
revision 1.1 (locked)
/root/RCS/install.log,v
/root/install.log
revision 1.210
./build.sh: line 66: /opt/eucalyptus-1.1/usr/share/euca_conf: No such file or directory
JAVA_HOME=/usr/java/jdk1.5.0_10/jre
VERSION=1.5.0_10
VM_VERSION=1.5.0_10-b03
SPEC_VERSION=1.5
CLASS_VERSION=49.0
***************************************************************
***********************...WARNING...***************************
***************************************************************
1. Target installs unlimited strength crypto policy.
2. Install involves modifying /usr/java/jdk1.5.0_10/jre/jre/lib/security.
BUILD SUCCESSFUL
Total time: 8 seconds
./build.sh: line 73: /opt/eucalyptus-1.1/usr/share/euca_conf: No such file or directory
./build.sh: line 77: /opt/eucalyptus-1.1/usr/share/euca_conf: No such file or directory
./build.sh: line 81: /opt/eucalyptus-1.1/usr/share/euca_conf: No such file or directory
./build.sh: line 89: /opt/eucalyptus-1.1/usr/share/euca_conf: No such file or directory
./build.sh: line 90: /opt/eucalyptus-1.1/usr/sbin/euca: No such file or directory
Generating public/private rsa1 key pair.
open /opt/eucalyptus-1.1/var/eucalyptus/keys/vdekey failed: No such file or directory.
Saving the key failed: /opt/eucalyptus-1.1/var/eucalyptus/keys/vdekey.
./build.sh: line 95: /opt/eucalyptus-1.1/var/eucalyptus/keys/pw: No such file or directory
Starting Eucalyptus services: done.
/root/RCS/install.log,v
/root/install.log
revision 1.210
/root/RCS/install.log,v
/root/install.log
revision 1.210 (locked)
/root/RCS/install.log,v
/root/install.log
revision 1.211
/root/RCS/install.log,v
/root/install.log
revision 1.211
/root/RCS/install.log,v
/root/install.log
revision 1.211 (locked)
/root/RCS/install.log,v
/root/install.log
/root/RCS/install.log,v
/root/install.log
revision 1.213
Manual install:
[root@rocks-133 install]# rpm -Uvh --force --nodeps
/home/install/rocks-dist/lan/x86_64/RedHat/RPMS/eucalyptus-1.1-1.x86_64.rpm
Preparing...
########################################### [100%]
1:eucalyptus
########################################### [100%]
Cannot find Eucalyptus webservices or vde!
---> change /etc/default/eucalyptus
export EUCALYPTUS="/opt/eucalyptus-1.1/"
copy to vm:
for i in vm-container-0-0 vm-container-0-1 vm-container-0-2 vm-container-0-3 vm-container-0-4 \
vm-container-0-5 vm-container-0-6 vm-container-0-7 ; do scp /etc/default/eucalyptus \
$i:/etc/default/ ; done
[root@rocks-133 install]# ls -lia /opt/euca*
/opt/eucalyptus-1.1:
axis2c-bin-1.4.0
Install manually:
cluster-fork --nodes "vm-container-0-0 vm-container-0-1 vm-container-0-2 vm-container-0-3 vm-container-0-4 vm-container-0-5 vm-container-0-6 vm-container-0-7" rpm -Uvh --force --nodeps /home/install/rocks-dist/lan/x86_64/RedHat/RPMS/euca-httpd-1.0-1.x86_64.rpm
Start Euca:
cluster-fork --nodes "vm-container-0-0 vm-container-0-1 vm-container-0-2 vm-container-0-3 vm-container-0-4 vm-container-0-5 vm-container-0-6 vm-container-0-7" /etc/init.d/eucalyptus start
Start image
-bash-3.1$ ec2-describe-availability-zones
AVAILABILITYZONE | rocks-133 | UP | 003/004 small | host=rocks-133.sdsc.edu
AVAILABILITYZONE | META | Restricts RunInstances to allocating from ANY single cluster.
AVAILABILITYZONE | META | Greedy allocation starting with the emptiest cluster.
-bash-3.1$ ec2-describe-images
emi-0B4C020D  compute  eucalyptus  vmlinuz-compute
emi-0D05022C  eucalyptus  vmlinuz-2.6.16.33-xen
-bash-3.1$ ec2-describe-keypairs
KEYPAIR cloud                          05:6b:1a:da:f8:43:0e:1c:fa:5a:6f:f5:8c:5c:5e:3c:fa:6b:09:bb
KEYPAIR euca2-admin-f5faf678-cert.pe   c4:f5:b3:07:d0:3e:e1:fb:96:c2:9a:02:19:3b:fb:51:24:28:5c:39
KEYPAIR euca2-admin-f5faf678-cert.pem  0c:d1:c6:74:2c:67:64:ce:f4:0b:e1:6e:8c:ab:54:19:04:fe:4f:09
KEYPAIR mykey                          33:af:c6:30:70:33:6d:06:6f:fa:71:ab:58:08:de:cc:6d:d2:9d:0c
KEYPAIR eva                            22:3b:57:b2:44:d1:2d:10:2b:68:09:13:6a:cc:e7:9f:36:69:4b:4c
-bash-3.1$ ec2-add-keypair mykey
KEYPAIR mykey 33:af:c6:30:70:33:6d:06:6f:fa:71:ab:58:08:de:cc:6d:d2:9d:0c
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAiG6twZ0NP1GkmqubvaXyiKxMZI+S4ycFG93DfPW+2COca08m
SfyGkURoeqTeRxTC9W45HdUYA2gU96zMzEFbeE2yCNc3LEeGvUsDn4HIcv0lPrWq
DBWx2D1ygcysmzTuVzuotDBvkjsaigddREoAI2baqHKxfEgWcFoxf3G4MYe4K0sN
X2g3BxfnjqyfseBHTyjP5vOyin9i3L4vOEWnuMRouocV90V35Qh8AJazDXLDAtLY
XozfamG65NIyKQdQRPiLGVobqEx7hcNIoC1ENCC1PdHell0SLYnTTBRhhMzOmjXt
CSgfC9n+tqT9WZbGjeEdLxHbMA3+2RWgjOmbywIDAQABAoIBAExILei6SiTkHjfI
yaxw87mNNK1pRUSylX2uMdZVhN5Oku/A8nSduBPS/uPL+OgfaJ5XgaH3epS1Bjwx
JtTxmhYawveEdbnRSDngjmcJ5qy8c62rXyegna59NN/0M3IYV0b4+Wu+RTOqzjzs
vy4mfgtNP+a9MhV+LOWm2FQcnlM6apgITXUDiZRkCSq+v7dYCNyKWTEOW3pCEPdO
GR43kDwN2IuhpiI1XFG9SCuj3ReqOz+1mEJuBu/KcVsfl3SK+wPbM7wafVcumZlu
GQUlg4gxk9ZxpAnEBahglwKzdjkIvUSZ53RUnI83IVPn871NjoTfSIca5R/yEBUu
jwvIZJkCgYEA8Dm8X1cTjTU3ZngY7MvhpvupEEu17itSwLvYjG9RMKSe0G922Tyo
gInis8FwwLFK8b2lqHH6L3MERw++YfbID0FBABxIMsbMF8W2oQdBSwlQ4GrykpZT
XAwkyIuA5UJZE1hbHxO7nI1CXS4T37XuxWtX454OirxBkRSmE5LuPTcCgYEAkWQl
1u2kmo/r9wTYov1qH5smOj+h+ajAJB0Pon1GN0vTy3b4nHFQ7isXZ6zasugN8pwA
8WkITLSvVMnCiYroU4BhJVejntP9iYn9yN1SOG/9XGvT210dyYWOUk3fDM9qXFav
zaSFkEB1RLhIEbgE86fAsYo57L6dI1x5hovzgA0CgYBtSlchKQSEIcnqnIj6cmdf
CO+Jsmg6ywsDFHMYsSxG7R4zxgJUIvymPhUdoswhXoeyI43SbMhd/f3cNpVvDE9I
YW+pFTTjpP4FcrX73Mkf0kUxVHa3qhySrBOwCYfCxcSwwGn0YY3hU4L10ZjJFoRi
3MtHiwkipTNPRg2oqhgpUQKBgAHsZD/CpxHQY5vB+ae2WIgQgmH044ys+dXAeKt6
osoqe6POcB2JtjtgYI/jjoUdYsnZ3H1VGWICZetmO+eb6dTo9uAKl8SLI2iFZdEZ
dHQAr0Zgus3FHpbC6I4YL6I4wDx2fR7oVUgCQkrlcTaiy5X5myf+HyQNpYCJQaZR
BailAoGAKz7aMKRhlQ0EUqX2+HBwRxjGk76vYYsCYY9ywWQiPhDIazxD1cEdE8zS
NGcVvJEQv5ECJlDFnhj11O0072G4fkRqBnqtcINALpthVe26isCqgaakpK7zdvgM
3N8gqp6BUgAP2RLxfoW3v5G0kk2lznCnDmB0wvSxZ/a3f3DnP64=
-----END RSA PRIVATE KEY-----
-bash-3.1$ ec2-run-instances emi-0D05022C -k mykey -n 1
RESERVATION
i-qxwnqdiz  emi-0D05022C  0.0.0.0  0.0.0.0  Booting  mykey  T18:14:28+0000
-bash-3.1$ ec2-describe-instances
RESERVATION
i-qxwnqdiz  emi-0D05022C  0.0.0.0  192.168.3.2  T18:14:26+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
Compute Image:
-bash-3.1$ ec2-run-instances emi-0B4C020D -k eva -n 1
RESERVATION  r-083B023A
i-5F9D09A5  emi-0B4C020D  0.0.0.0  0.0.0.0  pending  eva  T20:20:43+0000  vmlinuz-compute
-bash-3.1$ ec2-run-instances emi-0B4C020D -k eva -n 1
RESERVATION  r-083C023B
i-4BA30834  emi-0B4C020D  0.0.0.0  0.0.0.0  pending  eva  T20:24:40+0000  vmlinuz-compute
-bash-3.1$ ec2-describe-instances
RESERVATION  r-083B023A
i-5F9D09A5  emi-0B4C020D  0.0.0.0  192.168.3.2  pending  eva  T20:20:43+0000  vmlinuz-compute
RESERVATION  r-083C023B
i-4BA30834  emi-0B4C020D  0.0.0.0  0.0.0.0  pending  eva  T20:24:41+0000  vmlinuz-compute
List vm-container instance
/usr/sbin/xm
cluster-fork --nodes "vc0-0 vc0-1 vc0-2 vc0-3 vc0-4 vc0-5 vc0-6 vc0-7" xm list
[root@rocks-133 usr]# cluster-fork --nodes "vc0-0 vc0-1 vc0-2 vc0-3 vc0-4 vc0-5 vc0-6 vc0-7" xm list
Name            ID Mem(MiB) VCPUs State Time(s)
compute-1-8-0
i-cmrvfrux
Name            ID Mem(MiB) VCPUs State Time(s)
Name            ID Mem(MiB) VCPUs State Time(s)
vc0-3: down
Name            ID Mem(MiB) VCPUs State Time(s)
Name            ID Mem(MiB) VCPUs State Time(s)
Name            ID Mem(MiB) VCPUs State Time(s)
Name            ID Mem(MiB) VCPUs State Time(s)
login to instance console
/usr/sbin/xm console i-qxwnqdiz
Linux version 2.6.16.33-xen (root@manatee.cs.ucsb.edu) (gcc version 4.1.1 (Red Hat 4.1.1-51)) #1 SMP Thu Sep 20 06:35:11 PDT 2007
BIOS-provided physical RAM map:
Xen: 0000 - 0000 (usable)
0MB HIGHMEM available.
136MB LOWMEM available.
NX (Execute Disable) protection: active
ACPI in unprivileged domain disabled
Built 1 zonelists
Kernel command line: root=/dev/sda1
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Initializing CPU#0
PID hash table entries: 1024 (order: 10, 16384 bytes)
Xen reported:
MHz processor.
Freeing unused kernel memory: 380k freed
ttylinux 6.0
Mounting proc:
Mounting sysfs:
Setting console loglevel:
Setting system clock: hwclock: cannot access RTC: No such file or directory
Starting fsck for root filesystem.
e2fsck 1.39 (29-May-2006)
/dev/sda1: clean, 429/1280 files,
Checking root filesystem:
iptables v1.3.7: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
Starting syslogd:
Starting klogd:
Starting DHCP for interface eth0:
Starting DHCP for interface eth1:
Starting SSH server:
Starting inetd:
ttylinux 6.0
Linux 2.6.16.33-xen on i686 arch
tiny.local login:
stop instance
[root@rocks-133 bin]# ec2-terminate-instances i-qxwnqdiz
i-qxwnqdiz  Running  shuttingDown
[root@rocks-133 bin]# ec2-describe-instances
-bash-3.1$ ec2-describe-instances
RESERVATION  r-083D023C
i-604F09A1  emi-0D05022C  0.0.0.0  192.168.3.4  shutting-down  T21:52:55+0000  vmlinuz-2.6.16.33-xen
[root@rocks-133 bin]# ec2-describe-availability-zones
AVAILABILITYZONE | rocks-133 | UP | 004/004 small | host=rocks-133.sdsc.edu
list vm containers
cluster-fork --nodes "vc0-0 vc0-1 vc0-2 vc0-3 vc0-4 vc0-5 vc0-6 vc0-7" date
login to instance
[root@rocks-133 ttylinux]# ssh -i ~root/mykey.private root@192.168.3.3
ssh: connect to host 192.168.3.3 port 22: No route to host
[root@vm-container-0-0 ~]# brctl show
bridge name  bridge id          STP enabled  interfaces
             8000.feffffffffff
             8000.feffffffffff
[root@vm-container-0-0 ~]# xm list
Name            ID Mem(MiB) VCPUs State Time(s)
compute-1-8-0
i-5F9D09A5
i-5F9E09A6
i-5F9F09A7
[root@vm-container-0-0 ~]# xm console i-5F9D09A5
in the instance:
Linux 2.6.16.33-xen on i686 arch
tiny.local login:
root@tiny ~ # ifconfig -a
Link encap:Ethernet  HWaddr AA:CC:00:02:33:5C
UP BROADCAST RUNNING MULTICAST
RX packets:60 errors:0 dropped:0 overruns:0 frame:0
TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3 KiB)
TX bytes:3 KiB)
vm-container-0-0 :
Link encap:Ethernet  HWaddr 3A:5D:B0:23:75:63
inet6 addr: fe80::385d:b0ff:fe23:7563/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:821 errors:0 dropped:0 overruns:0 frame:0
TX packets:295 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:5 KiB)
TX bytes:7 KiB)
rocks-133:
Link encap:Ethernet  HWaddr 72:44:0F:D5:AD:AB
inet addr:192.168.3.1  Bcast:192.168.3.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST
RX packets:2362 errors:0 dropped:0 overruns:0 frame:0
TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
[root@vm-container-0-0 ~]# ping6 -c4 -I eucadev3 fe80::c0a8:302
PING fe80::c0a8:302(fe80::c0a8:302) from fe80::385d:b0ff:fe23:7563 eucadev3: 56 data bytes
From fe80::385d:b0ff:fe23:7563 icmp_seq=1 Destination unreachable: Address unreachable
From fe80::385d:b0ff:fe23:7563 icmp_seq=2 Destination unreachable: Address unreachable
From fe80::385d:b0ff:fe23:7563 icmp_seq=3 Destination unreachable: Address unreachable
[root@rocks-133 ttylinux]# ssh fe80::c0a8:302%eucadev3
OpenSSH_4.3p2, OpenSSL 0.9.8b 04 May 2006
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to fe80::c0a8:302%eucadev3 [fe80::c0a8:302%eucadev3] port 22.
[root@rocks-133 ttylinux]# ping6
-c4 -I eucadev3 fe80::c0a8:302
PING fe80::c0a8:302(fe80::c0a8:302) from fe80::7044:fff:fed5:adab eucadev3: 56 data bytes
From fe80::7044:fff:fed5:adab icmp_seq=0 Destination unreachable: Address unreachable
From fe80::7044:fff:fed5:adab icmp_seq=1 Destination unreachable: Address unreachable
From fe80::7044:fff:fed5:adab icmp_seq=2 Destination unreachable: Address unreachable
---> /etc/sysconfig/iptables
-A FORWARD -i eucadev3 -j ACCEPT
-A INPUT -i eucadev3 -j ACCEPT
--> service iptables restart
allocate instance
-bash-3.1$ ec2-describe-availability-zones
AVAILABILITYZONE | rocks-133 | UP | 000/004 small | host=rocks-133.sdsc.edu
AVAILABILITYZONE | META | Restricts RunInstances to allocating from ANY single cluster.
AVAILABILITYZONE | META | Greedy allocation starting with the emptiest cluster.
-bash-3.1$ ec2-describe-instances
RESERVATION
i-cmrvfrux  emi-0D05022C  0.0.0.0  192.168.3.2  2008-06-20T18:03:28+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
i-kiskratj  emi-0D05022C  0.0.0.0  192.168.3.3  2008-06-20T21:20:41+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
i-pihsfbjb  emi-0D05022C  0.0.0.0  192.168.3.4  2008-06-20T21:21:16+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
i-zanhrugu  emi-0D05022C  0.0.0.0  192.168.3.5  2008-06-20T21:21:17+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
-bash-3.1$ ec2-run-instances emi-0D05022C -k mykey -n 3
Unexpected error:
java.lang.NullPointerException
at com.amazon.aes.webservices.client.Jec2.runInstances(Jec2.java:773)
at com.amazon.aes.webservices.client.cmd.RunInstances.invokeOnline(RunInstances.java:142)
at com.amazon.aes.webservices.client.cmd.BaseCmd.invoke(BaseCmd.java:626)
at com.amazon.aes.webservices.client.cmd.RunInstances.main(RunInstances.java:198)
-bash-3.1$ ec2-run-instances emi-0D05022C -k mykey -n 5
Unexpected error:
java.lang.NullPointerException
at com.amazon.aes.webservices.client.Jec2.runInstances(Jec2.java:773)
at com.amazon.aes.webservices.client.cmd.RunInstances.invokeOnline(RunInstances.java:142)
at com.amazon.aes.webservices.client.cmd.BaseCmd.invoke(BaseCmd.java:626)
at com.amazon.aes.webservices.client.cmd.RunInstances.main(RunInstances.java:198)
[root@rocks-133 ~]# cluster-fork --nodes "vc0-0 vc0-1 vc0-2 vc0-3 vc0-4 vc0-5 vc0-6 vc0-7" xm list
Name            ID Mem(MiB) VCPUs State Time(s)
compute-1-8-0
i-cmrvfrux
i-kiskratj
i-pihsfbjb
i-zanhrugu
create keypair
-bash-3.1$ ec2-describe-keypairs
KEYPAIR cloud                          05:6b:1a:da:f8:43:0e:1c:fa:5a:6f:f5:8c:5c:5e:3c:fa:6b:09:bb
KEYPAIR euca2-admin-f5faf678-cert.pe   c4:f5:b3:07:d0:3e:e1:fb:96:c2:9a:02:19:3b:fb:51:24:28:5c:39
KEYPAIR euca2-admin-f5faf678-cert.pem  0c:d1:c6:74:2c:67:64:ce:f4:0b:e1:6e:8c:ab:54:19:04:fe:4f:09
KEYPAIR mykey                          33:af:c6:30:70:33:6d:06:6f:fa:71:ab:58:08:de:cc:6d:d2:9d:0c
-bash-3.1$ ec2-add-keypair cloud
Unexpected error:
java.lang.NullPointerException
at com.amazon.aes.webservices.client.Jec2.createKeyPair(Jec2.java:1334)
at com.amazon.aes.webservices.client.cmd.CreateKeyPair.invokeOnline(CreateKeyPair.java:40)
at com.amazon.aes.webservices.client.cmd.BaseCmd.invoke(BaseCmd.java:626)
at com.amazon.aes.webservices.client.cmd.CreateKeyPair.main(CreateKeyPair.java:46)
-bash-3.1$ ec2-add-keypair mykey
Unexpected error:
java.lang.NullPointerException
at com.amazon.aes.webservices.client.Jec2.createKeyPair(Jec2.java:1334)
at com.amazon.aes.webservices.client.cmd.CreateKeyPair.invokeOnline(CreateKeyPair.java:40)
at com.amazon.aes.webservices.client.cmd.BaseCmd.invoke(BaseCmd.java:626)
at com.amazon.aes.webservices.client.cmd.CreateKeyPair.main(CreateKeyPair.java:46)
Java heap space
[root@rocks-133 profile.d]# ec2-describe-availability-zones
Server: Java heap space
/usr/java/jdk1.5.0_10/jre/bin/java -server -Xmx64m -Xms64m
Regarding the heap space problem:
Try increasing the size of the heap from 64m to 128m in this file on the front end:
/opt/eucalyptus-1.1/etc/eucalyptus/cloud-ant.xml
change
<property name="jvm.heap" value="64m"/>
to
<property name="jvm.heap" value="128m"/>
That should help.
I think we traced it to a documented bug in the JVM:
/bugdatabase/view_bug.do?bug_id=4948040
vim cloud-ant.xml
java -XX:+PermGenCleaningEnabled
java -XX:+CMSPermGenCleaningEnabled
java -XX:+CMSPermGenSweepingEnabled
java -XX:+CMSPermGenPreCleaningEnabled
java -XX:+CMSPermGenSweepingEnabled
java -XX:+CMSPermGenSweepingEnabled -XX:MaxPermGenSize=128m
java -XX:+CMSPermGenSweepingEnabled -XX:MaxPermGenSize=256m
java -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=256m
java -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled
run instance
ec2-add-keypair eva > eva.private
chmod 0600 eva.private
-bash-3.1$ ec2-run-instances emi-0D05022C -k eva -n 4
RESERVATION
[root@rocks-133 ~]# ec2-add-keypair myroot > myroot.private
[root@rocks-133 ~]# chmod 0600 myroot.private
[root@rocks-133 ~]# ec2-run-instances emi-0D05022C -k myroot -n 2
RESERVATION
i-izrprlmd  emi-0D05022C  0.0.0.0  0.0.0.0  Booting  myroot  T22:17:09+0000
i-vfnmhjxy  emi-0D05022C  0.0.0.0  0.0.0.0  Booting  myroot  T22:17:10+0000
[root@rocks-133 ~]# ec2-describe-instances
RESERVATION
i-izrprlmd  emi-0D05022C  0.0.0.0  192.168.3.2  T22:17:08+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
i-vfnmhjxy  emi-0D05022C  0.0.0.0  192.168.3.3  T22:17:09+0000  vmlinuz-2.6.16.33-xen  no-ramdisk
[root@rocks-133 install]# cd /home/hocks/complinux/
[root@rocks-133 complinux]# /opt/eucalyptus-1.1/usr/sbin/euca add_image --disk-image compute-1-8-0.hda \
> --kernel-image vmlinuz-compute --image-name compute-rocks
instance directory in vm-container
/usr/local/eucalyptus/instances/admin/i-604F09A1
[root@vm-container-0-3 i-604F09A1]# more config.xml
BASEPATH/vmlinuz-2.6.16.33-xen
Terminate instance
Instance started as user root; terminate as user hocks:
-bash-3.1$ ec2-terminate-instances i-izrprlmd i-vfnmhjxy
i-izrprlmd  Running  shuttingDown
i-vfnmhjxy  Running  shuttingDown
Remove image
[root@rocks-133 init.d]# /opt/eucalyptus-1.1/usr/sbin/euca delete_image
parsed config file /etc/default/eucalyptus
exporting EUCALYPTUS=/opt/eucalyptus-1.1
need value for parameter image-name []: compute
removed image compute
Reinstall Euca software
===> on the head node:
1) reinstall the vm-container-0-0 software
# rocks set host pxeboot vm-container-0-0 action=install
# ssh vm-container-0-0 /boot/kickstart/cluster-kickstart-pxe
clearing up Eucalyptus-related files on the frontend
2) uninstall the head node software:
# /etc/init.d/eucalyptus stop
# rpm -ev euca-vde eucalyptus euca-httpd
# rm -rf /opt/eucalyptus-1.1
# rm -rf /share/apps/eucalyptus
3) install the head node software:
# kroll eucalyptus > build.sh
# bash build.sh
# rsync -avz -e ssh /opt/eucalyptus-1.1/var/eucalyptus/keys vm-container-0-0:/opt/eucalyptus-1.1/var/eucalyptus
(This is the equivalent of euca_sync_keys for 1 host.)
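For a full set of nodes, the same rsync can be wrapped in a loop (a
hedged sketch assuming the eight vm-containers used throughout this
document):
for i in 0 1 2 3 4 5 6 7 ; do
    rsync -avz -e ssh /opt/eucalyptus-1.1/var/eucalyptus/keys \
        vm-container-0-$i:/opt/eucalyptus-1.1/var/eucalyptus
done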
===> on vm-container-0-0:
4) adjust the local instance path to the larger file system, to
accommodate the larger images that you intend to run, and restart the
node controller
vm-container-0-0# vi /etc/default/eucalyptus
(change INSTANCE_PATH to /state/partition1/eucalyptus)
vm-container-0-0# /etc/init.d/eucalyptus stop
vm-container-0-0# /etc/init.d/eucalyptus start
===> on your desktop, get the certs from browser, etc.
Deinstall Euca
Once an installation fails, we find that it is crucial to fully clean
up after it before trying to install again.
In such a situation -
when we do not mind wiping out EVERYTHING (images, Eucalyptus
software, users, logs, etc.) - we use the following set of commands
on the front end:
# rpm -e eucalyptus euca-httpd euca-vde socat
# rm -rf /opt/eucalyptus* /share/apps/eucalyptus
# pkill -9 euca