NFS Backend for OpenStack Glance/Cinder/Instance-store


In this article, let's look at how to configure NFS as a unified storage backend for OpenStack Glance, Cinder, and the shared instance store, and see how it works.

Setup: 1 controller and 2 compute nodes. The controller also acts as the NFS server.
OS + OpenStack: RHEL7 + Juno

Controller: 192.168.255.1 HPDL36
Compute: 192.168.255.2 HPDL37
Compute: 192.168.255.3 HPDL38

Set up the NFS server on the controller node

Create 3 folders as the shared sources for the instance store, Glance, and Cinder, and grant sufficient access permissions:
mkdir /nfsshare; chmod 777 /nfsshare  
mkdir /nfsshare_glance; chmod 777 /nfsshare_glance  
mkdir /nfsshare_cinder; chmod 777 /nfsshare_cinder
Create /etc/exports:
/nfsshare   *(rw,no_root_squash)  
/nfsshare_cinder *(rw,no_root_squash)  
/nfsshare_glance *(rw,no_root_squash)
Start the NFS server:
systemctl start rpcbind  
systemctl start nfs  
systemctl start nfslock
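If the NFS server was already running when /etc/exports was edited, the export list can be re-read and verified; a quick sanity check, assuming the standard nfs-utils tools:

exportfs -r
showmount -e HPDL36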

Set up the NFS clients

Glance

Mount the NFS share for Glance on the controller node:

mount HPDL36:/nfsshare_glance /var/lib/glance/images
Nova instance store

Mount the NFS share for the shared instance store on the 2 compute nodes:

mount HPDL36:/nfsshare /var/lib/nova/instances
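Note that these mounts do not survive a reboot. To make them persistent, fstab entries along the following lines could be added (a sketch, not part of the original setup):

# on the controller, /etc/fstab
HPDL36:/nfsshare_glance  /var/lib/glance/images   nfs  defaults  0 0
# on each compute node, /etc/fstab
HPDL36:/nfsshare         /var/lib/nova/instances  nfs  defaults  0 0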
Cinder

The Cinder volume service will handle the mounting itself; no manual mount is needed here.

Set up OpenStack

Since Glance and Nova use these NFS-mounted folders as if they were local filesystems, the default OpenStack configuration just works. Only Cinder needs special configuration for the NFS backend:
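For reference, this relies on the stock paths; a sketch of the relevant defaults in the Juno packages (verify against your own config files):

# /etc/glance/glance-api.conf
default_store=file
filesystem_store_datadir=/var/lib/glance/images/

# /etc/nova/nova.conf
instances_path=/var/lib/nova/instances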

Create the NFS share entry in the file /etc/cinder/nfsshare:
HPDL36:/nfsshare_cinder
Change the ownership and access permissions of the file:
chown root:cinder /etc/cinder/nfsshare  
chmod 0640 /etc/cinder/nfsshare
Configure /etc/cinder/cinder.conf:
nfs_shares_config=/etc/cinder/nfsshare  
volume_driver=cinder.volume.drivers.nfs.NfsDriver
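These options go in the [DEFAULT] section (assuming the single-backend layout used in Juno); the mount point base shown below is the default and explains the path we'll see in the next step:

[DEFAULT]
nfs_shares_config=/etc/cinder/nfsshare
volume_driver=cinder.volume.drivers.nfs.NfsDriver
# default value, shown for clarity; shares get mounted under /var/lib/cinder/mnt
# nfs_mount_point_base=$state_path/mnt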
Restart the Cinder services:
systemctl restart openstack-cinder-api  
systemctl restart openstack-cinder-scheduler  
systemctl restart openstack-cinder-volume
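To confirm the volume service came back up with the new backend, the service list can be checked; a quick sanity check, assuming the usual admin credentials are sourced:

cinder service-list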
Check mounted Cinder NFS share
[root@HPDL36 ~(keystone_admin)]# mount | grep cinder  
 HPDL36:/nfsshare_cinder on /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82 type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.255.1,local_lock=none,addr=192.168.255.1)
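The mount point name is derived from the share entry: the RemoteFS-based NFS driver names the directory after the md5 hash of the share string (a detail from the driver code; easy to verify):

echo -n "HPDL36:/nfsshare_cinder" | md5sum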

Testing

Create Glance image
[root@HPDL36 ~(keystone_admin)]# glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public true --file cirros-0.3.1-x86_64-disk.qcow2

We can see the image is created and stored under /var/lib/glance/images:

[root@HPDL36 ~(keystone_admin)]# glance image-list  
 +--------------------------------------+--------+-------------+------------------+----------+--------+  
 | ID                                   | Name   | Disk Format | Container Format | Size     | Status |  
 +--------------------------------------+--------+-------------+------------------+----------+--------+  
 | d3fd5cb6-1a88-4da8-a0af-d83f7728e76b | cirros | qcow2       | bare             | 13147648 | active |  
 +--------------------------------------+--------+-------------+------------------+----------+--------+

[root@HPDL36 ~(keystone_admin)]# ls -lah /var/lib/glance/images/  
 total 13M  
 drwxrwxrwx 2 root   root    49 Feb 11 23:29 .  
 drwxr-xr-x 3 glance nobody  19 Feb 11 13:38 ..  
 -rw-r----- 1 glance glance 13M Feb 11 23:29 d3fd5cb6-1a88-4da8-a0af-d83f7728e76b
Launch a VM:
[root@HPDL36 ~(keystone_admin)]# nova boot --flavor m1.tiny --image cirros --nic net-id=8a7032de-e041-4e5b-a282-51534b38b15f testvm

[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+  
 | ID                                   | Name   | Status | Power State | Host   | Networks            |  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+  
 | f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+

From the compute node HPDL37, we can see the VM-related files are created under /var/lib/nova/instances:

[root@HPDL37 ~]# virsh list  
 setlocale: No such file or directory  
 Id Name State  
 ----------------------------------------------------  
 7 instance-0000005d running

[root@HPDL37 ~]# ls -lah /var/lib/nova/instances/  
 total 4.0K  
 drwxrwxrwx 5 root root 129 Feb 11 23:47 .  
 drwxr-xr-x 9 nova nova 93 Feb 11 15:35 ..  
 drwxr-xr-x 2 nova nova 100 Feb 11 23:47 _base  
 -rw-r--r-- 1 nova nova 57 Feb 11 23:43 compute_nodes  
 drwxr-xr-x 2 nova nova 69 Feb 11 23:47 f17ecb86-04de-44c9-9466-47ff6577b7d8  
 -rw-r--r-- 1 nfsnobody nfsnobody 0 Feb 11 13:22 glance-touch  
 drwxr-xr-x 2 nova nova 143 Feb 11 23:47 locks  
 -rw-r--r-- 1 nfsnobody nfsnobody 0 Feb 11 13:42 nova-touch
Live-Migration

Since we have shared instance-store, let’s try a live-migration:

[root@HPDL36 ~(keystone_admin)]# nova live-migration testvm  
 [root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+  
 | ID                                   | Name   | Status | Power State | Host   | Networks            |  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+  
 | f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm | ACTIVE | Running     | HPDL38 | network=192.168.0.7 |  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+

It works; the VM is now live-migrated to the HPDL38 compute node.

We can also measure, with the VM idle, how fast the migration is. From the controller, inside the Neutron router namespace, I ping the VM every 1 ms, 10000 times in total (about 10 s). During the ping I trigger the live-migration, then check how many packets were lost:

[root@HPDL36 ~(keystone_admin)]# ip netns exec qrouter-02ca3bdc-999a-4d3a-8485-c7ffd4600ebc ping 192.168.0.7 -i 0.001 -c 10000 -W 0.001  
 …  
 …  
 --- 192.168.0.7 ping statistics ---  
 10000 packets transmitted, 9942 received, 0% packet loss, time 10526ms  
 rtt min/avg/max/mdev = 0.113/0.167/1.649/0.040 ms

We actually lost 58 packets; at a 1 ms ping interval, that means the live-migration caused only about 58 ms of downtime!

Create a Cinder volume
[root@HPDL36 ~(keystone_admin)]# cinder create --display-name 5gb 5  
 [root@HPDL36 ~(keystone_admin)]# cinder list  
 +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+  
 | ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |  
 +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+  
 | 6e408336-43a8-453a-a5e5-928c12cdd3a1 | available | 5gb          | 5    | None        | false    |             |  
 +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+  
 [root@HPDL36 ~(keystone_admin)]# ls -lah /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/  
 total 0  
 drwxrwxrwx 2 root root 56 Feb 12 00:16 .  
 drwxr-xr-x 4 cinder cinder 84 Feb 11 16:16 ..  
 -rw-rw-rw- 1 root root 5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

We can see the 5 GB volume is stored on the mounted Cinder NFS share.
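Note that the file shows 5.0G while the listing says "total 0": the NFS driver creates sparse files by default (nfs_sparsed_volumes=true), so no blocks are allocated until data is written. One way to see the actually allocated size (a sketch; adjust the path to your own share hash):

du -h /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1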

Attach volume to an instance
[root@HPDL36 ~(keystone_admin)]# nova volume-attach testvm 6e408336-43a8-453a-a5e5-928c12cdd3a1  
 +----------+--------------------------------------+  
 | Property | Value                                |  
 +----------+--------------------------------------+  
 | device   | /dev/vdb                             |  
 | id       | 6e408336-43a8-453a-a5e5-928c12cdd3a1 |  
 | serverId | f17ecb86-04de-44c9-9466-47ff6577b7d8 |  
 | volumeId | 6e408336-43a8-453a-a5e5-928c12cdd3a1 |  
 +----------+--------------------------------------+

Let's check on the compute node:

[root@HPDL38 ~]# virsh list  
 setlocale: No such file or directory  
 Id Name State  
 ----------------------------------------------------  
 7 instance-0000005d running

[root@HPDL38 ~]# virsh domblklist 7  
 setlocale: No such file or directory  
 Target Source  
 ------------------------------------------------  
 vda /var/lib/nova/instances/f17ecb86-04de-44c9-9466-47ff6577b7d8/disk  
 vdb /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

We see the VM gets the volume attached as vdb, and the source is /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1, which is actually the volume file on the Cinder NFS share.

[root@HPDL38 ~]# mount |grep cinder  
 HPDL36:/nfsshare_cinder on /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82 type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.255.3,local_lock=none,addr=192.168.255.1)

[root@HPDL38 ~]# ls -lah /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82  
 total 0  
 -rw-rw-rw- 1 qemu qemu 5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

So the compute node also mounts the Cinder NFS share, at /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82, and exposes the volume file directly to KVM.

Now we know how it works: the compute node mounts the Cinder NFS share directly and accesses the volume file, unlike with the LVM Cinder backend, where the cinder-volume service exposes volumes through iSCSI targets.
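With the LVM backend you would instead find an iSCSI session on the compute node rather than an NFS mount; a quick way to check, assuming iscsi-initiator-utils is installed:

iscsiadm -m session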

Live-migration with an attached volume

Does live-migration also work for a VM with a volume attached?

[root@HPDL36 ~(keystone_admin)]# nova live-migration testvm  
 [root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+  
 | ID                                   | Name   | Status | Power State | Host   | Networks            |  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+  
 | f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |  
 +--------------------------------------+--------+--------+-------------+--------+---------------------+

The answer is yes!

Let's check compute node HPDL37:

[root@HPDL37 ~]# virsh list  
 setlocale: No such file or directory  
 Id Name State  
 ----------------------------------------------------  
 11 instance-0000005d running

[root@HPDL37 ~]# virsh domblklist 11  
 setlocale: No such file or directory  
 Target Source  
 ------------------------------------------------  
 vda /var/lib/nova/instances/f17ecb86-04de-44c9-9466-47ff6577b7d8/disk  
 vdb /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

[root@HPDL37 ~]# mount | grep cinder  
 HPDL36:/nfsshare_cinder on /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82 type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.255.2,local_lock=none,addr=192.168.255.1)

[root@HPDL37 ~]# ls -lah /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82  
 total 0  
 -rw-rw-rw- 1 qemu qemu 5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

Advanced Cinder feature tests

Create a volume from a Glance image
[root@HPDL36 ~(keystone_admin)]# cinder create --image-id d3fd5cb6-1a88-4da8-a0af-d83f7728e76b --display-name vol-from-image 1  
 [root@HPDL36 ~(keystone_admin)]# cinder list  
 +--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+  
 | ID                                   | Status    | Display Name   | Size | Volume Type | Bootable | Attached to                          |  
 +--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+  
 | 209840f0-0559-4de5-ab64-bd4a8249ffd4 | available | 1gb            | 1    | None        | false    |                                      |  
 | 6e408336-43a8-453a-a5e5-928c12cdd3a1 | in-use    | 5gb            | 5    | None        | false    | f17ecb86-04de-44c9-9466-47ff6577b7d8 |  
 | 6fda4ea7-8f97-4c62-8df0-f04a36860d30 | available | vol-from-image | 1    | None        | true     |                                      |  
 +--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+  
 [root@HPDL36 ~(keystone_admin)]# ls -lh /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/  
 total 18M  
 -rw-rw-rw- 1 root root 1.0G Feb 12 00:43 volume-209840f0-0559-4de5-ab64-bd4a8249ffd4  
 -rw-rw-rw- 1 qemu qemu 5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1  
 -rw-rw-rw- 1 root root 1.0G Feb 12 00:45 volume-6fda4ea7-8f97-4c62-8df0-f04a36860d30
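When a volume is created from a qcow2 image, Cinder converts the image to raw on the share, hence the full-size 1.0G file, while "total 18M" shows only the cirros data is actually allocated. The on-disk format can be confirmed with qemu-img (a sketch, path as above):

qemu-img info /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6fda4ea7-8f97-4c62-8df0-f04a36860d30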
Create a Glance image from a volume
[root@HPDL36 ~(keystone_admin)]# cinder upload-to-image 209840f0-0559-4de5-ab64-bd4a8249ffd4 image-from-vol  
 [root@HPDL36 ~(keystone_admin)]# glance image-list  
 +--------------------------------------+----------------+-------------+------------------+------------+--------+  
 | ID                                   | Name           | Disk Format | Container Format | Size       | Status |  
 +--------------------------------------+----------------+-------------+------------------+------------+--------+  
 | d3fd5cb6-1a88-4da8-a0af-d83f7728e76b | cirros         | qcow2       | bare             | 13147648   | active |  
 | 85d4ddb0-6159-40f6-b7ce-653eabea7142 | image-from-vol | raw         | bare             | 1073741824 | active |  
 +--------------------------------------+----------------+-------------+------------------+------------+--------+  
 [root@HPDL36 ~(keystone_admin)]# ls -lh /var/lib/glance/images/  
 total 1.1G  
 -rw-r----- 1 glance glance 1.0G Feb 12 01:05 85d4ddb0-6159-40f6-b7ce-653eabea7142  
 -rw-r----- 1 glance glance 13M Feb 11 23:29 d3fd5cb6-1a88-4da8-a0af-d83f7728e76b
Boot an instance from a bootable volume
[root@HPDL36 ~(keystone_admin)]# nova boot --flavor m1.small --block-device-mapping vda=6fda4ea7-8f97-4c62-8df0-f04a36860d30:::0 vm-boot-from-vol
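For readability: the mapping format here is vda=<volume-id>:<type>:<size>:<delete-on-terminate>, so the trailing 0 keeps the volume when the instance is deleted (my reading of the novaclient syntax; check nova help boot on your version).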

[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
 +--------------------------------------+------------------+--------+-------------+--------+---------------------+  
 | ID                                   | Name             | Status | Power State | Host   | Networks            |  
 +--------------------------------------+------------------+--------+-------------+--------+---------------------+  
 | f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm           | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |  
 | 01c20971-7876-474c-8a38-93d39b78cc98 | vm-boot-from-vol | ACTIVE | Running     | HPDL38 | network=192.168.0.8 |  
 +--------------------------------------+------------------+--------+-------------+--------+---------------------+

[root@HPDL38 ~]# virsh list  
 setlocale: No such file or directory  
 Id Name State  
 ----------------------------------------------------  
 8 instance-0000005e running

[root@HPDL38 ~]# virsh domblklist 8  
 setlocale: No such file or directory  
 Target Source  
 ------------------------------------------------  
 vda /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6fda4ea7-8f97-4c62-8df0-f04a36860d30
Boot an instance from an image (creating a new volume)
[root@HPDL36 ~(keystone_admin)]# nova boot --flavor m1.tiny --block-device source=image,id=d3fd5cb6-1a88-4da8-a0af-d83f7728e76b,dest=volume,size=6,shutdown=preserve,bootindex=0 vm-boot-from-image-create-new-vol

[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
 +--------------------------------------+-----------------------------------+--------+-------------+--------+---------------------+  
 | ID                                   | Name                              | Status | Power State | Host   | Networks            |  
 +--------------------------------------+-----------------------------------+--------+-------------+--------+---------------------+  
 | f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm                            | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |  
 | 6cacf835-2adb-4730-ac11-cceacf1d0915 | vm-boot-from-image-create-new-vol | ACTIVE | Running     | HPDL38 | network=192.168.0.9 |  
 | 01c20971-7876-474c-8a38-93d39b78cc98 | vm-boot-from-vol                  | ACTIVE | Running     | HPDL37 | network=192.168.0.8 |  
 +--------------------------------------+-----------------------------------+--------+-------------+--------+---------------------+

[root@HPDL36 ~(keystone_admin)]# cinder list  
 +--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+  
 | ID                                   | Status    | Display Name   | Size | Volume Type | Bootable | Attached to                          |  
 +--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+  
 | 209840f0-0559-4de5-ab64-bd4a8249ffd4 | available | 1gb            | 1    | None        | false    |                                      |  
 | 6e408336-43a8-453a-a5e5-928c12cdd3a1 | in-use    | 5gb            | 5    | None        | false    | f17ecb86-04de-44c9-9466-47ff6577b7d8 |  
 | 6fda4ea7-8f97-4c62-8df0-f04a36860d30 | in-use    | vol-from-image | 1    | None        | true     | 01c20971-7876-474c-8a38-93d39b78cc98 |  
 | 7d8a4fc5-5f75-4273-a2ac-3e36521be37c | in-use    |                | 6    | None        | true     | 6cacf835-2adb-4730-ac11-cceacf1d0915 |  
 +--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+

[root@HPDL38 ~]# virsh list  
 setlocale: No such file or directory  
 Id Name State  
 ----------------------------------------------------  
 9 instance-0000005f running

[root@HPDL38 ~]# virsh domblklist 9  
 setlocale: No such file or directory  
 Target Source  
 ------------------------------------------------  
 vda /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-7d8a4fc5-5f75-4273-a2ac-3e36521be37c
Volume snapshot features

Not supported at the moment; coming in Kilo: https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots

Volume cloning

Not supported at the moment.
