Installing Kubernetes v1.29 on openEuler 22.03 (LTS-SP3)


I. Environment Configuration

Hosts

Host       Spec    Role    OS Version                  IP
master01   2C/4G   master  openEuler 22.03 (LTS-SP3)   192.168.0.111
master02   2C/4G   master  openEuler 22.03 (LTS-SP3)   192.168.0.112
master03   2C/4G   master  openEuler 22.03 (LTS-SP3)   192.168.0.113
worker01   2C/4G   worker  openEuler 22.03 (LTS-SP3)   192.168.0.114
worker02   2C/4G   worker  openEuler 22.03 (LTS-SP3)   192.168.0.115

Installed Components

Role     Components
master   kubelet, container runtime (docker, cri-dockerd)
         apiserver, controller-manager, scheduler, etcd, kube-proxy
         calico
worker   kubelet, container runtime (docker, cri-dockerd)
         kube-proxy, coredns
         calico

Deployment steps:

1. Deploy the chosen container runtime on every node: docker and cri-dockerd.

2. Install kubeadm, kubelet, and kubectl on every node.

3. Bootstrap the first control-plane node: kubeadm init.

4. Join the other control-plane nodes and the worker nodes to the cluster created by the first control-plane node: kubeadm join.

5. Deploy the chosen network plugin: calico.

II. Environment Preparation

1. Configure /etc/hosts. Run on every host in the cluster.

cat >>/etc/hosts<<EOF
192.168.0.111 master01
192.168.0.112 master02
192.168.0.113 master03
192.168.0.114 worker01
192.168.0.115 worker02
EOF

2. Configure mutual SSH trust between the cluster hosts. Run on every host in the cluster.

ssh-keygen
ssh-copy-id 192.168.0.111
ssh-copy-id 192.168.0.112
ssh-copy-id 192.168.0.113
ssh-copy-id 192.168.0.114
ssh-copy-id 192.168.0.115

3. Disable the firewall and SELinux. Run on every host in the cluster.

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sestatus

4. Disable the system swap partition. Run on every host in the cluster.

sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab && grep swap /etc/fstab && swapoff -a && free -h

5. Configure and load the IPVS modules; IPVS is more capable than iptables. Run on every host in the cluster.

dnf -y install ipvsadm ipset sysstat conntrack libseccomp

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

6. Configure time synchronization. Run on every host in the cluster.

dnf install -y ntpdate
crontab -e

Add the following entry:

0 */1 * * * ntpdate time1.aliyun.com
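ntpdate is deprecated on many recent distributions. If it is not available on your openEuler build, chrony works as a substitute; a minimal sketch, assuming the default chrony package and config path:

dnf install -y chrony
# point chronyd at the same time source used above; the pool line may look different in your /etc/chrony.conf
sed -i 's/^pool .*/pool time1.aliyun.com iburst/' /etc/chrony.conf
systemctl enable --now chronyd
chronyc sources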

7. Enable IP forwarding and bridge filtering. Run on every host in the cluster.
Configure the kernel to load br_netfilter and let iptables see both IPv4 and IPv6 bridged traffic, so that containers in the cluster can communicate properly.

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

Load the module:

modprobe br_netfilter 

Apply the settings:

sysctl -p /etc/sysctl.d/k8s.conf 
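Note that modprobe alone does not persist across reboots. To have br_netfilter load automatically at boot, drop it into systemd's modules-load directory, then verify the sysctl settings took effect:

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward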

8. Create dedicated filesystems. To keep the root filesystem from filling up and complicating later maintenance, give kubelet, docker, and etcd their own filesystems:

# If rebuilding, remove the old VG first: vgremove datavg -y

pvcreate /dev/sdb
vgcreate datavg /dev/sdb
lvcreate -L 13G -n dockerlv datavg -y
lvcreate -L 13G -n kubeletlv datavg -y
lvcreate -L 10G -n etcdlv datavg -y

mkfs.ext4 /dev/datavg/dockerlv
mkfs.ext4 /dev/datavg/kubeletlv
mkfs.xfs /dev/datavg/etcdlv

mkdir /var/lib/docker
mkdir /var/lib/kubelet
mkdir /var/lib/etcd

Configure the filesystems to mount automatically on reboot:

lsblk -f

vi /etc/fstab, using the following format as a reference:

/dev/datavg/dockerlv  /var/lib/docker   ext4 defaults,nofail 1 1
/dev/datavg/kubeletlv /var/lib/kubelet  ext4 defaults,nofail 1 1
/dev/datavg/etcdlv    /var/lib/etcd     xfs  defaults,nofail 1 1

mount -a
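A quick check that all three filesystems landed where expected:

df -h /var/lib/docker /var/lib/kubelet /var/lib/etcd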

III. Install Docker and cri-dockerd

Run on all hosts.

1. Configure the yum repository. The sed pins $releasever to 8 because the docker-ce repo has no directory matching openEuler's release string:

cd /etc/yum.repos.d/
curl -O https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's/$releasever/8/g' docker-ce.repo

2. Create a docker user. You can try skipping this step, but in my case cri-dockerd refused to start no matter what until the docker user was created, so create it:

groupadd docker
useradd -g docker docker

3. Install cri-dockerd:

The cri-dockerd plugin lives in this project: https://github.com/Mirantis/cri-dockerd/

First obtain the cri-dockerd rpm package; installing it with dnf pulls in the required dependencies automatically. Old and new cri-dockerd versions both work; I use v0.3.14-3 here.
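For example, the rpm can be fetched from the project's GitHub releases page; the asset name below is taken from the v0.3.14 release and may differ for other versions:

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el8.x86_64.rpm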

dnf install  cri-dockerd-0.3.14-3.el8.x86_64.rpm -y

4、安装docker:

dnf install docker -y 

5. Configure cri-dockerd

From inside China, the cri-dockerd service cannot download the required images from k8s.gcr.io and therefore fails to start, so switch cri-dockerd to a domestic mirror:

sed -ri 's@^(.*fd://).*$@\1 --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.9@' /usr/lib/systemd/system/cri-docker.service

The effect after the change:
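Reconstructed from the sed expression above (assuming the stock unit file's ExecStart ends with fd://), the ExecStart line in /usr/lib/systemd/system/cri-docker.service should read roughly:

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.9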

 

6. Start docker and cri-dockerd and enable them at boot

systemctl daemon-reload && systemctl start docker && systemctl enable docker
systemctl start cri-docker && systemctl enable cri-docker
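A quick sanity check that both services are up:

systemctl is-active docker cri-docker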

7. Configure registry mirrors:

Docker Hub is now blocked from China, so the mirror accelerators below may no longer be of much use; the exec-opts setting, however, is likely still needed.

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "https://hub-mirror.c.163.com",
    "https://reg-mirror.qiniu.com",
    "https://registry.docker-cn.com"
],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "200m",
    "max-file": "5"}
}
EOF

systemctl daemon-reload && systemctl restart docker

IV. Install Kubernetes

1. Configure the yum repository. Run on all hosts.

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/repodata/repomd.xml.key
EOF

dnf makecache

 

2. Install the kubelet, kubeadm, and kubectl components, and enable kubelet at boot with systemctl enable kubelet. Run on all hosts.

dnf install -y kubelet kubeadm kubectl 
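To keep the packages in step with the kubernetesVersion set later in kubeadm.yaml, the versions can be pinned explicitly (assuming 1.29.5 is published in this repo):

dnf install -y kubelet-1.29.5 kubeadm-1.29.5 kubectl-1.29.5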

kubelet cannot be started yet because the cluster has not been configured; for now, just enable it at boot:

systemctl enable kubelet 

3. Generate the initialization configuration file. Run on one master only.

kubeadm config print init-defaults > kubeadm.yaml

4. Edit the kubeadm.yaml configuration file:

[root@master01 ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # change to this master's IP
  advertiseAddress: 192.168.0.111
  bindPort: 6443
nodeRegistration:
  # change to the cri-dockerd socket
  criSocket: unix:///run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  # change to this master's hostname
  name: master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    # change etcd's data directory if needed; the default is /var/lib/etcd
    dataDir: /var/lib/etcd
# change the image registry to the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# set the exact version to install
kubernetesVersion: 1.29.5
# for multiple masters, add this entry pointing at the proxy/load-balancer address; here it is set to the master node
controlPlaneEndpoint: "master01:6443"
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # add the pod subnet
  podSubnet: 10.244.0.0/16
scheduler: {}
---
# append the two sections below
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
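Before going further, the file can be sanity-checked; recent kubeadm releases ship a validate subcommand:

kubeadm config validate --config kubeadm.yaml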

5. kubeadm config images list shows which images the installation needs; adding --config=kubeadm.yaml lists them with the mirror repository configured above:

kubeadm config images list

6. Pull the images Kubernetes needs. Run on the same master.

kubeadm config images pull --config=kubeadm.yaml

7. Initialize the cluster. Run on the same master.

kubeadm init --config=kubeadm.yaml

Output similar to the following means initialization succeeded:

[root@master01 ~]# kubeadm init --config=kubeadm.yaml 
[init] Using Kubernetes version: v1.29.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 192.168.0.111]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.0.111 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.0.111 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.502197 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
08c16995b0ac15a581c720018a1a25096b776a2592729a27265339a81db4252b
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join master01:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:aa930347f65820ba0f77a66b3f31d16906c04fe355cf369397456ca7f02b3ff0 \
	--control-plane --certificate-key 08c16995b0ac15a581c720018a1a25096b776a2592729a27265339a81db4252b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master01:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:aa930347f65820ba0f77a66b3f31d16906c04fe355cf369397456ca7f02b3ff0

8. To run kubectl on other machines or as other users, do the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Run on master02 and master03 to join them as control-plane nodes. Be sure to append --cri-socket unix:///run/cri-dockerd.sock:

kubeadm join master01:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:aa930347f65820ba0f77a66b3f31d16906c04fe355cf369397456ca7f02b3ff0 \
--control-plane --certificate-key 08c16995b0ac15a581c720018a1a25096b776a2592729a27265339a81db4252b  \
--cri-socket unix:///run/cri-dockerd.sock
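The bootstrap token expires after 24 hours and the uploaded certificates after 2 hours. If either has expired, regenerate them on master01 with the standard kubeadm subcommands:

kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs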

10. Run on worker01 and worker02 to join them as worker nodes. Be sure to append --cri-socket unix:///run/cri-dockerd.sock:

kubeadm join master01:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:aa930347f65820ba0f77a66b3f31d16906c04fe355cf369397456ca7f02b3ff0 \
--cri-socket unix:///run/cri-dockerd.sock

11. If the installation fails, reset the initialization:

kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
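As the reset output itself notes, some state must be cleaned up manually; a sketch of the usual follow-up:

rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear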

V. Install the Network Plugin

1. Download the network plugin manifest

wget https://docs.projectcalico.org/manifests/calico.yaml

2. Edit the resource manifest.
Locate the containers section shown below and make the modifications:

      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Specify the NIC here: add the two lines below
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Enable or Disable VXLAN on the default IPv6 IP pool.
            - name: CALICO_IPV6POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # These two lines are commented out by default; uncomment them and change
            # 192.168.0.0/16 to 10.244.0.0/16, matching podSubnet in kubeadm.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true

3. Install the network plugin

kubectl apply -f calico.yaml

Watch the status of all the service pods with:

watch kubectl get pods --all-namespaces -o wide

4. Check node status

kubectl get node

 

[root@master01 ~]# kubectl get nodes -owide 
NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                       CONTAINER-RUNTIME
master01   Ready    control-plane   3d19h   v1.29.5   192.168.0.111   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   docker://24.0.6
master02   Ready    control-plane   3d19h   v1.29.5   192.168.0.112   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   docker://24.0.6
master03   Ready    control-plane   3d19h   v1.29.5   192.168.0.113   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   docker://24.0.6
worker01   Ready    <none>          3d19h   v1.29.5   192.168.0.114   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   docker://24.0.6
worker02   Ready    <none>          3d19h   v1.29.5   192.168.0.115   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   docker://24.0.6

 

If the images are hard to pull, you can load them into docker in advance. The images needed for a k8s v1.29.5 deployment are available here:

Link: https://pan.baidu.com/s/11cQI-dn7xk-hiqDu57y5MA?pwd=dv03
Extraction code: dv03
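A minimal sketch for loading the archives on each node, assuming the download unpacks into one .tar file per image (file names are hypothetical):

for f in *.tar; do
  docker load -i "$f"
done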

 

现代玉米育种中,提高品种耐密性和种植密度是提高玉米单产的关键措施。玉米密植后群体通风、透光性降低,会引起避荫反应,造成株高和穗位高增加、抗生物和非生物胁迫能力降低、植株抗倒性降低,并最终导致产量损失。因此,培育耐密理想株型玉米是提高玉米耐密性的重要途径。 2…