KITTI Dataset Visualization


1. Links

KITTI official website: https://www.cvlibs.net/datasets/kitti/index.php
Dataset download (3D object detection benchmark): https://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
KITTI dataset paper: Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite
Visualization code on GitHub: https://github.com/kuixu/kitti_object_vis
Other: https://bbs.huaweicloud.com/blogs/371967

2. About KITTI

The KITTI dataset was created in 2012 by the Karlsruhe Institute of Technology (KIT) in Germany together with the Toyota Technological Institute at Chicago (TTI-C) in the USA, and it remains one of the most widely used benchmarks for evaluating computer vision algorithms in autonomous driving scenarios. It is used to evaluate stereo matching, optical flow, visual odometry, 3D object detection and 3D tracking in on-vehicle settings. The dataset contains real images captured in urban, rural and highway scenes, with up to 15 cars and 30 pedestrians per image as well as varying degrees of occlusion and truncation. For the 3D object detection task, KITTI provides 14,999 images with corresponding point clouds, split into 7,481 training and 7,518 test samples, annotated for three object classes (cars, pedestrians and cyclists) for a total of 80,256 labeled objects.

3. Recording Vehicle and Sensors


The sensor layout of the KITTI recording vehicle is shown in the figure above: the car is equipped with 2 grayscale cameras, 2 color cameras, one 64-beam Velodyne 3D lidar, 4 optical lenses, and a GPS/inertial navigation system, all marked in red in the figure.

2 × 1.4-megapixel Point Grey Flea 2 grayscale cameras
2 × 1.4-megapixel Point Grey Flea 2 color cameras
1 × 64-beam Velodyne lidar: 10 Hz, 0.09° angular resolution, about 1.3 million points per second, 360° horizontal field of view, 26.8° vertical field of view, range up to 120 m
4 × Edmund optical lenses, roughly 90° horizontal and 35° vertical field of view
1 × OXTS RT 3003 inertial navigation system (GPS/IMU): 6 axes, 100 Hz, resolution 0.02 m / 0.1°

Note: the baseline between the stereo cameras is 0.54 m, and the lidar is mounted 0.27 m from the cameras; these distances are marked in blue in the figure.

4. Coordinate Systems

KITTI uses several coordinate frames, which the calibration code in Section 6 converts between: the velodyne (lidar) frame with x pointing forward, y to the left and z up; the reference/rectified camera frame with x to the right, y down and z forward; and the image frame with u to the right and v down. Object labels are given in the rectified camera frame, while the raw point cloud is in the velodyne frame.

5. Data Files

5.1 Image

training/image_2/000001.png

5.2 Point cloud

training/velodyne/000001.bin
The lidar point cloud is stored as a raw binary file of 32-bit floating-point values. Each point consists of 4 consecutive floats: x, y, z and r, where x, y, z are the point coordinates in the velodyne frame (in meters) and r is the intensity/reflectance value. (Viewed in a hex editor with 32 bytes per row, each row therefore contains two points, i.e. 8 four-byte float values.)
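A minimal sketch of reading such a file with NumPy (the relative path is illustrative; adjust it to your dataset location):

import numpy as np

# Each point is 4 float32 values: x, y, z, reflectance (velodyne frame, meters).
points = np.fromfile("training/velodyne/000001.bin", dtype=np.float32).reshape(-1, 4)
print(points.shape)   # (N, 4)
print(points[:3])     # first three points: x, y, z, r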

5.3 Calibration file
training/calib/000001.txt

P0: 7.215377000000e+02 0.000000000000e+00 6.095593000000e+02 0.000000000000e+00 0.000000000000e+00 7.215377000000e+02 1.728540000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00
P1: 7.215377000000e+02 0.000000000000e+00 6.095593000000e+02 -3.875744000000e+02 0.000000000000e+00 7.215377000000e+02 1.728540000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00
P2: 7.215377000000e+02 0.000000000000e+00 6.095593000000e+02 4.485728000000e+01 0.000000000000e+00 7.215377000000e+02 1.728540000000e+02 2.163791000000e-01 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 2.745884000000e-03
P3: 7.215377000000e+02 0.000000000000e+00 6.095593000000e+02 -3.395242000000e+02 0.000000000000e+00 7.215377000000e+02 1.728540000000e+02 2.199936000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 2.729905000000e-03
R0_rect: 9.999239000000e-01 9.837760000000e-03 -7.445048000000e-03 -9.869795000000e-03 9.999421000000e-01 -4.278459000000e-03 7.402527000000e-03 4.351614000000e-03 9.999631000000e-01
Tr_velo_to_cam: 7.533745000000e-03 -9.999714000000e-01 -6.166020000000e-04 -4.069766000000e-03 1.480249000000e-02 7.280733000000e-04 -9.998902000000e-01 -7.631618000000e-02 9.998621000000e-01 7.523790000000e-03 1.480755000000e-02 -2.717806000000e-01
Tr_imu_to_velo: 9.999976000000e-01 7.553071000000e-04 -2.035826000000e-03 -8.086759000000e-01 -7.854027000000e-04 9.998898000000e-01 -1.482298000000e-02 3.195559000000e-01 2.024406000000e-03 1.482454000000e-02 9.998881000000e-01 -7.997231000000e-01
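Each line stores one flattened matrix as "key: values": P0..P3 are the 3x4 projection matrices of the four cameras, R0_rect is the 3x3 rectifying rotation, and Tr_velo_to_cam / Tr_imu_to_velo are 3x4 rigid transforms. A minimal parsing sketch (the helper name read_calib is illustrative; the full script in Section 6 has an equivalent read_calib_file):

import numpy as np

def read_calib(path):
    """Parse a KITTI calib txt file into a dict of flat numpy arrays."""
    calib = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            key, value = line.split(":", 1)
            calib[key] = np.array([float(x) for x in value.split()])
    return calib

calib = read_calib("training/calib/000001.txt")
P2 = calib["P2"].reshape(3, 4)                 # camera-2 projection matrix
R0 = calib["R0_rect"].reshape(3, 3)            # rectifying rotation
V2C = calib["Tr_velo_to_cam"].reshape(3, 4)    # velodyne -> camera-0 extrinsics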



About the R0_rect matrix:
In the KITTI calibration files, R0_rect is the rectifying rotation matrix, i.e. the rotation from the (unrectified) camera-0 coordinate system to the rectified coordinate system.
Rectification is a common step in stereo vision: it aligns the image planes of a stereo camera pair so that subsequent steps such as stereo matching become simpler. In essence, a rotation is applied so that the two image planes become coplanar and the optical axes parallel.
R0_rect is therefore used to transform points from the unrectified camera frame into the rectified frame, and it must be applied before the points can be combined with the other calibration data (the projection matrices, depth information, etc.).
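Concretely, a homogeneous velodyne point x_velo is mapped into image 2 via y = P2 · R0_rect · Tr_velo_to_cam · x_velo; this is the same chain the Calibration class in Section 6 implements. A small sketch, assuming P2, R0 and V2C were loaded as in the parsing example above:

import numpy as np

def velo_to_image2(pts_velo, P2, R0, V2C):
    """Project Nx3 velodyne points into image-2 pixel coordinates (Nx2)."""
    n = pts_velo.shape[0]
    pts_h = np.hstack([pts_velo, np.ones((n, 1))])       # Nx4 homogeneous
    pts_ref = pts_h @ V2C.T                               # velodyne -> reference camera (Nx3)
    pts_rect = pts_ref @ R0.T                             # reference -> rectified camera (Nx3)
    pts_rect_h = np.hstack([pts_rect, np.ones((n, 1))])   # Nx4
    uvw = pts_rect_h @ P2.T                               # Nx3 homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]                       # divide by depth

# Example: a point 10 m in front of the lidar, 1 m below it
print(velo_to_image2(np.array([[10.0, 0.0, -1.0]]), P2, R0, V2C))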


5.4 Label file

training/label_2/000001.txt

Truck 0.00 0 -1.57 599.41 156.40 629.75 189.25 2.85 2.63 12.34 0.47 1.49 69.44 -1.56
Car 0.00 0 1.85 387.63 181.54 423.81 203.12 1.67 1.87 3.69 -16.53 2.39 58.49 1.57
Cyclist 0.00 3 -1.65 676.60 163.95 688.98 193.93 1.86 0.60 2.02 4.59 1.32 45.84 -1.55
DontCare -1 -1 -10 503.89 169.71 590.61 190.13 -1 -1 -1 -1000 -1000 -1000 -10
DontCare -1 -1 -10 511.35 174.96 527.81 187.45 -1 -1 -1 -1000 -1000 -1000 -10
DontCare -1 -1 -10 532.37 176.35 542.68 185.27 -1 -1 -1 -1000 -1000 -1000 -10
DontCare -1 -1 -10 559.62 175.83 575.40 183.15 -1 -1 -1 -1000 -1000 -1000 -10
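Each row describes one object with 15 space-separated fields. A minimal parsing sketch (the helper name parse_label_line is illustrative; the Object3d class in the script below does the same job):

def parse_label_line(line):
    """Split one KITTI label line into named fields."""
    vals = line.split()
    return {
        "type": vals[0],                                # Car, Pedestrian, Cyclist, DontCare, ...
        "truncated": float(vals[1]),                    # 0..1
        "occluded": int(vals[2]),                       # 0=visible, 1=partly, 2=largely occluded, 3=unknown
        "alpha": float(vals[3]),                        # observation angle [-pi, pi]
        "bbox": [float(v) for v in vals[4:8]],          # 2D box: left, top, right, bottom (pixels)
        "dimensions": [float(v) for v in vals[8:11]],   # height, width, length (m)
        "location": [float(v) for v in vals[11:14]],    # x, y, z in rectified camera coords (m)
        "rotation_y": float(vals[14]),                  # yaw around camera Y axis [-pi, pi]
    }

with open("training/label_2/000001.txt") as f:
    for line in f:
        print(parse_label_line(line))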

6. Visualization Code



import os
import sys
import numpy as np
import cv2

path_img = "/media/algo/data_1/everyday/20240928/kitti_object_vis-master/data/object/training/image_2/000001.png"
path_txt = "/media/algo/data_1/everyday/20240928/kitti_object_vis-master/data/object/training/label_2/000001.txt"
path_lidar = "/media/algo/data_1/everyday/20240928/kitti_object_vis-master/data/object/training/velodyne/000001.bin"
path_calib = "/media/algo/data_1/everyday/20240928/kitti_object_vis-master/data/object/training/calib/000001.txt"


def roty(t):
    """ Rotation about the y-axis. """
    c = np.cos(t)
    s = np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])


def compute_box_3d(obj, P):
    """ Takes an object and a projection matrix (P) and projects the 3d
        bounding box into the image plane.
        Returns:
            corners_2d: (8,2) array in left image coord.
            corners_3d: (8,3) array in rect camera coord.
    """
    # compute rotational matrix around yaw axis
    R = roty(obj.ry)  # [3,3]
    # 3d bounding box dimensions
    l = obj.l
    w = obj.w
    h = obj.h
    # 3d bounding box corners
    x_corners = [l / 2, l / 2, -l / 2, -l / 2, l / 2, l / 2, -l / 2, -l / 2]
    y_corners = [0, 0, 0, 0, -h, -h, -h, -h]
    z_corners = [w / 2, -w / 2, -w / 2, w / 2, w / 2, -w / 2, -w / 2, w / 2]
    # rotate and translate 3d bounding box: corners_3d is [3,8] = [3,3] @ [3,8]
    corners_3d = np.dot(R, np.vstack([x_corners, y_corners, z_corners]))
    corners_3d[0, :] = corners_3d[0, :] + obj.t[0]
    corners_3d[1, :] = corners_3d[1, :] + obj.t[1]
    corners_3d[2, :] = corners_3d[2, :] + obj.t[2]
    # only draw 3d bounding box for objs in front of the camera
    if np.any(corners_3d[2, :] < 0.1):
        corners_2d = None
        return corners_2d, np.transpose(corners_3d)
    # project the 3d bounding box into the image plane
    corners_2d = project_to_image(np.transpose(corners_3d), P)
    return corners_2d, np.transpose(corners_3d)


def project_to_image(pts_3d, P):
    """ Project 3d points to image plane.
        Usage: pts_2d = project_to_image(pts_3d, P)
          input:  pts_3d: nx3 matrix
                  P:      3x4 projection matrix
          output: pts_2d: nx2 matrix
        P(3x4) dot pts_3d_extended(4xn) = projected_pts_2d(3xn)
        => normalize projected_pts_2d(2xn)
        <=> pts_3d_extended(nx4) dot P'(4x3) = projected_pts_2d(nx3)
        => normalize projected_pts_2d(nx2)
    """
    n = pts_3d.shape[0]
    pts_3d_extend = np.hstack((pts_3d, np.ones((n, 1))))  # [8,4] <-- [8,3]
    pts_2d = np.dot(pts_3d_extend, np.transpose(P))  # nx3
    pts_2d[:, 0] /= pts_2d[:, 2]
    pts_2d[:, 1] /= pts_2d[:, 2]
    return pts_2d[:, 0:2]
def inverse_rigid_trans(Tr):
    """ Inverse a rigid body transform matrix (3x4 as [R|t])
        [R'|-R't; 0|1]
    """
    inv_Tr = np.zeros_like(Tr)  # 3x4
    inv_Tr[0:3, 0:3] = np.transpose(Tr[0:3, 0:3])
    inv_Tr[0:3, 3] = np.dot(-np.transpose(Tr[0:3, 0:3]), Tr[0:3, 3])
    return inv_Tr


class Object3d(object):
    """ 3d object label """

    def __init__(self, label_file_line):
        data = label_file_line.split(" ")
        data[1:] = [float(x) for x in data[1:]]
        # extract label, truncation, occlusion
        self.type = data[0]  # 'Car', 'Pedestrian', ...
        self.truncation = data[1]  # truncated pixel ratio [0..1]
        self.occlusion = int(data[2])  # 0=visible, 1=partly occluded, 2=fully occluded, 3=unknown
        self.alpha = data[3]  # object observation angle [-pi..pi]
        # extract 2d bounding box in 0-based coordinates
        self.xmin = data[4]  # left
        self.ymin = data[5]  # top
        self.xmax = data[6]  # right
        self.ymax = data[7]  # bottom
        self.box2d = np.array([self.xmin, self.ymin, self.xmax, self.ymax])
        # extract 3d bounding box information
        self.h = data[8]  # box height
        self.w = data[9]  # box width
        self.l = data[10]  # box length (in meters)
        self.t = (data[11], data[12], data[13])  # location (x,y,z) in camera coord.
        self.ry = data[14]  # yaw angle (around Y-axis in camera coordinates) [-pi..pi]

    def estimate_difficulty(self):
        """ Estimate detection difficulty as defined on the KITTI website """
        # 2D bounding box height in pixels
        bb_height = np.abs(self.ymax - self.ymin)
        if bb_height >= 40 and self.occlusion == 0 and self.truncation <= 0.15:
            return "Easy"
        elif bb_height >= 25 and self.occlusion in [0, 1] and self.truncation <= 0.30:
            return "Moderate"
        elif bb_height >= 25 and self.occlusion in [0, 1, 2] and self.truncation <= 0.50:
            return "Hard"
        else:
            return "Unknown"

    def print_object(self):
        print("Type, truncation, occlusion, alpha: %s, %d, %d, %f"
              % (self.type, self.truncation, self.occlusion, self.alpha))
        print("2d bbox (x0,y0,x1,y1): %f, %f, %f, %f"
              % (self.xmin, self.ymin, self.xmax, self.ymax))
        print("3d bbox h,w,l: %f, %f, %f" % (self.h, self.w, self.l))
        print("3d bbox location, ry: (%f, %f, %f), %f"
              % (self.t[0], self.t[1], self.t[2], self.ry))
        print("Difficulty of estimation: {}".format(self.estimate_difficulty()))


def draw_projected_box3d(image, qs, color=(0, 255, 0), thickness=2):
    """ Draw 3d bounding box in image
        qs: (8,2) array of vertices for the 3d box in following order:
            1 -------- 0
           /|         /|
          2 -------- 3 .
          | |        | |
          . 5 -------- 4
          |/         |/
          6 -------- 7
    """
    qs = qs.astype(np.int32)
    for k in range(0, 4):
        # Ref: http://docs.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.html
        i, j = k, (k + 1) % 4
        # use LINE_AA for opencv3
        cv2.line(image, (qs[i, 0], qs[i, 1]), (qs[j, 0], qs[j, 1]), color, thickness)
        i, j = k + 4, (k + 1) % 4 + 4
        cv2.line(image, (qs[i, 0], qs[i, 1]), (qs[j, 0], qs[j, 1]), color, thickness)
        i, j = k, k + 4
        cv2.line(image, (qs[i, 0], qs[i, 1]), (qs[j, 0], qs[j, 1]), color, thickness)
    return image


def read_txt_to_objs(path_txt):
    lines = [line.rstrip() for line in open(path_txt)]
    objects = [Object3d(line) for line in lines]
    return objects


def get_lidar_in_image_fov(pc_velo, calib, xmin, ymin, xmax, ymax, return_more=False, clip_distance=2.0):
):""" Filter lidar points, keep those in image FOV """pts_2d = calib.project_velo_to_image(pc_velo)fov_inds = ((pts_2d[:, 0] < xmax)& (pts_2d[:, 0] >= xmin)& (pts_2d[:, 1] < ymax)& (pts_2d[:, 1] >= ymin))fov_inds = fov_inds & (pc_velo[:, 0] > clip_distance)imgfov_pc_velo = pc_velo[fov_inds, :]if return_more:return imgfov_pc_velo, pts_2d, fov_indselse:return imgfov_pc_veloclass Calibration(object):""" Calibration matrices and utils3d XYZ in <label>.txt are in rect camera coord.2d box xy are in image2 coordPoints in <lidar>.bin are in Velodyne coord.y_image2 = P^2_rect * x_recty_image2 = P^2_rect * R0_rect * Tr_velo_to_cam * x_velox_ref = Tr_velo_to_cam * x_velox_rect = R0_rect * x_refP^2_rect = [f^2_u,  0,      c^2_u,  -f^2_u b^2_x;0,      f^2_v,  c^2_v,  -f^2_v b^2_y;0,      0,      1,      0]= K * [1|t]image2 coord:----> x-axis (u)||v y-axis (v)velodyne coord:front x, left y, up zrect/ref camera coord:right x, down y, front zRef (KITTI paper): http://www.cvlibs.net/publications/Geiger2013IJRR.pdfTODO(rqi): do matrix multiplication only once for each projection."""def __init__(self, calib_filepath, from_video=False):if from_video:calibs = self.read_calib_from_video(calib_filepath)else:calibs = self.read_calib_file(calib_filepath)# Projection matrix from rect camera coord to image2 coordself.P = calibs["P2"]self.P = np.reshape(self.P, [3, 4])# Rigid transform from Velodyne coord to reference camera coordself.V2C = calibs["Tr_velo_to_cam"]self.V2C = np.reshape(self.V2C, [3, 4])self.C2V = inverse_rigid_trans(self.V2C)# Rotation from reference camera coord to rect camera coordself.R0 = calibs["R0_rect"]self.R0 = np.reshape(self.R0, [3, 3])# Camera intrinsics and extrinsicsself.c_u = self.P[0, 2]self.c_v = self.P[1, 2]self.f_u = self.P[0, 0]self.f_v = self.P[1, 1]self.b_x = self.P[0, 3] / (-self.f_u)  # relativeself.b_y = self.P[1, 3] / (-self.f_v)def read_calib_file(self, filepath):""" Read in a calibration file and parse into a dictionary.Ref: https://github.com/utiasSTARS/pykitti/blob/master/pykitti/utils.py"""data = {}with open(filepath, "r") as f:for line in f.readlines():line = line.rstrip()if len(line) == 0:continuekey, value = line.split(":", 1)# The only non-float values in these files are dates, which# we don't care about anywaytry:data[key] = np.array([float(x) for x in value.split()])except ValueError:passreturn datadef read_calib_from_video(self, calib_root_dir):""" Read calibration for camera 2 from video calib files.there are calib_cam_to_cam and calib_velo_to_cam under the calib_root_dir"""data = {}cam2cam = self.read_calib_file(os.path.join(calib_root_dir, "calib_cam_to_cam.txt"))velo2cam = self.read_calib_file(os.path.join(calib_root_dir, "calib_velo_to_cam.txt"))Tr_velo_to_cam = np.zeros((3, 4))Tr_velo_to_cam[0:3, 0:3] = np.reshape(velo2cam["R"], [3, 3])Tr_velo_to_cam[:, 3] = velo2cam["T"]data["Tr_velo_to_cam"] = np.reshape(Tr_velo_to_cam, [12])data["R0_rect"] = cam2cam["R_rect_00"]data["P2"] = cam2cam["P_rect_02"]return datadef cart2hom(self, pts_3d):""" Input: nx3 points in CartesianOupput: nx4 points in Homogeneous by pending 1"""n = pts_3d.shape[0]pts_3d_hom = np.hstack((pts_3d, np.ones((n, 1))))return pts_3d_hom# ===========================# ------- 3d to 3d ----------# ===========================def project_velo_to_ref(self, pts_3d_velo):pts_3d_velo = self.cart2hom(pts_3d_velo)  # nx4return np.dot(pts_3d_velo, np.transpose(self.V2C))def project_ref_to_velo(self, pts_3d_ref):pts_3d_ref = self.cart2hom(pts_3d_ref)  # nx4return np.dot(pts_3d_ref, np.transpose(self.C2V))def 
    def project_rect_to_ref(self, pts_3d_rect):
        """ Input and Output are nx3 points """
        return np.transpose(np.dot(np.linalg.inv(self.R0), np.transpose(pts_3d_rect)))

    def project_ref_to_rect(self, pts_3d_ref):
        """ Input and Output are nx3 points """
        return np.transpose(np.dot(self.R0, np.transpose(pts_3d_ref)))

    def project_rect_to_velo(self, pts_3d_rect):
        """ Input: nx3 points in rect camera coord.
            Output: nx3 points in velodyne coord.
        """
        pts_3d_ref = self.project_rect_to_ref(pts_3d_rect)
        return self.project_ref_to_velo(pts_3d_ref)

    def project_velo_to_rect(self, pts_3d_velo):
        pts_3d_ref = self.project_velo_to_ref(pts_3d_velo)
        return self.project_ref_to_rect(pts_3d_ref)

    # ===========================
    # ------- 3d to 2d ----------
    # ===========================
    def project_rect_to_image(self, pts_3d_rect):
        """ Input: nx3 points in rect camera coord.
            Output: nx2 points in image2 coord.
        """
        pts_3d_rect = self.cart2hom(pts_3d_rect)
        pts_2d = np.dot(pts_3d_rect, np.transpose(self.P))  # nx3
        pts_2d[:, 0] /= pts_2d[:, 2]
        pts_2d[:, 1] /= pts_2d[:, 2]
        return pts_2d[:, 0:2]

    def project_velo_to_image(self, pts_3d_velo):
        """ Input: nx3 points in velodyne coord.
            Output: nx2 points in image2 coord.
        """
        pts_3d_rect = self.project_velo_to_rect(pts_3d_velo)
        return self.project_rect_to_image(pts_3d_rect)

    def project_8p_to_4p(self, pts_2d):
        x0 = np.min(pts_2d[:, 0])
        x1 = np.max(pts_2d[:, 0])
        y0 = np.min(pts_2d[:, 1])
        y1 = np.max(pts_2d[:, 1])
        x0 = max(0, x0)
        # x1 = min(x1, proj.image_width)
        y0 = max(0, y0)
        # y1 = min(y1, proj.image_height)
        return np.array([x0, y0, x1, y1])

    def project_velo_to_4p(self, pts_3d_velo):
        """ Input: nx3 points in velodyne coord.
            Output: 4 points in image2 coord.
        """
        pts_2d_velo = self.project_velo_to_image(pts_3d_velo)
        return self.project_8p_to_4p(pts_2d_velo)

    # ===========================
    # ------- 2d to 3d ----------
    # ===========================
    def project_image_to_rect(self, uv_depth):
        """ Input: nx3, first two channels are uv, 3rd channel
                   is depth in rect camera coord.
            Output: nx3 points in rect camera coord.
        """
        n = uv_depth.shape[0]
        x = ((uv_depth[:, 0] - self.c_u) * uv_depth[:, 2]) / self.f_u + self.b_x
        y = ((uv_depth[:, 1] - self.c_v) * uv_depth[:, 2]) / self.f_v + self.b_y
        pts_3d_rect = np.zeros((n, 3))
        pts_3d_rect[:, 0] = x
        pts_3d_rect[:, 1] = y
        pts_3d_rect[:, 2] = uv_depth[:, 2]
        return pts_3d_rect

    def project_image_to_velo(self, uv_depth):
        pts_3d_rect = self.project_image_to_rect(uv_depth)
        return self.project_rect_to_velo(pts_3d_rect)

    def project_depth_to_velo(self, depth, constraint_box=True):
        # note: get_depth_pt3d and cbox come from the original kitti_object_vis repo;
        # this helper is not used by the demo below
        depth_pt3d = get_depth_pt3d(depth)
        depth_UVDepth = np.zeros_like(depth_pt3d)
        depth_UVDepth[:, 0] = depth_pt3d[:, 1]
        depth_UVDepth[:, 1] = depth_pt3d[:, 0]
        depth_UVDepth[:, 2] = depth_pt3d[:, 2]
        depth_pc_velo = self.project_image_to_velo(depth_UVDepth)
        if constraint_box:
            depth_box_fov_inds = (
                (depth_pc_velo[:, 0] < cbox[0][1])
                & (depth_pc_velo[:, 0] >= cbox[0][0])
                & (depth_pc_velo[:, 1] < cbox[1][1])
                & (depth_pc_velo[:, 1] >= cbox[1][0])
                & (depth_pc_velo[:, 2] < cbox[2][1])
                & (depth_pc_velo[:, 2] >= cbox[2][0])
            )
            depth_pc_velo = depth_pc_velo[depth_box_fov_inds]
        return depth_pc_velo


calib = Calibration(path_calib)
img = cv2.imread(path_img)
objects = read_txt_to_objs(path_txt)
img1 = np.copy(img)  # for 2d bbox
img2 = np.copy(img)  # for 3d bbox

for obj in objects:
    if obj.type == "DontCare":
        continue
    if obj.type == "Car":
        cv2.rectangle(
            img1,
            (int(obj.xmin), int(obj.ymin)),
            (int(obj.xmax), int(obj.ymax)),
            (0, 255, 0),
            2,
        )
    if obj.type == "Pedestrian":
        cv2.rectangle(
            img1,
            (int(obj.xmin), int(obj.ymin)),
            (int(obj.xmax), int(obj.ymax)),
            (255, 255, 0),
            2,
        )
    if obj.type == "Cyclist":
        cv2.rectangle(
            img1,
            (int(obj.xmin), int(obj.ymin)),
            (int(obj.xmax), int(obj.ymax)),
            (0, 255, 255),
            2,
        )
    box3d_pts_2d, _ = compute_box_3d(obj, calib.P)
    if box3d_pts_2d is None:
        print("something wrong in the 3D box.")
        continue
    if obj.type == "Car":
        img2 = draw_projected_box3d(img2, box3d_pts_2d, color=(0, 255, 0))
    elif obj.type == "Pedestrian":
        img2 = draw_projected_box3d(img2, box3d_pts_2d, color=(255, 255, 0))
    elif obj.type == "Cyclist":
        img2 = draw_projected_box3d(img2, box3d_pts_2d, color=(0, 255, 255))

cv2.namedWindow("img-2d", 0)
cv2.imshow("img-2d", img1)
cv2.namedWindow("img-3d", 0)
cv2.imshow("img-3d", img2)
cv2.waitKey()

## show_lidar_on_image
dtype = np.float32
n_vec = 4
scan = np.fromfile(path_lidar, dtype=dtype)
pc_velo = scan.reshape((-1, n_vec))
pc_velo = pc_velo[:, 0:3]
img_height, img_width, _ = img.shape
imgfov_pc_velo, pts_2d, fov_inds = get_lidar_in_image_fov(
    pc_velo, calib, 0, 0, img_width, img_height, True
)
imgfov_pts_2d = pts_2d[fov_inds, :]
imgfov_pc_rect = calib.project_velo_to_rect(imgfov_pc_velo)

import matplotlib.pyplot as plt

cmap = plt.cm.get_cmap("hsv", 256)
cmap = np.array([cmap(i) for i in range(256)])[:, :3] * 255

for i in range(imgfov_pts_2d.shape[0]):
    depth = imgfov_pc_rect[i, 2]
    # clamp the colormap index so very close points do not overflow the 256-entry LUT
    color = cmap[min(int(640.0 / depth), 255), :]
    cv2.circle(
        img,
        (int(np.round(imgfov_pts_2d[i, 0])), int(np.round(imgfov_pts_2d[i, 1]))),
        2,
        color=tuple(color),
        thickness=-1,
    )
cv2.imshow("projection", img)
cv2.waitKey(0)
