corpus = [ "我喜欢吃苹果", "我喜欢吃香蕉", "她喜欢吃葡萄", "他不喜欢吃香蕉", "他喜欢吃苹果", "她喜欢吃草莓"]
# 定义一个分词函数,将文本转换为单个字符的列表 deftokenize(text): return [char for char in text] # 将文本拆分为字符列表 # 对每个文本进行分词,并打印出对应的单字列表 print("单字列表:") for text in corpus: tokens = tokenize(text) print(tokens)
2024/06/12 16:15:41 Open AI ✅ Answer: 傻妞: Oh, the sun weighs a tremendous amount. It's a star, far heavier than our Earth. Scientists actually measure it by mass rather than weight: the sun's mass is about 333,000 times that of Earth, a real heavyweight. Just imagine, if it could turn into a marshmallow, how soft and bright it would be! But the sun is far too distant from us, so let's not try to carry its weight, haha.
By default, Kubernetes does not take the NUMA architecture into account when allocating resources to a pod on a node. CPU, for example, is allocated through the cgroup CFS scheduler, which is not NUMA-aware. To improve pod performance, resource allocation needs to be made NUMA-aware. To that end, kubelet supports NUMA through the CPU Manager, Memory Manager, Device Manager, and Topology Manager features; they only apply to pods of the Guaranteed QoS class. The versions in which each feature is supported are as follows:
| Feature          | alpha | beta | stable |
|------------------|-------|------|--------|
| CPU Manager      | 1.8   | 1.12 | 1.26   |
| Memory Manager   | 1.21  | 1.22 | -      |
| Topology Manager | 1.16  | 1.18 | -      |
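These managers are switched on through the kubelet configuration. As a hedged sketch, the struct below is a local, hypothetical mirror of the relevant KubeletConfiguration v1beta1 keys (it is not the kubelet's own type); it only shows which fields control the NUMA-related behavior:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// numaRelevantKubeletConfig is a hypothetical local mirror of the
// KubeletConfiguration keys that matter for NUMA alignment.
type numaRelevantKubeletConfig struct {
	CPUManagerPolicy      string `json:"cpuManagerPolicy"`      // "none" (default) or "static"
	MemoryManagerPolicy   string `json:"memoryManagerPolicy"`   // "None" (default) or "Static"
	TopologyManagerPolicy string `json:"topologyManagerPolicy"` // "none", "best-effort", "restricted", "single-numa-node"
	TopologyManagerScope  string `json:"topologyManagerScope"`  // "container" (default) or "pod"
}

func main() {
	cfg := numaRelevantKubeletConfig{
		CPUManagerPolicy:      "static",
		MemoryManagerPolicy:   "Static",
		TopologyManagerPolicy: "single-numa-node",
		TopologyManagerScope:  "pod",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```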
CPU Manager
Kubernetes enforces a pod's CPU limits through cgroup CFS quotas. Under CFS, a pod's threads may be scheduled onto different cores over time, which invalidates CPU caches. For pods with very high performance requirements, the cgroup cpuset feature can instead pin the pod to dedicated cores and avoid this penalty.
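kubelet implements this pinning through the cgroup cpuset controller. As a minimal, purely illustrative sketch of what "binding to cores" means at the OS level, the Linux-only program below pins the calling thread via sched_setaffinity (using golang.org/x/sys/unix rather than cgroups):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Build an affinity mask allowing only CPUs 0 and 1.
	var set unix.CPUSet
	set.Zero()
	set.Set(0)
	set.Set(1)

	// pid 0 means "the calling thread".
	if err := unix.SchedSetaffinity(0, &set); err != nil {
		panic(err)
	}

	// Read the mask back to confirm the pinning took effect.
	var got unix.CPUSet
	if err := unix.SchedGetaffinity(0, &got); err != nil {
		panic(err)
	}
	fmt.Printf("pinned to %d CPUs\n", got.Count())
}
```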
If the CPU manager policy is changed (for example from none to static) without removing the old state file, kubelet fails to start:

```
Mar 06 15:00:02 iZt4nd5yyw9vfuxn3q2g3tZ kubelet[102800]: E0306 15:00:02.463939  102800 cpu_manager.go:223] "Could not initialize checkpoint manager, please drain node and remove policy state file" err="could not restore state from checkpoint: configured policy \"static\" differs from state checkpoint policy \"none\", please drain this node and delete the CPU manager checkpoint file \"/var/lib/kubelet/cpu_manager_state\" before restarting Kubelet"
Mar 06 15:00:02 iZt4nd5yyw9vfuxn3q2g3tZ kubelet[102800]: E0306 15:00:02.463972  102800 kubelet.go:1392] "Failed to start ContainerManager" err="start cpu manager error: could not restore state from checkpoint: configured policy \"static\" differs from state checkpoint policy \"none\", please drain this node and delete the CPU manager checkpoint file \"/var/lib/kubelet/cpu_manager_state\" before restarting Kubelet"
```

As the message instructs, the fix is to drain the node, delete /var/lib/kubelet/cpu_manager_state, and restart kubelet.
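The checkpoint itself is a small JSON file. Below is a debugging sketch for inspecting it; the field names are assumptions based on kubelet's checkpoint format and may change between versions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// cpuManagerCheckpoint mirrors (by assumption) the JSON layout of
// /var/lib/kubelet/cpu_manager_state.
type cpuManagerCheckpoint struct {
	PolicyName    string         `json:"policyName"`
	DefaultCPUSet string         `json:"defaultCpuSet"`
	Entries       map[string]any `json:"entries,omitempty"`
	Checksum      uint64         `json:"checksum"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var cp cpuManagerCheckpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		panic(err)
	}
	// kubelet refuses to start when this differs from the configured policy.
	fmt.Println("checkpointed policy:", cp.PolicyName)
	fmt.Println("default cpuset:     ", cp.DefaultCPUSet)
}
```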
```go
// TopologyHint is a struct containing the NUMANodeAffinity for a Container
type TopologyHint struct {
	// Bitmask recording which NUMA nodes can satisfy the resource request.
	NUMANodeAffinity bitmask.BitMask
	// Preferred is set to true when the NUMANodeAffinity encodes a preferred
	// allocation for the Container. It is set to false otherwise.
	Preferred bool
}
```
```go
// HintProvider is an interface for components that want to collaborate to
// achieve globally optimal concrete resource alignment with respect to
// NUMA locality.
type HintProvider interface {
	// GetTopologyHints returns a map of resource names to a list of possible
	// concrete resource allocations in terms of NUMA locality hints. Each hint
	// is optionally marked "preferred" and indicates the set of NUMA nodes
	// involved in the hypothetical allocation. The topology manager calls
	// this function for each hint provider, and merges the hints to produce
	// a consensus "best" hint. The hint providers may subsequently query the
	// topology manager to influence actual resource assignment.
	GetTopologyHints(pod *v1.Pod, container *v1.Container) map[string][]TopologyHint
	// GetPodTopologyHints returns a map of resource names to a list of possible
	// concrete resource allocations per Pod in terms of NUMA locality hints.
	GetPodTopologyHints(pod *v1.Pod) map[string][]TopologyHint
	// Allocate triggers resource allocation to occur on the HintProvider after
	// all hints have been gathered and the aggregated Hint is available via a
	// call to Store.GetAffinity().
	Allocate(pod *v1.Pod, container *v1.Container) error
}
```
CPU Manager, Memory Manager, and Device Manager all implement this interface. The Topology Manager merges the TopologyHints returned by each manager to decide the final NUMA node assignment, and then calls each manager's Allocate to perform the allocation on the chosen NUMA nodes.
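The following is a minimal sketch of that merge step. The types and policy are simplified assumptions: the affinity is a plain uint64 bitmask and the merge rule is "bitwise-AND one hint from each provider, keep the combination that is preferred and spans the fewest NUMA nodes". It mirrors the idea, not kubelet's exact implementation:

```go
package main

import (
	"fmt"
	"math/bits"
)

// hint is a simplified TopologyHint: bit i set means NUMA node i
// can satisfy the request.
type hint struct {
	affinity  uint64
	preferred bool
}

// merge tries every combination of one hint per provider, ANDs the masks,
// and keeps the best valid result: preferred first, then fewest NUMA nodes.
func merge(providers [][]hint) hint {
	var best hint
	found := false
	var walk func(i int, acc hint)
	walk = func(i int, acc hint) {
		if i == len(providers) {
			if acc.affinity == 0 {
				return // no common NUMA node; not a valid allocation
			}
			if !found || better(acc, best) {
				best, found = acc, true
			}
			return
		}
		for _, h := range providers[i] {
			walk(i+1, hint{
				affinity:  acc.affinity & h.affinity,
				preferred: acc.preferred && h.preferred,
			})
		}
	}
	walk(0, hint{affinity: ^uint64(0), preferred: true})
	return best
}

func better(a, b hint) bool {
	if a.preferred != b.preferred {
		return a.preferred
	}
	return bits.OnesCount64(a.affinity) < bits.OnesCount64(b.affinity)
}

func main() {
	cpu := []hint{{0b01, true}, {0b11, false}} // CPU Manager hints
	mem := []hint{{0b01, true}}                // Memory Manager hints
	dev := []hint{{0b10, true}, {0b11, false}} // Device Manager hints
	m := merge([][]hint{cpu, mem, dev})
	fmt.Printf("merged: affinity=%02b preferred=%v\n", m.affinity, m.preferred)
}
```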
In the certificate's Subject, O maps to the Kubernetes Group and CN maps to the Kubernetes User: kube-apiserver extracts the User and Group from the certificate's CN and O fields. Kubernetes does not actually store User or Group objects anywhere; it relies entirely on the information carried in the certificate.
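A minimal sketch of where these fields live, using Go's standard crypto/x509: it self-signs a throwaway certificate whose Subject mirrors a kubeconfig client certificate (the values "jane" and "system:masters" are illustrative, not from the source) and reads back the CN and O that kube-apiserver would interpret as User and Group:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject: pkix.Name{
			CommonName:   "jane",                     // kube-apiserver reads this as the User
			Organization: []string{"system:masters"}, // ...and this as the Group(s)
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("User:  ", cert.Subject.CommonName)
	fmt.Println("Groups:", cert.Subject.Organization)
}
```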