Hello Julian,
Thank you for the feedback! The issue you described is most likely related to how the JVM's CPU settings interact with the container limits. By setting -XX:ActiveProcessorCount to 24, the JVM sizes itself for that number of cores: garbage collector threads, JIT compiler threads, and default thread pools are all scaled as if 24 CPUs were available, which leads to oversized thread pools and unnecessary context switching in a container whose limit is 1000 millicores (one core).
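To make this concrete, here is a minimal sketch (the class name is just illustrative) you could run inside the pod to see what the JVM believes it has to work with:

```java
import java.util.concurrent.ForkJoinPool;

public class CpuView {
    public static void main(String[] args) {
        // With -XX:ActiveProcessorCount=24 this reports 24, even though the
        // cgroup quota only allows roughly one core of actual CPU time.
        System.out.println("availableProcessors = "
                + Runtime.getRuntime().availableProcessors());

        // The common ForkJoinPool (used by parallel streams and many defaults)
        // derives its parallelism from that same number.
        System.out.println("commonPool parallelism = "
                + ForkJoinPool.commonPool().getParallelism());
    }
}
```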
Additionally, it is important to ensure that the JVM does not try to use more CPU than the container limit allows, otherwise the kernel throttles it. Unlike memory, where exceeding the limit results in an OOMKill and the container being restarted, exceeding the CPU limit does not terminate the container: it keeps running, but with degraded performance whenever it hits the quota. That is why careful configuration is needed to keep the allocation and the actual performance in balance.
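If you want to confirm whether throttling is actually happening, a rough sketch like the one below reads the cgroup CPU statistics from inside the container. It assumes cgroup v2, where the file is /sys/fs/cgroup/cpu.stat; on cgroup v1 the equivalent is /sys/fs/cgroup/cpu/cpu.stat:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ThrottleCheck {
    public static void main(String[] args) throws Exception {
        // Path assumes cgroup v2 mounted inside the container.
        Path stat = Path.of("/sys/fs/cgroup/cpu.stat");
        for (String line : Files.readAllLines(stat)) {
            // nr_throttled: scheduling periods in which the quota was exhausted
            // throttled_usec: total time the container was paused by the kernel
            if (line.startsWith("nr_throttled") || line.startsWith("throttled_usec")) {
                System.out.println(line);
            }
        }
    }
}
```

If those counters grow steadily under load, the container is being throttled.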
Furthermore, configuring the JVM to recognize 24 cores in a container limited to 1000 millicores creates a large gap between the resources the JVM expects and those Kubernetes actually provides. Keep in mind that the CPU limit is enforced by the kernel through cgroups: even if the node has many more processors, the pod only receives the CPU time defined in its limit, regardless of how many cores the JVM believes it has. Forcing -XX:ActiveProcessorCount=24 therefore does not grant more CPU; it only misleads the JVM's sizing heuristics. This mismatch could well be the root of the problems you mentioned, such as the prolonged startup times and the general sluggishness of the application.
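As a hypothetical way to make that mismatch visible, you could compare the cgroup quota with the processor count the JVM reports. This sketch again assumes cgroup v2, where /sys/fs/cgroup/cpu.max contains the quota and period (for a 1000m limit it typically reads "100000 100000"):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CpuMismatch {
    public static void main(String[] args) throws Exception {
        // cpu.max holds "<quota> <period>" in microseconds, or "max" for no limit.
        String[] cpuMax = Files.readString(Path.of("/sys/fs/cgroup/cpu.max"))
                .trim().split("\\s+");
        double cgroupCpus = cpuMax[0].equals("max")
                ? Double.POSITIVE_INFINITY
                : Double.parseDouble(cpuMax[0]) / Double.parseDouble(cpuMax[1]);

        int jvmCpus = Runtime.getRuntime().availableProcessors();

        System.out.printf("cgroup CPU limit   : %.2f cores%n", cgroupCpus);
        System.out.printf("JVM processor count: %d%n", jvmCpus);
        // A large gap here (e.g. 1.00 vs 24) means thread pools, GC threads and
        // JIT compiler threads are sized for capacity the scheduler never grants.
    }
}
```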
I hope this helps. Thanks for the comment, and see you soon!