Keep in mind that any of the memory areas listed above can throw an OutOfMemoryError when it is insufficient. And once more, we have not touched the concepts of young-generation GC or JVM options yet.
https://docs.oracle.com/javase/specs/jvms/se17/html/index.html
To see the RAM limit inside a container:

```sh
kubectl exec <podname> -n <ns_name> -it -- sh
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1; on cgroup v2: /sys/fs/cgroup/memory.max
```
To see the current RAM usage:

```sh
cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # a fuzz value, not an exact number
```
You could learn about cgroups for a better understanding, and Java Native Memory Tracking (NMT) can be used together with the cgroup figures.
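A minimal sketch of using NMT; the `<pid>` placeholder stands for the JVM process inside the container:

```sh
# Start the JVM with NMT enabled (adds a small overhead)
java -XX:NativeMemoryTracking=summary -jar app.jar
# In another shell, ask the running JVM for a native memory breakdown
jcmd <pid> VM.native_memory summary
```

Comparing the NMT total against the cgroup usage shows how much memory lives outside the JVM's own bookkeeping.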
An application running on Kubernetes can't make full use of the node's memory.
On an 8 GB VM, we have to reserve about 1 GB for the OS and the Kubernetes components. Click here for more.
It is eclipse-temurin, formerly known as AdoptOpenJDK, that is widely used among developers. Here are some production-proven open-source images, with example pull commands below.
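As an illustration only (the tags are examples; check each repository on Docker Hub for current ones):

```sh
docker pull eclipse-temurin:17-jre   # Temurin, formerly AdoptOpenJDK
docker pull amazoncorretto:17        # Amazon Corretto
docker pull azul/zulu-openjdk:17     # Azul Zulu
```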
Such a stock JRE-based Docker image is fine for development use. In production, we could use multi-stage builds to cut the JRE size by around 100 MB.
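The saving typically comes from building a trimmed runtime with jlink in the first stage of the build. A minimal sketch; the module list here is an assumption, derive yours with jdeps:

```sh
# Build a custom runtime containing only the modules the app needs
jlink --add-modules java.base,java.logging,java.xml \
      --strip-debug --no-man-pages --no-header-files --compress=2 \
      --output /custom-jre
# Sanity-check the trimmed runtime
/custom-jre/bin/java -version
```

The final image stage then copies /custom-jre into a slim base image instead of shipping a full JRE.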
Java 11+ can derive its reservable memory directly from the hard limit in cgroups.
```sh
## JVM reads from cgroup files
java -Xlog:os+container=trace -version   # trace how the JVM detects the container limits
```
Since the JVM sees the limit, the -Xmx parameter is no longer required; use -XX:MaxRAMPercentage instead. For example, you can check the resulting heap size with the following command:
```sh
java -XX:+PrintFlagsFinal -XX:MaxRAMPercentage=70 -version \
  | grep -i maxheapsize
```
To alter the default configuration permanently, pass the environment variable in the pod spec or the Nomad HCL.
```sh
JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=70"
```
The percentage requires accurate estimation based on your workload. Allocating 90% of memory is not as ideal as the default value. In my tests, giving the heap more than 70% of RAM makes the process more likely to be OOM-killed by the kernel, making it impossible for the developer to dump an hprof file.
To configure a Nomad job, set the memory attribute, plus memory_max for oversubscription. A sketch with illustrative values:

```hcl
resources {
  memory     = 2048   # Mebibyte, the scheduled (soft) amount
  memory_max = 3072   # Mebibyte, the oversubscription ceiling
}
```
To configure a pod in Kubernetes, set the resources attribute. A sketch with illustrative values:

```yaml
resources:
  requests:
    memory: "2048Mi"   # Mi means Mebibyte
  limits:
    memory: "3072Mi"
```
For example, with an application that reserves 3 GB of heap memory, we might consider the following sizing.
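A rough back-of-the-envelope sketch, assuming the 70% heap percentage from above:

```sh
# container limit ≈ heap / MaxRAMPercentage = 3072 MiB / 0.70
echo "3072 / 0.70" | bc -l   # ≈ 4389 MiB, so round up to something like 4608Mi
```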
To use a calculator, click here.
The soft limit is not designed for peak traffic; use Horizontal Pod Autoscaling instead.
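A minimal way to set up an HPA from the command line (the deployment name and thresholds are illustrative):

```sh
# Scale the deployment between 2 and 5 replicas based on average CPU
kubectl autoscale deployment myapp --min=2 --max=5 --cpu-percent=80
```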
Here are some Java-agent-based solutions that collect real-time JVM memory usage and send it to a centralized database. A sidecar agent jar needs to be packed into the image, attached as shown below.
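Attaching is generally done through the standard -javaagent flag; the jar path and name here are hypothetical:

```sh
# Load a monitoring agent at JVM startup (path is an example)
JAVA_TOOL_OPTIONS="-javaagent:/agents/monitoring-agent.jar"
```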
Free versions require an SRE team to maintain the TSDB and dashboards. For more solutions, check out OpenAPM.
The following metrics are important.
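Heap occupancy and GC activity, for instance, can be sampled from a running JVM with jstat (the PID and interval are illustrative):

```sh
# Print heap utilization percentages and GC counts every second
jstat -gcutil <pid> 1000
```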
Pass -XX:+HeapDumpOnOutOfMemoryError to save hprof files in the pod. However, your pod might be destroyed once its health check fails, taking the dump with it.
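A sketch of wiring this up so the dump survives a restart; the mount path is an assumption, point -XX:HeapDumpPath at a persistent volume:

```sh
# Dump the heap on OOM into a directory backed by a persistent volume
JAVA_TOOL_OPTIONS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps"
```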
First of all, read the official documents before stepping in.
When your application slows down or crashes with an OutOfMemoryError, the cause is often related to the thread stack size -Xss; we could use -Xss512k to reduce the size.
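To check the stack size currently in effect before shrinking it (the grep filter is just for readability):

```sh
# ThreadStackSize is reported in KB
java -XX:+PrintFlagsFinal -version | grep -i threadstacksize
```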