How to Connect JProfiler to a JVM Running in a Kubernetes Pod

Running a JVM in a Kubernetes Pod somewhat complicates things when it comes to connecting to it from an external profiling tool. Below is an example of how to accomplish just that with one such tool – JProfiler.

A definitely not recommended approach is to “bake” JProfiler into the application’s image, which leads to tight coupling and a larger application image.

A better way is to use an Init Container. A Pod can have one or more Init Containers. For the most part they are like regular Containers, with the distinctive property that each must run to completion successfully; only then are the application Container(s) in the Pod started.

The approach here is to use an Init Container to copy the JProfiler installation to a volume shared between our Init Container and the other Containers that will be started in the Pod. This way, our JVM can reference at startup time the JProfiler agent from the shared volume.


This assumes that an application image and working deployment configuration for the Java application exist.

We will also need a JProfiler image. If you don’t have a JProfiler image, here is a sample Dockerfile that can be used to build one (check if your JProfiler license agreement allows you to do that):

FROM centos:7

# Switch to root
USER root

# JPROFILER_URL is an assumption: point it at the actual download location for your JProfiler version
ENV HOME="/jprofiler" \
    JPROFILER_DISTRO="jprofiler_linux_10_1_1.tar.gz" \
    JPROFILER_URL="https://download.ej-technologies.com/jprofiler" \
    STAGING_DIR="/jprofiler-staging"

LABEL io.k8s.display-name="JProfiler from ${JPROFILER_DISTRO}"

RUN yum -y update \
 && yum -y install ca-certificates curl \
 && mkdir -p ${HOME} ${STAGING_DIR} \
 && cd ${STAGING_DIR} \
 # curl is expected to be available; wget would work, too
 # Add a User-Agent header to pretend to be a browser and avoid getting an HTTP 404 response
 && curl -v -OL "${JPROFILER_URL}/${JPROFILER_DISTRO}" -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.62 Safari/537.36" \
 && tar -xzf ${JPROFILER_DISTRO} \
 && rm -f ${JPROFILER_DISTRO} \
 # Eliminate the version-specific directory
 && cp -R */* ${HOME} \
 && rm -Rf ${STAGING_DIR} \
 && chmod -R 0775 ${HOME} \
 && yum clean all

# chown and switch user as needed

The above is for an older JProfiler version, but the same approach works for newer ones.
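Once the Dockerfile is ready, the image can be built and pushed to a registry the cluster can pull from. The registry and tag below are placeholders to adapt:

```shell
# Build the JProfiler image from the Dockerfile above
docker build -t <registry>/jprofiler:10.1.1 .

# Push it so the Kubernetes cluster can pull it
docker push <registry>/jprofiler:10.1.1
```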


Change the application’s deployment configuration as follows:

  • If not defined already, add a “volumes” section under “spec.template.spec” and define a new volume:
  - name: jprofiler
    emptyDir: {}
  • If not defined already, add “initContainers” (Kubernetes 1.6+) under “spec.template.spec” and define an Init Container using JProfiler’s image name and tag (and, if needed, replace “/jprofiler” with the location of JProfiler’s directory in that image):
  - name: jprofiler-init
    image: <jprofiler-image>:<tag>
    command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
    volumeMounts:
      - name: jprofiler
        mountPath: "/tmp/jprofiler"
  • Add a “volumeMounts” entry as part of the application’s container definition:
  - name: jprofiler
    mountPath: /jprofiler
  • Add JProfiler as an agent to the JVM startup arguments. “Where” to add it depends on how the application is started and how JVM arguments are passed in:

    -agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849

Change the path accordingly if using an image other than one built using the Dockerfile above.

Notice that there isn’t a “nowait” argument. As a result, the JVM will block at startup and wait for a JProfiler GUI to connect. The reason is that with this configuration the profiling agent does not receive its profiling settings as command-line parameters or from a config file, but from the JProfiler GUI.
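Putting the deployment changes together, the relevant part of the Deployment manifest might look like the following sketch. The names and labels are hypothetical, the image references are placeholders, and passing the agent argument via the standard JAVA_TOOL_OPTIONS environment variable is just one option; adapt all of these to how your application is actually started:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Volume shared between the Init Container and the application Container
      volumes:
      - name: jprofiler
        emptyDir: {}
      # Copies the JProfiler installation into the shared volume
      initContainers:
      - name: jprofiler-init
        image: <jprofiler-image>:<tag>
        command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
        volumeMounts:
        - name: jprofiler
          mountPath: /tmp/jprofiler
      containers:
      - name: my-app
        image: <application-image>:<tag>
        # One way to pass the agent argument; the JVM picks this variable up automatically
        env:
        - name: JAVA_TOOL_OPTIONS
          value: "-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"
        volumeMounts:
        - name: jprofiler
          mountPath: /jprofiler
```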

Running the Application

Deploy the application with the new deployment configuration, using a single replica. For example, by configuring:

  replicas: 1

Another way is to use “replicas: 0”, deploy the application, and scale it to 1 at a later point, when ready to profile the application.
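With the “replicas: 0” approach, the later scale-up can be done with kubectl. The namespace and deployment name below are placeholders:

```shell
# Scale the application up when ready to profile
kubectl -n <namespace> scale deployment <deployment-name> --replicas=1
```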

Notice that:

  • Without the “nowait” argument, the application won’t start until the JProfiler GUI connects to it.
  • If the JProfiler GUI is started first, it has to be configured to wait for the application to be started.

Next, connect local JProfiler to the JVM that is in a Kubernetes Pod:

  • Set up port forwarding from the local host to the JProfiler agent’s port in the Kubernetes Pod (8849):
    kubectl -n <namespace> port-forward <pod-name> 8849:8849

    Use something like

    kubectl -n <namespace> get pods

    to find out what the pod’s name is.

    The local port 8849 (the number to the left of “:”) must be available. If it is not, specify a different port and use it in the step below.

  • Start JProfiler up locally and point it to localhost, port 8849.
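If local port 8849 is already taken, forwarding a different local port (9000 here is an arbitrary example) works just as well:

```shell
# Forward local port 9000 to the JProfiler agent's port 8849 in the Pod
kubectl -n <namespace> port-forward <pod-name> 9000:8849
```

In that case, point the JProfiler GUI at localhost, port 9000 instead.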

At this point JProfiler should connect to the JVM in the Pod and the application startup should continue. Adjust the application’s Readiness and Liveness Probes, if any are defined, or disable them while profiling: a failing Liveness Probe in particular will restart the Container before JProfiler can be connected to the JVM.
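For example, if the application uses HTTP probes, raising the initial delays leaves time to attach JProfiler before the first check fires. The path, port, and delay values below are illustrative only:

```yaml
readinessProbe:
  httpGet:
    path: /health          # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 300  # generous delay while the JVM waits for JProfiler
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 300
```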

Happy Profiling!