
Synthetics private locations runtime transition guide

New Relic is transitioning Synthetics runtime images from Node.js 16 with Chrome 134 to Node.js 22 with Chrome 147 or higher. This update addresses CVE-2026-5281 and brings the runtimes to currently supported versions. Chrome 134 is outside of Google's supported channels, and Node.js 16 reached end-of-life in September 2023.

The new runtime images are available on Docker Hub.

Important

Action required by DATE. New Relic is ending support for Node.js 16 and Chrome 134 runtimes. If you don't update, New Relic will automatically migrate your monitors. However, automatic migration may not catch monitors that pass but fail silently on some script steps.

Public vs private location differences

The migration path depends on whether your monitors run on public or private locations.

Public locations: Change the Browser version dropdown in the monitor's configuration page from Chrome 134 to Latest. No infrastructure changes needed.

Private locations: You must deploy new runtime images on your synthetics job manager (SJM) infrastructure.

Important

For private locations, the Browser and Runtime version dropdowns in General settings have no effect. The version is determined entirely by the image the SJM is running. Changing the dropdown does not change which runtime version processes the job — only the deployed image does. As such, the runtimeTypeVersion attribute in SyntheticCheck is not a reliable way to identify the runtime version for private monitor jobs.

The Runtime Upgrades page at Synthetics > Runtime Upgrades is designed for public monitors. For private locations, there is no automated pre-migration validation through that tool. If you point multiple job managers—each running a different runtime—at the same private location key, any results shown are real job executions, not isolated compatibility checks. Use the parallel private location strategy to compare monitor behavior between runtimes before committing to the upgrade.

What's changing

Use this table to identify whether your monitor results came from an SJM running an rc1.x image.

Component | Old version | rc1.x version
Node.js | 16 | 22 or higher
Chrome | 134 | 147 or higher
API runtime version | 1.2.134 | 1.2.143 or higher
Browser runtime version | 3.0.55 | 3.0.63 or higher

Querying runtime versions

Each rc1.x Docker Hub tag corresponds to a specific nr.runtimeVersion value reported in SyntheticCheck events. You can query the nr.runtimeVersion attribute in NRQL:

SELECT count(*) FROM SyntheticCheck
WHERE locationLabel = 'YOUR_PRIVATE_LOCATION'
FACET type, nr.runtimeVersion SINCE 1 day ago

Tip

Use rc1.14 or higher for the browser runtime to get Chrome 146.0.7680.177, which includes the patch for CVE-2026-5281.

Key behavioral changes

These changes may impact your existing monitors:

  • HTTP keep-alive default changed. Node.js 22 defaults http.globalAgent to keepAlive: true (it was false in Node.js 16). Scripts that create custom HTTP agents without explicitly setting keepAlive: false may experience longer execution times or timeouts, as connections remain open and prevent the process from exiting.

  • Higher resource usage. Chrome 147 requires more CPU and memory than Chrome 134 for the same workload. Browser runtime containers typically use 625-980 MiB of their 3.256 GiB default memory limit during execution, compared to lower usage on Chrome 134.

  • Increased container overhead. Scripted browser monitors have an average container overhead of 6-10 seconds. Scripted API monitors average 2-4 seconds. Monitors that were close to timeout thresholds on the old runtime may now exceed them.

Choose your migration strategy

There are two approaches for private location migration. Choose based on your risk tolerance and monitor fleet size.

Option A: In-place upgrade

Upgrade all SJMs to the new runtime images. All monitors immediately run on the new runtimes.

Best for: Small monitor fleets, non-critical environments, or when you can tolerate some monitor failures during the transition.

Steps:

  1. Update the DESIRED_RUNTIMES configuration on each SJM to use the new image tags.
  2. Restart or redeploy the SJM.
  3. Monitor results.

Risk: Some monitors may fail until their scripts are updated to work with Node.js 22 and Chrome 147 or higher.

Option B: Parallel private location (recommended)

Create a second private location, deploy SJMs with the new runtime images there, and run monitors on both locations simultaneously for A/B comparison.

Best for: Production environments, large monitor fleets, or when you need zero disruption to existing monitoring.

Steps:

  1. Create a new private location in New Relic. Give it a descriptive name.

  2. Deploy one or more SJMs pointed at the new private location with the new runtime images.

  3. Set up a muting rule to suppress alert noise from the new location during testing:

    Go to Alerts > Muting rules and create a rule with the condition tags.privateLocation EQUALS <your-new-location-name>.

  4. Add the new private location to your monitors. Each monitor can be assigned to multiple private locations. Jobs for each location run independently — a failure on the new location does not affect results from the old location.

  5. Compare results between the two locations. Use this NRQL query:

    SELECT count(*), percentage(count(*), WHERE result = 'SUCCESS') AS 'Success %',
    average(executionDuration) AS 'Avg Exec Duration'
    FROM SyntheticCheck
    SINCE 1 day ago
    FACET locationLabel, monitorName
  6. Fix any failing monitors on the new location by manually editing scripts.

  7. Once all monitors pass on the new location, remove the old location from your monitors and decommission the old SJM infrastructure.

Trade-off: Double infrastructure cost during the transition period. You need separate hosts or cluster resources for the second set of SJMs.

This approach gives you a complete picture of how all your monitors execute on the new runtimes, including differences in results and execution duration—both of which affect job manager load and resource planning.

To test only for script failures without the full infrastructure comparison, set up a second private location with a small test SJM and run a subset of monitors. This shows how existing monitors behave on the new runtimes, but not how the runtimes fit your existing infrastructure capacity.

Deploy SJM with new runtime images

Update your existing SJM deployment to use the new runtime image tags. The SJM itself (newrelic/synthetics-job-manager:latest) does not change — only the runtime images it pulls.

Tip

For detailed installation and configuration instructions, see Install the synthetics job manager and Job manager configuration.

Docker

Update the DESIRED_RUNTIMES environment variable to reference the new image tags:

bash
$ docker run \
    --name sjm \
    --restart unless-stopped \
    -e PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY \
    -e "DESIRED_RUNTIMES=[newrelic/synthetics-ping-runtime:latest,newrelic/synthetics-node-api-runtime:RC_IMAGE_TAG,newrelic/synthetics-node-browser-runtime:RC_IMAGE_TAG]" \
    -v /var/run/docker.sock:/var/run/docker.sock:rw \
    newrelic/synthetics-job-manager:latest

Replace YOUR_PRIVATE_LOCATION_KEY with your private location key, and RC_IMAGE_TAG with the image tag from Docker Hub, like rc1.17.

If you have an existing SJM container, stop and remove it first, then start the new one:

bash
$ docker stop YOUR_CONTAINER_NAME
$ docker rm YOUR_CONTAINER_NAME

Podman

Ensure you have completed all Podman dependencies including the Podman API service on port 8000. Then update the DESIRED_RUNTIMES:

bash
$ podman pod create --network slirp4netns --name sjm-pod \
    --add-host=podman.service:YOUR_HOST_IP
$ podman run -d \
    --name sjm \
    --pod sjm-pod \
    --restart unless-stopped \
    -e PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY \
    -e "DESIRED_RUNTIMES=[newrelic/synthetics-ping-runtime:latest,newrelic/synthetics-node-api-runtime:RC_IMAGE_TAG,newrelic/synthetics-node-browser-runtime:RC_IMAGE_TAG]" \
    -e CONTAINER_ENGINE=PODMAN \
    -e PODMAN_API_SERVICE_PORT=8000 \
    -e PODMAN_POD_NAME=sjm-pod \
    newrelic/synthetics-job-manager:latest

Tip

Pre-pull the runtime images before starting the SJM to avoid timeout issues during first startup. The browser runtime image is approximately 3 GB:

bash
$ podman pull docker.io/newrelic/synthetics-node-browser-runtime:RC_IMAGE_TAG
$ podman pull docker.io/newrelic/synthetics-node-api-runtime:RC_IMAGE_TAG
$ podman pull docker.io/newrelic/synthetics-ping-runtime:latest

Replace YOUR_PRIVATE_LOCATION_KEY with your private location key, and RC_IMAGE_TAG with the image tag from Docker Hub, like rc1.17.

Kubernetes

Update the Helm values for the synthetics job manager chart. If you use a values.yaml file, update the runtime image tags:

bash
$ helm repo update
$ helm upgrade sjm newrelic/synthetics-job-manager \
    --namespace YOUR_NAMESPACE \
    --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY \
    --set-json 'synthetics.desiredRuntimes=[{"image":"newrelic/synthetics-ping-runtime","tag":"latest"},{"image":"newrelic/synthetics-node-api-runtime","tag":"RC_IMAGE_TAG"},{"image":"newrelic/synthetics-node-browser-runtime","tag":"RC_IMAGE_TAG"}]'

For a new installation:

bash
$ helm install sjm newrelic/synthetics-job-manager \
    --namespace synthetics --create-namespace \
    --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY \
    --set-json 'synthetics.desiredRuntimes=[{"image":"newrelic/synthetics-ping-runtime","tag":"latest"},{"image":"newrelic/synthetics-node-api-runtime","tag":"RC_IMAGE_TAG"},{"image":"newrelic/synthetics-node-browser-runtime","tag":"RC_IMAGE_TAG"}]'

Replace YOUR_PRIVATE_LOCATION_KEY with your private location key, and RC_IMAGE_TAG with the image tag from Docker Hub, like rc1.17.

NRQL queries for monitoring the transition

If you've set up a second private location with the same monitors—where checks run as real jobs, not pre-migration validations—use these queries to track your migration progress:

Failure rate by monitor on the new runtime:

SELECT percentage(count(*), WHERE result = 'SUCCESS') AS 'Success %',
count(*) AS 'Total Checks'
FROM SyntheticCheck
WHERE locationLabel = 'YOUR_NEW_LOCATION'
SINCE 1 day ago
FACET monitorName

Execution duration comparison between old and new locations:

SELECT average(executionDuration) AS 'Avg Execution Duration (ms)',
average(duration) AS 'Avg Duration (ms)',
average(executionDuration - duration) AS 'Avg Overhead (ms)'
FROM SyntheticCheck
SINCE 1 day ago
FACET locationLabel, monitorName

Identify monitors with increased execution times:

SELECT monitorName, average(executionDuration) AS 'Avg ExecDuration'
FROM SyntheticCheck
SINCE 1 day ago
FACET monitorName, locationLabel
ORDER BY average(executionDuration) DESC

Fix failing monitors

Troubleshooting

Common issues

Issue | Possible cause | Solution
Error: tab crashed | Chrome 147 memory limit exceeded | Increase HEAVY_WORKER_MEMORY or reduce HEAVYWEIGHT_WORKERS
30+ seconds added to execution time | Keep-alive connections preventing process exit | Fixed in rc1.11; check for custom agents in scripts
Podman SJM fails to create bridge network | Rootless Podman permissions | Follow the Podman dependencies setup; ensure cgroup delegation and Podman API service
Podman SJM exits during image pull | Large images timing out on first pull | Pre-pull runtime images with podman pull before starting the SJM
Monitor passes but misses script steps | Silent failures in multi-step scripts | Use the parallel location strategy to compare results between old and new runtimes

Useful NRQL queries

Check for monitors with increased failure rates:

SELECT percentage(count(*), WHERE result = 'FAILED') AS 'Failure %'
FROM SyntheticCheck
SINCE 1 day ago
FACET monitorName

Compare execution duration before and after migration:

SELECT average(executionDuration) AS 'Avg ExecDuration',
max(executionDuration) AS 'Max ExecDuration',
average(executionDuration - duration) AS 'Avg Overhead'
FROM SyntheticCheck
SINCE 1 day ago
FACET monitorName
ORDER BY average(executionDuration) DESC

Find monitors with Chrome tab crashes:

SELECT count(*)
FROM SyntheticCheck
WHERE error LIKE '%tab crashed%'
SINCE 1 day ago
FACET monitorName

Resource recommendations

Based on testing with rc1.15 runtimes:

Component | Recommended minimum | Default
SJM container memory | 3.256 GiB | 3.256 GiB
Browser runtime memory (HEAVY_WORKER_MEMORY) | 4 GiB | 3.256 GiB
Browser runtime shared memory | 2.256 GiB | 2.256 GiB
Browser runtime CPU shares (HEAVY_WORKER_CPUS) | 2 | 1
Ping runtime memory | 1 GiB | 1 GiB

HEAVY_WORKER_CPUS sets Docker CPU shares (a relative weight), not a hard CPU core limit. Increasing it only makes a difference when multiple containers are competing for CPU simultaneously.
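As a sketch, the resource settings above are passed to the SJM as environment variables, following the same docker run shape used earlier in this guide. The variable names come from the table above; the placeholder values are illustrative only, so check the Job manager configuration page for the exact units and formats each variable expects:

bash
$ docker run \
    --name sjm \
    --restart unless-stopped \
    -e PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY \
    -e HEAVY_WORKER_MEMORY=MEMORY_VALUE \
    -e HEAVY_WORKER_CPUS=CPU_SHARES_VALUE \
    -e HEAVYWEIGHT_WORKERS=WORKER_COUNT \
    -v /var/run/docker.sock:/var/run/docker.sock:rw \
    newrelic/synthetics-job-manager:latest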

Timeline

Date | Event
April 2026 | New runtime images (rc1.15) available on Docker Hub
April 2026 | Security bulletin NR26-04 published
~July 2026 | End of support for Node.js 16 / Chrome 134 runtimes
~July 2026 | Automatic migration of remaining monitors

Warning

Monitors that are automatically migrated may pass validation but fail silently on some script steps. Test your monitors proactively using the parallel private location strategy to ensure a smooth transition.
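Ahead of the cutoff, it can help to see which runtime versions are still serving checks from a private location. A query along these lines (using the same nr.runtimeVersion attribute queried earlier in this guide) shows how many monitors are reporting under each runtime version; any versions matching the old column of the "What's changing" table still need attention:

SELECT uniqueCount(monitorName) AS 'Monitors'
FROM SyntheticCheck
WHERE locationLabel = 'YOUR_PRIVATE_LOCATION'
SINCE 1 week ago
FACET nr.runtimeVersion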
