Problem
You sent metric data points to the Metric API, and are not seeing what you expect when querying the data. Use the following checklist to determine the root cause:
- Make sure you are querying the data correctly.
- Check the HTTP status codes returned by the API. Issues like authorization failures can be diagnosed from the HTTP status codes (see the example request after this list).
- If you are sending data from a Prometheus server via New Relic's remote_write endpoint, check your Prometheus server logs for errors or non-2xx HTTP responses from the New Relic endpoint.
- Query your account for NrIntegrationError events. New Relic's ingestion endpoints are asynchronous: the endpoint verifies the payload after it returns the HTTP response. If any issues occur while verifying your payload, an NrIntegrationError event is created in your account. New Relic also uses NrIntegrationError events to notify customers when various rate limits have been reached.
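To illustrate the status-code check in the list above, here's a minimal sketch that sends a single test data point to the Metric API with curl and prints the full HTTP response. The API key, metric name, and timestamp handling are placeholders to adapt to your own setup; EU-hosted accounts use the metric-api.eu.newrelic.com host instead.

```bash
# Send one test data point and show the HTTP status, headers, and body.
# YOUR_LICENSE_KEY_HERE and test.metric are placeholders.
curl -i -X POST "https://metric-api.newrelic.com/metric/v1" \
  -H "Content-Type: application/json" \
  -H "Api-Key: YOUR_LICENSE_KEY_HERE" \
  -d '[{"metrics":[{"name":"test.metric","type":"gauge","value":1,"timestamp":'"$(date +%s)"'}]}]'
# An HTTP 202 means the payload was accepted for asynchronous verification;
# the response body contains a requestId, which is the same value that
# NrIntegrationError events reference (see below).
```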
Solution
View error details
For an introduction to using the NrIntegrationError event, see NrIntegrationError.
Here's an example NRQL query for examining issues with Metric API ingest:
SELECT count(*) FROM NrIntegrationError WHERE newRelicFeature = 'Metrics' FACET category, message LIMIT 100 SINCE 24 hours ago
The category indicates the type of error, and the message provides more detailed information about it. If the category is rateLimit, you should also examine the rateLimitType field for more information on the type of rate limiting.
| Category | rateLimitType | Description and solution |
|---|---|---|
| | (not set) | There is an issue with the JSON payload, such as JSON syntax errors, or attribute names or values that are too long. Check the message for more detail on the specific problem. |
| rateLimit | | You are sending too many data points per minute. If you get this error, you can either send data less frequently, or request changes to your metric rate limits by contacting your New Relic account representative or visiting our Support portal. |
| rateLimit | | You have an attribute with a high number of unique values. |
| rateLimit | | You have Prometheus servers reporting too many unique time series via New Relic's remote_write endpoint. Reduce the number of unique time series reported by modifying your Prometheus server configuration to reduce the number of targets being scraped, or by using relabel rules in the remote_write section of your server configuration to drop time series or highly unique labels. |
| rateLimit | | Too many requests per minute are being sent. To resolve this, put more data points in each request and send requests less frequently. |
| rateLimit | | You have exceeded your daily error group limit. Incoming error groups will be dropped for the remainder of the day and will continue as normal after UTC midnight. To resolve this, reduce the number of unique error messages collected by New Relic. |
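For example, once the first query shows rateLimit errors, a follow-up query along these lines (using the same fields described above) breaks them down by limit type:
SELECT count(*) FROM NrIntegrationError WHERE newRelicFeature = 'Metrics' AND category = 'rateLimit' FACET rateLimitType, message LIMIT 100 SINCE 24 hours ago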
Match errors to ingested payloads
When an NrIntegrationError event is created as a result of a syntax issue with the HTTP request payload, the event contains the attributes apiKeyPrefix and requestId.
- The apiKeyPrefix matches the first 6 characters of the API key used to send the data.
- The requestId matches the requestId sent in the HTTP response.
To view these fields, run this NRQL query:
SELECT message, apiKeyPrefix, requestId FROM NrIntegrationError LIMIT 100
To verify a specific requestId, run this NRQL query:
SELECT * FROM NrIntegrationError WHERE requestId = 'REQUEST_ID'
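You can filter on apiKeyPrefix in the same way; the value below is only an illustrative 6-character prefix, not a real key:
SELECT * FROM NrIntegrationError WHERE apiKeyPrefix = 'ABC123'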
Programmatically retrieve NrIntegrationError events
To programmatically retrieve these errors:
1. Ensure you have an Insights query API key (go to insights.newrelic.com > Manage data > API keys).
2. Create an HTTP request as shown below:
Tip
If your organization hosts data in the EU data center, ensure you're using the EU region endpoints.
```bash
curl -H "Accept: application/json" -H "X-Query-Key: YOUR_API_KEY_HERE" "https://insights-api.newrelic.com/v1/accounts/YOUR_ACCOUNT_HERE/query?nrql=SELECT%20*%20FROM%20NrIntegrationError%20where%20newRelicFeature='Metrics'"
```
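If you're consuming the result in a script, one option (a sketch, assuming jq is installed) is to pretty-print the JSON response and then narrow the filter to the fields you need:

```bash
# Pretty-print the query result; replace '.' with a more specific jq filter as needed.
curl -s -H "Accept: application/json" -H "X-Query-Key: YOUR_API_KEY_HERE" \
  "https://insights-api.newrelic.com/v1/accounts/YOUR_ACCOUNT_HERE/query?nrql=SELECT%20*%20FROM%20NrIntegrationError%20where%20newRelicFeature='Metrics'" \
  | jq '.'
```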