As of 04:21 UTC, the queues have been drained, so there should be no further delays or impact to customers. At this point, we consider this issue resolved. If you experience any further issues, please reach out to support: https://www.elastic.co/support/welcome.
Posted Jul 02, 2020 - 04:23 UTC
We have mitigated the root cause and are currently ingesting logging data. Due to an issue in our networking layer, a failed master node could not be replaced within the logging cluster. As part of this incident, approximately 2 hours of logs were lost, between 00:00 UTC and 02:00 UTC on July 2nd. We will continue to monitor the backlog queue. Further updates within the next 2 hours.
Posted Jul 02, 2020 - 04:12 UTC
We have identified the root cause of the delay within our logging infrastructure and are preparing a mitigation that will allow us to bring the logs back online. Further updates within an hour.
Posted Jul 02, 2020 - 03:42 UTC
We have identified an issue with the logging cluster in AWS us-east-1 that is causing delays in the log ingestion pipelines. Customers may have delayed visibility into the logs displayed in the UI for their cluster. We are currently working to fix the issue. An update will be provided within one hour.
Posted Jul 02, 2020 - 02:49 UTC
This incident affected: AWS N. Virginia (us-east-1) (Deployment logging: AWS us-east-1, Deployment metrics: AWS us-east-1).