
Incident Reports

This is a public record of incidents for 2i2c-managed cloud services[1].

All Reports

| Date | Report | Duration |
| --- | --- | --- |
| 2025-10-17 | Enforce-nfs-quota pod restarts on victor | 19m 23s |
| 2025-10-16 | OOM kill of enforce-xfs-quota on neurohackademy | 1h 14m |
| 2025-10-16 | Invalid configuration data breaks enforce-nfs-quota container | 24m 23s |
| 2025-10-15 | Starting Server of users with 2i2c admin emails from admin panel fails on earthscope | 17m 6s |
| 2025-10-08 | Oceanhackweek hub URL returned 503 response | Unknown |
| 2025-10-03 | UToronto: Users who have never logged in before can't start servers | 23m 24s |
| 2025-08-29 | UCMerced: Too Many Users Starting up at the same time | 1h 30m |
| 2025-08-26 | LEAP out of GPU quota | 2h 13m |
| 2025-08-20 | JupyterHub at temple.2i2c.cloud unreachable | 13m 46s |
| 2025-08-12 | Incident report August 12 2025 - LEAP hub outage | Unknown |
| 2025-07-21 | Incident report July 21 2025 - Openscapes hub pods dying post-stress-testing | Unknown |
| 2025-07-11 | Incident report July 11 2025 - CloudBank health check fail plus GroupExporter pod restarts | Unknown |
| 2025-05-12 | Incident report May 12 2025 - Earthscope resource provisioning issue | Unknown |
| 2023-02-01 | 2023-02-01 Heavy use of dask-gateway induced critical pod evictions | Unknown |
| 2023-01-23 | 2023-01-23 Upgrading ingress-nginx caused brief outage in carbonplan | Unknown |
| 2022-11-14 | LIS hub cannot scale | 6h 30m |
| 2022-10-31 | UToronto Hub Login failing with 500 errors | 2h 33m |
| 2022-09-06 | UToronto Hub is throwing 500 errors when users try to login | 5h 23m |
| 2022-08-30 | UToronto JupyterHub not accessible | Unknown |
| 2020-10-28 | 2020-08-28 - Memory overload on WER cluster | Unknown |

Footnotes
  1. Reports are automatically generated from PagerDuty postmortems and published via GitHub Actions.
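As a rough illustration of what such a pipeline might look like, the sketch below pulls resolved incidents from the PagerDuty REST API (`GET /incidents`) and renders them as a Markdown table in the same Date / Report / Duration layout used above. This is not 2i2c's actual tooling: the `PAGERDUTY_TOKEN` environment variable, the one-year window, and the "Unknown" duration placeholder are assumptions made for the example, and the step that extracts postmortem text is omitted.

```python
"""Sketch: render a Markdown incident table from the PagerDuty REST API.

Assumptions (not 2i2c's real pipeline): a PAGERDUTY_TOKEN env var holds a
REST API key, and only resolved incidents from the last year are listed.
"""
import os
from datetime import datetime, timedelta, timezone

import requests

API_URL = "https://api.pagerduty.com/incidents"


def fetch_resolved_incidents(token: str) -> list[dict]:
    """Return resolved incidents created in the last 365 days."""
    since = datetime.now(timezone.utc) - timedelta(days=365)
    resp = requests.get(
        API_URL,
        headers={
            "Authorization": f"Token token={token}",
            "Accept": "application/vnd.pagerduty+json;version=2",
        },
        params={
            "since": since.isoformat(),
            "statuses[]": "resolved",
            "limit": 100,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["incidents"]


def to_markdown_table(incidents: list[dict]) -> str:
    """Format incidents as a Date | Report | Duration table.

    Durations are left as "Unknown" because incident duration is not part of
    the basic incidents payload in this simplified sketch.
    """
    lines = ["| Date | Report | Duration |", "| --- | --- | --- |"]
    for inc in incidents:
        date = inc["created_at"][:10]  # ISO timestamp -> YYYY-MM-DD
        lines.append(f"| {date} | {inc['title']} | Unknown |")
    return "\n".join(lines)


if __name__ == "__main__":
    token = os.environ["PAGERDUTY_TOKEN"]
    print(to_markdown_table(fetch_resolved_incidents(token)))
```

A script like this could run on a schedule in CI and commit the generated table into the documentation source, which is the general shape of publishing via GitHub Actions described in the footnote.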