GitHub

Operational

Last checked: Mar 08, 2026 07:27

Incident History (Last 30 Days)

Resolved Major

Incident with Webhooks

Mar 6, 23:28 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Mar 6, 23:28 UTC
Update - Webhooks is operating normally.

Mar 6, 23:26 UTC
Update - We have deployed a fix and are observing a full recovery. The affected endpoint was the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. We will continue monitoring to confirm stability.

Mar 6, 22:35 UTC
Update - We are preparing a new mitigation for the issue affecting the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) …
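For reference, the endpoint named in the updates above is the REST API for listing webhook deliveries. A minimal sketch of querying it for a repository webhook (the owner, repository, hook ID, and token below are placeholders, and error handling is omitted):

```python
# Sketch: list recent deliveries for a repository webhook via the REST API.
# OWNER, REPO, HOOK_ID, and GITHUB_TOKEN are placeholders for illustration.
import os
import requests

OWNER, REPO, HOOK_ID = "octocat", "hello-world", 12345678

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/hooks/{HOOK_ID}/deliveries",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "X-GitHub-Api-Version": "2022-11-28",
    },
    params={"per_page": 30},
    timeout=10,
)
resp.raise_for_status()
for delivery in resp.json():
    print(delivery["id"], delivery["status"], delivery["delivered_at"])
```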

Resolved Major

Actions is experiencing degraded availability

Mar 5, 23:55 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Mar 5, 23:40 UTC
Update - We are close to full recovery. Actions and dependent services should be functioning normally now.

Mar 5, 23:37 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.

Mar 5, 23:15 UTC
Update - Actions and dependent services, including Pages, are recovering.

Mar 5, 23:00 UTC
Update - We applied a mitigation and we should see a recovery soon.

Mar 5, 22:54 UTC
Update - …

Resolved Major

Multiple services are affected, service degradation

Mar 5, 19:30 UTC
Resolved - On Mar 5, 2026, between 16:24 UTC and 19:30 UTC, Actions was degraded. During this time, 95% of workflow runs failed to start within 5 minutes, with an average delay of 30 minutes, and 10% of workflow runs failed with an infrastructure error. This was due to Redis infrastructure updates that were being rolled out to production to improve our resiliency. These changes introduced incorrect configuration changes into our Redis load balancer, causing internal traffic to be routed to an incorrect host and leading to two incidents.

We mitigated this incident by correcting the …

Resolved Major

Disruption with some GitHub services

Mar 5, 01:30 UTC
Resolved - On March 5, 2026, between 12:53 UTC and 13:35 UTC, the Copilot mission control service was degraded. This resulted in empty responses being returned for users' agent session lists across GitHub web surfaces, so impacted users were unable to see their lists of current and previous agent sessions. This was caused by an incorrect database query that falsely excluded records with an absent field.

We mitigated the incident by rolling back the database query change. There were no data alterations or deletions during the incident.

To prevent similar issues in the future, we're improving …
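The query bug described above, records with an absent field being falsely excluded, is a classic SQL three-valued-logic pitfall: a comparison against NULL is neither true nor false, so rows missing the field silently drop out of the result. A minimal illustration using a hypothetical table (not GitHub's actual schema or query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, archived INTEGER)")  # archived may be NULL
conn.executemany("INSERT INTO sessions VALUES (?, ?)", [(1, 0), (2, 1), (3, None)])

# Intent: "all sessions that are not archived". Row 3 has no value for archived,
# so `archived != 1` evaluates to NULL for it and the row is silently excluded.
buggy = conn.execute("SELECT id FROM sessions WHERE archived != 1").fetchall()
print(buggy)   # [(1,)]  -- session 3 missing from the result

# Handling the absent field explicitly returns the expected rows.
fixed = conn.execute(
    "SELECT id FROM sessions WHERE archived != 1 OR archived IS NULL"
).fetchall()
print(fixed)   # [(1,), (3,)]
```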

Resolved Major

Some OpenAI models degraded in Copilot

Mar 5, 01:13 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Mar 5, 01:13 UTC
Update - The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.

Mar 5, 00:53 UTC
Update - We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to …

Resolved Major

Claude Opus 4.6 Fast not appearing for some Copilot users

Mar 3, 21:11 UTC
Resolved - On March 3, 2026, between 19:44 UTC and 21:05 UTC, some GitHub Copilot users reported that the Claude Opus 4.6 Fast model was no longer available in their IDE model selection. After investigation, we confirmed that this was caused by enterprise administrators adjusting their organization's model policies, which correctly removed the model for users in those organizations. No users outside the affected organizations lost access.

We confirmed that the Copilot settings were functioning as designed, and all expected users retained access to the model. The incident was resolved once we verified that the change was intentional …

Resolved Critical

Incident with all GitHub services

Mar 3, 20:09 UTC
Resolved - On March 3, 2026, between 18:46 UTC and 20:09 UTC, GitHub experienced a period of degraded availability impacting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other dependent services. At the peak of the incident, GitHub.com request failures reached approximately 40%. During the same period, approximately 43% of GitHub API requests failed. Git operations over HTTP had an error rate of approximately 6%, while SSH was not impacted. GitHub Copilot requests had an error rate of approximately 21%. GitHub Actions experienced less than 1% impact.

This incident shared the same underlying cause …

Resolved Major

Delayed visibility of newly added issues on project boards

Mar 3, 05:54 UTC
Resolved - Between March 2, 21:42 UTC and March 3, 05:54 UTC, project board updates, including adding new issues, PRs, and draft items to boards, were delayed by 30 minutes to over 2 hours as a large backlog of messages accumulated in the Projects data denormalization pipeline.

The incident was caused by an anomalously large event that required longer processing time than expected. Processing this message exceeded the Kafka consumer heartbeat timeout, triggering repeated consumer group rebalances. As a result, the consumer group was unable to make forward progress, creating head-of-line blocking that delayed processing of subsequent project …
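The consumer stack behind this pipeline is not described in detail here, but the failure mode, one oversized message stalling an entire partition, is a well-known Kafka pattern. A hedged sketch using confluent-kafka settings (the broker, group, topic, and handler are hypothetical) shows the interplay between long processing and the group's liveness timeouts:

```python
# Sketch of the failure mode (not GitHub's actual pipeline): if handling one
# message blocks longer than the consumer's liveness limits allow, the broker
# evicts the consumer, the group rebalances, and the same message is redelivered,
# so everything queued behind it stalls (head-of-line blocking).
from confluent_kafka import Consumer

def handle(msg):
    """Placeholder for the (potentially slow) denormalization work."""

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "projects-denormalization",  # hypothetical group name
    "session.timeout.ms": 45000,             # heartbeat-based liveness window
    "max.poll.interval.ms": 300000,          # max time allowed between poll() calls
    "enable.auto.commit": False,
})
consumer.subscribe(["project-item-events"])  # hypothetical topic

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    # If this call exceeds max.poll.interval.ms, a rebalance is triggered and the
    # message is reprocessed on the next assignment, repeating the stall. Typical
    # mitigations: bound per-message work, or divert oversized events to a slow path.
    handle(msg)
    consumer.commit(message=msg)
```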

Resolved Major

Incident with Pull Requests /pulls

Mar 2, 22:04 UTC
Resolved - On March 2nd, 2026, between 7:10 UTC and 22:04 UTC, the pull requests service was degraded. Users navigating between tabs on the pull requests dashboard were met with 404 errors or blank pages.

This was due to a configuration change deployed on February 27th at 23:03 UTC. We mitigated the incident by reverting the change.

We’re working to improve monitoring for the page to automatically detect and alert us to routing failures.

Mar 2, 22:04 UTC
Update - The issue on https://github.com/pulls is now fully resolved. All tabs are working again.

Mar 2, 21:04 UTC
Update - We're deploying a …

Resolved Major

Incident with Copilot agent sessions

Feb 27, 23:49 UTC
Resolved - On February 27, 2026, between 22:53 UTC and 23:46 UTC, the Copilot coding agent service experienced elevated errors and degraded functionality for agent sessions. Approximately 87% of attempts to start or interact with agent sessions encountered errors during this period.

This was due to an expired authentication credential for an internal service component, which prevented Copilot agent session operations from completing successfully.

We mitigated the incident by rotating the expired credential and deploying the updated configuration to production. Services began recovering within minutes of the fix being deployed.

We are working to improve automated credential rotation coverage across …

Resolved Major

Code view fails to load when content contains some non-ASCII characters

Feb 27, 06:04 UTC
Resolved - From February 26, 2026 at 22:10 UTC through February 27 at 05:50 UTC, the repository browsing UI was degraded and users were unable to load pages for files and directories with non-ASCII characters (including Japanese, Chinese, and other non-Latin scripts). On average, the error rate was 0.014% of requests to the service and peaked at 0.06%. Affected users saw 404 errors when navigating to repository directories and files with non-ASCII names. This was due to a code change that altered how file and directory names were processed, which caused incorrectly formatted data to be stored in …
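The specific code change is not detailed above, but a common way non-ASCII file and directory names end up incorrectly formatted in storage is decoding UTF-8 bytes with the wrong codec. The sketch below is purely illustrative of that failure class, not a description of the actual change:

```python
# Illustrative only: decoding UTF-8 bytes with the wrong codec before persisting
# them produces a stored name that no longer matches lookups by the real name.
name = "ドキュメント"             # a Japanese directory name
raw = name.encode("utf-8")       # bytes as received

stored = raw.decode("latin-1")   # wrong codec: mojibake gets persisted
print(stored == name)            # False -> later lookups by the real name 404

roundtrip = raw.decode("utf-8")  # decoding with the correct codec is lossless
print(roundtrip == name)         # True
```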

Resolved Major

High latency on webhook API requests

Feb 27, 00:04 UTC
Resolved - Between February 26 and February 27, 2026 (UTC), customers calling the webhook deliveries API may have experienced higher latency or failed requests. During the impact window, 0.82% of requests took longer than 3s and 0.004% resulted in a 500 error response.

Our monitors caught the impact on the individual backing data source, and we were able to attribute the degradation to a noisy neighbor effect due to requests to a specific webhook generating excessive load on the API. The incident was mitigated once traffic from the specific hook decreased.

We have since added a rate limiter …
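The rate limiter mentioned above is not specified further here; as a rough illustration of the shape such a control usually takes, below is a per-webhook token-bucket sketch (the limits and key names are hypothetical, not GitHub's actual values):

```python
# Minimal per-key token-bucket sketch (illustrative only; not GitHub's
# implementation or actual limits).
import time
from collections import defaultdict

RATE = 50        # hypothetical: tokens refilled per second, per webhook
BURST = 100      # hypothetical: maximum burst size per webhook

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(hook_id: str) -> bool:
    """Return True if a request for this webhook may proceed."""
    b = _buckets[hook_id]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1:
        b["tokens"] -= 1
        return True
    return False   # caller would respond 429 and ask the client to back off
```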

Resolved Major

Incident with Copilot

Feb 26, 11:06 UTC
Resolved - On February 26, 2026, between 09:27 UTC and 10:36 UTC, the GitHub Copilot service was degraded and users experienced errors when using Copilot features including Copilot Chat, Copilot Coding Agent and Copilot Code Review. During this time, 5-15% of affected requests to the service returned errors.

The incident was resolved by infrastructure rebalancing.

We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.

Feb 26, 11:06 UTC
Update - Copilot is operating normally.

Feb 26, 10:22 UTC
Investigating - We are investigating reports of degraded performance for Copilot

Resolved Major

Incident with Copilot Agent Sessions impacting CCA/CCR

Feb 25, 16:44 UTC
Resolved - On February 25, 2026, between 15:05 UTC and 16:34 UTC, the Copilot coding agent service was degraded, resulting in errors for 5% of all requests and impacting users starting or interacting with agent sessions.

This was due to an internal service dependency running out of allocated resources (memory and CPU). We mitigated the incident by adjusting the resource allocation for the affected service, which restored normal operations for the coding agent service.

We are working to implement proactive monitoring for resource exhaustion across our services, review and update resource allocations, and improve our alerting capabilities to …

Resolved Critical

Code search experiencing degraded performance

Feb 24, 00:46 UTC
Resolved - Between 2026-02-23 19:10 and 2026-02-24 00:46 UTC, all lexical code search queries on GitHub.com and the code search API were significantly slowed, and during this incident, between 5% and 10% of search queries timed out. This was caused by a single customer who had created a network of hundreds of orchestrated accounts that searched with a uniquely expensive search query. This search query concentrated load on a single hot shard within the search index, slowing down all queries. After we identified the source of the load and stopped the traffic, latency returned to normal.

To avoid …

Resolved Major

Incident with Issues and Pull Requests Search

Feb 23, 21:30 UTC
Resolved - On February 23, 2026, between 21:01 UTC and 21:30 UTC, the Search service experienced degraded performance, resulting in an average of 3.5% of search requests for Issues and Pull Requests being rejected. During this period, updates to Issues and Pull Requests may not have been immediately reflected in search results.

During a routine migration, we observed a spike in internal traffic due to a configuration change in our search index. We were alerted to the increase in traffic as well as the increase in error rates and rolled back to the previous stable index.

We …

Resolved Critical

Incident with Actions

Feb 23, 17:03 UTC
Resolved - On February 23, 2026, between 15:00 UTC and 17:00 UTC, GitHub Actions experienced degraded performance. During this time, 1.8% of Actions workflow runs experienced delayed starts with an average delay of 15 minutes. The issue was caused by a connection rebalancing event in our internal load balancing layer, which temporarily created uneven traffic distribution across sites and led to request throttling.

To prevent recurrence, we are tuning connection rebalancing behavior to spread client reconnections more gradually during load balancer reloads. We are also evaluating improvements to site-level traffic affinity to eliminate the uneven distribution at …
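One common way to spread client reconnections more gradually after a load balancer reload is full-jitter backoff on the client side; the sketch below illustrates that general technique (it is not GitHub's internal load-balancing configuration):

```python
# Illustrative full-jitter backoff: each client waits a random delay before
# reconnecting, so a load balancer reload does not produce a synchronized
# reconnect spike that skews traffic toward one site.
import random
import time

def reconnect_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Pick a random delay in [0, min(cap, base * 2**attempt)] seconds."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

def connect_with_backoff(connect, max_attempts: int = 8):
    """`connect` is any callable that raises ConnectionError on failure."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(reconnect_delay(attempt))
    raise ConnectionError("gave up after %d attempts" % max_attempts)
```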

Resolved Major

Incident with Copilot

Feb 23, 16:19 UTC
Resolved - On February 23, 2026, between 14:45 UTC and 16:19 UTC, the Copilot service was degraded for the Claude Haiku 4.5 model. On average, 6% of the requests to this model failed due to an issue with an upstream provider. During this period, automated model degradation notifications directed affected users to alternative models. No other models were impacted. The upstream provider identified and resolved the issue on their end.
We are working to improve automatic model failover mechanisms to reduce our time to mitigation of issues like this one in the future.

Feb 23, 15:59 UTC
Update - Copilot …

Resolved Major

Extended job start delays for larger hosted runners

Feb 20, 20:41 UTC
Resolved - On February 20, 2026, between 17:45 UTC and 20:41 UTC, 4.2% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 18 minutes. Standard, Mac, and Self-Hosted Runners were not impacted.

The delays were caused by communication failures between backend services for one deployment of larger runners. Those failures prevented expected automated scaling and provisioning of larger hosted runner capacity within that deployment. This was mitigated when the affected infrastructure was recycled, larger runner pools in the affected deployment successfully scaled up, and queued jobs were processed.

We are working to improve …

Resolved Major

Incident with Copilot GPT-5.1-Codex

Feb 20, 11:41 UTC
Resolved - On February 20, 2026, between 07:30 UTC and 11:21 UTC, the Copilot service experienced a degradation of the GPT 5.1 Codex model. During this time period, users encountered a 4.5% error rate when using this model. No other models were impacted.
The issue was resolved by a mitigation put in place by the external model provider. GitHub is working with the external model provider to further improve the resiliency of the service to prevent similar incidents in the future.

Feb 20, 11:19 UTC
Update - The issues with our upstream model provider have been resolved, and GPT 5.1 …

Resolved Major

Degraded performance in merge queue

Feb 18, 19:20 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Feb 18, 19:18 UTC
Update - We have seen significant recovery in merge queue and are continuing to monitor for any other degraded services.

Feb 18, 18:27 UTC
Update - We are investigating reports of issues with merge queue. We will continue to keep users updated on progress towards mitigation.

Feb 18, 18:26 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.

Feb 18, 18:25 UTC
Investigating

Resolved Major

Intermittent authentication failures on GitHub

Feb 17, 19:06 UTC
Resolved - On February 17, 2026, between 17:07 UTC and 19:06 UTC, some customers experienced intermittent authentication failures affecting GitHub Actions, parts of Git operations, and other authentication-dependent requests. On average, the Actions error rate was approximately 0.6% of affected API requests. The Git SSH read error rate was approximately 0.29%, while SSH write and HTTP operations were not impacted. During the incident, a subset of requests failed due to token verification lookups intermittently failing, leading to 401 errors and degraded reliability for impacted workflows.

The issue was caused by elevated replication lag in the token verification database …

Resolved Major

Disruption with some GitHub services regarding file upload

Feb 13, 22:58 UTC
Resolved - On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) policy requirements, causing upload requests to be blocked before reaching the upload service.

We mitigated the incident by reverting the code change that …
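For context on the failure mode: before a browser sends a cross-origin upload that is not a "simple" request (for example, one carrying a custom header), it first issues an OPTIONS preflight, and if the response does not allow the origin, method, and headers, the upload is blocked before it ever reaches the service. A rough simulation of that preflight check from Python (the upload URL and custom header below are placeholders):

```python
# Simulate the CORS preflight a browser performs before a cross-origin upload.
# UPLOAD_URL and the x-upload-token header are hypothetical, for illustration.
import requests

UPLOAD_URL = "https://uploads.example.com/assets"      # placeholder endpoint
ORIGIN = "https://github.com"

preflight = requests.options(
    UPLOAD_URL,
    headers={
        "Origin": ORIGIN,
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "content-type,x-upload-token",
    },
    timeout=10,
)

allowed_origin = preflight.headers.get("Access-Control-Allow-Origin", "")
allowed_headers = preflight.headers.get("Access-Control-Allow-Headers", "").lower()

# If either check fails, a browser refuses to send the actual upload request,
# which is the failure mode described in this incident.
ok = allowed_origin in (ORIGIN, "*") and "x-upload-token" in allowed_headers
print("preflight allows upload:", ok)
```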

Resolved Major

Disruption with some GitHub services

Feb 12, 20:34 UTC
Resolved - Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.

The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, …

Resolved Major

Intermittent disruption with Copilot completions and inline suggestions

Feb 12, 16:50 UTC
Resolved - Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.

The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we …

Resolved Critical

Disruption with some GitHub services

Feb 12, 11:12 UTC
Resolved - From Feb 12, 2026 09:16 UTC to Feb 12, 2026 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% of requests to the service and peaked at 0.0339%. This was caused by deploying a corrupt configuration bundle, which resulted in missing data used by the service for network interface connections.

We mitigated the incident by applying the correct configuration to each site. We have added checks for corruption in this deployment, and …

Resolved Critical

Incident with Codespaces

Feb 12, 09:56 UTC
Resolved - On February 12, 2026, between 00:51 UTC and 09:35 UTC, users attempting to create or resume Codespaces experienced elevated failure rates across Europe, Asia and Australia, peaking at a 90% failure rate.

The failures were triggered by a bad configuration rollout in a core networking dependency, which led to internal resource provisioning failures. We are working to improve our alerting thresholds to catch issues before they impact customers and to strengthen rollout safeguards to prevent similar incidents.

Feb 12, 09:56 UTC
Update - Recovery looks consistent with Codespaces creating and resuming successfully across all regions.

Thank you for your …

Resolved Minor

Disruption with some GitHub services

Feb 12, 00:59 UTC
Resolved - On February 11 between 16:37 UTC and 00:59 UTC the following day, 4.7% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 37 minutes. Standard Hosted and self-hosted runners were not impacted.

This incident was caused by capacity degradation in Central US for Larger Hosted Runners. Workloads not pinned to that region were picked up by other regions, but were delayed as those regions became saturated. Workloads configured with private networking in that region were delayed until compute capacity in that region recovered. The issue was mitigated by rebalancing capacity …

Resolved Major

Incident with API Requests

Feb 11, 17:15 UTC
Resolved - On February 11, 2026, between 13:51 UTC and 17:03 UTC, the GraphQL API experienced degraded performance due to elevated resource utilization. This resulted in incoming client requests waiting longer than normal, timing out in certain cases. During the impact window, approximately 0.65% of GraphQL requests experienced these issues, peaking at 1.06%.

The increased load was due to an increase in query patterns that drove higher than expected resource utilization of the GraphQL API. We mitigated the incident by scaling out resource capacity and limiting the capacity available to these query patterns.

We're improving our telemetry …

Resolved Major

Incident with Copilot

Feb 11, 15:46 UTC
Resolved - On February 11, 2026, between 14:30 UTC and 15:30 UTC, the Copilot service experienced degraded availability for requests to Claude Haiku 4.5. During this time, on average 10% of requests failed, with 23% of sessions impacted. The issue was caused by an upstream problem from multiple external model providers that affected our ability to serve requests.

The incident was mitigated once one of the providers resolved the issue and we rerouted capacity fully to that provider. We have improved our telemetry to improve incident observability and implemented an automated retry mechanism for requests to this …

Resolved Critical

Disruption with some GitHub services

Feb 10, 15:58 UTC
Resolved - On February 10th, 2026, between 14:35 UTC and 15:58 UTC, web experiences on GitHub.com, including Pull Requests and Authentication, were degraded, resulting in intermittent 5xx errors and timeouts. The error rate on web traffic peaked at approximately 2%. This was due to increased load on a critical database, which caused significant memory pressure, resulting in intermittent errors.

We mitigated the incident by applying a configuration change to the database to increase available memory on the host.

We are working to identify changes in load patterns and are reviewing the configuration of our databases to ensure …

Resolved Critical

Copilot Policy Propagation Delays

Feb 10, 09:57 UTC
Resolved - GitHub experienced degraded Copilot policy propagation from enterprise to organizations from February 3 at 21:00 UTC through February 10 at 16:00 UTC. During this period, policy changes could take up to 24 hours to apply. We mitigated the issue on February 10 at 16:00 UTC after rolling back a regression that caused the delays. The propagation queue fully caught up on the delayed items by February 11 at 10:35 UTC, and policy changes now propagate normally.

During this incident, whenever an enterprise updated a Copilot policy (including model policies), there were significant delays before those policy …

Resolved Critical

Incident with Issues, Actions and Git Operations

Feb 9, 20:09 UTC
Resolved - On February 9, 2026, GitHub experienced two related periods of degraded availability affecting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents.

During both incidents, users encountered errors loading pages on GitHub.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including …

Resolved Minor

Notifications are delayed

Feb 9, 19:29 UTC
Resolved - On February 9th, the notifications service started showing degradation around 13:50 UTC, resulting in increased notification delivery delays. Our team started investigating.

Around 14:30 UTC, the service started to recover as the team continued investigating the incident. Around 15:20 UTC, degradation resurfaced, with increasing delays in notification deliveries and a small error rate (below 1%) on UI and API endpoints related to notifications.

At 16:30 UTC, we mitigated the incident by reducing contention through throttling workloads and performing a database failover. The median delay for notification deliveries was 80 minutes at this point and queues …

Resolved Critical

Incident with Pull Requests

Feb 9, 17:40 UTC
Resolved - On February 9, 2026, GitHub experienced two related periods of degraded availability affecting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents.

During both incidents, users encountered errors loading pages on GitHub.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including …

Resolved Critical

Incident with Actions

Feb 9, 15:46 UTC
Resolved - On February 9th, 2026, between 09:16 UTC and 15:12 UTC, GitHub Actions customers experienced run start delays. Approximately 0.6% of runs across 1.8% of repos were affected, with an average delay of 19 minutes for those delayed runs.

The incident occurred when increased load exposed a bottleneck in our event publishing system, causing one compute node to fall behind on processing Actions Jobs. We mitigated by rebalancing traffic and increasing timeouts for event processing. We have since isolated performance critical events to a new, dedicated publisher to prevent contention between events and added safeguards to better …

Resolved Major

Degraded performance for Copilot Coding Agent

Feb 9, 12:12 UTC
Resolved - On February 9, 2026, between ~06:00 UTC and ~12:12 UTC, Copilot Coding Agent and related Copilot API endpoints experienced degraded availability. The primary impact was to agent-based workflows (requests to /agents/swe/*, including custom agent configuration checks), where 154k users saw failed requests and error responses in their editor/agent experience. Impact was concentrated among users and integrations actively using Copilot Coding Agent with VS Code.

The degradation was caused by an unexpected surge in traffic to the related API endpoints that exceeded an internal secondary rate limit. That resulted in upstream request denials which were surfaced …
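The mitigation details are truncated above; as general context, requests denied by a GitHub secondary rate limit come back as 403 or 429 responses, and clients are expected to back off, honoring a Retry-After header when one is present. A hedged client-side sketch of that pattern (the endpoint and token below are placeholders):

```python
# Client-side sketch: back off when a secondary rate limit is hit, preferring
# the server's Retry-After hint. Endpoint and credential are placeholders.
import time
import requests

def get_with_backoff(url: str, headers: dict, max_retries: int = 5) -> requests.Response:
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code not in (403, 429):
            return resp
        retry_after = resp.headers.get("Retry-After")
        delay = int(retry_after) if retry_after else min(60, 2 ** attempt)
        time.sleep(delay)
    return resp

resp = get_with_backoff(
    "https://api.github.com/user",                 # placeholder request
    headers={"Authorization": "Bearer <token>"},   # placeholder credential
)
```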

Resolved Major

Degraded Performance in Webhooks API and UI, Pull Requests

Feb 9, 11:26 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Feb 9, 11:26 UTC
Update - Actions is operating normally.

Feb 9, 11:26 UTC
Update - Issues is operating normally.

Feb 9, 11:26 UTC
Update - Webhooks is operating normally.

Feb 9, 11:26 UTC
Update - Pull Requests is operating normally.

Feb 9, 11:11 UTC
Update - We have identified a faulty infrastructure component and have failed over to a healthy instance. We are continuing to monitor the system for recovery.

Feb 9, 11:04 UTC
Update

Resolved Major

Incident with Pull Requests

Feb 6, 18:36 UTC
Resolved - On February 6, 2026, between 17:49 UTC and 18:36 UTC, the GitHub Mobile service was degraded, and some users were unable to create pull request review comments on deleted lines (and in some cases, comments on deleted files). This impacted users on the newer comment-positioning flow available in version 1.244.0 of the mobile apps. Telemetry indicated that the failures increased as the Android rollout progressed. This was due to a defect in the new comment-positioning workflow that could result in the server rejecting comment creation for certain deleted-line positions.

We mitigated the incident by halting the …

Resolved Major

Incident with Copilot

Feb 6, 11:58 UTC
Resolved - On February 10, 2026, between 10:28 and 11:54 UTC, Visual Studio Code users experienced a degraded experience on GitHub Copilot when using the Claude Opus 4.6 model. During this time, approximately 50% of users encountered agent turn failures due to the model being unable to serve the volume of incoming requests.

The issue was caused by rate limits that were set too low for actual demand. While the initial deployment showed no concerns, a surge in traffic from Europe on the following day caused VSCode to begin hitting rate limit errors. Additionally, a degradation message intended to notify users of …
