Travis CI
All Systems Operational
API Operational
Web App Operational
Builds Processing Operational
Mac Builds Operational
Linux Builds (container-based) Operational
Linux Builds (Trusty and Precise on GCE) Operational
Background Processing Operational
Log Processing Operational
User Sync Operational
Notifications Operational
Third Party Services Operational
GitHub Operational
DNSimple Name Servers Operational
Heroku Operational
Pusher REST API Operational
Pusher WebSocket client API Operational
Pusher Presence channels Operational
Pusher Webhooks Operational
AWS ec2-us-east-1 Operational
Help Scout Email Processing Operational
Help Scout Web App Operational
PagerDuty Notification Delivery Operational
Status legend: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
System Metrics
API Uptime (api.travis-ci.org)
API Uptime (api.travis-ci.com)
Active Linux Builds for Open Source projects (container-based)
Backlog Linux Builds for Open Source projects (container-based)
Active Linux Builds for Open Source projects
Backlog Linux Builds for Open Source projects
Active OS X Builds for Open Source Projects
Backlog OS X Builds for Open Source Projects
Past Incidents
Dec 4, 2016

No incidents reported today.

Dec 3, 2016
Resolved - The backlog for .org OS X builds has cleared. 👍🎉 Resolving incident.
Dec 3, 05:31 UTC
Update - The backlog on travis-ci.com has cleared; we're still monitoring the backlog on travis-ci.org.
Dec 2, 18:18 UTC
Update - We're continuing to work through the backlog without issues, but due to its size there are still long delays before builds run. We're monitoring the situation continuously in order to maintain the best possible throughput.

We're truly sorry for this lengthy interruption to your builds and we will be publishing a postmortem next week.
Dec 2, 17:11 UTC
Update - We are beginning to process the backlog at full capacity and continue to monitor closely.
Dec 2, 12:03 UTC
Monitoring - We are slowly starting up capacity and resuming builds.
Dec 2, 11:29 UTC
Identified - We identified an issue that is preventing build VMs from booting and are working on a fix.
Dec 2, 11:14 UTC
Investigating - We are experiencing some issues while restoring OS X capacity and are investigating.
Dec 2, 10:48 UTC
Monitoring - We are restarting the workers for our .org open source OS X builds and monitoring to ensure the system is stable.
Dec 2, 09:46 UTC
Update - Our infrastructure provider is placing our hypervisor host under maintenance while they perform additional clean-ups.
Dec 2, 06:46 UTC
Update - Our infrastructure provider is doing a rolling restart of all our physical hardware.
Dec 2, 06:09 UTC
Investigating - Our infrastructure provider is resolving an issue with hypervisor hosts. All OS X builds are stopped while they take down hosts, reload, and migrate.
Dec 2, 05:04 UTC
Monitoring - OS X builds are running at full capacity for .com and .org. Monitoring.
Dec 2, 04:45 UTC
Update - Host hypervisors have been restarted; OS X builds are resuming, first for .com, then .org.
Dec 2, 04:27 UTC
Identified - We've stopped all OS X workers processing jobs for .com and .org while we stabilize our hypervisor hosts.
Dec 2, 03:21 UTC
Update - OS X .org workers have restarted and are resuming jobs at full capacity.
Dec 2, 02:40 UTC
Update - We are restarting the workers for our .org open source OS X builds. Please stand by as jobs will resume shortly after.
Dec 2, 02:06 UTC
Update - OS X .com backlog has cleared. Still processing open source OS X builds.
Nov 30, 23:54 UTC
Update - Resumed OS X builds at full capacity, monitoring performance.
Nov 30, 18:15 UTC
Monitoring - We have partially resumed OS X builds at reduced capacity, and are placing some of our build VM hosts in maintenance mode while we continue to monitor performance issues.
Nov 30, 17:37 UTC
Update - Temporary stoppage of .org and .com OS X builds to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity.
Nov 30, 16:45 UTC
Investigating - We are re-escalating this incident as we are experiencing high load on our hypervisor server, which is affecting all OS X build VMs for .org and .com.
Nov 30, 15:50 UTC
Update - Public repos are back to full capacity. We are working through the accumulated backlog.
Nov 30, 06:12 UTC
Update - The private repo backlog has cleared. We are continuing to run public repos at reduced capacity.
Nov 30, 04:21 UTC
Update - We have returned to full capacity for private repos.
Nov 30, 03:28 UTC
Update - We are still observing VM leakage, albeit very slight. We are going to remain at half capacity and reassess in 1 hour.
Nov 30, 02:44 UTC
Monitoring - We have restarted jobs at half capacity and are monitoring VM lifecycle metrics.
Nov 30, 01:45 UTC
Update - We have stopped all jobs in order to drain the VM pool again, after which we will restart the flow of jobs at reduced capacity while monitoring VM leakage.
Nov 30, 00:54 UTC
Identified - We are re-escalating this incident given ongoing issues with VM lifecycle management.
Nov 30, 00:37 UTC
Monitoring - We have resumed all OS X builds on .org and .com at full capacity, and will start to work through the backlog while we monitor the recent changes.
Nov 29, 20:04 UTC
Update - Temporary stoppage on .org and .com OS X builds. We're pushing fixes to our VM cloud manager and hypervisor client, and will need to restart these services.
Nov 29, 19:40 UTC
Identified - OS X builds for .org and .com are processing at 50% capacity while we continue to debug our VM cloud manager issues.
Nov 29, 17:49 UTC
Monitoring - All the OS X workers are back online. Jobs will be delayed until we’re able to work through the backlog of jobs waiting to be built.
Nov 29, 17:23 UTC
Identified - We found an issue with our virtual machine cloud manager. We’ve now brought back up part of the workers and we’re slowly resuming OS X jobs.
Nov 29, 17:02 UTC
Update - We’ve stopped all OS X builds to investigate further and are working on restoring them as quickly as possible.
Nov 29, 16:39 UTC
Investigating - We’re experiencing issues booting OS X jobs. This is causing severe build delays in our OS X infrastructure. We are currently investigating and will post an update as soon as we know some more.
Nov 29, 16:30 UTC
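Several of the updates above mention monitoring for "VM leakage" while running at reduced capacity. As a purely illustrative sketch of what such a check could look like (this is not Travis CI's actual tooling; the function and data names here are hypothetical), a leak detector can compare the VMs still running on the hypervisor against the jobs the scheduler considers active:

    # Hypothetical leak check: a VM counts as "leaked" when it is still running
    # on the hypervisor but its job is no longer active in the scheduler.
    from typing import Dict, Set

    def find_leaked_vms(running_vms: Dict[str, str], active_jobs: Set[str]) -> Dict[str, str]:
        """running_vms maps VM name -> job id; active_jobs is the set of live job ids."""
        return {vm: job for vm, job in running_vms.items() if job not in active_jobs}

    if __name__ == "__main__":
        # Example data only; real inputs would come from the hypervisor API and the job scheduler.
        running = {"vm-001": "job-17", "vm-002": "job-18", "vm-003": "job-12"}
        active = {"job-17", "job-18"}
        leaked = find_leaked_vms(running, active)
        if leaked:
            print("possible VM leakage:", leaked)  # -> {'vm-003': 'job-12'}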
Dec 1, 2016

No incidents reported.

Nov 28, 2016

No incidents reported.

Nov 27, 2016

No incidents reported.

Nov 26, 2016

No incidents reported.

Nov 25, 2016

No incidents reported.

Nov 24, 2016
Resolved - We've caught up on the backlog from this incident and are processing builds as expected now. We apologize for the delay in recognizing this issue; it was not caught by our current monitoring for this part of our infrastructure. We'll be reviewing system logs and identifying additional monitoring we can add to better detect this in the future. Thank you for your patience while we resolved this issue.
Nov 24, 04:57 UTC
Update - We are beginning to resume OS X VMs for both public and private builds.
Nov 24, 04:50 UTC
Update - We've restored the cluster to service and we're doing some additional checks on things before we resume builds again. Thank you for your patience.
Nov 24, 04:41 UTC
Identified - We've determined that one of our clustered VM firewalls was in an inconsistent state that was causing outbound network traffic to fail in an unexpected way that did not result in other parts of the cluster properly handling the traffic. We're working to restore this cluster to service.
Nov 24, 04:35 UTC
Investigating - We are currently investigating why our OS X VMs cannot resolve github.com. This is currently preventing `git clone` from working and makes OS X builds fail.
Nov 24, 04:22 UTC
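As an illustration of the failure mode described in this incident (OS X VMs unable to resolve github.com, which makes `git clone` fail), a pre-clone check along these lines can report a DNS failure explicitly rather than as a generic git error. This is a hedged sketch only; the repository URL and clone depth are placeholders, not part of Travis CI's build tooling:

    # Verify that github.com resolves before cloning, so a DNS failure is
    # reported as such instead of surfacing as a generic git error.
    import socket
    import subprocess
    import sys

    def can_resolve(host: str) -> bool:
        try:
            socket.getaddrinfo(host, 443)
            return True
        except socket.gaierror:
            return False

    if __name__ == "__main__":
        if not can_resolve("github.com"):
            sys.exit("DNS resolution for github.com failed; check the VM's network/firewall path")
        # Placeholder repository; any clone would exhibit the same failure mode.
        subprocess.run(
            ["git", "clone", "--depth=50", "https://github.com/travis-ci/travis-ci.git"],
            check=True,
        )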
Nov 23, 2016

No incidents reported.

Nov 22, 2016

No incidents reported.

Nov 21, 2016
Resolved - Everything has been humming along nicely for the last few hours.
Nov 21, 14:16 UTC
Monitoring - The backlog has been processed on travis-ci.org and travis-ci.com, and builds should be running normally again. We are monitoring the situation and have a further investigation under way as to why the system didn't scale up quickly enough on its own.
Nov 21, 11:49 UTC
Identified - The delays are caused by our AWS setup running under capacity. It seems instances in our ASG are being lost at a higher rate than usual. We have manually increased the number of VMs we're running and should be through the backlog shortly.
Nov 21, 11:25 UTC
Investigating - We are experiencing delays both on travis-ci.org and travis-ci.com for container-based builds (our default for Linux builds). We are investigating.
Nov 21, 11:07 UTC
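The "Identified" update above describes manually increasing the number of VMs running in an AWS Auto Scaling group. As a rough illustration of that kind of manual intervention (a sketch only; the group name, region, and capacity bump are hypothetical and not Travis CI's configuration), raising an ASG's desired capacity with boto3 looks roughly like this:

    # Manually raise an Auto Scaling group's desired capacity so additional
    # build VMs are launched while a backlog is worked through.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    ASG_NAME = "worker-linux-container-based"  # hypothetical group name

    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    print("current desired capacity:", group["DesiredCapacity"])

    # Never exceed the group's configured MaxSize.
    new_capacity = min(group["MaxSize"], group["DesiredCapacity"] + 20)
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=new_capacity,
        HonorCooldown=False,  # apply immediately, ignoring the scaling cooldown
    )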
Nov 20, 2016

No incidents reported.