Travis CI
All Systems Operational
API: Operational (100.0% uptime over the past 90 days)
Web App: Operational (100.0% uptime over the past 90 days)
Builds Processing: Operational (99.51% uptime over the past 90 days)
Mac Builds: Operational (99.42% uptime over the past 90 days)
Linux Builds (container-based): Operational (99.56% uptime over the past 90 days)
Linux Builds (Trusty and Precise on GCE): Operational (99.56% uptime over the past 90 days)
Background Processing: Operational (100.0% uptime over the past 90 days)
Log Processing: Operational (100.0% uptime over the past 90 days)
User Sync: Operational (100.0% uptime over the past 90 days)
Notifications: Operational (100.0% uptime over the past 90 days)
Third Party Services: Operational
GitHub: Operational
DNSimple Name Servers: Operational
Heroku: Operational
Pusher REST API: Operational
Pusher WebSocket client API: Operational
Pusher Presence channels: Operational
Pusher Webhooks: Operational
AWS ec2-us-east-1: Operational
Help Scout Email Processing: Operational
Help Scout Web App: Operational
PagerDuty Notification Delivery: Operational
npm, Inc. Registry Reads: Operational
Google Compute Engine (Google Cloud Platform): Operational
System Metrics
API Uptime (api.travis-ci.org)
API Uptime (api.travis-ci.com)
Active Linux Builds for Open Source projects
Backlog Linux Builds for Open Source projects
Active macOS Builds for Open Source projects
Backlog macOS Builds for Open Source projects
Active Linux Builds for Open Source projects (container-based)
Backlog Linux Builds for Open Source projects (container-based)
Past Incidents
Jan 17, 2018
Resolved - Builds have stabilized on all infrastructures, and we will continue to monitor the situation. If you’re still experiencing any problems, please get in touch via email: support@travis-ci.com
Jan 17, 11:18 UTC
Monitoring - We have identified the cause of the problem: our RabbitMQ cluster needed to be upgraded and its queues purged.
We are now back to processing jobs on all infrastructures and are closely monitoring the situation. As part of the clean-up, we had to cancel all jobs that were previously stuck.
Thank you for your patience and understanding.
Jan 17, 09:55 UTC
Update - We are continuing to work on the build backlog issue. We are canceling some builds to help our recovery efforts. Thank you for your continued patience and understanding.
Jan 17, 05:17 UTC
Update - We are resuming open source builds on the macOS and Linux container-based (i.e. sudo: false) infrastructures.
Jan 17, 04:27 UTC
Update - We are still working on getting our infrastructure back up. Open source builds on both Linux and macOS are still stopped for the time being.
Jan 17, 04:09 UTC
Update - All open source builds are currently stopped for both our Linux and macOS infrastructures. We are currently working on getting them back on their feet. To help us do this, we are canceling currently queued jobs. We are terribly sorry for the disruption and we will continue to provide updates in a timely manner. Thank you for your enduring patience.
Jan 17, 02:25 UTC
Update - We are still investigating the job backlog incident.
Jan 17, 00:26 UTC
Investigating - We are continuing to work on resolving this issue, and are investigating other potential causes for the delay.
Jan 16, 21:17 UTC
Identified - We have noticed an increased backlog that extends to both the 'sudo: required' and 'sudo: false' (container-based) infrastructures. We have identified the causes that contributed to the backlog, and are working on resolving them.
Jan 16, 19:18 UTC
Update - We continue to investigate the increased backlog, as well as a growing backlog on our 'sudo: required' infrastructure.
Jan 16, 18:46 UTC
Investigating - We are currently investigating an increased backlog on our open-source container-based infrastructure. During this time, you may experience some delays in builds starting. We will keep you posted as soon as we have an update.
Jan 16, 18:10 UTC
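The updates above distinguish between the 'sudo: false' (container-based) and 'sudo: required' (sudo-enabled) Linux infrastructures. For context, a repository chooses between them with the `sudo` key in its .travis.yml; the following is a minimal illustrative sketch, where the language and script values are placeholders and not taken from this page:

    # .travis.yml (illustrative sketch)
    language: python      # placeholder; any supported language works
    sudo: false           # route jobs to the container-based Linux infrastructure
    # sudo: required      # alternatively, route jobs to the sudo-enabled (VM-based) infrastructure
    script:
      - python -m pytest  # placeholder build command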
Jan 16, 2018
Resolved - Systems are operating normally.
Jan 16, 15:37 UTC
Monitoring - Logs delivery has fully recovered. We are monitoring the situation.
Jan 16, 15:28 UTC
Identified - An influx of builds clogged some pipes. The build backlog has recovered, and log delivery is starting to recover.
Jan 16, 15:26 UTC
Investigating - We are investigating elevated wait times for new builds and log delivery.
Jan 16, 15:13 UTC
Jan 15, 2018

No incidents reported.

Jan 14, 2018

No incidents reported.

Jan 13, 2018

No incidents reported.

Jan 12, 2018
Resolved - macOS builds have stabilised. We will continue to monitor the situation.
Jan 12, 15:29 UTC
Investigating - macOS builds are currently running at reduced capacity. We continue to work on a solution.
Jan 12, 09:55 UTC
Resolved - We have fully recovered from yesterday’s GitHub outage. macOS builds are still operating at reduced capacity and we continue to work on a solution, which you can follow here: https://www.traviscistatus.com/incidents/6xb4kjczh4k6
Jan 12, 09:56 UTC
Update - For the time being, we are turning our focus to fixing unstable services in our macOS infrastructure. During this time, Mac jobs will continue to run but at a reduced capacity.
Jan 11, 18:40 UTC
Investigating - Following GitHub's outage earlier today, we are currently working on getting our system back on its feet. In the meantime, synchronizing with GitHub may not work and our platform may have delays in processing all queues. Sorry for the inconvenience and we will update as we know more.
Jan 11, 17:16 UTC
Jan 11, 2018
Resolved - The issue has been resolved.
Jan 11, 13:26 UTC
Monitoring - A hot fix has been deployed to update the expired GPG key and `apt-get update` commands should now succeed. If you're still experiencing issues, please let us know through https://github.com/travis-ci/travis-ci/issues/9037 or support@travis-ci.com.
Jan 11, 12:29 UTC
Identified - `apt-get` commands during build time are failing due to an expired GPG key from the MongoDB repository, and we are working on a hotfix to update the key. In the meantime, it is possible to work around the errors by manually updating the key in your .travis.yml as recommended at: https://github.com/travis-ci/travis-ci/issues/9037#issuecomment-356914965
Jan 11, 12:14 UTC
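For reference, workarounds of this kind usually add a step to .travis.yml that re-imports the repository's signing key before any `apt-get update` runs. The sketch below is illustrative only; the key ID and keyserver are placeholders, and the exact command recommended in the linked issue comment may differ:

    # .travis.yml (illustrative sketch)
    before_install:
      # Re-import the expired repository signing key so `apt-get update` succeeds again.
      # <KEY_ID> is a placeholder; use the key ID given in the linked issue comment.
      - sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys <KEY_ID>
      - sudo apt-get update -qq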
Jan 10, 2018
Resolved - This incident has been resolved.
Jan 10, 05:45 UTC
Identified - Based on the data we have gathered, we are planning to re-provision all NAT hosts for this infrastructure, which will happen within the next 12 hours during a time of decreased demand. More details will be posted in a separate scheduled maintenance.
Jan 9, 23:46 UTC
Update - We believe something is misbehaving in our NAT layer, and we are running more tests to determine next steps.
Jan 9, 20:19 UTC
Investigating - We are investigating increased networking errors on container-based Linux jobs.
Jan 9, 16:07 UTC
Completed - The scheduled maintenance has been completed.
Jan 10, 05:44 UTC
Verifying - Verification is currently underway for the maintenance items.
Jan 10, 05:36 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 10, 05:00 UTC
Scheduled - We have been receiving reports of intermittent network issues over the past week. A newly-provisioned NAT using the latest Amazon NAT AMI has not exhibited any of the problem behaviors. Based on this information, we have decided that the NAT hosts in our container-based Linux infrastructure are in need of a refresh. This maintenance will disrupt running and queued jobs for container-based Linux, and may introduce changes to the IP addresses used for internet connectivity.
Jan 9, 23:57 UTC
Jan 9, 2018
Resolved - This incident has been resolved.
Jan 9, 14:59 UTC
Update - We are beginning to shift all container-based Linux jobs back onto our EC2 infrastructure. We continue to monitor the situation closely.
Jan 9, 11:25 UTC
Monitoring - We are monitoring as we begin routing 10% of jobs through the updated infrastructure.
Jan 8, 23:04 UTC
Identified - We are performing an emergency maintenance of our container-based Linux infrastructure. All jobs targeting container-based Linux are being routed to our sudo-enabled Linux infrastructure during the maintenance action.
Jan 8, 20:49 UTC
Jan 7, 2018

No incidents reported.

Jan 6, 2018

No incidents reported.

Jan 5, 2018

No incidents reported.

Jan 4, 2018

No incidents reported.

Jan 3, 2018
Resolved - This incident has been resolved.
Jan 3, 03:16 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 3, 02:42 UTC
Update - We are continuing to investigate this issue. Thank you for your patience!
Jan 2, 23:35 UTC
Investigating - Container-based Linux for public repositories is currently operating over capacity. We are investigating why the auto-scaling capacity is not keeping up with demand.
Jan 2, 22:19 UTC