Travis CI
All Systems Operational
API   Operational   100.0% uptime over the past 90 days
Web App   Operational   100.0% uptime over the past 90 days
Builds Processing   Operational   99.5% uptime over the past 90 days
Mac Builds   Operational   99.4% uptime over the past 90 days
Linux Builds (container-based)   Operational   99.54% uptime over the past 90 days
Linux Builds (Trusty and Precise on GCE)   Operational   99.56% uptime over the past 90 days
Background Processing   Operational   100.0% uptime over the past 90 days
Log Processing   Operational   100.0% uptime over the past 90 days
User Sync   Operational   100.0% uptime over the past 90 days
Notifications   Operational   100.0% uptime over the past 90 days
Third Party Services Operational
GitHub   Operational
DNSimple Name Servers   Operational
Heroku   Operational
Pusher REST API   Operational
Pusher WebSocket client API   Operational
Pusher Presence channels   Operational
Pusher Webhooks   Operational
AWS ec2-us-east-1   Operational
Help Scout Email Processing   Operational
Help Scout Web App   Operational
PagerDuty Notification Delivery   Operational
npm, Inc. Registry Reads   Operational
Google Cloud Platform Google Compute Engine   Operational
System Metrics
API Uptime (api.travis-ci.org)
API Uptime (api.travis-ci.com)
Active Linux Builds for Open Source projects
Backlog Linux Builds for Open Source projects
Active macOS Builds for Open Source projects
Backlog macOS Builds for Open Source projects
Active Linux Builds for Open Source projects (container-based)
Backlog Linux Builds for Open Source projects (container-based)
Past Incidents
Jan 22, 2018

No incidents reported today.

Jan 21, 2018

No incidents reported.

Jan 20, 2018

No incidents reported.

Jan 19, 2018
Resolved - This incident has been resolved.
Jan 19, 15:27 UTC
Monitoring - We've spent the last 24 hours doing maintenance on our Mac infrastructure with the help of our provider. Things now look stable on our end, although delays may be higher due to increased demand. Thank you again for your continued patience.
Jan 19, 14:52 UTC
Identified - We continue to mitigate build delays for Mac builds.
Jan 18, 21:05 UTC
Update - To help with the current situation, we have made the hard decision to cancel Mac build jobs older than 2018-01-18 00:00 UTC. We apologize for this disruption and thank you for your understanding.
Jan 18, 14:44 UTC
Investigating - We are sorry to inform you that you'll likely experience delays with your Mac builds (both private and open source) until further notice. Our team is currently addressing this. We apologize for the inconvenience in the meantime and thank you for your patience.
Jan 18, 14:38 UTC
Jan 17, 2018
Resolved - Builds are starting and running normally at the moment, so we are closing this incident. Thank you again for hanging in there with us and please reach out to support@travis-ci.com if you run into something.
Jan 17, 20:41 UTC
Monitoring - We are now seeing open source builds on macOS and container-based Linux starting again. Please restart any builds that are currently "hanging". We will cancel any remaining stalled jobs later today. Thank you for your patience and we are continuing to monitor the situation.
Jan 17, 19:25 UTC
Investigating - We’re currently investigating delays starting open source container-based and macOS builds.
Jan 17, 17:41 UTC
Resolved - Builds have stabilized on all infrastructures, and we will continue to monitor the situation. If you’re still experiencing any problems, please get in touch via email: support@travis-ci.com
Jan 17, 11:18 UTC
Monitoring - We have identified the cause of the problem as our RabbitMQ cluster, which needed to be upgraded and its queues purged.
We are now back to processing jobs on all infrastructures and closely monitoring the situation. As a clean-up effort, we have to cancel all jobs that were previously stuck.
Thank you for your patience and understanding.
Jan 17, 09:55 UTC
Update - We continue working on the build backlog issue. We are canceling some builds to help our recovery efforts. Thank you for your continued patience and understanding.
Jan 17, 05:17 UTC
Update - We are resuming open source builds on the macOS and Linux container-based (i.e. sudo: false) infrastructures.
Jan 17, 04:27 UTC
Update - We are still working on getting our infrastructure back up. Open source builds on both Linux and macOS are still stopped for the time being.
Jan 17, 04:09 UTC
Update - All open source builds are currently stopped for both our Linux and macOS infrastructures. We are currently working on getting them back on their feet. To help us do this, we are canceling currently queued jobs. We are terribly sorry for the disruption and we will continue to provide updates in a timely manner. Thank you for your enduring patience.
Jan 17, 02:25 UTC
Update - We are still investigating the job backlog incident.
Jan 17, 00:26 UTC
Investigating - We are continuing to work on resolving this issue, and are investigating other potential causes for the delay.
Jan 16, 21:17 UTC
Identified - We have noticed an increased backlog that extends to both the 'sudo: required' and 'sudo: false' (container-based) infrastructures. We have identified the causes that contributed to the backlog, and are working on resolving them. (A configuration sketch of the sudo key follows this incident.)
Jan 16, 19:18 UTC
Update - We continue to investigate the increased backlog, as well as a growing backlog on our 'sudo: required' infrastructure.
Jan 16, 18:46 UTC
Investigating - We are currently investigating an increased backlog on our open-source container-based infrastructure. During this time, you may experience some delays in builds starting. We will keep you posted as soon as we have an update.
Jan 16, 18:10 UTC
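
The updates in this incident distinguish between the 'sudo: required' and 'sudo: false' (container-based) infrastructures. Which of the two a job runs on is selected in the project's .travis.yml. Below is a minimal sketch of that selection, assuming the configuration keys in use at the time; every value other than the sudo and dist keys is illustrative:

    # .travis.yml (sketch)
    language: ruby   # illustrative; any supported language key applies
    sudo: false      # route jobs to the container-based infrastructure (no sudo available)
    dist: trusty     # Trusty and Precise images were offered at the time

    # To target the sudo-enabled GCE infrastructure instead:
    # sudo: required

Setting sudo explicitly is the simplest way to pin a project's builds to one infrastructure or the other.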
Jan 16, 2018
Resolved - Systems are operating normally.
Jan 16, 15:37 UTC
Monitoring - Logs delivery has fully recovered. We are monitoring the situation.
Jan 16, 15:28 UTC
Identified - An influx of builds clogged some pipes. Build backlog has recovered, log delivery is starting to recover.
Jan 16, 15:26 UTC
Investigating - We are investigating elevated wait times for new builds and log delivery.
Jan 16, 15:13 UTC
Jan 15, 2018

No incidents reported.

Jan 14, 2018

No incidents reported.

Jan 13, 2018

No incidents reported.

Jan 12, 2018
Resolved - macOS builds have stabilised. We will continue to monitor the situation.
Jan 12, 15:29 UTC
Investigating - macOS builds are currently running at reduced capacity. We continue to work on a solution.
Jan 12, 09:55 UTC
Resolved - We have fully recovered from yesterday’s GitHub outage. macOS builds are still operating at reduced capacity and we continue to work on a solution, which you can follow here: https://www.traviscistatus.com/incidents/6xb4kjczh4k6
Jan 12, 09:56 UTC
Update - For the time being, we are turning our focus to fixing unstable services in our macOS infrastructure. During this time, Mac jobs will continue to run but at a reduced capacity.
Jan 11, 18:40 UTC
Investigating - Following GitHub's outage earlier today, we are currently working on getting our system back on its feet. In the meantime, synchronizing with GitHub may not work and our platform may have delays in processing all queues. Sorry for the inconvenience and we will update as we know more.
Jan 11, 17:16 UTC
Jan 11, 2018
Resolved - The issue has been resolved.
Jan 11, 13:26 UTC
Monitoring - A hot fix has been deployed to update the expired GPG key and `apt-get update` commands should now succeed. If you're still experiencing issues, please let us know through https://github.com/travis-ci/travis-ci/issues/9037 or support@travis-ci.com.
Jan 11, 12:29 UTC
Identified - `apt-get` commands during build time are failing due to an expired GPG key from the mongo repository, and we are working on a hotfix to update the key. In the meantime, it is possible to work around the errors by manually updating the key in your .travis.yml, as recommended at: https://github.com/travis-ci/travis-ci/issues/9037#issuecomment-356914965 (see the sketch after this incident).
Jan 11, 12:14 UTC
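
The workaround referenced in the Identified update amounts to re-importing the repository signing key before apt-get update runs. A hedged sketch of what that can look like in .travis.yml follows; the key ID is a placeholder rather than the literal value (the actual ID is given in the linked GitHub issue), and the sudo calls assume a sudo-enabled build:

    # .travis.yml (sketch)
    before_install:
      # <MONGODB_REPO_KEY_ID> is a placeholder; substitute the key ID
      # recommended in travis-ci/travis-ci#9037.
      - sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv <MONGODB_REPO_KEY_ID>
      - sudo apt-get update

Once the hotfix described in the Monitoring update was deployed, this manual step was no longer needed.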
Jan 10, 2018
Resolved - This incident has been resolved.
Jan 10, 05:45 UTC
Identified - Based on the data we have gathered, we are planning to re-provision all NAT hosts for this infrastructure, which will happen within the next 12 hours during a time of decreased demand. More details will be posted in a separate scheduled maintenance.
Jan 9, 23:46 UTC
Update - We believe something is misbehaving in our NAT layer, and we are running more tests to determine next steps.
Jan 9, 20:19 UTC
Investigating - We are investigating increased networking errors on container-based Linux jobs.
Jan 9, 16:07 UTC
Completed - The scheduled maintenance has been completed.
Jan 10, 05:44 UTC
Verifying - Verification is currently underway for the maintenance items.
Jan 10, 05:36 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 10, 05:00 UTC
Scheduled - We have been receiving reports of intermittent network issues over the past week. A newly-provisioned NAT using the latest Amazon NAT AMI has not exhibited any of the problem behaviors. Based on this information, we have decided that the NAT hosts in our container-based Linux infrastructure are in need of a refresh. This maintenance will disrupt running and queued jobs for container-based Linux, and may introduce changes to the IP addresses used for internet connectivity.
Jan 9, 23:57 UTC
Jan 9, 2018
Resolved - This incident has been resolved.
Jan 9, 14:59 UTC
Update - We are beginning to shift all container-based Linux jobs back onto our EC2 infrastructure. We continue to monitor the situation closely.
Jan 9, 11:25 UTC
Monitoring - We are monitoring as we begin routing 10% of jobs through the updated infrastructure.
Jan 8, 23:04 UTC
Identified - We are performing an emergency maintenance of our container-based Linux infrastructure. All jobs targeting container-based Linux are being routed to our sudo-enabled Linux infrastructure during the maintenance action.
Jan 8, 20:49 UTC