You may experience symptoms such as webpages not loading, systems not responding to ping, and remote login sessions terminating unexpectedly.
Our network staff will work to identify the cause and restore reliable networking.
UPDATE
[2014-06-24 20:36:30 | Derek Calderon]
Packet loss at the EECS border is ongoing. It currently appears that the issue is due to an unusual increase in external attack volume that our firewalls are unable to cope with. We have added additional mitigation measures at the border and will continue working to alleviate the issue. Based on our current trending data, the situation appears to be at its worst during daytime hours in East Asia, which correlates with the apparent source of the aggressive traffic.
We will continue to work on the issue and hope to have a resolution soon. We apologize for any inconvenience caused by the degraded network quality.
UPDATE
[2014-06-25 18:09:41 | Derek Calderon]
While the attacks are ongoing, we have taken measures to minimize further impact on traffic. There may still be occasional spikes, but the worst should be behind us.
We are still investigating the precise nature of the attacks, but it is clear that the primary goal is to fill the target’s TCP session buffer. We have seen a large volume of SYN floods, and there may be other methods in use that we have not yet discovered.
If you run any servers on the EECS network, please be aware that you may experience unusual TCP session loads as a result of the ongoing situation. We recommend monitoring your logs and session tables to ensure continued service.
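For server operators who want a concrete starting point, a SYN flood typically leaves many half-open connections in the SYN-received state, which on a Linux host can be counted by parsing /proc/net/tcp. The sketch below is only an illustration under those assumptions (a Linux server, IPv4 sockets), and the alert threshold is an arbitrary example value, not a recommendation from our staff:

    #!/usr/bin/env python3
    # Minimal sketch: count half-open (SYN-received) TCP connections on a
    # Linux host by parsing /proc/net/tcp. The threshold is a hypothetical
    # example value; tune it for your own server's normal load.

    SYN_RECV = "03"   # kernel code for the SYN-received socket state
    THRESHOLD = 256   # hypothetical alert threshold, not an official figure

    def count_syn_recv(path="/proc/net/tcp"):
        count = 0
        with open(path) as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                # field 4 ("st") holds the socket state code
                if len(fields) > 3 and fields[3] == SYN_RECV:
                    count += 1
        return count

    if __name__ == "__main__":
        n = count_syn_recv()
        print("half-open TCP connections:", n)
        if n > THRESHOLD:
            print("warning: unusually many half-open connections; possible SYN flood")

The same state information is available interactively through tools such as netstat or ss (for example, by counting sockets reported in the SYN-RECV state); IPv6 sockets appear separately in /proc/net/tcp6.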
The production impact of this event is concentrated at the network border between EECS and Campus, through which we connect to the Internet. Traffic that may have been dropped includes traffic between EECS and Campus, between EECS and the Internet, and traffic on the AirBears and AirBears2 wireless networks within EECS locations.
UPDATE
[2014-06-30 17:20:50 | Derek Calderon]
The source of the offending traffic was identified and removed from the network by EECS security staff at approximately 2 AM on June 29th. Although the most damaging volumes of traffic were attempting to enter our network from the Internet, it now appears that the ultimate source was within Soda Hall. When the identified device was removed from the network, traffic patterns immediately stabilized and have remained stable for more than 36 hours. As a result, we are declaring this issue resolved; we will now complete our internal investigation.
Once again, we apologize for any inconvenience suffered during the period of degraded service. Going forward, we will work to identify and mitigate such issues more quickly.
Resolved as of 2014-06-29 02:00:00