
Metro knows what happened during 911 crash, but not what caused it

During a press briefing Tuesday afternoon on the 911 system that crashed two weeks ago — less than two months after a new $2.3 million system was installed — the Metropolitan Police Department didn’t have many answers.

Police knew what happened during the Feb. 2 crash, but not what caused it.

They knew the crash lasted six hours, but didn’t know how many callers were affected.

And though Metro was scant on details, officials assured that more backup measures have been added since the failure, making the system “stronger,” Deputy Chief Charles Hank said. But he was careful not to guarantee a similar crash wouldn’t recur.

“It’s technology,” Hank said. “It can fail.”

Hank called what happened that night a “broadcast storm” — a technical term meaning that the system experienced an “overload-type situation,” in which a surge of information fed into a feedback loop, clogged the system and ultimately crashed it.

But Metro wasn’t sure what caused the surge.

“From my understanding it had nothing to do with call volume,” Hank said, adding that Airbus DS Communications, the vendor Metro uses, handles 60 percent of U.S. 911 calls, some of which happen in cities with larger populations and dispatch demands than Metro.

Commenting on possible causes, Hank also said it wasn’t an external hack because the system’s firewall wasn’t compromised. When asked whether it could have been an internal hack, Hank said “we don’t believe that occurred.”

Metro did not deny another possible cause — an employee plugging a smartphone, tablet or flash drive into the server, which could have accidentally triggered the surge.

“We don’t think that occurred, but that could have been one scenario,” Hank said.

Since the crash, Metro has prohibited employees from plugging such storage devices into the system. The server room’s security was also upgraded so very few individuals could enter “to lessen any chances of tampering,” Hank said.

“We have installed locks on all equipment cabinets,” Hank added.

One reason Metro didn’t have answers Tuesday was that the system’s history was deleted when it was restored late Feb. 2. Hank compared it to factory-resetting a smartphone, which wipes the device’s data.

Hank said the system is stronger because new safety nets are in place.

If a crash were to happen today, Metro has a “switch” that can reroute all incoming calls to backup lines, sort of like call-forwarding a house phone to another landline. The switch is in place, but Metro is working to make it automatic.

If that were to fail, Metro is working on a second call center, which will be ready in the next 60 days and will operate concurrently with the one already in place. So if a crash happened, the center could absorb the original call volume without disruption.

And if that second center were to fail, Metro is working on an in-case-of-emergency backup center, which will operate out of a southwest valley location only when needed. It, too, could absorb Metro’s normal call volume without disruption, but it won’t be up and running for a few months.

As a last resort, calls would be rerouted to Henderson or North Las Vegas — the first course of action during the Feb. 2 crash.

“We take seriously our obligation to ensure public trust is restored,” Hank said, “and that is why we have gone to great lengths to institute several layers of protection to guard against any potential system failures.”

Contact Rachel Crosby at rcrosby@reviewjournal.com or at 702-387-5290. Find her on Twitter: @rachelacrosby
