Incident response: Puzzle pieces and misadventure

The blinds were drawn, a glass of water sat sweating in the humidity, and as I looked around the room some very distraught men in suits looked back at me. It was the middle of a Midwest summer; I had just climbed off my motorcycle and been met in the parking lot by the COO and general counsel. “Please hurry,” the lawyer said. A few minutes later I sat looking back at the officers of a company. They knew they had been breached and were not happy. Who would be?

“The FBI will be here in the morning,” said the COO. “What do we tell them?”

It was the middle of summer, classes had been dismissed, my summer project sat as a Jeep hulk in pieces in my garage, and yet here I sat, somewhat cleaned up. To quote Hawkeye Pierce, “The pros from Dover have arrived.” And the fury was burning out quickly. The CIO sat behind a laptop, likely polishing his resume; they did not have a CISO. The senior system administrator was haranguing one of the secretaries about how none of this would have happened if they had some shiny gizmo. If he had been in charge. The lawyer who had called me was in a tug of war with the CFO. The CEO, with shock and dismay, looked at me for any clue as to why the fat, bearded, earring-wearing brute who had just climbed off a motorcycle was sitting in front of him. I am built sturdy, not speedy.

“I’m Professor Sam, and I specialize in incident response and digital forensics,” I told the room. “You’ve gone down a path for some amount of time. Now you have had a breach. You have been working on it for weeks. The FBI has called, and you wonder what will happen next.” I summarized the problem space. “Let us not focus now on why you’re here. The problems have already resulted in the incident. Let us talk about getting out of this and what you will tell the FBI tomorrow.”

There are a dozen or more structured methods of incident response, and none of them works for everybody. Some require deep pockets and basically start with Mandiant or Symantec on speed dial. Others are structured checklists whose authors forgot that humans must do these things within a workday of some type. I started with the near-term problem: “You’re going to tell law enforcement exactly what they already know.” You see, a crime may or may not have been committed. We would not know for weeks. You have been working on this for weeks and just did not understand how much others might be seeing of the activities of an adversary or criminal group on your network. Long ago was the time to embed a trusted entity like a junior counsel and security practitioner with InfraGard. Now is the time for the FBI to come tell you what you know, and for you to tell the FBI what they know. Check one mollified, if not happy, general counsel.

Looking over at the CFO, I answered the question he did not know how to ask. The company was spending about $1,000 per user per year on information technology. Not that much, to be honest. Most companies their size could easily be spending a third of their operational budget. The problem had started with a low-level employee just trying to get done the job he was paid to do. He had plugged in his personal laptop because his work desktop was non-functional. Using his work credentials on their corporate accounts was not the issue. The malware on his laptop harvesting those credentials was the problem. I told the CFO they were looking at a cost of $450 per user to clean up the breach. That would nearly double their security budget. In the future, they should program that money into prevention and detection. Check the unhappy-CFO box.
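The arithmetic behind those figures is simple enough to sketch. The per-user numbers are the ones quoted above; the 500-user headcount and the function name are illustrative assumptions, not details from the company in the story.

```python
# Back-of-the-envelope breach economics using the per-user figures
# quoted in the text. The 500-user headcount is hypothetical.

def breach_economics(users: int,
                     it_spend_per_user: float = 1000.0,  # annual IT spend, from the story
                     cleanup_per_user: float = 450.0):   # one-time cleanup, from the story
    """Return annual IT budget, cleanup cost, and cleanup as a share of IT spend."""
    annual_it = users * it_spend_per_user
    cleanup = users * cleanup_per_user
    return annual_it, cleanup, 100.0 * cleanup / annual_it

annual_it, cleanup, share = breach_economics(users=500)
print(f"IT budget: ${annual_it:,.0f}, cleanup: ${cleanup:,.0f} ({share:.0f}% of IT spend)")
# -> IT budget: $500,000, cleanup: $225,000 (45% of IT spend)
```

The cleanup bill lands at nearly half of a year's IT spend in one shot, which is why it roughly doubles a security budget that is only a slice of that total.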

The CEO looked up at me and asked, “You do not know anything about our company; how could you have this plan?” A great question, but not a necessarily accurate assumption. There are consistent patterns between human behaviors, human use of technology, and how these things merge into a corporate enterprise environment. Commodity technologies such as personal computers, cell phones, tablets, routers, servers, and more all have consistent price points. The less mature the organization, the more the budget percentages skew toward acquisition and deployment costs, absent security, management, and upkeep costs. Given a known number of users and an information technology investment price point, a lot of other details can be divined as a baseline to measure against. The CEO looked over at the CIO: “Did you know this?” The CIO looked at me and recounted a few things. He had no formal training in information technology. He was an Ivy League MBA. He had taken the job when the previous CIO had failed to keep within budget three quarters in a row. By pinching productivity and deferring maintenance, the new CIO had stayed within budget and been riding a good career high based on his success.

The CIO had known he was accepting risk for the enterprise. It said so right on the memorandums his former CISO had made him sign: “You accept the risk for this system/process not being to standard.” They had all varied to some degree, but the dozen or so forms had all said the same thing in general. The deferred investment in the intrusion detection system with inspection of all connected devices had been one of those forms. He just had not realized how much impact the deferred technology would have in the future. It was a gamble. It was a bad bet.

The CIO complained, “You have given us nothing actionable. You just told us what we all already know. Why are we paying you anything?” This is a common tactic when somebody is working out of their depth: defer, defend, or cast doubt on others. Of course, he was both right and wrong, since I was basically working for free. I did not know much of anything about them other than the few minutes with the general counsel. I ticked off the environmental variables. They had no playbook, they had no policy for incident response, there was no standardized framework for information security, the meager incident response capability was basically playing whack-a-mole, and the security team was mission rich and resource poor. The engineering and information technology group were acquisition experts, not information system experts. The quantity of security shelfware reaching antique status was staggering.

The CEO pulled out a PowerPoint slide deck with lots of graphs. Each graph was a professor’s dream: mixed start points, percentages with no real values, swooping lines with no numerical reference, ill-defined terms mixed throughout the deck. Basically, meaningless information from which to ascertain risk or make decisions. “What will we do?” asked the CEO.

I started sketching the incident response for them. I told them: you all are the executive steering committee. The secretaries at the edge of the room started writing like crazy. The room got noticeably warmer. You as a committee will meet every eight hours until you as a group decide to declare the incident complete. Nobody will ask questions, seek a status, or get in the way of the response. The meetings are where you will get more information and be asked for input.

The CIO will take his top technical and administrative managers, and they will be the incident response managers. You will physically or logically segment the network. The information technology guys will go to every desktop while it is off the network and do a virus scan to detect the malware. Figuring each IT guy can work a room of computers at the same time, you are looking at four to five days to get this done. As a steering committee, you may designate three priority areas for them to hit first and in what order. Everything cannot be a priority.
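The four-to-five-day estimate is just throughput arithmetic. A minimal sketch, assuming a hypothetical fleet of 500 desktops, 3 technicians, rooms of about 10 machines, and 4 rooms per technician per day; none of those counts come from the story itself.

```python
import math

def scan_duration_days(machines: int, technicians: int,
                       machines_per_room: int = 10,      # assumed room size
                       rooms_per_tech_per_day: int = 4) -> int:
    """Days to scan every offline desktop, with each technician
    running a room's worth of scans in parallel."""
    throughput = technicians * machines_per_room * rooms_per_tech_per_day
    return math.ceil(machines / throughput)

print(scan_duration_days(machines=500, technicians=3))  # -> 5
```

Doubling the technicians roughly halves the calendar time, which is the lever a steering committee actually controls when it sets priorities.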

I walked the senior execs through the plan. Nothing would be connected to the network until it was declared clean.

While the end points were being inspected, another group would rebuild the domain controllers and web servers. This is where it gets tricky. The go-to guy to be an incident response manager was also the domain controller administrator. Resources were already getting thin. I told them to accept some less efficient uses of people if it meant the overall response would be managed better. The time for heroes was gone. Now was the time for structure. I stayed over the rest of the week and the following weekend. I sat quietly at the back of the room during most of the meetings. I would inject some reason or thought when they were getting off track. The tech team would ask questions, but really they were usually just asking permission. No, the response was not perfect, and mistakes were made. I kept a notebook of those for my after-action report. With my more seasoned eye, I see lots of things that could have been improved. Yet in the end they got it done.

I do not think the organization ever quite recovered. The technical layer of the enterprise was back up and running at the best of what could be expected. The fundamental trust, expectations, and common narrative of leadership had been shaken. The choices of decades of leadership had been eroded at the root. No vendor, tool, application, or contract would replace or repair the relationships.

These kinds of calls ended when I came back to government. Since the incident in the story, NIST has come out with a great incident response guide. The fight now is between the guys who have taken a SANS course and the ones who took a university course. Each of them has a different structure and tries to force “their” way down the throats of the other side. In reality, any structure will work as long as it is based on solving problems and is shared. A mentor from academia told me the time to test a plan is when nobody thinks it is worthwhile. You can learn a lot about a leadership team by how much they prepare for the fire when there is no fire in sight. I try to calibrate leadership with a made-up statistic (a sample of a sample).

I ask for the external block, trap, and detonate event numbers. I then ask for the internal malware, phishing, and similar event numbers. The difference is the delta between what was caught and what was not caught. Let us say the organization is getting an A and 99% is caught: 99 events are stopped for every event not caught. So a reasonable expectation is that for every 99 incident/event responses, there is a chance one event slipped through and represents an active adversary currently in the network. True or not? It does not really matter. The idea is that nobody is perfect at any stage of the information security lifecycle.
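The "sample of a sample" arithmetic can be written down directly. The function names are mine, not from any framework; the 99-to-1 ratio is the example used above.

```python
def catch_rate(external_blocked: int, internal_missed: int) -> float:
    """Fraction of malicious events stopped at the boundary."""
    return external_blocked / (external_blocked + internal_missed)

def expected_residual(rate: float, responses: int) -> float:
    """Expected events that slipped past, per `responses` handled,
    under this deliberately crude model."""
    return responses * (1.0 - rate)

rate = catch_rate(external_blocked=99, internal_missed=1)  # 0.99: the "A" grade
print(round(expected_residual(rate, responses=100), 2))    # -> 1.0
```

The point is not the number itself but the shape of the expectation: even an A-grade perimeter implies roughly one live miss for every hundred events handled.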

This gets back to the last lesson I dropped on that executive team a long time ago. Information security is different from most everything else in business. The businesses you buy, the successful sales, the active decisions, and the things you do drive business forward. In business, the “no” decision or abeyance of decision usually decreases risk. Whereas the thing you do not do, the thing you prioritize lower, the very structure of how you think as a business person will open you to the possibility of a breach. It is never the thing you “do” but the thing you do not do that will harm you. That is why a CIO must think like a CISO, like an information systems engineer, and like an enterprise architect, and the puzzle pieces of engineering, acquisition, and defense of the enterprise must fit together so everything supports decreasing enterprise risk.

