Unmasked.

Chapter 3

Aaron's password yielded even more fruit. HBGary used Google Apps for its e-mail services, and for both Aaron and Ted, the password cracking provided access to their mail. But Aaron was no mere user of Google Apps: his account was also the administrator of the company's mail. With his higher access, he could reset the passwords of any mailbox and hence gain access to all the company's mail-not just his own. It's this capability that yielded access to Greg Hoglund's mail.

And what was done with Greg's mail?

A little bit of social engineering, that's what.

A little help from my friends

Contained within Greg's mail were two bits of useful information. One: the root password to the machine running Greg's rootkit.com site was either "88j4bb3rw0cky88" or "88Scr3am3r88". Two: Jussi Jaakonaho, "Chief Security Specialist" at Nokia, had root access. Vandalizing the website stored on the machine was now within reach.

The attackers just needed a little bit more information: they needed a regular, non-root user account to log in with, because as a standard security procedure, direct ssh access with the root account is disabled. Armed with the two pieces of knowledge above, and with Greg's e-mail account in their control, the social engineers set about their task. The e-mail correspondence tells the whole story:

From: Greg
To: Jussi
Subject: need to ssh into rootkit

im in europe and need to ssh into the server. can you drop open up firewall and allow ssh through port 59022 or something vague?

and is our root password still 88j4bb3rw0cky88 or did we change to 88Scr3am3r88 ?

thanks

-------------------------------------

From: Jussi
To: Greg
Subject: Re: need to ssh into rootkit

hi, do you have public ip? or should i just drop fw?

and it is w0cky - tho no remote root access allowed

-------------------------------------

From: Greg
To: Jussi
Subject: Re: need to ssh into rootkit

no i dont have the public ip with me at the moment because im ready for a small meeting and im in a rush.

if anything just reset my password to changeme123 and give me public ip and ill ssh in and reset my pw.

From: Jussi
To: Greg
Subject: Re: need to ssh into rootkit

ok, it should now accept from anywhere to 47152 as ssh. i am doing testing so that it works for sure.

your password is changeme123

i am online so just shoot me if you need something.

in europe, but not in finland? :-)

_jussi

-------------------------------------

From: Greg
To: Jussi
Subject: Re: need to ssh into rootkit

if i can squeeze out time maybe we can catch up.. ill be in germany for a little bit. anyway I can't ssh into rootkit. you sure the ips still 65.74.181.141?

thanks

-------------------------------------

From: Jussi
To: Greg
Subject: Re: need to ssh into rootkit

does it work now?

From: Greg
To: Jussi
Subject: Re: need to ssh into rootkit

yes jussi thanks

did you reset the user greg or?

From: Jussi
To: Greg
Subject: Re: need to ssh into rootkit

nope. your account is named as hoglund

-------------------------------------

From: Greg
To: Jussi
Subject: Re: need to ssh into rootkit

yup im logged in thanks ill email you in a few, im backed up

thanks

Thanks indeed. To be fair to Jussi, the fake Greg appeared to know the root password and, well, the e-mails were coming from Greg's own e-mail address. But over the course of a few e-mails it was clear that "Greg" had forgotten both his username and his password. And Jussi handed them to him on a platter.

Later on, Jussi did appear to notice something was up:

From: Jussi
To: Greg
Subject: Re: need to ssh into rootkit

did you open something running on high port?

As with the HBGary machine, this could have been avoided if keys had been used instead of passwords. But they weren't. Rootkit.com was now compromised.
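What would key-based authentication have looked like? As a rough sketch, and assuming a stock OpenSSH setup (the file path and hostname below are illustrative, not taken from HBGary's servers), an administrator generates a key pair on the client, installs the public key on the server, and then switches off password logins entirely:

    # on the client: create a key pair and install the public key on the server
    ssh-keygen -t rsa -b 4096
    ssh-copy-id hoglund@rootkit.example.com

    # on the server, in /etc/ssh/sshd_config:
    PasswordAuthentication no
    PermitRootLogin no

With those two server-side directives in place, a reset password handed out over e-mail is useless to an attacker who does not also hold the private key.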

Standard practice

Once the username and password were known, defacing the site was easy. Log in as Greg, switch to root, and deface away! The attackers went one better than this, however: they dumped the user database for rootkit.com, listing the e-mail addresses and password hashes for everyone who'd ever registered on the site. And, as with the hbgaryfederal.com CMS system, the passwords were hashed with a single naive use of MD5, meaning that once again they were susceptible to rainbow table-based password cracking. So the crackable passwords were cracked, too.
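To see why a single, unsalted pass of MD5 is so weak, it helps to contrast it with the salted, iterated hashing that has long been standard advice for password storage. The sketch below is purely illustrative (it is not the rootkit.com code, and the function names are invented); it uses Python's standard hashlib module:

    import hashlib, os, binascii

    # Naive scheme, as described above: one unsalted MD5 pass per password.
    # Identical passwords always hash to identical values, so a precomputed
    # rainbow table of common passwords cracks them wholesale.
    def naive_md5(password: str) -> str:
        return hashlib.md5(password.encode()).hexdigest()

    # A sturdier (still simplified) scheme: a random per-user salt plus a slow,
    # iterated hash. PBKDF2 is used here only to illustrate the idea.
    def salted_pbkdf2(password: str, iterations: int = 100_000):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return binascii.hexlify(salt).decode(), binascii.hexlify(digest).decode()

    print(naive_md5("changeme123"))      # identical for every user who picks this password
    print(salted_pbkdf2("changeme123"))  # different every time, thanks to the salt

Because the salt makes every stored hash unique and the iteration count makes each guess expensive, precomputed tables are useless and brute force becomes far slower.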

So what do we have in total? A Web application with SQL injection flaws and insecure passwords. Passwords that were badly chosen. Passwords that were reused. Servers that allowed password-based authentication. Systems that weren't patched. And an astonishing willingness to hand out credentials over e-mail, even when the person being asked for them should have realized something was up.
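The SQL injection item on that list has an equally well-known remedy: user input should be bound as a query parameter rather than spliced into the SQL string, so the database treats it as data instead of code. A minimal sketch of the difference, using Python's built-in sqlite3 module purely for illustration (this is not the actual CMS code):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, pw_hash TEXT)")

    def find_user_unsafe(username: str):
        # Vulnerable: attacker-controlled input becomes part of the SQL itself.
        query = "SELECT * FROM users WHERE username = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(username: str):
        # Parameterized: the driver binds the value, so a classic payload like
        # admin' OR '1'='1 is treated as a literal (and unlikely) user name.
        return conn.execute(
            "SELECT * FROM users WHERE username = ?", (username,)).fetchall()

    print(find_user_safe("admin' OR '1'='1"))  # [] - no rows returned, no injection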

The thing is, none of this is unusual. Quite the opposite. The Anonymous hack was not exceptional: the hackers used standard, widely known techniques to break into systems, find as much information as possible, and use that information to compromise further systems. They didn't have to, for example, use any non-public vulnerabilities or perform any carefully targeted social engineering. And because of their desire to cause significant public disruption, they did not have to go to any great lengths to hide their activity.

Nonetheless, their attack was highly effective, and it was well-executed. The desire was to cause trouble for HBGary, and that they did. Especially in the social engineering attack against Jussi, they used the right information in the right way to seem credible.

Most frustrating for HBGary must be the knowledge that they know what they did wrong, and they were perfectly aware of best practices; they just didn't actually use them. Everybody knows you don't use easy-to-crack passwords, but some employees did. Everybody knows you don't re-use passwords, but some of them did. Everybody knows that you should patch servers to keep them free of known security flaws, but they didn't.

And HBGary isn't alone. Analysis of the passwords leaked from rootkit.com and Gawker shows that password re-use is extremely widespread, with something like 30 percent of users re-using their passwords. HBGary won't be the last site to suffer from SQL injection, either, and people will continue to use password authentication for secure systems because it's so much more convenient than key-based authentication.

So there are clearly two lessons to be learned here. The first is that the standard advice is good advice. If all best practices had been followed then none of this would have happened. Even if the SQL injection error was still present, it wouldn't have caused the cascade of failures that followed.

The second lesson, however, is that the standard advice isn't good enough. Even recognized security experts who should know better won't follow it. What hope does that leave for the rest of us?

On November 16, 2009, Greg Hoglund, a cofounder of computer security firm HBGary, sent an e-mail to two colleagues. The message came with an attachment, a Microsoft Word file called AL_QAEDA.doc, which had been further compressed and password protected for safety. Its contents were dangerous.

"I got this word doc linked off a dangler site for Al Qaeda peeps," wrote Hoglund. "I think it has a US govvy payload buried inside. Would be neat to [analyze] it and see what it's about. DONT open it unless in a [virtual machine] obviously... DONT let it FONE HOME unless you want black suits landing on your front acre. :-)"

The attached document, which is in English, begins: "LESSON SIXTEEN: ASSASSINATIONS USING POISONS AND COLD STEEL (UK/BM-154 TRANSLATION)."

It purports to be an Al-Qaeda document on dispatching one's enemies with knives (try "the area directly above the genitals"), with ropes ("Choking... there is no other area besides the neck"), with blunt objects ("Top of the stomach, with the end of the stick."), and with hands ("Poking the fingers into one or both eyes and gouging them.").

But the poison recipes, for ricin and other assorted horrific bioweapons, are the main draw. One, purposefully made from a specific combination of spoiled food, requires "about two spoonfuls of fresh excrement." The document praises the effectiveness of the resulting poison: "During the time of the destroyer, Jamal Abdul Nasser, someone who was being severely tortured in prison (he had no connection with Islam), ate some feces after losing sanity from the severity of the torture. A few hours after he ate the feces, he was found dead."

According to Hoglund, the recipes came with a side dish, a specially crafted piece of malware meant to infect Al-Qaeda computers. Is the US government in the position of deploying the hacker's darkest tools-rootkits, computer viruses, trojan horses, and the like? Of course it is, and Hoglund was well-positioned to know just how common the practice had become. Indeed, he and his company helped to develop these electronic weapons.

Thanks to a cache of HBGary e-mails leaked by the hacker collective Anonymous, we have at least a small glimpse through a dirty window into the process by which tax dollars enter the military-industrial complex and emerge as malware.

Task B

In 2009, HBGary had partnered with the Advanced Information Systems group of defense contractor General Dynamics to work on a project euphemistically known as "Task B." The team had a simple mission: slip a piece of stealth software onto a target laptop without the owner's knowledge.

They focused on ports-a laptop's interfaces to the world around it-including the familiar USB port, the less-common PCMCIA Type II card slot, the smaller ExpressCard slot, WiFi, and Firewire. No laptop would have all of these, but most recent machines would have at least two.

The HBGary engineering team broke this list down into three categories. First came the "direct access" ports that provided "uninhibited electronic direct memory access." PCMCIA, ExpressCard, and Firewire all allowed external devices-say, a custom piece of hardware delivered by a field operative-to interact directly with the laptop with a minimum amount of fuss. The direct memory access provided by the controllers for these ports means that devices in them can write directly to the computer's memory without intervention from the main CPU and with little restriction from the operating system. If you want to overwrite key parts of the operating system to sneak in a bit of your own code, this is the easiest way to go.

The second and third categories, ports that needed "trust relationships" or relied on "buffer overflows," included USB and wireless networking. These required more work to access, especially if one wanted to do so without alerting a user; Windows in particular is notorious for the number of prompts it throws when USB devices are inserted or removed. A cheerful note about "Searching for device driver for NSA_keylogger_rootkit_tango" had to be avoided.

So HBGary wanted to go the direct access route, characterizing it as the "low hanging fruit" with the lowest risk. General Dynamics wanted HBGary to investigate the USB route as well (the ports are more common, but an attack has to trick the operating system into doing its bidding somehow, commonly through a buffer overflow).

The team had two spy movie scenarios in which its work might be used, scenarios drafted to help the team think through its approach:

1) Man leaves laptop locked while quickly going to the bathroom. A device can then be inserted and then removed without touching the laptop itself except at the target port. (i.e. one can't touch the mouse, keyboard, insert a CD, etc.)

2) Woman shuts down her laptop and goes home. One then can insert a device into the target port and assume she will not see it when she returns the next day. One can then remove the device at a later time after she boots up the machine.

Why would the unnamed client for Task B-which a later e-mail makes clear was for a government agency-want such a tool? Imagine you want access to the computer network used in a foreign government ministry, or in a nuclear lab. Such a facility can be tough to crack over the Internet; indeed, the most secure facilities would have no such external access. And getting an agent inside the facility to work mischief is very risky-if it's even possible at all.

But say a scientist from the facility uses a memory stick to carry data home at night, and that he plugs the memory stick into his laptop on occasion. You can now get a piece of custom spyware into the facility by putting a copy on the memory stick-if you can first get access to the laptop. So you tail the scientist and follow him from his home one day to a local coffee shop. He steps away to order another drink, to go to the bathroom, or to talk on his cell phone, and the tail walks past his table and sticks an all-but-undetectable bit of hardware in his laptop's ExpressCard slot. Suddenly, you have a vector that points all the way from a local coffee shop to the interior of a secure government facility.

The software exploit code actually delivered onto the laptop was not HBGary's concern; it needed only to provide a route through the computer's front door. But it had some constraints. First, the laptop owner should still be able to use the port so as not to draw attention to the inserted hardware. This is quite obviously tricky, but one could imagine a tiny ExpressCard device that slid down into the slot but could in turn accept another ExpressCard device on its exterior-facing side. This sort of parallel plugging might well go unnoticed by a user with no reason to suspect it.

HBGary's computer infiltration code then had to avoid the computer's own electronic defenses. The code should "not be detectable" by virus scanners or operating system port scans, and it should clean up after itself to eliminate all traces of entry.

Greg Hoglund was confident that he could deliver at least two laptop-access techniques in less than a kilobyte of memory each. As the author of books like Exploiting Software: How to Break Code, Rootkits: Subverting the Windows Kernel, and Exploiting Online Games: Cheating Massively Distributed Systems, he knew his way around the deepest recesses of Windows in particular.

Hoglund's special interest was in all-but-undetectable computer "rootkits," programs that provide privileged access to a computer's innermost workings while cloaking themselves even from standard operating system functions. A good rootkit can be almost impossible to remove from a running machine-if you could even find it in the first place.

Just a demo

Some of this work was clearly for demonstration purposes, and much of it was probably never deployed in the field. For instance, HBGary began $50,000 of work for General Dynamics on "Task C" in June 2009, creating a piece of malware that infiltrated Windows machines running Microsoft Outlook.

The target user would preview a specially crafted e-mail message in Outlook that took advantage of an Outlook preview pane vulnerability to execute a bit of code in the background. This code would install a kernel driver, one operating at the lowest and most trusted level of the operating system, that could send traffic over the computer's serial port. (The point of this exercise was never spelled out, though the use of serial ports rather than network ports suggests that cutting-edge desktop PCs were not the target.) Once installed, the malware could execute external commands, such as sending specific files over the serial port, deleting files on the machine, or causing the infamous Windows "blue screen of death." In addition, the code should be able to pop open the computer's CD tray and blink the lights on its attached keyboards-another reminder that Task C was, at this stage, merely for a demo.

General Dynamics would presumably try to interest customers in the product, but it's not clear from the e-mails at HBGary whether this was ever successful. Even with unique access to the innermost workings of a security firm, much remains opaque; the real conversations took place face-to-face or on secure phone lines, not through e-mail, so the glimpses we have here are fragmentary at best. This care taken to avoid sending sensitive information via unencrypted e-mail stands in stark contrast with the careless approach to security that enabled the hacks in the first place.

But that doesn't mean specific information is hard to come by-such as the fact that rootkits can be purchased for $60,000.

Step right up!

Other tools were in use and were sought out by government agencies. An internal HBGary e-mail from early 2010 asks, "What are the license costs for HBGary rk [rootkit] platform if they want to use it on guardian for afisr [Air Force Intelligence, Surveillance, and Reconnaissance]?"

The reply indicates that HBGary has several tools on offer. "Are you asking about the rootkit for XP (kernel driver that hides in plain sight and is a keylogger that exfiltrates data) or are you asking about 12 Monkeys? We've sold licenses of the 1st one for $60k. We haven't set a price on 12 Monkeys, but can."

The company had been developing rootkits for years. Indeed, it had even developed a private Microsoft Word document outlining its basic rootkit features, features which customers could have (confirming the e-mail listed above) for $60,000.

That money bought you the rootkit source code, which was undetectable by most rootkit scanners or firewall products when it was tested against them in 2008. Only one product from Trend Micro noticed the rootkit installation, and even that alert was probably not enough to warn a user. As the HBGary rootkit document notes, "This was a low level alert. TrendMicro assaults the user with so many of these alerts in every day use, therefore most users will quickly learn to ignore or even turn off such alerts."

When installed in a target machine, the rootkit could record every keystroke that a user typed, linking it up to a Web browser history. This made it easy to see usernames, passwords, and other data being entered into websites; all of this information could be silently "exfiltrated" right through even the pickiest personal firewall.

But if a target watched its outgoing traffic and noted repeated contacts with, say, a US Air Force server, suspicions might be aroused. The rootkit could therefore connect instead to a "dead drop"-a totally anonymous server with no apparent connection to the agency using the rootkit-where the target's keyboard activity could be retrieved at a later time.

But by 2009, the existing generic HBGary rootkit package was a bit long in the tooth. Hoglund, the rootkit expert, apparently had much bigger plans for a next-gen product called "12 monkeys."

12 Monkeys

The 12 Monkeys rootkit was also a contract paid out by General Dynamics; as one HBGary e-mail noted, the development work could interfere with Task B, but "if we succeed, we stand to make a great deal of profit on this."

On April 14, 2009, Hoglund outlined his plans for the new super-rootkit for Windows XP, which was "unique in that the rootkit is not associated with any identifiable or enumerable object. This rootkit has no file, named data structure, device driver, process, thread, or module associated with it."

How could Hoglund make such a claim? Security tools generally work by scanning a computer for particular objects-pieces of data that the operating system uses to keep track of processes, threads, network connections, and so on. 12 Monkeys simply had nothing to find. "Since no object is associated with the objectless rootkit, detection will be very difficult for a security scanner," he wrote. In addition, the rootkit would encrypt itself to cloak itself further, and hop around in the computer's memory to make it even harder to find.

As for getting the data off a target machine and back to the rootkit's buyer, Hoglund had a clever idea: he disguised the outgoing traffic by sending it only when other outbound Web traffic was being sent. Whenever a user sat down at a compromised machine and started surfing the Web, their machine would slip in some extra outgoing data "disguised as ad-clicks" that would contain a log of all their keystrokes.

While the basic rootkit went for $60,000, HBGary hoped to sell 12 Monkeys for much more: "around $240k."

0-day

The goal of this sort of work is always to create something undetectable, and there's no better way to be undetectable than by taking advantage of a security hole that no one else has ever found. Once vulnerabilities are disclosed, vendors like Microsoft race to patch them, and they increasingly push those patches to customers via the Internet. Among hackers, then, the most prized exploits are "0-day" exploits-exploits for holes for which no patch yet exists.

HBGary kept a stockpile of 0-day exploits. A slide from one of the company's internal presentations showed that the company had 0-day exploits for which no patch yet existed-but these 0-day exploits had not yet even been published. No one knew about them.

The company had exploits "on the shelf" for Windows 2000, Flash, Java, and more; because they were 0-day attacks, any computer around the world running these pieces of software could be infiltrated.

One of the unpublished Windows 2000 exploits, for instance, can deliver a "payload" of any size onto the target machine using a heap exploit. "The payload has virtually no restrictions" on what it can do, a document notes, because the exploit secures SYSTEM level access to the operating system, "the highest user-mode operating system defined level" available.

These exploits were sold to customers. One email, with the subject "Juicy Fruit," contains the following list of software:

VMware ESX and ESXi *
Win2K3 Terminal Services
Win2K3 MSRPC
Solaris 10 RPC
Adobe Flash *
Sun Java *
Win2k Professional & Server
XRK Rootkit and Keylogger *
Rootkit 2009 *

The e-mail talks only about "tools," not about 0-day exploits, though that appears to be what was at issue; the list of software here matches HBGary's own list of its 0-day exploits. And the asterisk beside some of the names "means the tool has been sold to another customer on a non-exclusive basis and can be sold again."

References to Juicy Fruit abound in the leaked e-mails. My colleague Peter Bright and I have spent days poring through the tens of thousands of messages; we believe that "Juicy Fruit" is a generic name for a usable 0-day exploit, and that interest in this Juicy Fruit was high.

"[Name] is interested in the Juicy Fruit you told him about yesterday," one e-mail reads. "Next step is I need to give [name] a write up describing it." That writeup includes the target software, the level of access gained, the max payload size, and "what does the victim see or experience."

Aaron Barr, who in late 2009 was brought on board to launch the separate company HBGary Federal (and who provoked this entire incident by trying to unmask Anonymous), wrote in one e-mail, "We need to provide info on 12 monkeys and related JF [Juicy Fruit] asap," apparently in reference to exploits that could be used to infect a system with 12 Monkeys.

HBGary also provided some Juicy Fruit to Xetron, a unit of the massive defense contractor Northrop Grumman that specialized in, among other things, "computer assault." Barr wanted to "provide Xetron with some JF code to be used for demonstrations to their end customers," one e-mail noted. "Those demonstrations could lead to JF sales or ongoing services work. There is significant revenue potential doing testing of JF code acquired elsewhere or adding features for mission specific uses."

As the deal was being worked out, HBGary worked up an agreement to "provide object code and source code for this specific Juicy Fruit" to Xetron, though they could not sell the code without paying HBGary. The code included with this agreement was an "Adobe Macromedia Flash Player Remote Access Tool," the "HBGary Rootkit Keylogger Platform," and a "Software Integration Toolkit Module."

The question of who might be interested in these tools largely remains an unknown-though Barr did request information on HBGary's Juicy Fruit just after asking for contacts at SOCOM, the US Special Operations Command.

But HBGary Federal had ideas that went far beyond government rootkits and encompassed all facets of information warfare. Including, naturally, cartoons. And Second Life.

Psyops

In mid-2010, HBGary Federal put together a PSYOP (psychological operations) proposal for SOCOM, which had issued a general call for new tools and techniques. In the document, the new HBGary Federal team talked up their past experience as creators of "multiple products briefed to POTUS [President of the United States], the NSC [National Security Council], and Congressional Intelligence committees, as well as senior intelligence and military leaders."

The document focused on cartoons and the Second Life virtual world. "HBGary personnel have experience creating political cartoons that leverage current events to seize the target audience's attention and propagate the desired messages and themes," said the document, noting that security-cleared cartoonists and 3D modelers had already been lined up to do the work if the government wanted some help.

The cartooning process "starts with gathering customer requirements such as the target audience, high level messages and themes, intended publication mediums... Through brainstorming sessions, we develop concept ideas. Approved concepts are rough sketched in pencil. Approved sketches are developed into a detailed, color end product that is suitable for publishing in a variety of mediums."

A sample cartoon, of Iranian President Ahmadinejad manipulating a puppet Ayatollah, was helpfully included.

The document then went on to explain how the US government could use a virtual world such as Second Life to propagate specific messages. HBGary could localize the Second Life client, translating its menu options and keyboard shortcuts into local dialects, and this localized client could report "valuable usage metrics, enabling detailed measures of effects." If you want to know whether your message is getting out, just look at the statistics of how many people play the game and for how long.

As for the messages themselves, those would appear within the Second Life world. "HBGary can develop an in-world advertising company, securing small plots of virtual land in attractive locations, which can be used to promote themes using billboards, autonomous virtual robots, audio, video, and 3D presentations," said the document.

They could even make a little money while they're at it, by creating "original marketable products to generate self-sustaining revenue within the virtual space as well as promote targeted messaging."

We found no evidence that SOCOM adopted the proposal.

But HBGary Federal's real interest had become social media like Facebook and Twitter-and how they could be used to explore and then penetrate secretive networks. And that was exactly what the Air Force wanted to do.

Fake Facebook friends

In June 2010, the government was expressing real interest in social networks. The Air Force issued a public request for "persona management software," which might sound boring until you realize that the government essentially wanted the ability to have one agent run multiple social media accounts at once.

It wanted 50 software licenses, each of which could support 10 personas, "replete with background, history, supporting details, and cyber presences that are technically, culturally and geographically consistent."

The software would allow these 50 cyberwarriors to peer at their monitors all day and manipulate these 10 accounts easily, all "without fear of being discovered by sophisticated adversaries." The personas would appear to come from all over the world, the better to infiltrate jihadist websites and social networks, or perhaps to show up on Facebook groups and influence public opinion in pro-US directions.

As the cyberwarriors worked away controlling their 10 personas, their computers would helpfully provide "real-time local information" so that they could play their roles convincingly.