Hacking and Malware Analysis Have More In Common Than You Think
One might be surprised to find how much hacking and malware analysis have in common. On the surface, they seem like two completely different tasks: hacking is all about identifying flaws in computer systems with the goal of bypassing their security, while malware analysis aims to identify malicious programs and their infrastructure.
Hacking and Malware Analysis Commonality
However, these two tasks share some common DNA: program analysis. In both cases, a program (binary code, scripting code, byte-code) needs to be carefully analyzed in order to understand what it does. This knowledge is then used differently in the two fields: in malware analysis, one uses the information about the (possible) actions of a program to classify it as malicious or benign, while in hacking (or, using a more scientific term, “vulnerability analysis”) one uses this knowledge to find flaws that can be triggered to bring the program into an erroneous state (e.g., by corrupting its memory and forcing it to execute code provided by the attacker).
Another difference between these two worlds is that while there are no malware competitions (at least no competitions that are public!), there are many hacking competitions throughout the year. They are called Capture The Flag (CTF) competitions, and hacking teams participate in them like soccer teams participate in championships and the World Cup (one can find the list of all competitions at ctftime.org). In fact, there is a World Cup of hacking: the DEF CON CTF. This competition happens during the last three days of the DEF CON hacker convention, every year around the end of July. Some teams qualify for this CTF through a grueling qualification round in May, while others qualify by winning other CTF competitions (for example, the one that I have been organizing since 2003, called iCTF, is a qualifying event). As a result, this year 15 of the top hacking teams in the world got together in Las Vegas, sat for three days in a dark room with loud music, and hacked the hell out of each other.
One of these teams is Shellphish, a team I created in 2005, with my students at UCSB to participate in these competitions. Shellphish has since then grown as students graduate but stay connected, new students join, and friends are invited to participate. It’s now a loosely defined (and chaotically organized) group of people who love hacking (the good kind) and perform novel research in this field. In fact, Shellphish participated in the DARPA Cyber Grand Challenge in 2016 by creating an autonomous hacking system, called Mechanical Phish, that fought against six other similar systems in a CTF competition where no humans were involved. Mechanical Phish placed third (but was the most successful at hacking!) and brought home $1.5M in cash prizes ($750K for qualification and $750K for third prize). Now when my students spend late nights working in the lab, they can afford sushi instead of burritos.
Even though the DARPA Cyber Grand Challenge pushed the limits of automated hacking, the competitive hacking scene is still dominated by humans, and, in most cases, the CTF competitions follow a familiar scheme: each team is given a server with a number of vulnerable services. The teams have to find flaws in these services and leverage this information to: (1) develop exploits to compromise the services of other teams, and (2) patch their own services. When a team breaks into an opponent’s service, it has to steal a file (the flag) to demonstrate the effectiveness of the exploit. This flag (which changes constantly) is then redeemed for points.
Every attack/defense CTF follows this scheme (although there are other competitions that are jeopardy-style, with challenges to be solved off-line), but organizers always try to introduce variations to keep the game interesting. This year, the organizers of the DEF CON CTF (the LegitBS hacking group) came up with a real curve ball: a novel CPU architecture, called cLEMENCy. This made everything harder, because every hacker working on binary exploitation is familiar with x86, MIPS, and ARM (at least) and has a set of tools to be used in the vulnerability analysis process. These tools became useless when a new architecture was introduced, as happened in this competition. In addition, the architecture was very unusual: bytes were composed of 9 bits (instead of 8) and words were composed of three bytes in “middle endian” order. “Little endian” and “big endian” are the two ways in which CPUs typically interpret multi-byte values.
Middle endian is a new multi-byte order introduced by this CPU architecture, which each team had to figure out first. This made the competition crazier than usual: the network traffic didn’t make any sense, since the network speaks in 8-bit bytes while the services were talking in 9-bit bytes, and previously effective tools stopped working. Many had to resort to simple debugging sessions and long nights in front of a terminal, but of course, this made the whole hacking thing more fun. At the end of the competition, Shellphish finished 7th out of the 15 participating teams.
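To make the two quirks concrete, here is a minimal Python sketch of what teams had to reimplement from scratch: regrouping an 8-bit octet stream into 9-bit bytes, and packing a 27-bit word as three 9-bit bytes in middle-endian order. The function names are illustrative, the MSB-first bit packing is an assumption for the example, and the specific middle-endian layout (middle byte first, then the most-significant byte, then the least-significant byte) is my reading of the cLEMENCy documentation, not an authoritative spec.

```python
def unpack_9bit_bytes(octets: bytes) -> list[int]:
    """Reinterpret an 8-bit octet stream as 9-bit bytes.

    Assumes MSB-first bit order; leftover bits (fewer than 9) are dropped.
    """
    bits = 0       # bit accumulator
    nbits = 0      # number of valid bits in the accumulator
    out = []
    for octet in octets:
        bits = (bits << 8) | octet
        nbits += 8
        if nbits >= 9:
            nbits -= 9
            out.append((bits >> nbits) & 0x1FF)   # take the top 9 bits
            bits &= (1 << nbits) - 1              # keep the remainder
    return out

def encode_middle_endian(value: int) -> list[int]:
    """Split a 27-bit value into three 9-bit bytes, middle byte first."""
    assert 0 <= value < 2**27
    high = (value >> 18) & 0x1FF   # most-significant 9 bits
    mid = (value >> 9) & 0x1FF     # middle 9 bits
    low = value & 0x1FF            # least-significant 9 bits
    return [mid, high, low]        # "middle endian": middle byte stored first

def decode_middle_endian(tribyte: list[int]) -> int:
    """Reassemble a 27-bit value from its middle-endian 9-bit bytes."""
    mid, high, low = tribyte
    return (high << 18) | (mid << 9) | low

# Round-trip check on an arbitrary 27-bit word.
word = 0x1234567
assert decode_middle_endian(encode_middle_endian(word)) == word
```

Nothing in the standard tooling (disassemblers, debuggers, even `struct`-style packing libraries) assumes 9-bit bytes, which is why teams ended up writing this kind of glue code by hand during the game.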
Participating in these competitions might seem silly, or a waste of precious (sleep-deprived) time, or just a source of trouble, but, instead, these events are educational and foster team building. After three days in a room hacking together, collaborating on projects under incredible time pressure, people become closer and more effective at coordinating. New ideas are also born in these long nights of hacking. Staring at binary code that resists analysis (much like evasive malware) has often inspired the development of novel techniques for the automated extraction of program behaviors and the detection of truly malicious activity. Finally, these competitions are a fantastic recruiting tool: most experts in vulnerability analysis make great malware analysts. It’s no surprise that several people at Lastline are (or were) competitive CTF players!
Giovanni Vigna