Five security lessons to learn from the Twitter worm

On 14 August 2010, Japanese developer Masato Kinugawa reported a scripting vulnerability in Twitter’s automatic URL-linking functionality. Twitter reportedly fixed the problem, but an upgrade to the service in September appears to have reintroduced it.
Comprehensive test coverage is one means developers use to ensure code quality. In test-driven development, programmers create tests before there’s even any code to test, then write code to satisfy the requirements of those tests. Less extreme approaches to test coverage are probably more common in practice, and are sometimes a better fit for a particular development project. For a major project like maintaining the Twitter codebase, though, at least some form of test coverage is important, including regression testing and other checks for the re-emergence of previously fixed errors in new code.
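As a rough sketch of what such a regression test can look like in Python (the linkify function below is an invented stand-in, not Twitter’s actual code), the idea is to pin down the previously fixed behavior so that any change that reintroduces the bug fails the test suite:

    import html
    import re
    import unittest

    URL_PATTERN = re.compile(r"https?://\S+")

    def linkify(text):
        """Stand-in auto-linker: wrap each URL in an anchor tag.

        Escaping the URL before placing it in the href attribute is the
        fix this regression test is meant to keep from disappearing.
        """
        def to_anchor(match):
            safe = html.escape(match.group(0), quote=True)
            return '<a href="{0}">{0}</a>'.format(safe)
        return URL_PATTERN.sub(to_anchor, text)

    class LinkifyRegressionTest(unittest.TestCase):
        def test_onmouseover_payload_is_neutralized(self):
            # Simplified payload in the spirit of the one used against Twitter.
            payload = 'http://example.com/@"onmouseover="alert(1)"/'
            rendered = linkify(payload)
            self.assertNotIn('onmouseover="', rendered)

    if __name__ == "__main__":
        unittest.main()

Run as part of every build, a test like this turns the reintroduction of an old bug into a build failure rather than a public incident.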
Perhaps even more important for protecting against the reintroduction of old bugs is proper use of a version control system, because poor version control practice is a particularly common way for old errors to reappear. For those who are not familiar with the concept, version control software is like a highly optimized means of keeping backups during software development: it can help back out of changes that turned out to be a bad idea, keep all developers on a project up to date, merge changes between separate development branches, and manage patches. If new code is improperly merged with old, allowed to fall out of sync and then committed in a way that clobbers previous fixes, or otherwise mismanaged in version control, previously fixed bugs can all too easily be reintroduced along with new code. The same is true if version control is simply not used at all and human beings have to handle those tasks by hand.
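As a rough illustration of the discipline involved (branch, file, and commit names here are invented), a fix developed on its own branch and merged cleanly leaves a trail that can be checked before the next release:

    # Develop the fix on its own branch.
    git checkout -b fix-url-escaping
    git commit -am "Escape quotes in auto-linked URLs"

    # Merge the fix into the main line of development.
    git checkout master
    git merge fix-url-escaping

    # Later, before shipping changes to the same area, confirm the fix
    # is still present in the history of the file it touched.
    git log --oneline -- lib/autolink.rb

Manually copying files between machines, by contrast, leaves no such trail and gives no warning when a fix silently vanishes in a bad merge.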

Kinugawa noticed that the same bug had been reintroduced, and created a proof-of-concept exploit for it that caused “tweets”, or Twitter messages, to appear in a rainbow of colors. At the time, he did not think it was a serious vulnerability, but once the scripting flaw became known to others, they began playing with the concept as well. Soon, tweets were retweeting themselves automatically any time someone moused over the embedded link, and a simple weird trick had become a worm. Before long, the exploit had evolved to the point where it effectively turned the entire Web page in the browser into a mouseover-sensitive area, so that simply having the mouse pointer inside the browser window would cause it to retweet the exploit.
Because this directly targeted the way the Twitter site itself handled URLs, it apparently did not affect RSS feeds, third-party clients, or other means of reading and sending tweets outside of twitter.com’s standard Web interface.
A number of important lessons should be taken from this chain of events:
  1. Sanitize all input, and always prefer sanitizing methods that are already tested and proven effective, all else being equal (see the sketch after this list).
  2. Double-check your output to make sure it does not affect the end user in surprising ways, such as the mouseover effects in Web browser clients.
  3. Use version control when developing software to help protect against errors creeping into code through source mismanagement.
  4. Use automated testing suites to protect against regressions and other errors that might otherwise slip by your developers.
  5. Do not underestimate the effect of a given vulnerability when it falls into the hands of someone with a more devious mind than yours.
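To make the first lesson concrete, here is a minimal Python sketch of the “use a proven method” approach. The render_tweet function and its markup are invented for illustration; the escaping is done by the standard library’s html.escape rather than hand-rolled string replacement:

    import html

    def render_tweet(author, text):
        """Render untrusted tweet content into an HTML snippet.

        Both values are treated as untrusted input and passed through
        html.escape, which handles <, >, &, and (with quote=True) both
        quote characters, so an injected onmouseover attribute cannot
        break out of the surrounding markup.
        """
        safe_author = html.escape(author, quote=True)
        safe_text = html.escape(text, quote=True)
        return '<p class="tweet"><b>{0}</b>: {1}</p>'.format(safe_author, safe_text)

    # An attempt to smuggle in a mouseover handler comes out inert:
    print(render_tweet('mallory', '"onmouseover="alert(document.cookie)"'))

Escaping at the point of output with a well-tested routine covers the surprising cases that a quick handwritten filter tends to miss.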
It is always a good idea to learn from your own mistakes. It is usually a better idea to learn from others’ mistakes, so you do not have to make them yourself.
