Preventing social network worms

I woke up yesterday morning and had a sudden flash of inspiration on how to stop all social network worms. I dunno why, I wasn't even researching them; I've no idea how my mind works, it's funny like that. Anyway, sometimes I have bad ideas and sometimes they're good. I like to discuss them all, because that's what an idea is: something to discuss.

The concept, hasta la vista wormy

So the concept works like this: you have a social network crawler (client- or server-side) that acts as a normal user; it could even masquerade as an existing friend. This crawler continuously visits different accounts with the goal of being exploited. Once the crawler is infected with some XSS code, it proceeds to follow whatever method the attacker uses to propagate, but instead of posting an update, adding a friend or updating its profile, it logs the results and freezes the originating account.
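Sketched in Python, the core loop might look something like this; the crawler object and its `visit`/`pop_intercepted_action` methods, plus `freeze_account`, are hypothetical stand-ins for whatever your platform provides:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("terminator")

def crawl_forever(crawler, accounts, freeze_account):
    """Visit accounts as a fake user; any propagation attempt means infection."""
    while True:
        for account in accounts:
            crawler.visit(account)                      # render the page like a real user would
            attempt = crawler.pop_intercepted_action()  # the update/friend-add the worm tried
            if attempt is not None:
                # Infected: log the evidence and freeze the account whose page carried the payload.
                log.info("worm on %s tried to %s", account, attempt)
                freeze_account(account)
        time.sleep(60)  # crawl interval, tune to your requirements
```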

The special crawler account must look and act like a normal user, but normal user functionality is replaced by logging; think of it as a robot administrator. I like to call them Terminators. Depending on the type of social network, you could either prevent all new updates from happening until the flaw is closed, or simply crawl the history of that user until everything has been enumerated, and repeat.
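A hypothetical Terminator account could be little more than a user object whose write methods log and raise the alarm instead of touching the site; all names here are illustrative:

```python
class TerminatorAccount:
    """Looks like a normal user account, but every write is intercepted.

    The terminator itself never posts, friends or edits anything, so any
    call to these methods means injected code is driving the account.
    """

    def __init__(self, account_id, on_infection):
        self.account_id = account_id
        self.on_infection = on_infection  # callback: (account_id, operation)

    def _intercept(self, operation):
        # Log instead of performing the operation, then raise the alarm.
        self.on_infection(self.account_id, operation)

    def post_update(self, text):
        self._intercept({"op": "post_update", "payload": text})

    def add_friend(self, friend_id):
        self._intercept({"op": "add_friend", "payload": friend_id})

    def update_profile(self, fields):
        self._intercept({"op": "update_profile", "payload": fields})
```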

Crawling intervals could be changed depending on your requirements, and you could specifically target accounts to crawl: for example, “user a” is posting 3 updates every minute, or “user b”'s friend count is increasing every x minutes, etc. You get the idea.
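As a sketch, that targeting could be a simple scoring function; the thresholds below are made up purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class UserStats:
    name: str
    updates_per_minute: float
    friend_growth_per_hour: float

def crawl_priority(user: UserStats) -> float:
    """Score accounts for crawling; higher means visit sooner."""
    score = 0.0
    if user.updates_per_minute >= 3:      # e.g. "user a" posting 3 updates a minute
        score += 10
    if user.friend_growth_per_hour > 0:   # e.g. "user b" steadily gaining friends
        score += 5
    return score

# Crawl the most suspicious accounts first.
users = [UserStats("user a", 3.2, 0), UserStats("user b", 0.1, 12)]
targets = sorted(users, key=crawl_priority, reverse=True)
```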

Once the terminator encounters an infected account it is effectively mission accomplished, as the site has an XSS worm, but importantly you now have some vital information: you know which page the infection occurred on, which account it originated from, and the method used to infect. I'd disable the site functionality at this point and put up a maintenance message or something. You could continue crawling and try to find more infected accounts, but you are always fighting against the time your crawler(s) take to enumerate all accounts.
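When detection fires, the handler only needs to capture those three facts and flip the site into maintenance mode; a minimal sketch, with `store` standing in for whatever persistence mechanism your site uses:

```python
import json
import time

def handle_infection(page_url, origin_account, method, store):
    """Record the vital information and lock the site down."""
    incident = {
        "detected_at": time.time(),
        "page": page_url,            # which page the infection occurred on
        "origin": origin_account,    # which account it originated from
        "method": method,            # e.g. "profile update", "status post"
    }
    store.setdefault("incidents", []).append(incident)
    store["maintenance_mode"] = True  # serve a maintenance message site-wide
    print(json.dumps(incident))

# Example:
site_state = {}
handle_infection("/profile/view?id=1234", "user_5678", "profile update", site_state)
```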

You may have noticed that I said “masquerade as an existing friend”. This is because I've already thought of a way to bypass detection: worm code could check whether the crawler is a valid account or not. You need to make sure that these crawler accounts appear real in every way.

To create a crawler server side you'd need a server-side JS parser and browser environment; depending on the complexity of your site, you might find it easier to create a network of VMs running Selenium and do everything client side. Designing a crawler user account would also have to be done carefully: each operation needs to be intercepted and logged, the account itself shouldn't be able to authenticate, and even if compromised it shouldn't allow any operation other than freezing the account it was compromised from.
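A client-side variant using Selenium could be as small as this; the profile URLs and the `check_intercepted_ops` helper are assumptions, standing in for however your server reports the writes it blocked:

```python
from selenium import webdriver

def visit_as_terminator(profile_urls, check_intercepted_ops):
    """Render each profile in a real browser so any worm payload executes.

    The terminator's session has no write privileges server-side, so any
    write the payload attempts is intercepted and logged, not performed.
    """
    driver = webdriver.Firefox()  # one VM per browser/version you care about
    try:
        for url in profile_urls:
            driver.get(url)  # worm code, if present, runs here
            for op in check_intercepted_ops():  # ask the server what was attempted
                print("infection attempt:", op)
    finally:
        driver.quit()
```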

I HEREBY PLACE THIS IDEA IN THE PUBLIC DOMAIN

3 Responses to “Preventing social network worms”

  1. Andy B writes:

    Your premise relies on the “honey pot” user knowing it has been infected…not so easily done…

  2. Gareth Heyes writes:

    @Andy B

    That's the whole point of the idea: the honey pot user knows it's infected because it should never update its profile or perform any normal operation. Once it attempts to perform one of those operations, the server knows it has been infected, and the originating user is frozen.

  3. flow r3direction writes:

    Some XSS worms spread by browser-dependent XSS tricks (-moz-binding, behavior, etc.); you'd need a machine that sits around and logs in to many users with as many browsers, which, while possible, is expensive and time consuming.
    This idea might catch on at first, but much like the “alert” in facebook, once the truth comes out the game is on (check the user moves the mouse, onfocus overwrites and so forth).