Sliceya CAPTCHA

If you’ve read this blog from the beginning you’ll know I like to write CAPTCHAs. The reason is that it’s a technical challenge to write something that a computer has difficulty reading. I think the Codetcha I wrote a while ago was successful in concept because the code errors would be very difficult for a computer to fix (if random enough). But the Codetcha only catered for technical users.

Enter Sliceya! This new CAPTCHA relies on the fact that humans can identify an image or face more easily than a computer can. The idea is that you assemble slices of a picture in the correct order and type what you think the picture is. To make this hard for a computer to solve, the keywords would have to be random enough and the picture should be different each time.

I’ve done a proof of concept which searches the web and brings back images that match the keywords you’ve sent. The picture is then dynamically sliced and you have to solve the puzzle and enter the correct keyword.
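As a rough illustration of the slicing step (this is hypothetical code, not the actual proof of concept; the pixel grid and strip count are made up for the example), the picture can be cut into vertical strips and served in a random order, with the permutation kept server-side as the answer:

```python
import random

def slice_and_shuffle(pixels, n_slices):
    """Split a pixel grid (a list of rows) into n_slices vertical
    strips and return them in a random order, together with the
    permutation needed to restore the original picture."""
    width = len(pixels[0])
    strip_w = width // n_slices
    strips = [[row[i * strip_w:(i + 1) * strip_w] for row in pixels]
              for i in range(n_slices)]
    order = list(range(n_slices))
    random.shuffle(order)  # order[pos] = original index of the strip now at pos
    return [strips[i] for i in order], order
```

The returned permutation is what the solver has to reconstruct by dragging the strips back into place.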

I would like to thank Ronald for testing and giving me some excellent suggestions.

Here is the proof of concept. Enjoy!
Sliceya!

Update…

I’ve removed the hint and it now accepts a partial keyword. For example, barac will be accepted even though the spelling is incorrect. Example here:
Updated sliceya
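The lenient matching described above could be sketched like this, assuming two rules: accept a guess that appears inside a keyword (so barac matches barack), or one within a single edit of it (so a misspelling like marrge would match marge). Both rules are my assumption about the behaviour, not the actual server code:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lenient_match(guess, keywords):
    """Accept the guess if, ignoring case, it appears inside any
    keyword or is within one edit of it."""
    guess = guess.strip().lower()
    if not guess:
        return False
    return any(guess in kw.lower() or edit_distance(guess, kw.lower()) <= 1
               for kw in keywords)
```

This also makes the check case-insensitive, which one of the comments below raises as a concern.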

8 Responses to “Sliceya CAPTCHA”

  1. Daniel writes:

    Good idea, my only concern would be the bucket of images used. How would you ensure they cater for all ages, nationalities and territories?

  2. Gareth Heyes writes:

    @Daniel

The images are generated from a web search, so they could be tailored depending on which country the user is from, and could even match specific content.

  3. Vinu Thomas writes:

My concern is that the name has to be typed exactly as is, so capitalization, spaces and hyphens count, which filters out humans as well.

  4. Gareth Heyes writes:

    @Vinu Thomas

    Those issues can be fixed quite easily. It already accepts any case and I could allow any word e.g. Homer or Simpson.

  5. DG writes:

    I like it.

Is it not sufficient for just the image to be put ‘together’, without the textual question at the bottom? Image assembly is pretty much unambiguous, while the question isn’t (is the answer ‘[a ]president’, ‘Obama’, [insert misspellings etc]?).

  6. ef writes:

    As much as I applaud your idea from a “what if” approach, every captcha that needs this amount of javascript to work is pure fail.

    Images are already hard for some target audiences, but this is just insane if you’re surfing with noscript or something.
    Plus the image of Marge Simpson’s back wasn’t recognised as “marge” – maybe that was the old version, but I’d have to think for some minutes to come up with “simpson(s)” as the answer to “I think the image looks like a/some”.

  7. Gareth Heyes writes:

    @DG

    The question is incorrect; I didn’t update it. I admit it is confusing.

    @ef

    A non-javascript version could be coded because it’s just swapping image positions. Marge should be accepted now as well as mis-spellings like marrge.

  8. kaes writes:

    it is in fact easier for a computer to align the sliced images than it is for a human. a very simple computer vision algorithm would take the least squares color difference along the adjacent edges over all permutations.

    if you’d up the number of slices, so that enumerating all the permutations becomes computationally infeasible, you could do it with a hillclimbing or simulated annealing algorithm. you’d also have a puzzle that would take a human several hours to solve (so if they give the correct answer too quickly, you know they’re a machine …)
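The brute-force attack kaes describes is easy to demonstrate. The sketch below is my own illustration, with strips represented as lists of grey-value rows: it scores every permutation by the summed squared difference across adjacent strip edges and picks the smoothest one:

```python
from itertools import permutations

def seam_cost(left, right):
    """Sum of squared grey-value differences between the right edge
    of one strip and the left edge of the next."""
    return sum((rl[-1] - rr[0]) ** 2 for rl, rr in zip(left, right))

def best_order(strips):
    """Try every permutation and keep the one with the smoothest
    seams. Feasible only for a handful of strips, as kaes notes."""
    def total(order):
        return sum(seam_cost(strips[a], strips[b])
                   for a, b in zip(order, order[1:]))
    return min(permutations(range(len(strips))), key=total)
```

On a smooth horizontal gradient sliced into four strips, the cheapest permutation is the original left-to-right order, which is exactly why the image puzzle alone can’t stop a machine.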