Bijective Programming

In 1987 a friend of mine asked for some help with his (Turbo) Pascal home assignment. He had to write a program that would convert Roman numerals to Arabic. A classical exercise. He had written some 200 LOC. He noted that gracefully handling wrong input necessitated a lot of if statements and cleanup code. It made the code bloated and hard to read. He was still unsure whether his code would catch all possible wrong input (it did not).

I explained to him that the inverse mapping (converting from Arabic to Roman) was straightforward. So we wrote some 20 LOC that would, for any Arabic integer input, generate the correct Roman string. Then we refactored his code until about 30 LOC were left. It would produce the correct Arabic integer for any well-formed Roman string, and some integer value for any other input. Then we wrote some 10 LOC that would take a string, convert it to a number and back to a string, and compare the result to the original input. If they were the same, it would proclaim the answer; otherwise it would print something like this:
I'm sorry Dave, 'IM' isn't Roman, did you mean 999 (CMXCIX)?
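The original was Turbo Pascal, but the idea translates directly. Here is a minimal sketch in Python; the function names (`to_roman`, `from_roman`, `convert`) and the 1–3999 range are my assumptions, not the original code. The trusted direction is a greedy table walk; the reverse direction is deliberately lenient and returns some integer for any string; the round-trip comparison does all the validation.

```python
# Arabic -> Roman: the trusted direction, a simple greedy table walk.
VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Generate the canonical Roman numeral for 1 <= n <= 3999."""
    out = []
    for value, symbol in VALUES:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

# Roman -> Arabic: deliberately lenient. It returns *some* integer for
# any input string; the result is only guaranteed correct for
# well-formed Roman numerals.
DIGIT = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def from_roman(s: str) -> int:
    total = 0
    for i, ch in enumerate(s):
        v = DIGIT.get(ch, 0)  # unknown characters count as 0
        # Subtract when a smaller digit precedes a larger one (e.g. IX).
        if i + 1 < len(s) and DIGIT.get(s[i + 1], 0) > v:
            total -= v
        else:
            total += v
    return total

def convert(s: str) -> str:
    """Round-trip check: s is Roman iff to_roman(from_roman(s)) == s."""
    n = from_roman(s)
    if 1 <= n <= 3999 and to_roman(n) == s:
        return str(n)
    suggestion = to_roman(n) if 1 <= n <= 3999 else "?"
    return f"I'm sorry Dave, '{s}' isn't Roman, did you mean {n} ({suggestion})?"
```

The lenient parser maps the malformed 'IM' to 999, but `to_roman(999)` yields 'CMXCIX', so the round trip fails and the program can both reject the input and suggest what was probably meant, with no explicit validation code anywhere.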
His solution baffled his classmates, yet he got one of the lowest marks, because the (math) teacher was thoroughly confused by the Arabic-to-Roman part. Plus, we did not know whether his name was Dave. The teacher failed to see that bijective programming, as usual, resulted in very clean and robust code, with output that bordered on AI.