Rails is very much on my mind after hearing about the huge hole in the framework. With the right request, you could wipe out data. I can't help but be disappointed that Rails didn't have a security process in place before such a big problem.
Rails 1.1.6, backports, and full disclosure
Read this fantastic comment by planetmcd. I wish I could just link to it on a blog. I've taken some liberty with the formatting, but all the wording is from the commenter.
# planetmcd on 10 Aug 18:46:
DHH et al., First, I hope you get some much deserved rest today. Thanks for the hard work and the disclosure. Let me lay out what I believe to be the source of frustration for many posters over the last day.
With a security breach there are three discrete tasks:
1) plug the hole,
2) assess the damage if there has actually been a breach,
3) take steps to correct the damage.
As a framework developer, all you can do is work on step 1. And assuming things are smoothed out, your job is done, and in quick fashion to boot.
For the people using your framework, steps 2 and 3 are equally if not more important to be handled in a timely fashion. Think compromised bank accounts or credit cards. The sooner clients know about this, the sooner they can protect themselves, and the sooner they will get over their anger.
By issuing a dire warning and then not revealing the problem, developers had no way to judge whether they should shut down their app, do nothing, or put other security measures in place. And they had no means to judge whether the fix actually worked.
I do sympathize that hiding the attack vector, to reduce detection by lower-level crackers while you and the team feverishly worked on a solution, might have been the most logical approach from the framework's standpoint, but it was a tough position for some members of the community. And while you've primarily created a framework, you've also created a community.
Let me also say that I regret that many who disagreed with your decision expressed that disagreement in an immature fashion. How people state a point can diminish the validity of that point, and I hope that is not the case here. Some posters on both sides should really take some time and think about what they say before they hit send. This isn't a black and white issue, and treating it as such reflects poorly on the posters and the community.
Thanks for your effort (in this case and in general), handling the situation with aplomb, and taking proactive measures for future security issues.
My only addition to planetmcd is a possible solution. The biggest concern with disclosing the vulnerability was that big sites built on Rails (Odeo and Second Life's map, off the top of my head) needed that full disclosure. Actually, any Internet-facing Rails site needs this information, but as soon as the public knows about it, anybody can type the right URL into a browser and delete parts of databases.
Stream of consciousness, such as it is
I believe the only solution for this type of disclosure is a fee-based support model. I can't think of any other way to let the good people know while keeping the bad people from knowing. If DHH offered a security support contract that companies could pay for, that quicker, more direct information could mitigate some of the risk of full disclosure.
Of course, it would only take one person to bring that information to the press, and the bad guys could even subscribe to the security support contract themselves.
Create a security support contract that costs $300 to get "trusted" by the core development team. The contract could bind subscribers to huge fines for disclosure, which would deter them from going to the press publicly. Then you would only have to worry about the untrustworthy good guys and the bad guys.
Maybe there is no easy way. Previously, I had thought that all the money transactions could be replaced by trusted GPG keys, but I'm not sure. If only there were a way to encrypt a message that would show who opened it. Naw, you could always circumvent the process by cutting the decrypted information out of the message. Unless there were always a block that decrypted along with the other parts of the message and could identify who had decrypted it:
=====GPG Decrypted Message====
ajdflkj029384oiuweu0293480980 This is a short asymmetrically encrypted hash
Here is the decrypted text.
=====end of Message===========
The blocks (hashes) would be different depending on whose public key opened the message. Really what you would have here is a signed message encrypted within a signed message. The block could originally be something that only the sender can create. When a user strips off the encryption with their key, the block doesn't get decrypted, because it was never encrypted to that user's key. In effect, the block gets changed in a way only that recipient could have changed it. I don't know if there is any way you could use the recipient's public key to determine who decrypted it. Maybe if the sender reused the key with which they encrypted the block, they would get something they could compare against other recipients' public keys.
I think this could almost work, especially if you required recipients to send back a copy of the decrypted text as an acknowledgment. (Not that you can force anybody to do anything, but if you didn't receive their acknowledgment, you could talk to them.)
It's more like a signature within a signature.
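The identifying-block idea above can be approximated without public-key machinery at all. Here is a minimal sketch in Python, under an assumption the original doesn't make: instead of GPG, the sender keeps a secret and stamps each recipient's copy with a per-recipient HMAC over the message. If a copy leaks, recomputing each recipient's stamp reveals whose copy it was. All names here (`watermark_message`, `identify_leaker`) are hypothetical, invented for illustration.

```python
import hmac
import hashlib

BEGIN = "=====GPG Decrypted Message===="
END = "=====end of Message==========="


def watermark_message(plaintext: str, recipient_id: str, sender_secret: bytes) -> str:
    """Build a per-recipient copy: an HMAC block keyed to this recipient,
    followed by the shared plaintext."""
    tag = hmac.new(sender_secret + recipient_id.encode(),
                   plaintext.encode(), hashlib.sha256).hexdigest()
    return f"{BEGIN}\n{tag}\n{plaintext}\n{END}"


def identify_leaker(leaked_tag: str, plaintext: str,
                    recipients: list[str], sender_secret: bytes):
    """Recompute every recipient's block; a match tells you whose copy leaked."""
    for rid in recipients:
        expected = hmac.new(sender_secret + rid.encode(),
                            plaintext.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, leaked_tag):
            return rid
    return None
```

This has exactly the weakness noted above: a leaker can simply cut the block out before forwarding the text, which is why the original idea tries to bind the block cryptographically to the rest of the message rather than just prepending it.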