I agree with floyd, but I think that whose fault it is depends on the circumstances.
Let me be clearer: I'll take a moment to state what I've learned from my job as a security analyst and from my experience with OSS. Then the off-topic will be finished on my part,
promise.
There is absolutely no way to write bug-free software.
There is absolutely no way to have an active server with no security breach.
The effort to maintain security and privacy must happen on all levels (proactive, passive, active), come from everybody (developers, admins), and be constant and, ideally, flawless. I'm talking about the effort, of course: everyone should do what's possible to bring security to the desired level; security itself can. not. be. flawless.
Now, what's to understand is that people have different needs.
For example, there is absolutely no need for any security if a student wants to study someone's daemon/web code and set up a standalone lab server just to see what it looks like.
Conversely, a very high level of security is required if the same code runs on a server reachable via the Internet that resides in a high-risk network, like a bank's.
This means that the developer must pay attention to security, but since it's not possible to secure software just by coding it right, there is no sense in putting too much effort into that alone either: integrators (the people who deploy the software where it resides, or simply the administrators) are meant to use other means to secure the service to the desired level anyway.
If an OSS developer spends all of his time worrying about security, the results are that the software won't be written, or will be written too slowly, and the security still won't be good enough for high-profile "customers", for the reasons stated above. It's really that simple.
Now, of course this brings us to the responsibility of the integrator to judge whether a piece of software is secure enough, and also the responsibility of the developer to follow at least the basic rules of security, maybe taking some security-related lessons on the language he's writing in.
The integrator will have to check, or have someone check, for public disclosures of the software's vulnerabilities (for example by subscribing to the Bugtraq mailing list); that's the error most people here have made. The developer will have to ask an expert when including code in a language he doesn't really master; that's probably the error the Roundcube developer made (the vulnerable code is in Perl).
There are thousands of things, small and huge, that either the integrator or the developer can do to improve security beyond what I've just mentioned, but once again it depends on the desired security level.
My 2 cents.