Hacking banks and identities is big business. An estimated 17.6 million Americans were victims of identity theft in 2014, mostly through breached bank accounts and credit cards. At this point, hackers attacking a bank are probably not looking for biometric data. But even if that data leaks only as a by-product of a financial breach, criminals will find ways to abuse it or resell it for further exploitation. And biometric data is more sensitive than the other personal information banks store on behalf of their customers because, unlike a credit card number (or even a name!), it cannot be replaced once stolen: it corresponds permanently to a person's face or fingerprints.
In general, financial institutions tend to invest more in security when they are mandated to do so, and even then their efforts focus mostly on minimizing their own financial losses. For example, when credit card data is stolen, other personal data belonging to the customer may be compromised as well, but card issuers do nothing specific to address that. It can take customers months to resolve the issues that follow from identity theft; if the compromised data happens to be biometric, those issues may simply be unresolvable. Any regulation of banks' use of biometrics should therefore be designed to impose enough financial loss on the banks to incentivize them to build systems that effectively safeguard biometric data.
It is not enough for banks simply to avoid storing images of fingerprints, faces or irises. The mathematical representations derived from those images (what banks call "templates") can also be abused if they are accessed together with the algorithm used to extract them from the original images. One exception is when banks use Apple's Touch ID on the iPhone, which stores all fingerprint data locally on the phone in encrypted form. In that case, banks rely on Apple to tell them whether a fingerprint matches and never access any biometric data themselves.