A recent Guardian article reminds us that computer crime laws may be applied against cybersecurity researchers who disclose vulnerabilities in modern technology products, services, and systems. So perhaps the issue isn't 'resurfacing' per se; rather, the article reminds us of the current state of cybersecurity laws in America -- and especially how those laws relate to the perpetual 'debate' over cybersecurity research and disclosure.
Whether it's the vaguely worded - and controversial - Computer Fraud and Abuse Act (CFAA)[1] or a creative interpretation of other statutes invoked against cybersecurity disclosures and their related information flows (as the DMCA was several years ago), government and corporate entities apparently still do not understand that independently developed knowledge about cybersecurity vulnerabilities is not easily constrained by legal or technological controls. Ironically, efforts to restrict the flow of such information tend to expand, not reduce, its visibility to the public via the well-known Streisand Effect. Nor do these entities acknowledge the utility of such third-party insight into the risks presented by modern technologies -- many of which would otherwise go undiscovered, unreported, and/or unresolved until after an adverse incident takes place.
While there are risks to disclosing cybersecurity vulnerabilities to the public, a far greater risk is creating an uninformed, "just trust us" mentality in which the perceived reality of cybersecurity conditions is crafted not by objective experts but by corporate and government entities whose motivations may not match those of the broader community they purport to serve. Restricting or threatening those conducting such research represents a fear-based, risk-averse approach to security, not an objective and forward-thinking one. Therefore, updating and modernizing both the text of such laws and the spirit in which they are applied is a major step toward improving American, if not also global, cybersecurity and digital well-being. Attacking the existence (or potential existence) of knowledge on this topic, however, does not achieve that goal.
"Just trust us" is not an effective security mindset, and blind faith alone is not a viable security posture.
[1] For example, see Jennifer Granick's in-depth thoughts about the CFAA's controversial role in the Aaron Swartz case here and here.