AI Can Be as Harmful as It Is Helpful, Depending on How You Use It
Balancing Public Health and Individual Liberty
Despite the substantial benefits the technology promises, deploying AI without safeguards poses risks at every level of business, especially for traditional, non-tech companies. To limit severe financial and reputational harm, companies must weigh the many benefits of AI against the risks intrinsic to its use, as well as the concerns of the broader community. Consider, as one particularly pertinent example, the myriad ways AI has been deployed in response to the global pandemic: from contact tracing to enhanced infection risk profiling, those who develop and use such cutting-edge techniques must carefully balance the dual imperatives of public health and individual liberty.
Given the self-learning and automated nature of AI, a well-known concern associated with the technology is that of “explainability,” especially with public-facing “black box” AI models that make decisions on sensitive or consequential issues such as job recruitment, credit risk assessments and medical diagnoses. A lack of transparency and traceability, particularly when using externally procured applications, exposes businesses to significant reputational harm.
Cybercriminals Exploiting AI
Cyber risk is also a significant threat to companies using AI, especially with the rush toward digitization during the COVID-19 lockdowns. In fact, participants in a poll of more than 12,000 business executives rated cyber risk as the top risk for doing business in the U.S., the U.K., and Canada — among other developed economies — over the next decade. The growing use of AI in critical business operations will only increase vulnerability to cybercrime, as hackers can gain control of entire systems simply by manipulating their underlying algorithms. AI can moreover directly enhance the arsenal of cybercriminals, who can now cause disproportionate levels of harm by leveraging the speed of decision-making enabled by automated programs. Smarter cyber threats, coupled with industry’s growing reliance on digital capabilities, only escalate the risks to operations and revenue streams.
Beyond these technological hazards, businesses that adopt AI solutions also risk reputational harm and revenue erosion if consumer data is used inappropriately or otherwise exposed. Some major tech companies have drawn sharp criticism over the last few years for allegedly misusing sensitive voice data recorded by their AI-powered digital assistants. Given Big Tech’s enduring ability to generate insights from big data and exploit personal profiles in ways that consumers have not anticipated or accepted, such scrutiny will surely persist. This public outcry for data privacy will no doubt extend to non-tech firms in the future.
Finally, due to the emergent nature of this technology, companies may find themselves deploying AI in rapidly evolving regulatory environments, complicating compliance efforts. The global fragmentation of data standards creates additional regulatory discontinuities across jurisdictions. Non-tech firms that are less familiar with international differences in AI-specific legislation may struggle to align their use of AI with shifting regional mandates, thereby necessitating decentralized, and often difficult and costly, policy rollouts.
These are just some of the threats to which businesses expose themselves should they attempt to realize the benefits of AI without implementing effective and holistic governance measures. Given the complexity of the technology and the pervasiveness of its potential dangers across all aspects of operations, a multifaceted and dynamic approach to governance is needed to manage AI risk. It is important that businesses evaluate their use of AI technology across five areas:
- Accountability: Undertake rigorous audit and compliance assurance procedures to allay the concerns of various stakeholders, including legislators, auditors, customers, business partners, and shareholders.