Eagle Alpha Legal Wrap - May 2023

Eagle Alpha rounds up some of the most relevant legal and compliance articles surrounding the alternative data space over the past month.

By Dallán Ryan

US

Montana became the first US state to pass a full ban on TikTok, citing concerns that the Chinese government could access Americans' data through the popular video-sharing app. The ban, which would not come into effect until January 2024, would make it illegal to download the app in the state, with penalties of up to $10,000 a day for any entity that makes the app available. You can access the full article here.

Senate Majority Leader Chuck Schumer has revealed a new legislative framework to regulate AI. Schumer's proposal includes requiring companies to allow independent experts to review and test AI technologies ahead of public release or update, as well as giving users access to findings. The plan will require approval from Congress and the White House. You can access the full article here.

On the topic of ChatGPT and LLMs, Peter Greene, Partner at Schulte Roth & Zabel, said: "From a securities law perspective, a key for ChatGPT right now is whether the data that ChatGPT 'spits out' is public in the same way that search results on Google are public."

The US Commerce Department has requested public comment on creating accountability measures for AI, seeking help on how to advise US policymakers to approach the technology. The National Institute of Standards and Technology has also published an AI risk management framework, and many federal agencies are looking at the ways current rules may be applied to AI. You can access the full article here.

Alan Davidson, the Head of the National Telecommunications and Information Administration (NTIA), commented on potential AI regulations: “In the same way that financial audits created trust in the accuracy of financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy.”

Washington state has introduced a new health data privacy law, the "My Health My Data Act," which expands protections for residents, including restrictions on the sharing of location data to shelter abortion seekers who might face legal trouble in their home states. The law reflects protections offered by the EU's General Data Protection Regulation and is the first of its kind in the US. You can access the full article here.

FTC Chair Lina Khan pledged that the agency will be vigilant in monitoring the "unfair or deceptive" use of AI as the technology is increasingly used across critical sectors. In an op-ed published in The New York Times, Khan stated that the FTC will monitor the risks of AI to ensure that the hard-learned history of the rise of Web 2.0 does not repeat itself. You can access the full article here.

The Consumer Financial Protection Bureau (CFPB) and the National Labor Relations Board (NLRB) have signed an information-sharing agreement to address practices that harm workers in the "gig economy". Employer surveillance tools are one of the focus points as they can continue to track workers outside of working hours, and the companies that own these tools might sell worker data to financial institutions, insurers, and other employers. These practices may be violating the Fair Credit Reporting Act and other consumer financial protection laws. You can access the full article here.

China

WIND Information, one of China's largest data aggregators, has reportedly blocked offshore users from accessing certain business and economic data, including business registry details such as company shareholding structure and macroeconomic data like land sales in certain cities. The reasons for the blocking of access were not known, but it comes amid China's tightening focus on rules related to data usage and transfers. You can access the full article here.

Peter Greene, Partner at Schulte Roth & Zabel said: "China’s crackdown on the availability of alternative data is worth watching. If the quality and breadth of data sets offered by alternative data vendors deteriorates, fund manager clients may cease to purchase certain data sets, seek to pay less and/or seek to attempt to exit existing contracts."

China's Cyberspace Administration issued a draft of Administrative Measures for Generative Artificial Intelligence Services with proposals covering issues like non-discrimination, bias, and data protection. The document states that the development of generative AI and international cooperation is encouraged while requiring security assessments for new technologies, content moderation, and algorithmic transparency. You can access the full article here.

Europe

European legislators proposed new copyright rules for generative artificial intelligence (AI), requiring companies deploying such tools to disclose any copyrighted material used to develop their systems. Under the proposals, AI tools will be classified according to their perceived risk level, with companies required to be highly transparent in their operations when deploying high-risk tools. Though such tools will not be banned, areas of concern could include biometric surveillance, spreading misinformation, and discriminatory language. You can access the full article here.

The European Hospital and Healthcare Federation (HOPE) called for a health sector-specific approach to AI to ensure that its deployment benefits patients and consumers. It highlighted the need to prevent seemingly "low-risk" AI systems from harming individuals by revealing their identities or drawing conclusions based on biased data. HOPE urged the health community to co-shape AI policies to reflect the diversity of healthcare provision. You can access the full article here.

Meta disclosed that it expects to receive a fine and a suspension order from the Irish data protection authority in relation to its transatlantic data transfer arrangements for Facebook. This comes after the European Data Protection Board issued a binding decision regarding the legality of Meta’s reliance on standard contractual clauses for EU-US data transfers. You can access the full article here.

UK

On April 17th, the UK's Data Protection and Digital Information (No. 2) Bill passed its second reading in the House of Commons. While the Bill seeks to provide a business-friendly regime without causing regulatory disruption, concerns were raised around the significant expansion of the Secretary of State's powers, the replacement of the Information Commissioner's Office, increased scope for surveillance, and AI and automated decision-making. You can access the full article here.

The UK Department for Science, Innovation, and Technology (DSIT) released a white paper proposing a framework that promotes public trust in AI. The approach will create rules proportionate to the risks associated with different sectors’ use of AI and will establish a regulatory sandbox to bring together regulators and innovators. You can access the full article here.

Industry Commentary

Parry Malm, CEO at Phrasee, on AI regulations: “AI absolutely needs to be regulated on an industry-by-industry basis, because there are degrees of consequences we’re dealing with here. If AI generates a bit of content that just isn’t very good, and nobody clicks on it, it’s not that big of a deal. But if you’re giving medical advice and it hallucinates a fact or gets it wrong, that’s high stakes. Given today’s news on the UK government’s careful stance on regulation, I think the EU is going to be the first to bring out AI legislation. In the US, passing any sort of regulations these days is rather difficult. I think that the regulations in the US are going to be de facto regulations from case law.” You can access the full commentary here.

Lori Witzel, Director of Thought Leadership at TIBCO, on generative AI and privacy breaches: “The rapid commercialization and deployment of ChatGPT and similar generative AI continues to raise ethical questions, particularly with regards to data privacy. ChatGPT is trained on vast amounts of data – potentially including personal data. Without comprehensive data privacy laws in place for generative AI, this could lead to serious privacy breaches. It is critical for generative AI providers to equip generative AI tools with guardrails, such as tangible ethics policies and clearly outlined creator and intellectual property rights. And for users of generative AI, it’s equally important to have guardrails and legal review to reduce risk.” You can access the full commentary here.
