RFG is proud to recognize Melissa Alvisi (Maxwell 2024) as the winner of the Dr. Michael Schneider Professional Writing Award for the first quarter of 2024. Her award-winning piece is titled “The Biden Administration and the First Executive Order on Artificial Intelligence: An Assessment on AI Watermarking Regulations.” Please note that the views expressed in this publication are entirely those of the author and do not reflect the views, policy, or position of the United States Government, the Government Accountability Office, or the U.S. Department of Commerce.

Overview

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023) offers points of discussion and proposed solutions through regulations and the establishment of AI-related task forces, engaging a wide range of government agencies and the entities that collaborate with them.

The tradeoffs and requirements involved in mitigating the issues raised are many, and they make watermarking a more complex technique for regulating AI. The EO is clear in its purpose: to address irresponsible uses of AI by recognizing that, although the technology offers benefits, it may also create risks that must be mitigated appropriately and in a timely manner before damage such as fraud, discrimination, bias, and disinformation takes place. AI policy therefore gains importance as the 2024 election approaches and major political events unfold around the world.

Finally, this piece assesses that (1) watermarking may be used as a tool rather than a solution for tackling disinformation, and that (2) before studying the technique further, the U.S. Department of Commerce and affiliated agencies will need to elaborate on the concept of voluntary cooperation, the terms of collaboration with tech companies, and the cost effects on small and medium-sized enterprises.

Summary

  • With the executive order (EO) establishing safety and security requirements for the use of artificial intelligence (AI), labeling and watermarking are introduced as tools to combat disinformation and enforce security. Although the EO tasks the Department of Commerce with reporting on standards, methods, and practices to authenticate, label, and audit synthetic content and to detect watermarks, the order raises legal issues and policy implications that are explored further in this piece.
  • Watermarking is a technique with the potential to manage disinformation by attaching accountability to synthetic and non-synthetic content produced by the Federal Government (or on its behalf). In the context of the EO, watermarking would verify the authenticity and origin of output created by AI.
  • The EO’s focus on watermarking and its introduction two days before the UK AI Safety Summit at Bletchley Park may have played a crucial role in shaping the USG’s position on AI – highlighting its benefits while acknowledging the importance of risk mitigation, particularly with the 2024 presidential campaign approaching. Watermarking is also a tool already used by tech companies in the US, showing that addressing disinformation is on the agenda for both the private and public sectors. This is a pivotal step toward strengthening public-private cooperation in the United States.
  • Although AI-focused policy development is picking up steam, research on watermarking is still a work in progress, and the standards, tools, and methods under study may not be as reliable as policymakers have forecast. Studies on false positives and false negatives also reveal vulnerabilities in watermarking, creating additional costs and security investments that small and medium-sized companies may not be able to afford.

Background

According to the EO on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Biden Administration defines content as “images, videos, audio clips, and texts modified or generated by algorithms, including by AI,” and watermarking as “the act of embedding information into outputs created by AI for the purposes of verifying the authenticity of the output or the identity of its provenance, modifications, or conveyance.” Watermarking is an important process when it comes to copyright protections and marketing of digital work.
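The EO defines watermarking in functional terms but does not prescribe a technique. As a purely illustrative sketch (not the EO’s mandated method, nor any vendor’s actual scheme), the Python below embeds a keyed payload into the least significant bits of raw pixel bytes and verifies it on extraction; the secret key, payload size, and document identifier are hypothetical, and LSB embedding is far more fragile than production approaches.

```python
import hashlib
import hmac

SECRET_KEY = b"agency-signing-key"  # hypothetical key held by the issuing agency

def embed_watermark(pixels: bytearray, payload: bytes) -> bytearray:
    """Embed payload bits into the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    marked = bytearray(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # overwrite the lowest bit
    return marked

def extract_watermark(pixels: bytearray, n_bytes: int) -> bytes:
    """Read the payload back out of the low-order bits."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

# Keying the payload with an HMAC lets a verifier check provenance rather
# than merely assert it: only a holder of SECRET_KEY can produce this tag.
payload = hmac.new(SECRET_KEY, b"doc-id-1234", hashlib.sha256).digest()[:8]
image = bytearray(range(256)) * 4  # stand-in for real pixel data
marked = embed_watermark(image, payload)
assert extract_watermark(marked, len(payload)) == payload
```

The keyed tag is what connects watermarking to the EO’s language about “verifying the authenticity of the output or the identity of its provenance”: the mark carries not just a flag but attributable information.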

In practical terms, under this executive order, federal agencies producing AI-generated or AI-enabled content will need to watermark media and text to establish the source and authenticity of the product. To achieve this, the EO highlights the need to study existing methods and tools to detect and label such content.

In addition, to establish the authenticity and origin of digital content, the Secretary of Commerce and other relevant agencies “shall submit a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying methods and practices and potential development of further science-backed standards and techniques for using watermarks.” Secretary Raimondo will have eight months to complete this task.

Implications

  1. The EO is the first of its kind. Not only does it assume a pivotal role by introducing the concept of regulating AI within the US government, but it also aims to send a message to foreign governments and the tech sector. The EO was strategically announced on October 30, two days before the UK AI Safety Summit, where Vice President Harris emphasized that the Administration welcomes tech companies to “create new tools to help consumers discern if audio and visual content is AI-generated.” Ultimately, the EO represents one of the first official stances from the USG on AI regulation and signals actions the Administration will take in the coming year.
  2. The order allows the USG to build a solid base for a positive relationship with the country’s leading tech companies. For instance, OpenAI’s marking of DALL-E images and Google’s beta version of SynthID, which inserts a watermark directly into the pixels of an image, show how leading enterprises in the market understand the importance of this practice and how it can be used to identify what is AI-generated and what is not.
  3. The timing of the release is pivotal. Watermarking is one of the first regulations that will challenge deepfake technology and set a higher bar for tech companies and government agencies, and it does so with the 2024 presidential election approaching, amid fair concern that AI-generated images may be used to spread false information. How this plays out over the next few months will bear watching. The coalition of tech companies attempting to create digital watermarks legible only to a computer and hard to detect with the human eye (illustrated in the sketch after this list) may offer a new tool for the Department of Commerce to explore as it fulfills the tasks assigned under the EO.
  4. There are costs and security implications related to watermarking that remain a “black box” to some. While big tech companies can afford the costs of studying false positives and false negatives, read speed, and adopting the most robust and reliable security systems for watermarking and labeling, such practices may hurt emerging start-ups and small to medium-sized tech companies. Moreover, the Department of Commerce and its strategic planning team are tasked with exploring how such implementations will affect domestic economic development organizations (EDOs) investing in smaller companies, which might not be able to afford such changes while abiding by the new regulations.
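Google has not published SynthID’s internals, so the following is only a generic illustration of the idea referenced above: a classic spread-spectrum watermark, in which a keyed pseudorandom pattern is added to pixel values at an amplitude the eye cannot notice but a correlating detector can recover. The seed, amplitude, and function names are assumptions made for this sketch.

```python
import numpy as np

SECRET_SEED = 42  # hypothetical secret shared by embedder and detector

def secret_pattern(shape) -> np.ndarray:
    """Keyed +/-1 pattern: reproducible with the seed, pure noise without it."""
    return np.random.default_rng(SECRET_SEED).choice([-1.0, 1.0], size=shape)

def embed(image: np.ndarray, amplitude: float = 2.0) -> np.ndarray:
    """Add the pattern at an amplitude too small for the eye to notice."""
    return np.clip(image + amplitude * secret_pattern(image.shape), 0, 255)

def detect(image: np.ndarray) -> float:
    """Correlate with the secret pattern; scores near the embedding
    amplitude suggest a watermark, scores near zero suggest none."""
    residual = image - image.mean()
    return float((residual * secret_pattern(image.shape)).mean())

clean = np.random.default_rng(7).uniform(0, 255, size=(256, 256))
marked = embed(clean)  # shifts each pixel by at most ~0.8% of full scale
print(f"clean score: {detect(clean):+.2f}, marked score: {detect(marked):+.2f}")
```

Because the pattern is keyed, only a party holding the seed can run the detector, which is what makes such a mark legible solely to a computer while remaining invisible to viewers.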

Issues and Limitations

Watermarking is a vulnerable technique. As investigated by a research team at the University of Maryland, detectors risk producing false positives (flagging unmarked content as watermarked) and false negatives (missing a genuine watermark). When the images involved are harmless this may not be an overwhelming issue, but it becomes one when the content is used for dis- or misinformation (again, as the United States approaches a new presidential election). This is one of the biggest concerns that the EO does not specifically address, and it requires invested stakeholders to investigate further.
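A toy simulation (with hypothetical score distributions, not the Maryland team’s data) shows why such error rates are structural: detector scores for clean and watermarked content overlap, so any decision threshold trades false positives against false negatives rather than eliminating both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed score distributions for illustration only: unmarked content
# scores around 0, watermarked content around 3, both with unit spread.
unmarked_scores = rng.normal(0.0, 1.0, size=100_000)
marked_scores = rng.normal(3.0, 1.0, size=100_000)

for threshold in (1.0, 1.5, 2.0, 2.5):
    fpr = (unmarked_scores >= threshold).mean()  # clean content wrongly flagged
    fnr = (marked_scores < threshold).mean()     # genuine watermark missed
    print(f"threshold={threshold:.1f}  FPR={fpr:.2%}  FNR={fnr:.2%}")
```

Raising the threshold suppresses false accusations of AI generation but lets more genuinely synthetic content slip through, and vice versa; engineering around that trade-off is precisely the kind of security investment that, as noted above, smaller firms may struggle to afford.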

Watermarking and labeling synthetic content cannot be treated as a one-size-fits-all solution. The Administration needs to carefully evaluate the tools and methods under study before applying them, and to acknowledge that watermarking and content authentication operate on an opt-in model. In addition, cases show there is no fully reliable way to determine whether content was machine-generated, suggesting the need to re-evaluate watermarking as a supplemental tool rather than a solution.

The EO indirectly addresses all future commercial AI models in the United States and implies reliance on voluntary cooperation by tech companies. Voluntary cooperation is one of the gray areas of policymaking that offers no certainty. Although major tech companies such as Google and Adobe have expressed enthusiasm and proactively worked toward a framework for responsible AI practices, the Department of Commerce will need to go beyond identifying existing standards and tools. It is also imperative for the agency’s taskforce to focus its research on all AI models, including those already in use, and build from there.