Artificial Intelligence


In the 2023 Texas legislative session, House Bill 2060 created an artificial intelligence advisory council to address the emerging technologies and processes surrounding artificial intelligence. The bill defines artificial intelligence systems as generative “systems that are capable of:

  • Perceiving an environment through data acquisition and processing and interpreting the derived information to take an action or actions or to imitate intelligent behavior given a specific goal; and
  • Learning and adapting behavior by analyzing how the environment is affected by prior actions.”

There are many AI models in existence, including the Chat Generative Pre-Trained Transformer (ChatGPT) from the company OpenAI. These models present both opportunities for advancement in the academic, research and administrative areas, and risks that should be carefully weighed and planned for. ChatGPT and similar models form a class of generative artificial intelligence tools and services, so called because they can be used to create original content, including audio, code, images, text, simulations, and video. This class of tools, algorithms and processes represents an evolving category of machine learning, with both supervised and unsupervised (self-learning) capabilities.

Artificial Intelligence (AI) tools are becoming increasingly useful and accessible. In particular, generative AI tools, which produce text or images in response to queries, have become a part of many professionals’ workflows. They are an accessible way to interact with large language models (LLMs): AI systems trained on enormous amounts of data, a training process that requires huge compute resources. In practice, publishers can use text-to-image tools to create visuals, and artists can use them to aid the creative process. Similarly, chatbots based on LLMs can help write blog posts, emails and reports. ChatGPT is one such LLM-based chatbot.

ChatGPT and other generative AI platforms serve as an interface to an LLM that is trained on enormous amounts of data. When we input information into an AI platform, we are effectively interacting with all the data it was trained on. Through various machine learning techniques, the platform is also trained to predict the next word in a sentence, which allows its responses to sound natural and even compelling.
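The next-word prediction described above can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows each word in a small corpus, then predicts the most frequent successor. Real LLMs use neural networks trained on vast datasets; this toy example (with a made-up corpus) shows only the basic idea.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count which word follows each word in the corpus."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Illustrative corpus only.
corpus = "the model predicts the next word and the model learns from data"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "model" follows "the" most often here
```

An LLM does the same kind of next-token prediction, but with learned statistical representations rather than raw counts, which is why its output reflects patterns in its training data.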

Risk Management

While there are many benefits to using AI tools and services, they are not without risk. In the academic, research and administrative areas of higher education, these risks include, but are not limited to, the following:

  • Bias or predisposition in a model’s responses, conclusions and generative work;
  • Inaccurate or outright false information generated by a model;
  • Attribution risks. Some of the information coming from the AI model may be copyrighted by parties not associated with the AI platform in use. The AI model may either fail to provide attribution or inaccurately attribute content to an information or training source;
  • Algorithmic discrimination or disparate impact, whether unintentional or by design. Disparate impact occurs when policies, practices, rules or other systems that appear to be neutral disproportionately affect a protected group. Bias in the model may result in disparate impact or inaccuracies in completed or provisional work;
  • Data privacy issues, including exposure of confidential, health or other sensitive personal information;
  • Intellectual property exposure, misuse and potential theft;
  • Exposure of information/data to a public version of an AI platform. Any and all information exposed to a public platform must be considered public – it may become part of the training data for that platform and be available to all others who interact with it;
  • Generative work that may produce unsafe or risky outcomes.

Because AI systems are often opaque, it is important to understand how the AI platform in use produces its results. The NIST AI Risk Management Framework (RMF) 1 assists in understanding and measuring such risks. The following practices help manage them:


  • Members should inventory, identify, track and control all systems which have AI or AI-like capabilities.
  • Members should classify information being input into and coming out from such systems. Refer to the Texas A&M System data categorization standard 2 for further guidance.
  • Members should update their respective acceptable use policies to reflect allowable and acceptable uses of AI platforms, services and processes.
  • Teams involved in the design, development and implementation of automated systems that employ AI and AI-like capabilities should provide: (1) plain-language notifications and clear descriptions of the overall system functions and the role that automation will play; (2) notice that such system capabilities are in use, along with the individual or organization responsible for the system; and (3) explanations of outcomes that are clear, timely and accessible.
  • Members should conduct a bias or reasonableness audit of all platforms that make significant use of AI, in order to identify and understand any bias in the underlying model and plan its use accordingly.
  • Designers, developers and personnel involved in the deployment of AI systems should take proactive and continuous measures to protect individuals from algorithmic bias or discrimination, whether intentional or not.
  • AI platforms should not be used to create materials that are based on, use or incorporate non-public internal information, including without limitation, unpublished research, legal analysis or advice, recruitment of employees, and personnel or disciplinary decision-making processes.
  • Where possible, members should configure security protection technologies (such as firewalls, endpoint protection, and email security gateways) to block communication with AI platforms that have not been inventoried and cleared for use, whether that communication occurs on member technology infrastructure or with external (such as cloud) infrastructure or subscriptions.
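One lightweight complement to the controls above is screening text for obviously sensitive patterns before it is submitted to an AI platform. The sketch below is illustrative only, not a System-approved control: the pattern list is a made-up sample, and a real deployment would be driven by the applicable data classification rules, not a few regular expressions.

```python
import re

# Sample patterns for illustration; a real screen would cover the
# institution's full data classification catalog.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def find_sensitive(text):
    """Return the names of any sensitive-data patterns found in `text`."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Summarize the case file for employee 123-45-6789."
hits = find_sensitive(prompt)
if hits:
    print("Blocked: prompt appears to contain " + ", ".join(hits))
```

A screen like this can only catch well-structured identifiers; it does not replace inventorying platforms, classifying data, or network-level blocking.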

Confidential Information

  • AI platforms are not approved for processing confidential information and therefore must not have any personally identifying, sensitive personal, regulated, confidential, proprietary, or controlled unclassified information submitted or exposed to them.