
By Kathy Hunt
Apr 30, 2026
These days, artificial intelligence (AI) pops up in conversations often. Is AI on the verge of replacing scores of human workers? Will it enslave humankind as it did in films like The Matrix or act as a trusted companion as in Her? Exactly what can and can’t AI do?
With all the buzz around AI, it’s no wonder that Collins Dictionary named it the word of the year in 2023. Yet AI is not new to the world of technology. The term became embedded in tech vernacular with the 1979 founding of the American Association for Artificial Intelligence, now called the Association for the Advancement of Artificial Intelligence (AAAI).
The concept of AI is even older, having been formally introduced by English mathematician and computer scientist Alan Turing in 1950. In his paper “Computing Machinery and Intelligence,” Turing proposed a test, dubbed “The Imitation Game,” to evaluate a machine’s ability to display human-like, intelligent behavior. Later renamed the Turing Test, it foresaw the everyday interactions that people now have with chatbots and virtual assistants such as Siri and Alexa.
Two years after the Imitation Game, engineering professor and machine-learning pioneer Arthur Samuel demonstrated the Samuel Checkers Playing Program on an IBM 701 computer. Along with creating the world’s first self-learning program, Samuel received credit for coining the term “machine learning”: the process by which a computer learns to perform tasks by analyzing and drawing inferences from data sets, rather than by following explicitly programmed rules.
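To make that distinction concrete, here is a minimal sketch in Python (a toy illustration with made-up sensor data, not Samuel’s checkers program): the decision rule is inferred from labeled examples instead of being written out by a programmer.

```python
# Minimal illustration of machine learning: the cutoff that separates
# "good" from "defective" readings is learned from examples, not hand-coded.
# (Hypothetical data for illustration only.)

def learn_threshold(examples):
    """Find the cutoff value that best separates the two labels."""
    best_cutoff, best_correct = None, -1
    for cutoff in sorted(value for value, _ in examples):
        correct = sum((value >= cutoff) == label for value, label in examples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Labeled training data: (sensor reading, is_defective)
training_data = [(0.2, False), (0.3, False), (0.4, False),
                 (0.7, True), (0.8, True), (0.9, True)]

cutoff = learn_threshold(training_data)
print("Learned cutoff:", cutoff)                 # inferred from the data
print("New reading 0.75 defective?", 0.75 >= cutoff)
```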
In the 1980s, the world saw an AI boom that featured breakthroughs in research and increased government funding. After this boom came the 1990s bust, when interest in and money for AI research waned.
By the early 2000s, the introduction of speech-recognition software, search engines, and autonomous vacuum cleaners had recaptured attention and resources for AI. The introduction of OpenAI’s ChatGPT and other generative AI has further propelled its influence. According to the research firm Gartner, Inc., global spending on AI is forecasted to be $2.52 trillion in 2026, a 44% year-over-year increase.
These tremendous investments and fast-paced innovations have sparked concerns about AI safety and security. To help alleviate anxiety around the governance of AI, existing ASTM International committees are looking at how to address this technology in new and existing standards, while plans are currently in the works for the formation of a new, dedicated AI committee.
In March of this year, ASTM held a meeting with key stakeholders to discuss forming a technical committee on AI in manufacturing systems: the tools, data, software, and human labor that go into producing goods. Participants included providers of equipment, automation, AI, and digital technology; government and public-sector observers; end users; and members of academia. The attendees voted, without dissent, to continue with the creation of the committee on artificial intelligence in manufacturing systems (F50). It will identify where new standards are needed and determine which existing standards can be revised and/or realigned.
ASTM’s director of developmental operations, Pat Picariello, notes the importance of having a wide range of voices in this discussion.
“However AI evolves, we need to make sure that we’ve got the right people at the table, developing standards. It’s a stakeholder-driven process,” he says.
At the meeting, stakeholders ranked which standards are most needed to advance and enable AI in manufacturing systems. The areas they regarded as top priority were terminology and taxonomy, along with road-mapping and defining critical requirements. High-priority topics included a master data-management plan, as well as governance and validation methods and guidance.
“Standards really enable the evolution of such a specific activity, helping it evolve from the research stage to the market stage,” says Picariello. “Time is the juggling act — to come up with a way to get standards into the market, gain data and metrics, and confirm the standards are organic and relevant but also ensure that people don’t get frustrated with the pace at which we’re moving. This is an area in which ASTM excels.”
A hallmark of the Industrial Revolution was automation: automatically controlled, continuous production without the use of human labor. In recent decades, AI has been introduced into automated systems such as computer-aided design (CAD) and computer numerical control (CNC), as well as robotics and additive manufacturing (AM). Frequently used interchangeably with the term “3D printing,” AM creates an object by adding one layer of material at a time. Medical devices, aircraft and automotive parts, and construction materials are some of the items that can be designed, customized, and made quickly and on-demand with AM.

The growth of connected homes will require new AI standards.
The convergence of AI and AM promises to speed up the design process and cut down the production time and material waste in manufacturing. By analyzing the real-time and historical data produced through AM, AI can identify and reduce errors, improving consistency and reliability in 3D-printed objects.
The subcommittee on AM data (F42.08) has several work items relating to the data utilized by AI, including the standard guide for additive manufacturing — general principles — guidelines for AM security (WK78322). It aims to identify and categorize security threats in AM, highlight aspects of AM security that require special considerations, and discuss how to mitigate security threats throughout the manufacturing lifecycle.
Subcommittee chair Alex Kitt has observed an overlap with cyber-physical security and AI. “In the world of AM, the ability to secure things like part designs becomes harder because everything is digitized in the process. Because of this, machine learning can be used to attack cyber-physical systems.”
The U.S. National Science Foundation defines cyber-physical systems as engineered systems built from, and dependent upon, the seamless integration of computers and networks with physical processes and components.
Kitt, who is the director of data science at EWI, points to an incident in which someone placed a microphone next to a plastic fused deposition modeling printing system and recorded the sound it generated while in use. Employing a machine-learning algorithm, the person then reverse-engineered the design file that had been printed: a cyber-physical attack.
Another work item from F42.08 is the standard specification for additive manufacturing for metals — general principles — registration of data acquired from process monitoring and for quality control (WK73978). In AM, a significant amount of process monitoring takes place, Kitt says.
“In process monitoring, we look at images as a function of time and space. Where in the build was I? What was the time that this was taken? We compare the data streams against things like non-destructive testing to see where defects were and what the images were,” he says. “This comparison is so important for machine learning, but the data is often not structured in a way that makes the comparison possible. Even doing machine learning, we need standards like this.”
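As a rough sketch of what such registration can look like in practice (the field names and record structure below are illustrative assumptions, not the schema in WK73978), each in-process image is tagged with where and when in the build it was captured so it can later be aligned with non-destructive testing results.

```python
from dataclasses import dataclass

# Hypothetical record structure for registered process-monitoring data.
@dataclass
class MonitoringFrame:
    layer: int          # which layer of the build
    x_mm: float         # position on the build plate
    y_mm: float
    timestamp_s: float  # seconds since build start
    image_path: str     # captured in-process image

@dataclass
class NdtIndication:
    layer: int
    x_mm: float
    y_mm: float
    defect_type: str

def frames_near_indication(frames, indication, radius_mm=1.0):
    """Monitoring frames on the same layer, within radius of an NDT-found defect."""
    return [
        f for f in frames
        if f.layer == indication.layer
        and (f.x_mm - indication.x_mm) ** 2 + (f.y_mm - indication.y_mm) ** 2 <= radius_mm ** 2
    ]

frames = [MonitoringFrame(42, 10.0, 5.0, 311.2, "layer42_a.png"),
          MonitoringFrame(42, 30.0, 5.0, 311.9, "layer42_b.png")]
defect = NdtIndication(42, 10.3, 5.2, "porosity")
print([f.image_path for f in frames_near_indication(frames, defect)])
```

Only when images and defect findings share this kind of common registration can the comparison Kitt describes, and the machine learning built on it, actually happen.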
Similar to the work item on data registration, the standard practice for additive manufacturing — general principles — overview of data pedigree (F3490) addresses domain-specific data structures and the domain-specific challenges of AM data, Kitt says. It outlines how to interpret AM data and cites a common data dictionary. The dictionary enables AM data pedigree to be searchable, shared, and analyzed to improve the understanding and qualification of AM processes and parts.
While not everyone may be familiar with AI as it relates to AM, most know of AI’s prevalence in the world of consumer products. Autonomous vehicles, refrigerators that monitor and manage food inventory, and security systems with object and facial recognition are among the increasingly ubiquitous AI-powered goods.
As these devices become more advanced and omnipresent, fears around consumer safety and security have become more widespread. To address these concerns, the subcommittee on connected products (F15.75) produced the standard guide for ensuring the safety of connected consumer products (F3463). Revised in 2024 to incorporate AI terminology, the guide focuses on the hazards of connectivity. It applies to products requiring testing and evaluation to prevent cybersecurity vulnerabilities that could compromise product safety or result in noncompliance with the end-product safety standard.
Before manufacturers release AI-powered products to the public, they should carry out safety assessments for anomalies related to AI, such as issues with software updates, connectivity, and/or automated functions. If AI controls an element of the product and operates incorrectly, it could generate a hazardous condition, says Travis Norton, connected products subcommittee chair and head of content strategy and compliance innovation at Compliance and Risks.
Consider two scenarios in which AI could create a dangerous situation: an AI delivery robot becomes confused at an intersection, stops abruptly, and causes an accident, or it fails to recognize a person and collides with them. The guide (F3463) details ways to carry out risk assessments, considering some real-world challenges to AI deployment.
“When evaluating the product, think about environmental interference, such as fog or cold if it’s using visual sensors or if it’s noisy and the AI relies on audible signals,” Norton says. “Does the product have any hazard-mitigation capability or an emergency shutdown failsafe mode built in as a default if a hazardous condition occurs?”
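A minimal sketch of the kind of default failsafe Norton describes is shown below; the thresholds, field names, and actions are illustrative assumptions, not requirements from F3463. The idea is that when perception confidence drops (for example, in fog) or a hazard is flagged, the system falls back to a safe behavior rather than continuing at full autonomy.

```python
# Hypothetical failsafe logic for an AI-driven delivery robot.
# Threshold and action names are illustrative assumptions.

MIN_VISION_CONFIDENCE = 0.80   # below this, perception is treated as unreliable

def decide_action(planned_action, vision_confidence, hazard_detected, emergency_stop_pressed):
    """Return the action to take, defaulting to a safe state when conditions degrade."""
    if emergency_stop_pressed or hazard_detected:
        return "SAFE_STOP"                      # hard failsafe: stop and alert an operator
    if vision_confidence < MIN_VISION_CONFIDENCE:
        return "SLOW_AND_REQUEST_ASSISTANCE"    # degraded sensing (fog, noise): do not proceed at speed
    return planned_action                       # normal operation

print(decide_action("CROSS_INTERSECTION", vision_confidence=0.55,
                    hazard_detected=False, emergency_stop_pressed=False))
# -> SLOW_AND_REQUEST_ASSISTANCE
```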
He adds that AI is not like traditional programming, where someone can look at lines of code and make a change. When something goes awry, it may be more difficult to determine why an incident occurred.
“There’s a black-box nature to AI. The technology may not directly allow us to see why and how a decision was made,” Norton says. “Because of this, there is a burden on the model developers or manufacturers to track and assess training data used in machine learning.”
The subcommittee also considered how data sets may underrepresent certain groups or introduce bias. It incorporated guidance on AI training models to ensure fairness, accuracy, correct metrics, and periodic retraining.
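As a rough illustration of one such check (the metrics, thresholds, and data below are assumptions, not provisions of the guide), a developer might measure how well each group is represented in the training data and whether model accuracy is comparable across groups, triggering retraining when either falls short.

```python
from collections import Counter

# Hypothetical fairness checks; thresholds are illustrative, not from F3463.
MIN_SHARE = 0.10          # each group should be at least 10% of the training data
MAX_ACCURACY_GAP = 0.05   # per-group accuracy should stay within 5 points of the best group

def representation_report(groups):
    counts = Counter(groups)
    total = len(groups)
    return {g: counts[g] / total for g in counts}

def needs_retraining(groups, accuracy_by_group):
    shares = representation_report(groups)
    underrepresented = [g for g, s in shares.items() if s < MIN_SHARE]
    best = max(accuracy_by_group.values())
    lagging = [g for g, a in accuracy_by_group.items() if best - a > MAX_ACCURACY_GAP]
    return underrepresented, lagging

groups = ["A"] * 170 + ["B"] * 25 + ["C"] * 5      # toy training-set group labels
accuracy = {"A": 0.94, "B": 0.91, "C": 0.82}
print(needs_retraining(groups, accuracy))           # -> (['C'], ['C'])
```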
The subcommittee on pool safety standards (F15.49) has published a standard to address a specific AI-powered product: drowning-detection systems. Using cameras and AI algorithms, these detection systems can send an alert within 30 seconds to indicate that a person is struggling or submerged in a pool. The specification for computer-vision drowning detection systems for residential swimming pools (F3698) is the first standard to specify means of protection for an active pool.
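The 30-second figure implies timing logic as well as detection. The sketch below is an illustrative assumption about how per-swimmer alerts might be timed against that budget, not the detection method specified in F3698; the computer-vision model itself is assumed to already report which tracked swimmers appear to be struggling or submerged.

```python
import time

# Hypothetical alerting logic for a computer-vision drowning-detection system.
ALERT_BUDGET_S = 30.0   # alert must go out within 30 s of distress first being observed
CONFIRM_FRAMES = 3      # require a few consecutive frames to limit false alarms (assumption)

class DistressMonitor:
    def __init__(self):
        self.first_seen = {}     # swimmer_id -> time distress first observed
        self.consecutive = {}    # swimmer_id -> consecutive distressed frames

    def update(self, distressed_ids, now=None):
        """Return (alerts, late) lists of swimmer IDs for this frame."""
        now = time.monotonic() if now is None else now
        alerts, late = [], []
        for sid in list(self.consecutive):       # reset swimmers no longer flagged
            if sid not in distressed_ids:
                self.consecutive.pop(sid)
                self.first_seen.pop(sid, None)
        for sid in distressed_ids:
            self.first_seen.setdefault(sid, now)
            self.consecutive[sid] = self.consecutive.get(sid, 0) + 1
            if self.consecutive[sid] == CONFIRM_FRAMES:
                alerts.append(sid)
                if now - self.first_seen[sid] > ALERT_BUDGET_S:
                    late.append(sid)             # alert exceeded the 30-second budget
        return alerts, late

mon = DistressMonitor()
mon.update({"swimmer_1"}, now=0.0)
mon.update({"swimmer_1"}, now=0.5)
print(mon.update({"swimmer_1"}, now=1.0))   # -> (['swimmer_1'], [])
```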
In the field of unmanned aircraft systems (UAS), or drones, there is no one pivotal moment where AI entered the scene. Research into its use for recreational, commercial, law-enforcement, and military purposes has been occurring for years. This ongoing work has informed AI’s presence in UAS design, control, information processing, and behavior guidance.
Members from the committees on unmanned aircraft systems (F38) and aircraft systems (F39) have produced three technical reports (TRs) pertaining to automation and AI in aviation. The first, “Autonomy Design and Operations in Aviation Terminology and Requirements Framework” (AC377 TR-1), recently won the Charles B. Dudley Award for its impact on aviation. This report was the foundation for the new guide for exercising a contextual framework for increasingly autonomous aviation systems (WK76044). That effort, led by Pranav Nagarajan of the aircraft systems committee, has passed balloting.
“AC377 focuses on autonomy, the nature of automation, and how the role of humans in the use of automation is greatly diminished,” says Andy Lacher, founding member and part of the steering committee on AC377. “AI may be a technique or technology used to achieve autonomy, but the two are not synonymous.”
He adds that definitions from TR-1 were added to an appendix in the standard terminology for aviation (F3060). In it, the terms “autonomy,” “automation,” “automatic,” and “artificial intelligence” were differentiated from one another.
“Two white papers came out of AC377 as well,” says Lacher, who is also an F38 and F39 member and a former NASA, Boeing, and MITRE employee. “Some of the issues exposed in the first white paper may be resolved not by standards but by how regulations are interpreted and modified.”
AI has progressively moved into the business space and into departments such as human resources (HR). In March 2025, Forbes reported that 93% of chief HR officers at Fortune 500 companies had begun to integrate AI into their business practices. This is especially the case with AI agents: autonomous or semi-autonomous software systems that use AI to carry out what were previously human-performed tasks. Within HR, an AI agent can source job candidates, screen resumes, recommend applicants, and schedule interviews. It can also analyze workforce data to identify skill gaps and forecast hiring needs. Through continuous learning, AI agents can improve efficiency over time.
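As a toy sketch of one such task, resume screening, the snippet below scores applicants against a made-up skill list and queues the top matches for a human recruiter; the criteria and data are hypothetical and do not represent any real HR product or the practices in the E63 draft.

```python
# Toy sketch of an HR "agent" task: screening resumes against required skills.
# Criteria and data are hypothetical; a real system would also need the bias
# and compliance safeguards discussed in WK91420.

REQUIRED_SKILLS = {"python", "sql", "data analysis"}

def score_candidate(candidate):
    """Fraction of required skills found in the candidate's listed skills."""
    skills = {s.lower() for s in candidate["skills"]}
    return len(skills & REQUIRED_SKILLS) / len(REQUIRED_SKILLS)

def shortlist(candidates, top_n=2):
    """Rank candidates by skill match and queue the top matches for human review."""
    ranked = sorted(candidates, key=score_candidate, reverse=True)
    return [c["name"] for c in ranked[:top_n]]

applicants = [
    {"name": "Candidate A", "skills": ["Python", "SQL"]},
    {"name": "Candidate B", "skills": ["Excel"]},
    {"name": "Candidate C", "skills": ["Python", "SQL", "Data Analysis"]},
]
print(shortlist(applicants))   # -> ['Candidate C', 'Candidate A']
```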
Observing how commonplace AI has become in the workplace, the human resource management committee (E63) saw the need for HR-specific AI standards. To start, the group has drafted the new guide for modern and effective hiring standards (WK91420). The proposed standard looks at the use of AI in the recruitment and hiring process. It includes practices that eliminate bias and test for behavioral tendencies within the workplace. It also aids in mitigating risks associated with AI-driven recruiting and ensuring compliance with legal requirements.
“Regarding the increasing presence of AI, we anticipate seeing more agentic AI,” Norton says. “Over time, these agents will likely have their own autonomy, understand our needs, and do things without being directly told, which has the potential to either increase or mitigate risk.”
As the presence of AI expands, so too, will the need for standards in a broad range of fields and industries. ●
May / June 2026