Adaptability is key to working with artificial intelligence. And then there’s vigilance. And determination. And a mathematical mastery of probability and statistics, data wrangling, and a host of other skills. If you can imagine a new engineering solution, you might find ways to partner with AI, including machine learning or deep learning, to make the previously impossible possible.
As advances in AI arrive at lightning speed, engineers and other science, technology, engineering, and mathematics professionals are raising a host of questions at professional development conferences and workplace lunch tables: What does AI mean for me and my career? What are recommended sources for how to learn or grow skills in artificial intelligence? What are some of the AI job titles and opportunities?
The AI field has expanded since the mid-1980s, with a boom in deep learning more than a decade ago, when machine learning designers began to explore neural networks, inspired by biological neurons, that can accomplish “complex learning tasks such as picking out features in images and speech,” according to The Alan Turing Institute.
Recent advances in generative AI have sparked exuberance and anxiety, with a fire hose of new AI tools that generate text, images, simulations, audio, video, and code introduced in just the past several years. While new tech career opportunities will arise from these introductions, various AI advances may either enhance or eliminate existing jobs. That could include STEM and other white-collar jobs that focus on basic computing tasks that AI systems are starting to accomplish, such as transcribing, sorting, and organizing data or monitoring security camera footage, according to experts.
Underneath this river of change is the question of how to protect human control over these systems, especially in such areas as unsupervised machine learning, in which algorithms can analyze datasets to “discover hidden patterns or data groupings without the need for human intervention,” as described by IBM on its website.
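The kind of unsupervised pattern discovery IBM describes can be illustrated with a toy example. The sketch below is a minimal one-dimensional k-means clustering routine; the data values and cluster count are invented for illustration and do not come from any system discussed in this article.

```python
# A minimal sketch of unsupervised learning: 1-D k-means clustering.
# The algorithm groups numbers with no labels or human guidance --
# it "discovers" the groupings itself. Data values are illustrative.

def kmeans_1d(values, k=2, iters=20):
    """Group numbers into k clusters by repeatedly assigning each value
    to its nearest center and recomputing the centers."""
    centers = [min(values), max(values)][:k]  # simple initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two obvious groups emerge without anyone telling the code where they are.
centers, clusters = kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.3, 7.9])
```

Real unsupervised systems work on far larger, higher-dimensional data, which is exactly why the human-oversight question the article raises becomes pressing: the discovered groupings are not always ones a person would endorse.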
Gerlinde Weger is a Toronto-based international consultant specializing in AI ethical frameworks for companies and organizations. “AI could have amazing benefits,” Weger said. “The human assesses if it’s a benefit or if it’s a harm.” Such benefits could include improved tracking and data synthesizing related to pressing environmental issues. Harms could include job application systems biased against women and other historically marginalized groups. (For more information on the potential biases inherent in AI, read part 1 of this series in the SWE Magazine Spring 2024 issue.)
Growth areas of AI
How can women engineers take advantage of the opportunities presented by the explosive growth of AI and related jobs? Recommendations from experts include:
- Understand the changing AI career landscape and learn about potential impacts of AI on STEM workplaces
- Gain insights on AI-related jobs, opportunities, and projects
- Access training and other means to incorporate AI at work
- Be aware of AI limitations and risks and ways to mitigate those risks
One of the primary AI areas of growth for engineers is in machine learning, described by The Alan Turing Institute as “computer algorithms that can ‘learn’ by finding patterns in sample data. The algorithms then typically apply these findings to new data to make predictions or provide other useful outputs, such as translating text or guiding a robot in a new setting.”
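The fit-then-predict idea in that definition can be shown in a few lines. The sketch below fits the simplest possible “pattern” — a straight line — to sample data, then applies it to a new input; the numbers are made up for illustration.

```python
# A minimal sketch of the machine-learning idea described above:
# find a pattern in sample data, then apply it to new data to make
# predictions. Here the "pattern" is an ordinary least-squares line.

def fit_line(xs, ys):
    """Fit y = a*x + b to the sample data by least squares."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])  # "training" data

def predict(x):
    """Apply the learned pattern to a new input."""
    return a * x + b
```

Production machine learning swaps the straight line for far richer models (trees, neural networks), but the workflow — learn from samples, predict on new data — is the same one the Turing Institute describes.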
For example, Nikita Tiwari, an AI enabling engineer at Intel Corp. and recipient of a 2020 Society of Women Engineers Distinguished New Engineer award, has trained AI-aided computer vision to detect unattended minors near swimming pools and has patented the technology. The system allows users with home swimming pools to create “settings for family members and minors that would distinguish a child from an adult,” Tiwari said, while ensuring data privacy and other safety controls are in place.
“To identify someone as drowning, that is tricky. It’s hard to tell if they are swimming. Even splashing may look like drowning,” said Tiwari, who presented at a March 2024 webinar hosted by the Institute of Electrical and Electronics Engineers, “Artificial Intelligence: What Motivates the Women in Engineering at Intel?”
“With AI and deep learning models using highly complex machine learning for drowning detection, we could look at how humans’ joints work in various behaviors,” Tiwari said.
While the impact of this system could be broad, Tiwari’s research has also touched her own life. “I didn’t know how to swim because as a child I never learned, and it is a very important life skill,” Tiwari said. Also to aid the project, Tiwari took swim lessons and spoke to swim instructors “to grasp the fundamentals and nuances of swimming to be able to work on a solution that’s practical and effective.”

“We had to extract many features and do lots of analysis. We then came up with a model that could predict an injury.”
— Rita Chattopadhyay, Ph.D.
AI in the global economy
Financial forecasters valued the global artificial intelligence market at more than $196 billion in 2023 and project it will expand at a compound annual growth rate of 36.6% through 2030, according to Grand View Research, a market research and consulting company in San Francisco. Generative AI could increase global corporate profits by up to $4.4 trillion annually, according to a McKinsey Global Institute report published July 7, 2023.
AI-enhanced careers tend to pay well, particularly those involving machine learning, with annual salaries averaging up to $250,000 and surpassing $300,000 in some regions of the country, according to Levels.fyi, a website that tracks tech industry salaries.
Top AI research and development areas include computer vision, natural language processing, speech analysis, and robotics.
Meanwhile, with ChatGPT’s public breakout in late 2022 — and a rash of new generative large language models, or LLMs — technologists are being challenged to improve audio commands, language processing, and accuracy — for example, determining whether text generated by prompts is factual.
Generative AI searches for patterns, such as which words would likely follow other words based on language collected largely from the internet and scanned books; it does not delineate whether the results are true. Fei-Fei Li, Ph.D., professor of computer science at Stanford University and founding director of the Institute for Human-Centered Artificial Intelligence, has noted publicly that technologists “have to recognize the messiness of it and recognize that we have a shared responsibility in ushering society into the AI era.”
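The word-prediction behavior described above can be caricatured with a tiny frequency model. The sketch below counts which word follows which in an invented nine-word corpus and “generates” the most common continuation; real LLMs use neural networks trained on vast text, not raw counts, but the key property is the same: the output reflects frequency, not truth.

```python
# A toy illustration of next-word prediction: count which words follow
# which in a tiny corpus, then emit the most frequent continuation.
# The corpus is invented for illustration. Note that nothing here
# checks whether a continuation is factually true -- only common.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]
```

In this corpus, “the” is followed by “cat” twice and “mat” once, so the model always continues “the” with “cat” — a statistically likely answer, with no notion of whether it is the correct one.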
As the AI revolution evolves, meanwhile, engineers — in particular women and other historically marginalized groups — are urged to consider adapting AI skills to find solutions that will help guide the tech’s trajectory.
“AI is going to become common across companies, and so as a software engineer, I need to know something in that field,” said Neethu Elizabeth Simon, a senior software engineer with Intel, who joined Tiwari at the March IEEE panel.
Engineers outside the software field also need to understand as much as they can about AI. “Maybe not everything but something specific to your particular job role that will make you productive using AI,” said Simon. Employing AI software could save time and help create more precise design models. For example, in civil engineering and construction, machine learning can be used to analyze building plans or project requirements.
AI in the STEM workplace
AI automation will likely affect all levels of STEM with not just new career opportunities but also job losses. Potential job replacement could especially affect “middle-skills” STEM positions, such as health care technicians, medical records specialists, and computer data processors, reports note. Overall, various middle-skills roles make up a substantial portion of the STEM workforce, as much as 38%, according to one report (“Diversity and STEM: Women, Minorities, and Persons with Disabilities,” National Science Foundation, 2023).
An eye-opening 2023 Goldman Sachs economic outlook paper projected that generative AI could potentially “expose 300 million full-time jobs to automation” worldwide (“The Potentially Large Effects of Artificial Intelligence on Economic Growth,” March 26, 2023). Career fields considered highly exposed include mostly knowledge-based and white-collar jobs, including STEM professionals, according to the report and others. Some of the highest-exposure fields are engineering, architecture, computer, and mathematical careers, especially web and digital interface designers and blockchain engineers.

“AI not only streamlines existing engineering processes but also opens up new avenues for innovation and efficiency, making it a crucial tool in the modern engineer’s toolkit.”
— Mrinal Karvir
Yet what “exposure” means remains to be seen. A 2023 Pew Research Center study, “Which U.S. Workers Are More Exposed to AI on Their Jobs,” noted that “AI could be used either to replace or complement what workers do.” In some cases, AI might take over only a percentage of tasks, such as basic computer programming and coding. In that example, predictions conflict over whether AI will replace programmers and coders.
Even if that happens, it won’t be soon, as the career development website Upskilled noted, since AI systems do not offer humans’ creativity and critical thinking. “While AI is set to enhance the process of coding — automating plenty of its routine, predictable tasks — human programmers are likely to remain in strong demand for the coming years and beyond,” the site reports.
AI, meanwhile, also offers job role enhancements, especially in extensive and rapid data analysis. AI can enhance the fast analysis of real-time sensor data, contribute to more precise robotics, and boost speed in engineering and software simulation.
But AI-related job displacement could affect STEM’s gender and diversity balance, with jobs especially exposed to generative AI, such as administrative and health care support, held mostly by women, according to a revelatory Kenan Institute report (“Will Generative AI Disproportionately Affect the Jobs of Women?” Kenan Institute of Private Enterprise, April 18, 2023). And certain middle-skills STEM jobs, including medical technicians and other technical workers, are often held by Black or Latino people, according to the National Science Foundation.
“If certain groups are more impacted, and that means they lose their jobs, there are a couple of different assumptions, but that could easily change the overall degree to which the workplace is diverse,” said Cary Funk, Ph.D., a consultant and former director of science and society research at the Pew Research Center, who has published widely on diversity in STEM.
For many in STEM fields, learning about AI, especially as it progresses and improves, could prove essential. “It’s exciting for me because AI is evolving so fast. Every month, we see something new coming up,” said Simon, the recipient of the 2023 Women Who Code Applaud Her Award.
Future AI engineering roles
To get a feel for the emerging AI engineering career field, one can research the primary role of an AI engineer. It’s described by the Microsoft Learn training module platform as being “responsible for developing, programming, and training the complex networks of algorithms that make up AI so that they can function like a human brain. This role requires combined expertise in software development, programming, data science, and data engineering.”
In addition to AI engineer, related jobs are cited by various reports and job sites, though the skills and definitions are fluid and ever evolving. Since a primary subset of AI is machine learning, many such roles focus on this technology. For example, a machine learning engineer works with data scientist teams to research, design, and develop AI systems that use machine learning, implementing algorithms, running tests, and maintaining and updating such systems. And a senior generative AI engineer assists in creating AI models by cleaning and preparing data and helping develop machine learning algorithms.
Other jobs vary, touching on AI elements from beginning to end. An algorithm specialist, for example, is a computer scientist who researches, designs, and refines algorithms. Throughout the process, an AI ethics engineer can search and test for biases in AI systems, seeking to protect human rights and ensure accountability and safety in AI tech development.
Evolving specializations include AI-based computational analysis and quality control and improvement, including tasks such as AI model fine-tuning and data management. AI is also being developed in hardware, including computer chips and neural processing units that can foster higher-level processing to solve complex problems — for example, in transportation, manufacturing, or data-hungry LLMs.
Among the skills to consider when pursuing an AI-related career are strong programming skills and expertise in such areas as statistics and data analysis. A postgraduate education can help, as can working with AI in your current job, seeking out mentors already involved in AI-related projects, and collaborating with other technologists.
For example, in 2022, Simon and Samantha Coyle, a software engineer at software company Diagrid and formerly with Intel, used AI to build a new diagnostic imaging solution that could speed up data analysis and scale up innovative cell therapies and new biomedical tools.
In a collaboration between Intel’s Health and Life Sciences and ValitaCell, now part of Beckman Coulter Life Sciences, research teams applied AI tools to cell assays. As Coyle explained, traditional cell analysis processes can be laborious. In the lab, microscopes are used to detect anomalies in cells. There’s a “bench scientist in the lab, with older computers, taking cellular images,” Coyle said. “They would physically stain the image, which is a lengthy process.

“If certain groups are more impacted, and that means they lose their jobs … that could easily change the overall degree to which the workplace is diverse.”
— Cary Funk
“There was much older technology we had to account for,” including computers not running the latest Windows operating system or having limited power. Coyle’s team built a data pipeline architecture that was much more powerful, she said. In essence, the pipeline could pick up microscopic cellular images and send them securely via various means, including between edge computing machines, which then apply machine learning to the outputs to better detect anomalies.
This technological advance could allow for quicker diagnoses for patients, including detecting cancer cells, as well as the ability to create large databases of images to help detect wider health trends among patients.
Simon and Coyle have also worked together to share what they learned via a new open-source project, AI Connect for Scientific Data, a framework based on their AI analytics project that outlines computational processes others can use to connect data from scientific instruments to AI pipelines, allowing for possible use in biopharma as well as industrial, retail, or other sectors. “This framework design is open source and can be adapted by anyone wishing to deploy it,” Simon said.
And then there are novel industries like sports. Rita Chattopadhyay, Ph.D., a principal engineer in machine learning at Intel, has been in the field for more than 35 years and holds 55 patents. A leader in AI and mobile robotics, she has developed solutions such as fusing data from multiple sensors so robots can better detect objects. Dr. Chattopadhyay and her team also created an AI-enabled analysis system, drawing on data from training regimens, play routines, and medical records, to help predict and prevent injuries in basketball. “We had to extract many features and do lots of analysis,” Dr. Chattopadhyay said. “We then came up with a model that could predict an injury.”
Incorporate AI at work
Mrinal Karvir, a senior AI software engineering manager at Intel who leads various sessions on responsible AI, including at the Silicon Valley Women in Engineering Conference this year and last, advises that technologists seek ways to upgrade their expertise and understand AI’s potential and limitations.
That could include taking AI-oriented courses and attending conferences and webinars. Some online coursework that Karvir and other experts recommend includes courses by education technology company DeepLearning.AI and Coursera’s Generative AI for Everyone. Harvard University offers online classes too, including Introduction to Artificial Intelligence with Python and Data Science: Machine Learning. To monitor the latest AI research, consider resources such as arXiv.org, Cornell University’s open-access scholarly article archive.
To learn AI hands-on for current and future projects, technologists can focus on areas they care most about. Climate change, health care, and challenges in local communities, Karvir said, are all areas that will likely be affected by AI. “This will help you gain skills not only to build your technical expertise but also showcase entrepreneurship skills, product development and deployment, and project management.”
Some skill areas to enhance within a current job include working with data scientists and others on “selecting appropriate datasets, verifying data quality, and cleaning and organizing data,” she said.
“For mechanical engineers, AI can significantly enhance design optimization processes, enabling the development of more efficient and innovative solutions through automated design tools,” Karvir noted. “AI’s predictive capabilities are also invaluable for predictive maintenance. Engineers in fields such as aerospace and manufacturing use it to anticipate equipment failures and schedule timely repairs, thus minimizing downtime and extending equipment life span.
“Additionally, AI-enhanced simulations allow engineers in environmental, chemical, and civil engineering to model complex scenarios more accurately and make better-informed decisions about projects ranging from pollution control to infrastructure development,” she said. “In sum, AI not only streamlines existing engineering processes but also opens up new avenues for innovation and efficiency, making it a crucial tool in the modern engineer’s toolkit.”
Engineers who pursue such projects can also enhance their careers by showcasing their work at engineering conferences or via developer platforms such as GitHub to demonstrate their expertise and potentially collaborate with others. They can also participate in hackathons — which offer the added benefit of working with teams on deadlines — and seek additional training through professional communities, tech industry conferences, and engineer networks such as the SWE mentor network and Advance Learning Center.
“Such visibility often leads to networking opportunities with the potential for partnerships or even job offers from companies seeking specific expertise,” Karvir added.

“Data can become stale. The model needs to be accurate, and engineers are responsible for those models.”
— Neethu Elizabeth Simon
Limitations and risks
Many experts say it’s important not to be distracted by the hype surrounding AI, including from tech developers and the media, and to understand its risks.
In speaking about her new book, “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI,” Dr. Li — widely known as the “Godmother of AI” — said decision-makers need to be educated “about the power, the limitation, the hype, and the facts of this technology.” (For a review of this book, see page 72 of this issue.)
To provide safeguards, there’s an increasing urgency to establish AI norms and standards, governance, and other guardrails. (See “European Union AI Act Aims to Lower the Risks,” on page 12 of this issue.) AI can pose “catastrophic risks,” Dr. Li added. “For example, disinformation’s impact on democracy, jobs and workforces, biases, privacy infringement, weaponization: these are all very urgent.”
Some recent misfires include a New York City AI-powered chatbot that “appears authoritative” but has offered inaccurate, and possibly illegal, city government guidance on housing policy, worker rights, and rules for business operators, according to a March investigation by The Markup and The City, an independent, nonprofit newsroom. Other concerning incidents include alarming deepfake images, such as one showing a false explosion at the Pentagon. Google’s recently released Gemini AI model created controversy for generating historically inaccurate and sometimes offensive images. Many such incidents are chronicled in the AI Incident Database, which continues to accept reports for its records.
For technologists and others seeking practical parameters for employing AI, some quality control guidelines and standards certifications are being offered by various agencies and organizations, including the National Institute of Standards and Technology, IEEE, AI Now, and Responsible AI, as well as by tech employers. And developers are racing to meet public concerns, developing watermarks that can be embedded visually or within code to flag AI-generated material.
Model cards, another transparency tool, disclose AI system information, such as model training parameters, datasets used, performance evaluations, intended use, and potential limitations.
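A model card is, in practice, a structured record of those fields. The sketch below shows one as a simple Python dictionary; the model name, numbers, and field values are hypothetical and do not follow any particular formal model-card standard.

```python
# A minimal sketch of the model-card fields described above, as a
# structured record. All names and values here are hypothetical
# placeholders, invented for illustration.
import json

model_card = {
    "model": "pool-safety-vision-demo",  # hypothetical model name
    "training_parameters": {"epochs": 40, "learning_rate": 1e-4},
    "datasets": ["pool-scene image set (hypothetical)"],
    "evaluation": {"accuracy": 0.93, "false_negative_rate": 0.04},
    "intended_use": "alerting adults to unattended minors near pools",
    "limitations": ["low-light scenes", "heavy occlusion", "bias risk"],
}

# Publishing the card as JSON makes the disclosures machine-readable.
print(json.dumps(model_card, indent=2))
```

Keeping the card in a machine-readable format lets downstream users audit a model’s intended use and limitations before deploying it, which is the transparency goal the paragraph above describes.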
As a result of the need for responsibility in AI, additional engineering jobs include AI security engineer, a specialization in identifying and mitigating AI system vulnerabilities and risks, including biases, inaccurate results, and toxic or offensive content.
“Models will not perform 100% accurately,” said Simon. “You need to spend time training and validating them. Data can become stale. The model needs to be accurate, and engineers are responsible for those models.”
And increasing reliance on AI in higher-risk public sectors, such as health care, brings additional hurdles that could directly affect people’s health and safety. Take the MRI machine. For a traditional technological device, “we can anticipate ways it might fail and test for those things to make sure those risks are mitigated appropriately,” explained Christina Silcox, Ph.D., research director for digital health at the Duke-Margolis Institute for Health Policy. For example, because MRIs use magnets, it can be anticipated that people with metal in their bodies will require different types of tests.
But with some AI or machine learning tools, that factor could be less understood or unknown. “The developers themselves don’t know what the AI is doing,” Dr. Silcox said. “That’s when it becomes much more difficult to anticipate where it will break.”
Dr. Chattopadhyay also noted concerns in the field that some AI systems will “start learning things that we don’t want them to learn and act in a manner that we don’t want.” She also advises developers to not “just take the data.” Dr. Chattopadhyay urges developers to thoroughly analyze data quality and parameters to detect, for example, possible errors, biases, and ethical concerns and to further tune models.
In the end, technology ethics expert Weger warned against allowing AI systems or algorithms to make decisions without human control and oversight. “Humans still have a role in society,” Weger said. “These need to be human decisions. We need to be accountable for what we see and what we do — for what happens.”