Altair Newsroom

Executive Insights

Executive Q&A: Mamdouh Refaat on Generative AI and the Future of Work

By Altair Editorial Team | March 14, 2024

Generative artificial intelligence (genAI) garners a lot of headlines these days – including in Altair’s own Newsroom, like this article from Ingo Mierswa, SVP of product development. But unlike Mierswa’s article, which is tailored to business decision-makers, stories about genAI in pop culture and the general media usually ask broader, thornier questions. Above all, many people see what genAI can do and wonder: Will this technology take my job or career?

A Pew Research Center study found that “about a fifth of all workers have high-exposure jobs; women, Asian, college-educated and higher-paid workers are more exposed.” This may sound ominous, but what such exposure to AI means in practice is often less clear. Opinions on genAI’s impact on the workforce run the gamut. For example, Morning Brew recently published an article titled, “AI probably won’t take your job soon.” But a report from the World Economic Forum found that “over the next 10 years, 90% of jobs could experience some degree of disruption. Everyone from entry-level number crunchers to heads of business units and even C-suite executives will see their jobs evolve over the next decade.”

In other words, the reality of widespread genAI adoption isn’t clear. But what is clear is that people around the world – business leaders, lawmakers, and the general public alike – have genAI on the mind.

Recently, we sat down with one of Altair’s own to discuss genAI and the future of work. As someone on the front lines of genAI development, and as someone invested in making technology understandable and approachable, Chief Data Scientist Mamdouh Refaat was high on our list of people to consult on this fascinatingly complex topic.

Q: A lot of discussion around generative AI (genAI), especially as it relates to jobs, is framed as a “man vs. machine” situation. What are your thoughts on this? How should we conceptualize this discussion?

Mamdouh Refaat: I believe it’s more of a “man and machine” question. GenAI is a tool, and humans have lived alongside tools for millennia – ever since we invented the plow, we have used them in our day-to-day lives without thinking much about them. GenAI is certainly powerful and different, but it is still just a tool, albeit one we use on computers and phones.

 

Q: GenAI can produce text and imagery so convincingly that many people’s first reaction is to believe these models have minds of their own. For some, this is the root of a lot of fear and uncertainty. How do you go about allaying this fear and uncertainty?

MR: The first thing that is important to realize is that these machines cannot “think” for themselves, not in the way that humans do. I would say that most of the fear surrounding genAI stems from people simply not knowing what the technology is and how it works. GenAI systems – large language models (LLMs) such as ChatGPT and Gemini, or image generators such as Midjourney – are essentially just massive, extensively trained prediction machines. So when we think about genAI “replacing” people, we are mistaken. GenAI will never replace the creativity people are capable of conjuring because it cannot generate something entirely “new” the way people can. It can only recombine existing data to create outputs.

Think about it this way: Let’s say genAI is our GPS and we’re stuck in traffic. The GPS can show us a route out of the traffic, maybe even suggesting something we would have never considered. But it cannot create new roads.
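The “prediction machine” idea above can be sketched with a deliberately tiny toy example: a bigram model that predicts the next word purely from word pairs it has already seen. This is an illustration of the recombination principle only – real LLMs use neural networks trained on vast token corpora – and all names and the training text here are hypothetical.

```python
import random
from collections import defaultdict

# Toy "prediction machine" (not a real LLM): it can only emit words
# it has already seen, in orders its training text makes plausible.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record which word follows which in the training text
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6, seed=0):
    random.seed(seed)  # deterministic for demonstration
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:  # no training data: the model cannot invent a continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word the sketch produces comes from its training text – it recombines what it has seen but never coins a new word, which is the distinction between combination and creation drawn above.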

 

Q: How do you react to the variety of major research reports that display the trepidation around genAI from the general public? 

MR: Personally, I don’t put much stock in these reports, especially the ones that predict how many jobs will be replaced, how much productivity will increase, and so on. This is mostly because they are based on the opinions of non-specialists who, for the most part, don’t understand genAI and are either scared of it or too optimistic about what it can do. I see assertions like, “25% of jobs will disappear by 2030,” or “65% of all current jobs are at risk of obsolescence” – based on what? The sources behind these figures don’t seem all that trustworthy to me. I think there is a lot of speculation and hypothesizing.

 

Q: Nobody’s doubting genAI’s ability to disrupt the workforce and people’s work patterns. Do you think genAI will replace current jobs?

MR: I think it’s more a question of shifting jobs, rather than replacing them. It will change the ways we work, but it will not be an outright substitution. For example, if you see pictures of workplaces in the early 1900s, you will see rooms full of people whose only job was to type on typewriters. Where did those typist jobs go? The answer is that they became part of our jobs – we are all expected to be able to type now. Typing did not go away; it just became a standard part of new roles. I think it is wrong to think of genAI as having a zero-sum impact on jobs.

 

Q: What do you think can be done to bring some order to the “chaos” that the topic of genAI can generate amongst everyday people? 

MR: First, I think people need to slow down and wait to see what we truly have before making any assumptions or predictions. These tools are still very much in their infancy, as is our understanding of them and what they can do. It is hard to make reasonable, accurate predictions about unknown quantities.

We have seen the cart put before the horse many times in recent years – think of the early hype that surrounded cryptocurrency, blockchain, and fully autonomous vehicles, for example. None of people’s most optimistic predictions came true, and the hype quickly died down once we realized these technologies’ limitations. But that gave us the clarity we needed to make more reasonable predictions.

Second, we need to realize that these genAI tools are not “finished,” and that they still face significant headwinds in certain areas – especially in the legal and environmental domains. If I ask ChatGPT to write me an article, do I own its output? Am I responsible for possible copyright violations if the model uses protected material? These are questions society is still trying to answer. And that’s to say nothing of continued hallucination problems or the massive energy toll these enormous computing systems exact. After all, the more massive the computing systems we want to run, the more energy we will need to power them. More energy means building more wind turbines and hydroelectric plants, and it could also mean burning more fossil fuels.

 

Q: What can be done to better educate people on the topic of genAI and how it works? How can we get people up to speed on this fast-moving technology?

MR: First, we have to keep repeating that genAI doesn’t create any truly new pieces of information, it merely combines existing pieces of information. There is a big difference between combination and creation. 

Second, we have to be clear that genAI will have an impact on jobs – some will disappear, some will shift, and some new jobs will appear. All new technology has this potential. There is, and will continue to be, an especially pressing need for people with specialized IT skills. Working with LLMs is not easy. Managing these models and the machines they operate on is not easy. There will always be a need for people to perform these tasks. 

And keep in mind that organizations are still figuring out how much manpower they need to develop these genAI tools and systems. This is a new endeavor for many organizations – there is going to be a collective learning curve as each one figures out what best fits its needs.

 

Q: As more companies and individuals begin to use genAI tools in the workforce, what are some aspects of the technology that you feel need to be addressed first?

MR: The biggest thing is going to be the questions surrounding intellectual property (IP). If you give an LLM permission to search the internet to inform its outputs, you are going to hit a lot of protected IP very quickly. This means these models may plagiarize or access data they should not have access to – both major problems for the people responsible for the model and the people who use it. Privacy is also a major concern. It is not realistic to think that all our sensitive data will be completely protected all the time. This is a reality we must deal with, and one that will come into play with broader LLM use. Think about it: you can barely go to the dentist or order a burrito online without having to submit your email and phone number. We cannot expect that this type of data will always end up in a well-secured database.

 

Q: Along with discussions around genAI, there also comes talk of artificial general intelligence (AGI), especially from Silicon Valley and other business executives. Do you foresee a future in which we’re working directly with AGI in our day-to-day lives?

MR: I personally believe we will never reach AGI. It is a real stretch to hear people say we will be living with AGI soon when AI is still not even capable of driving a car on its own. Becoming an entity capable of unique, creative thought – like humans – is an infinitely more difficult challenge than driving to the supermarket. In all, I think AGI is more of a dream than a fixed point of development. I understand that people have different views, and that many businesses and business leaders are optimistic, but I do not quite understand where this unfettered optimism comes from – especially when people talk about AGI as if it is mere years away.

Click here to learn more about Altair’s genAI technology. In addition, click here to download “A Human's Guide to Generative AI,” which gives readers a levelheaded, easy-to-understand overview of genAI – what it is, where it came from, how it works, and what it can (and can’t) do.