UMBC Mic'd Up

Building Ethical AI: Software Engineering for the Future

• UMBC Mic'd Up with Dennise • Season 4

AI is transforming the world of software engineering, but with great power comes great responsibility. 

In this episode of UMBC Mic'd Up, Dr. Mohammad Samarah, program director of UMBC's Software Engineering Graduate Program, delves into the critical intersection of artificial intelligence and software engineering. 

From the rise of ethical AI practices to the challenges and opportunities of integrating AI into software solutions, Dr. Samarah provides a deep dive into how the field is evolving. 

Learn how UMBC is equipping students with the skills to build ethical, reliable, and innovative software systems that address real-world problems while maintaining integrity and trust. 

🔗 Learn more about UMBC's Software Engineering Graduate Program: https://professionalprograms.umbc.edu/software-engineering/masters-of-professional-studies-software-engineering/

Dennise Cardona  0:00  

Welcome to the UMBC Mic'd Up podcast. My name is Dennise Cardona from the Office of Professional Programs, and I am here with a very special guest, Dr. Samarah, the program director for our software engineering graduate program. It is so great to have you here, Dr. Samarah.


Dr. Mohammad Samarah  0:18  

Hi, Dennise. It's wonderful to be with you, as always.


Dennise Cardona  0:21  

We are going to talk about a very hot topic, AI. So let's just get into the whole topic of AI and how you see it interacting with software engineering and the world in general.


Dr. Mohammad Samarah  0:37  

Yeah, it's a timely topic, and of course it's all the rage. So if we step back a little bit and talk about this thing we call artificial intelligence, some will tell you it's a new solution to an old problem. Some will tell you it's an old solution to a new situation. So how do we sort all this out? I think we have to go back to first principles, to fundamentals. What is it we mean by AI? Where did this thing come from? Interestingly enough, you have to roll back the clock quite a bit, all the way to 1956. That was the first workshop on AI. It took place at Dartmouth College in New Hampshire, and that was kind of a launching pad for the domain called AI in academia. Now, if you ask people what AI is, they don't know. And even the experts in the room, the academics, the industry people, don't have a universal definition of what we mean by AI. So what is it? Is it a thing? Is it a subject? Is it a physical object? Is it a monster? So I think it's important to understand fundamentally what we mean by it. And this is not just for software engineers or computer scientists, the technical people. Every person on this planet has the right to understand what we mean by this thing called AI, because now it's littered through everything we do. You get on your computer, and if it doesn't say AI somewhere, you have probably been sleeping under a rock for a long, long time. Now, this is not a universal definition. Myself and a few others worked on it, and it's not necessarily the accepted definition, but it's a good-faith attempt at saying what it is we're talking about here. When you say AI, it really is a subfield of computer science. It's actually at the intersection of computer science, software engineering, and data science. That's one way to look at AI. So when my students say AI did this, or AI did that, I look at them and I say, no, no, you should know better. AI is a subfield of computer science and software engineering that intersects with data science, because you need data to fuel AI. But here is the short answer for the people watching this podcast, and I want you to pay attention: AI equals software. AI equals algorithms. So you have to ask yourself, how long have we had algorithms and software? For a long, long time. There are all these things we call AI, but they're really AI methods or techniques or tools. So if we were to define it, and we wrote this definition, AI is a system or an application that exhibits somewhat intelligent behavior. It's not intelligent on its own, and it does that on tasks with different difficulty and complexity levels. And how does it do it? By applying advanced methods and techniques from the fields I mentioned. And typically it has to have at least five elements. At the top of them is one you've probably heard of: it learns from previous experiences. We give it data, we train it, and it learns from that. It's called pattern recognition. Another is that we don't have to tell it how to do everything. It acts on its own, autonomously. It makes decisions. It's more like a dynamic agent, if you like, the difference between a novice assistant and a trained agent who can make decisions on behalf of their client. And thirdly, it adapts to different situations, because life is changing. Every day is different, every moment is different. So it has that adaptation.
And of course, with adaptation comes dealing with uncertainty. You will never have complete information as a human being; same thing for this machine, this thing we call an AI method. And lastly, it is reliable enough under most conditions, because if we have something that's not reliable, is that intelligence? No. We wouldn't call anything intelligent if it's not reliable. So it's that independence, adaptation, and reliability, based on the fundamentals of computer science and software engineering, learning from experience, applying pattern recognition. This whole AI that we have been talking about has been around a long time, and when I say a long time, I know it seems like only two years, since this thing exploded. And now kids, you know, my daughter is 11, she knows about AI. Everybody I speak with, the first thing they want to ask me is, what is AI? What is it going to do to the world? I say it depends on what you want it to do to the world. We can use it for meaning, or we can use it for something else, for money or profits. You can use it for peace or for destruction. At the end of the day, AI is software. AI is an algorithm. Myself and others who are at the heart of software engineering, working in that field and trying to recruit and attract different people into that field, understand that. And so this is the best time to be in software engineering if you want to learn about AI, because AI is software. Nothing we call artificial intelligence exists without software empowering it. The smartphone is called that because it has software in it. The previous iteration of that device was just a dumb device, a plain device, because it didn't have software in it. If you look at everything that's happening in the world, we have to take it back to its founding and its fundamentals, which haven't changed.


Dennise Cardona  6:21  

It's interesting, because when AI first came onto the marketplace, I'll say back in November of 2022, I feel like that's when ChatGPT became available to the public, that's when everybody started talking about it, and there was this big apprehension. And it's just interesting to see, two years later, how so many organizations, including governments and universities, have adopted it, have accepted it, and are starting to embrace it instead of fear it. Because I know, just speaking with people at the beginning, there was a lot of fear around the use of AI, especially in the classroom and things of that sort. And now it seems like there are courses out there at the university level teaching us how to use and embrace AI technology, how to make it more accessible to the people using AI. And I find that very, very fascinating, because I remember when I first started using, say, ChatGPT, I thought, oh my gosh, I'm feeding the devil all the stuff I'm putting into it, and everybody else is putting into it. We're just sort of feeding this monster, maybe. I think as time goes on, a lot of the sentiment seems to be that people are starting to get comfortable with it and actually starting to rely on it more and more for their everyday tasks, administration, things they don't really want to do, things that they could offload onto an assistant, and this is sort of a virtual assistant for many people. There's still that part of me that fears, in a way, what is it going to become? What is it going to look like three years from now? Will it make a lot of things obsolete? And on the flip side of that, will it actually create more opportunity for us as human beings? I'm curious to hear what your thoughts are on that.


Dr. Mohammad Samarah  8:13  

Clearly, when ChatGPT first came on the scene, many people raised the alarm, were fearful: what is that going to do, not just in education, but everywhere else? The thing about new technology is, anytime a new technology is introduced, we almost always overestimate its impact in the short term and underestimate its impact in the long term. It's actually a principle in economics. So we tend to hype it and over-exaggerate it. I get students telling me, professor, AI is changing every day. No, AI is not changing every day; the hype machine is. Clearly the tools are becoming better, but they're not vastly better from one day to the next. If we look at it as a tool to make meaning and to make our life better and to help us, then we can utilize it for that purpose. The challenge is to not over-rely on it and not treat it as a replacement for our judgment, whether we are pressed for time or whether we lack the desire or motivation. Those would be the challenges, and that's why it's important to teach students at all levels, K through 12, college students, adults, senior folks who've been in the room for a long time, how to use this tool to make them better. Because ChatGPT is generating things. People call it AI, but it's just one kind of AI method, and that is to generate text based on a question, or based on a prompt, if you like. We have been doing that for quite some time, and I think I may have shared this story with you: in the late 90s, myself and a few other engineers, about four or five men and women, were tasked to create a similar product for a client, which will remain unnamed, for about $5 million, to do exactly what ChatGPT is doing, even without a prompt. This was targeted toward media companies. If you were writing a story about a specific thing, the machine, this AI method, looked at all these repositories and databases and your contact lists and organization lists and so on, and suggested to you all of these things that you should take into account, automatically, as we say. But it only used authoritative sources that you consented to and agreed to use. So that was the technology back in 1999 or so. We have been generating text and images and video and these types of things for a long time. In fact, the underlying technology behind computer science is a generator, a transformer. When engineers and programmers write code, they're writing code, but that code does not get executed on your computer. It's taken by another program called a compiler, which transforms that code into assembly and into binary, and then that gets executed on your device. This idea of transforming text has been around for a long time; some of us spent decades working in that domain, and there are many tools. What's different now is accessibility, and the fact that this tool is available to the masses, and some of the uses happen without understanding its strengths and its limitations. And so that's what we need to do. We need to make sure that our students, and all of us collectively, are using these tools to be better. And so in our classes we encourage its use, as long as the student is using it to enhance, augment, and challenge their learning and understanding.
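To make the compile-before-execute point concrete, here is a minimal sketch using Python's standard dis module. CPython compiles source to bytecode rather than to native assembly, so this is only an analogy to the compiler pipeline Dr. Samarah describes, not the same toolchain:

```python
import dis

def average(numbers):
    # Ordinary source code: compute the mean of a list of numbers.
    return sum(numbers) / len(numbers)

# The interpreter never runs the source text directly; it compiles it first.
# dis.dis prints the bytecode instructions the source was transformed into.
dis.dis(average)
```

Running this prints the low-level instructions (loads, calls, a divide) that actually execute, the same "source in, lower-level code out" transformation a compiler performs when producing assembly and binary.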


Dennise Cardona  12:00  

Yeah, it's very interesting that you said that. Yesterday, I was watching a football game and there was a commercial with a famous actor reciting a script, and he goes, oh my gosh. It was part of the commercial. He said, oh my gosh, can AI stop writing this? Can we have a human write this, please? I so resonated with that, because I used to really enjoy watching YouTube instead of television, just being able to learn new things, to explore different topics. And lately, it just seems like everything is written by AI, and there is a commonality to that language that I'm starting to actually perceive. There's a certain way, a cadence or something, when it's written by AI; it uses certain words that are just overused or overly inflated, and it doesn't have that human criticality, I like to call it, that beautiful use of the human language that human beings have become so good at. And now I just fear that it's going to be watered down, that what we see out there, what we read, is going to water down that beautiful aspect that makes us human beings, that makes art worth looking at, that makes words worth reading. I feel like it's a saturated landscape out there, where everything I pick up, everything I look at, feels like it's AI. Now, that disappoints me as a human being, but I also see great potential in it too, because if there is an awareness out there, and I've heard other people voice that same sentiment, then hopefully that helps us as a species to level up and realize, like you said, that we can learn to embrace it by having it augment what we've already created, by helping us to embellish what we've already created. Then I think that's where the magic happens. But when students or writers solely rely on it, for it to just create an output that they can use without marinating it, without massaging it into something that is more complex, then I think we have a real problem.


Dr. Mohammad Samarah  14:01  

Yeah. In many ways, and I don't have the name of the person to attribute this to, we were at the International Conference on Software Engineering in Lisbon, Portugal, in April, and one of the speakers mentioned this; she was attributing it to someone else. If you look at AI and go back to all the things people are talking about in terms of ChatGPT, we have had ChatGPT for so long. It's autocomplete. You know, Word and other tools gave you autocomplete. Even software engineering tools, coding tools, would autocomplete your actual statement or expression. She said, quoting this other person, that it's really like a spicy autocomplete. That's really what ChatGPT is, right? At the end of the day, it's just predicting the next phrase or the next word for you, albeit doing it very quickly, very fast, with a vast amount of text to draw upon. Some of it is great, some is rubbish, and some is not so exciting. So the result is going to be a blend, a mix, if you like, of all of these things. Now, we can take this technology and make it better by training it on authoritative sources. That's the idea of having a personal large language model. We want to do that to avoid the pitfalls, but also to ensure that there is no monopoly in this space, that it is not controlled by the big companies, and that it's not being used purely for profits or for pushing underserved communities even further down, right? Currently, that's not the case. Most large language models are produced by companies that have the billions of dollars to do that. So we want to change that and ensure that we have a high level of trust in the models being fed to this machine, and also that the rest of us are able to create these models: people in America, people in Africa, people in the Middle East, people in inner-city Baltimore.
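To make the "spicy autocomplete" remark concrete, here is a minimal next-word predictor in Python. It is a toy bigram model over a ten-word corpus, a deliberately crude stand-in for what large language models do at vastly greater scale:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(word):
    """Suggest the most frequent next word, or None if the word is unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(complete("the"))  # -> 'cat' ('cat' follows 'the' twice, 'mat' once)
```

An LLM replaces the frequency table with a neural network trained on billions of words, but the task is the same: given what came before, predict what comes next.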


Dennise Cardona  16:14  

Yeah. So AI has become, as we've discussed, a significant force, specifically, as we were discussing today, in software engineering. How is UMBC's software engineering graduate program adapting its curriculum to incorporate these AI technologies?


Dr. Mohammad Samarah  16:33  

You know, we have seen this coming for a long time. The best thing to do is to set the trends rather than just chase them or follow them. About two and a half years ago, we launched a new lab called the Ethical Software Lab, because we could see how critical that is. In fact, our software engineering graduate program is based on this idea that we need to change the narrative about how we write software, how we build it, how we design it: to be first ethical, then reliable, and then beautiful. All the other attributes are important, but software has now entered into the equation of all of life, and we know that AI is really just software pretending to have intelligence, and software has always had some level of intelligence. I don't know about you, but I would never use a piece of software that has no intelligence in it. If it couldn't give me, say, the average of a list of numbers very quickly, would I use it? No. So we started this lab two and a half years ago, and we're continuing that work, and I'll talk a little bit about that in a bit. In addition, we're developing some new courses to address the emergent need and also to ensure that our students have clarity on what it means to have AI in software engineering. So we're offering a new course in the spring of 2025 called Software Engineering for AI. That is, how do you design, architect, and build software engineering products with AI, for the AI hype and the AI reality that we live in today? In other words, how do you use AI to assist you in building the product, and how do you incorporate AI methods in it? And if you look at the AI methods, I can probably list a dozen or more, from machine learning to pattern recognition to computer vision to sentiment analysis. All of these techniques are part of what people just call AI, but they're AI methods, and you can combine them and blend them in different ways to create a software system that has or exhibits intelligent behavior. Your spreadsheet is intelligent. This studio recording software is intelligent. I think that's an important angle, to look at it from that view. In addition to that, we believe that in order for AI to become a force for good, we have to raise awareness among the public, teachers at the K-12 level, policymakers, administrators, and non-technical people about the power and peril of AI and what it means to have AI. Recently, we were given an award by ACM, the Association for Computing Machinery, to conduct a series of workshops. Our first workshop on ethical software will take place online in the first week of January of the coming year, January 8 through the 10th, and we're planning another one in the summer as well. The idea behind the workshops is pretty simple: demystify what software is, demystify what AI is, because AI is software, and then figure out how to use it to make your life, my life, and everyone's life better, and educate others on the importance of using these tools, but using them responsibly.


Dennise Cardona  20:04  

Indeed. You talked about the Ethical Software Lab. Could you elaborate a little bit more on what that is?


Dr. Mohammad Samarah  20:10  

You know, we launched the program with this idea in mind, that software needs to be ethical first, and then we asked: what are we going to do about it? We can offer classes on ethics in software engineering; that's important. We have our students swear to the Code of Ethics when they first arrive, in the first week of classes. It's sort of analogous to a medical student being inducted to become a physician and doing the white coat ceremony, because we believe software engineering can be as critical as caring for a patient's life. But we said, we need to do more. We need the public to understand what software is doing for them or against them. And so we were brainstorming, myself and one of my colleagues, Melissa Saul, over lunch, I think that was two and a half years ago or so, and we realized, you know, there's a lot of information, but a lot of it is confusing or too technical or too long. For example, the terms of service. Who reads that, right? It's 20 pages of legalese, and you sign here. I never encountered anyone who said, hold on, I need to read it. What would happen to the queue behind them? It would be madness. It would be a riot, right? Same thing for privacy policies. Then we realized, well, we do some things like that for other domains. For example, food: if you go to the supermarket and buy a box of cereal, you can flip it to the side, and it will give you the Nutrition Facts label. Similarly with hardware: most hardware devices, this one included, this is an iPhone, if we take the cover off, you'll see that it has certain certifications on the back, CE or UL or other entities, that say this device is safe to use. It will not catch fire, it will not shock you under normal conditions. We don't do that with software. You don't know whether it spies on you, whether it collects your data, whether it turns on your camera without you knowing it, whether it's listening to you right now. That idea sparked us to do more brainstorming, and we came up with an approach to give the end user, regardless of their technical abilities, a clear and concise way of seeing what the software does for them. We call it the digital nutrition label, and if you look at it at a glance, it looks like the Nutrition Facts label, but it has information about what the software is doing for you. It's a prototype; we're hoping to have a production version available in the spring of next year. We have five categories, and we're doing more usability studies to see what else would be helpful for users. The first category is interruptions. It tells you, on average, how many times the software is going to interrupt you, because we believe that's important. Our lives are fragmented, and if we have too many interruptions, it becomes death by a thousand cuts, right? Immediately after that is my privacy, my rights. We're taking the Terms of Service and Privacy Policy, and, we talked about AI, we're applying AI methods like natural language processing, NLP, to summarize them and extract the relevant information, so the user doesn't have to read 20 pages of text. We want to remove that burden. That burden is wrong. Some people would say it's a criminal act to force users to take that time out of their day. They can't afford to take that time out of their day. It's a burden, and it needs to be lifted from users.
So we're taking the Terms of Service and Privacy Policy and telling you in a concise way what they mean for you, and then we look at the usage of the software on your device and of your data. Does it monetize your data? Does it sell it? Does it keep it? Does it use your sensors with your consent or without your consent? What is the impact of that software on your device and the environment? How much energy is it consuming? Most of us do not think about all these AI methods, but they consume a ton of energy. Many of us do not know that Google, some years ago, applied to become a utility authority in the state of California because of how much power they use, right? So we are responsible for using power in an efficient and responsible manner, and we demand that of our software, and we have the right to know that without having to spend hours researching. Even those of us who are computer scientists, we don't have time for that. When I'm using an app or a piece of software, I just want to use it. I call myself a smart dumb user; I just don't want to deal with that, right? So we want to do the same for our end users. So that's the digital nutrition label. We actually wrote a paper and sent it to the International Conference on Software Engineering, and we're hopeful that it will be accepted. We should hear back in two or three weeks. They have a track where you can submit new ideas; it's called the New Ideas and Emerging Results track. So that's kind of the mission and the vision behind the Ethical Software Lab.
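As a way to picture what such a label might carry, here is a hypothetical sketch in Python. The class name and fields are illustrative guesses based on the categories Dr. Samarah lists (interruptions, privacy, data usage, device and environmental impact); they are not the lab's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DigitalNutritionLabel:
    """Hypothetical at-a-glance summary of what a piece of software does."""
    app_name: str
    interruptions_per_day: float       # average number of interruptions
    privacy_summary: str               # NLP-condensed Terms of Service / Privacy Policy
    monetizes_user_data: bool          # does it sell or otherwise monetize your data?
    sensors_used_with_consent: bool    # camera, microphone, location access
    energy_watt_hours_per_day: float   # estimated impact on device and environment

# Example label for an imaginary app.
label = DigitalNutritionLabel(
    app_name="ExampleApp",
    interruptions_per_day=12.5,
    privacy_summary="Collects usage analytics; data retained for 90 days.",
    monetizes_user_data=False,
    sensors_used_with_consent=True,
    energy_watt_hours_per_day=3.2,
)
print(label)
```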


Dennise Cardona  25:23  

Wow, the digital nutrition label. That just sounds like an amazing, amazing thing. Gosh, I hope that really works out well, and I think that's really going to help a lot of us deal with some of these things that we just don't have time to deal with. And I would say that's probably 99% of the human race that's using technology and software.


Dr. Mohammad Samarah  25:45  

Yeah, we're excited about it. You can check it out at the lab website, esl.umbc.edu.


Dennise Cardona  25:51  

Okay, excellent. I will put a note in the description of this podcast as well so that you can easily get to it. AI: how do you think it's changing the job landscape for software engineers, and what skills do you believe are going to become increasingly important for our graduates?


Dr. Mohammad Samarah  26:10  

Obviously, having awareness of these AI methods and understanding how to use them properly. But if we step back a little bit, the fundamental skills needed for a software engineer are still the same. You must be grounded in the fundamental principles of computer science, software engineering, and data science to be an effective software engineer. That need has probably even accelerated now because of the use of AI to generate code, which means someone could generate code and not understand what that code is doing, or whether it's the proper algorithm, the appropriate algorithm, or not. ChatGPT and other tools that generate code are utilizing code that was written on the internet by a variety of people with varying degrees of skill. The code might be just fine, or good, but it may not be appropriate for a particular situation. If you're building an embedded device for patient care or healthcare, a medical device, the code that goes into it is a lot different than the code that goes into an e-commerce website, or a game that is used by, you know, third graders. The code that goes into your robotic vacuum cleaner is a lot different than the code that goes into, say, a self-driving car, in terms of safety, in terms of reliability. So all these things come into it. Now it's more important for software engineers to be grounded in the fundamentals of the field and to be those architects, to understand: I can apply this AI method and that AI method, this one from computer science and this technique from information systems, to build a larger solution. These are the things we do in the program, where we say you need to combine and blend multiple components, libraries, methods, and techniques together to come up with a solution. The solution must be tailored to the problem, and ChatGPT-generated code may be tailored only to your prompt. It doesn't understand the problem. It did not sit with the shareholders and the stakeholders and the users. It did not monitor and observe the environment that you're creating the software for. If it did, perhaps the code it generated would be better and different, but it's unable to do all of that. In the end, the fact that these AI methods are now popular and, to some extent, hyped is an even stronger signal that you need a mindset of growth and continuous improvement, and to say: if a new technique comes in, we will embrace it, then extend it. We will take the best of it and use it well, and put aside or throw away the parts that do not apply, or perhaps the parts that are not fully ready. In short: embrace and extend. Continue to use AI methods to build better products, but do not fall into the hype that AI methods can create effective solutions on their own.


Dennise Cardona  29:25  

It really goes back to that whole premise: the quality of the inputs will directly affect the quality of the outputs. And so if you are not giving the AI tool that you're using the right directive, the right criteria, all of the inputs that are necessary for it to create what you're asking it to create, then the output is not going to be what you need it to be. Precisely. I know, with my own personal use of it, I've used it in course development and, of course, marketing stuff. Oh, help me write this YouTube description. Maybe I'm pressed for time; maybe that's okay to do. But the more I feed it, like with course development, you know, if I'm trying to develop a course for my students and I am not putting in the correct criteria, the things I want to evaluate, the learning objectives, all of that stuff, if I don't give it that performance agreement that I'm trying to get, AI is not going to know how to do that on its own unless I tell it.


Dr. Mohammad Samarah  30:22  

Right, precisely. We would sit here for a long time if, let's say, we asked a generative AI tool to design the next-generation software engineering program for graduate students. It would never come up with this idea that software should be ethical, reliable, and beautiful. It would come up with a whole bunch of things that people wrote, amalgamated in some way, and boom. You look at it at first glance, it looks pretty good. It has bullets and numbers and all of these things, and it reads like English, or whatever language it's written in. But beneath the surface, it's very shallow, right? So use it for basic things, which it really excels at. Use it for automating, you know, mundane things. Use it to automate code that is not critical, and make sure you test that code, right? And the truth of the matter is, we have been doing that. Templating is one aspect of that, autocomplete is another aspect, and code generation has been around for a long time. I have used code generation myself since I was a young engineer in my 20s. But the difference was, we knew when to use code generation and when not to use it. In fact, at one of the companies I worked for, we won a huge contract because we knew not to use code generation for a specific task. We were a smaller company, and we beat the larger company because we knew that that particular code needed to be written by hand. We did not even rely on the compiler to generate the assembly code; we hand-tuned the assembly code ourselves, because the difference between our code being faster than the competition's was about three or four lines of code, and that code needed to execute very quickly and be able to handle a high volume of communication at high speed. And this was in the 90s, Dennise. So code generation is not a new idea. It's just that now we have made it available to the masses, and they can type a few words. And because we have amassed a lot of code, the Stack Overflow website has a lot of code, and many others do too, it's not that ChatGPT is generating the code from nothing, right?


Dennise Cardona  32:39  

Yeah, it's taking it from what's already out there. That goes back to my first initial reaction, my emotion toward this whole thing: I fear that we're going to get lazy as a society. If generations are not taught now to use it the way you just said, for things that are not critical, making sure that you add your human element to it, that's where I fear things can go awry. Teaching children, K through 12, and teaching college students that ethical concern of making sure that you are the person who is in charge of this AI, you are the one who's giving it the inputs to get the outputs, not relying so heavily on it, because otherwise it will just become a very shallow vehicle for all of us, for society. It'll just become very watered down. I find it disturbing on some levels, but at the same time, the educator in me feels challenged by it too, because I think there's so much potential and so much opportunity to bring that ethical education to K through 12 and to colleges and universities, to be able to make sure that this is not what's going to happen, that we are not going to get lazy, that we are going to continue to use our beautiful brains. That's one of the things I talk about a lot in some of the courses I teach: make sure you're using your beautiful brains. It's part of that whole input cycle. We need to continue to use our human brains as a society.


Dr. Mohammad Samarah  34:16  

Yeah, thinking is important, and now it's more important than ever before. And you know, this is an opportunity to bring more people into more beautiful thinking and ensure the machine is placed in its proper place. We, the humans, are always in charge, but sometimes we forget. We get distracted.


Dennise Cardona  34:40  

With the rise of AI, ethical considerations, as we talked about, are critical. How does the program educate its students about the ethical implications of AI in software development specifically?


Dr. Mohammad Samarah  34:55  

Yeah, so we talked about the Ethical Software Lab. We are probably one of the first programs in the region, if not in the nation, to insist on offering a full course, a full-semester course, in software engineering ethics, and all of our students take that course in their first semester. We also talked about starting with the end in mind: our students swear to the IEEE and ACM code of ethics, from the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery, which are the premier professional societies in this field. In summation, there are eight phrases, if you like, or eight testimonies there, but they all say that everything I do as a software engineer will be in the public interest, and that I will dedicate myself, my life, my learning, and my work toward that. And we have them swear that from day one, because in two years they'll be inducted as software engineers, and their work can affect people's lives, just like a physician's work. Beyond those two aspects, we say that in all of our work we need to ensure transparency and put the users first, so all our courses have that in mind. Why are we doing this? Why are we collecting this data? What does this function do? We don't use technology for the sake of technology. We try to understand the user and the problem, and we solve for the problem, not for the technology. Just because we have AI methods or generative AI or whatever it is, it doesn't mean that's the proper ingredient, the proper component of the system. So put yourself in the shoes and the skin of the user and make their life better. You want the user to be excited about using a piece of software. How many pieces of software that we touch today can we say that about, that we're so excited, so delighted and thrilled to use them? We have made quite a bit of progress, but there is more to be done. So these are the things that we teach, along with being grounded in the fundamentals, because the fundamentals are constant. And what gives us that grounding? Computer science fundamentals, software engineering fundamentals, mathematical fundamentals, engineering fundamentals. Those are the pillars of the program. The techniques, the methods, they change, and some will say, well, they change every day. You can believe that or not, that's okay, but the fundamentals are always there. If we teach those fundamentals, we are teaching and graduating engineers, not technicians. In other words, when a new technique comes in, they can pick it up quickly, in about a week's time, or three days, or, you know, if you have a lot of energy, in one night, right? Technicians have to go back to school and learn that procedure again, because they lack the fundamentals, they lack the principles. And so that's how we differentiate ourselves.


Dennise Cardona  38:12  

Beautifully stated. Is there anything that I have not asked you, that you'd like to say as we conclude this episode?


Dr. Mohammad Samarah  38:19  

I think at the end I would perhaps share three things. The most effective people that I have ever worked with or been associated with had this idea that change is always there. The last hour is different from this hour, and the best of us take advantage of that. They say, I'm going to be my best every hour, because I want to be the finest, and if it's the final hour, it will be the finest final hour. And to do that, you embrace and extend. You embrace any change. Not all change is great; some of it is perhaps too early, some of it is not worthy, some of it is hype. But you embrace the good things about it. You keep an open mind. You have that beginner's mind, which is sometimes hard to do, but it opens you up to many things. You see many things, and that allows you to add more and more to your frame of reference. Otherwise, everything we see is based on a constant or limited frame of reference. Secondly, I would say: learn, learn every day, grow, give. We say the best people are generative, and this was being said long, long before ChatGPT and generative AI. They generate new ideas, and they also share them. They give them away. They're not afraid to give an idea away, even if it's not ready for prime time, because they feel that by sharing it they will make it better, or perhaps someone else will have a better idea that will replace it. And thirdly, you know, the future is humans with AI. These AI methods and tools and techniques will make us better. The hype will decline, and then we'll look at AI methods and techniques the way we look at the robotic vacuum cleaner: 10 or 15 years ago it was all the rage, and now you can pick one up from any store. These things are also becoming part of everyday appliances. If you go to a store, you can pick up a clothes washer, and I have literally seen the sign, I think it was at Home Depot or Lowe's, that says bespoke AI, on a new washer-dryer kind of thing. Five years from now, we won't say bespoke AI, because we'll expect it to be very efficient, very intelligent, right? I won't have to tell it these are delicate clothes, or to run a full cycle or a fast spin. It will figure that out on its own, by means of sensors, software, and hardware, and by means of us teaching it to do that. So the future is bright. It's humans with AI, and the choice is ours.


Dennise Cardona  41:09  

I feel so inspired after this conversation, and I hope that all of the listeners feel the same way. Thank you so much, Dr. Samarah, for being here and sharing your insights with us; it's been fascinating. Thank you to our listeners for tuning into this episode of the UMBC Mic'd Up podcast. If you'd like to learn more about our offerings, please visit the link in the description. Thank you so much.


Dr. Mohammad Samarah  41:34  

Thank you, Dennise, take care.