
Artificial intelligence is reshaping how stories are written, produced, and experienced. In this Q&A, Assistant Professor of Interdisciplinary Liberal Studies, Dr. Brandon Loureiro, shares how his experience as a film and television executive at companies such as Paramount Pictures, Lionsgate, and DreamWorks Animation informs his approach to teaching AI in the Bachelor of Arts in Communications and Media at the University of Massachusetts Global. 

Drawing on recent Hollywood happenings, ongoing questions about creative rights and job security, and his own classroom practices, Dr. Loureiro offers guidance on when and how AI can enhance creative work, ways to address ethical concerns around authorship, copyright, and creative labor, and why guardrails and accountability are essential in the classroom and beyond. He also touches on which AI skills are critical for today’s communications students and how this learning can translate into meaningful career skills in the future. 

Q1. You recently presented at UMass Global’s Inaugural Gen-AI Day on how educators can learn from the film and television industry’s experiences with artificial intelligence. Where did the inspiration for that topic come from, and what were some of your key points?

Dr. Brandon Loureiro: Yes, that was a fantastic event, and I really appreciated the university’s interest in fostering conversations around AI. In terms of my own inspiration, my background is in the film and television industry, specifically working in script development at Paramount Pictures and Jonah Hill’s Strong Baby Productions.

In recent years, there have certainly been conversations about how AI could be used to come up with ideas, storyboard scenes, and expedite the editing process. However, the entertainment guilds understandably wanted to pump the brakes, which in part led to the 2023 WGA (Writers Guild of America) and SAG-AFTRA (Screen Actors Guild – American Federation of Television and Radio Artists) strikes. Since those strikes were resolved, we’ve seen AI used in films like The Brutalist, where the AI software Respeecher was used to refine the pronunciation of Hungarian dialogue. I mentioned during my presentation that this use of AI only became widely controversial once the film was nominated for an Oscar. 

For educators, this can help teach us that the stakes of an assignment, like a discussion board versus a final paper, may have some bearing on how appropriate AI usage is. I also talked about marketing posters that were made with AI for the film Civil War. One poster didn’t accurately portray the city it was meant to represent, while another poster featured a car with three doors. My point to educators was that even if we allow students to use AI, we should rigorously assess how well they have checked their outputs. Overall, the presentation was meant to display that Hollywood features many instances of AI usage, and we can use these as case studies for students and educators to evaluate both the quality of the product and the public’s reaction. Hopefully, this can jumpstart some interesting debates around authorship and ethics, while ultimately helping us all be more informed about AI.

Q2. The introduction of AI actress Tilly Norwood in late 2025 sparked debate around AI in Hollywood. What was your reaction? Will AI replace actors in Hollywood?

Dr. Brandon Loureiro: The general reaction to Tilly Norwood was fear — that this felt like technology replacing art. What I found most interesting about the debate was how many publications outside of the film and television industry picked up on that story. There had been many other instances of artificial intelligence in actual film and television projects, including the examples I gave previously. However, those conversations were mostly contained to industry trade publications like Deadline and Variety. With Tilly Norwood, it was suddenly a discussion that local news anchors were having and that an average Joe was posting about on social media. 

It remains to be seen where we go from here. Will we be watching someone like Tilly Norwood in the next Mission: Impossible movie? Probably not, but at the very least it shows people are engaging with what’s happening in the media and communications field. 

Q3. Many workers in the entertainment industry are concerned that AI is stealing creative work and jobs. Do you think this is the case? How is AI changing Hollywood jobs?

Dr. Brandon Loureiro: It certainly can be the case without guardrails in place. We first need to think about where generative AI platforms are drawing from. For example, if they’re drawing knowledge from copyrighted works, then this is likely theft of creative work. Even the AI technology that people are using to alter their social media pictures, say in the style of caricatures, could be trained on an artist that normally would be paid for their work. 

Now, the larger question is whether we’re losing creative jobs. On one hand, we have sweeping layoffs in the entertainment industry. Since industry conglomeration has coincided with the rise of AI, the layoffs aren’t entirely AI’s fault. However, fear over potential lost revenue to other forms of entertainment, including AI-generated media, likely plays at least some part as well. We also have many workers in the industry who feel less job security because of artificial intelligence, which can lead them to leave the field. 

Then, there are clear instances where AI has taken over a job function, like in visual effects, where there’s no longer a need to hire as many people for one film or television project. So, someone didn’t get fired or leave the industry in this case, but they weren’t as likely to be hired the next time around. All three of these considerations, which are happening at once, do point to some creative jobs being lost to AI.

Q4. If you were to look into a crystal ball, what do you think the future of AI in Hollywood and in Media and Communication looks like? Will AI really “take over” like everyone expects?

Dr. Brandon Loureiro: I’m not sure we need a crystal ball at this point, since we are already seeing the impacts of AI in media and communication. I would expect a lot of recent developments, like the proliferation of AI technology into the various applications we use every day, to continue. 

If I could make one somewhat bold declaration: the tone around AI reminds me of what was said about search engines in the 1990s. For many of us, there is a natural desire to see value in what has worked in the past. With search engines, we fondly recalled the library and worried that in-person research skills would be lost. AI feels similar: it makes many things easier, but it also means letting go of how we’ve done things before. 

In turn, I do expect that some skills will be lost, or at least diminished, like our creative ability to make something out of nothing. With this in mind, you can look at what Amazon MGM is doing with their new AI Studio division. The idea is that AI tools can be used to assist creative work, not replace it. However, the concern is that once you open Pandora’s box, how do you avoid the human desire to use this tool to its full capability? That’s exactly why creatives in entertainment, from Emily Blunt to Lukas Gage, have been speaking out against AI. There’s real fear (and risk) we’re losing something we can’t get back.

Q5. Your thoughts around losing our ability to make something out of nothing really stand out. If creatives lose the ability to develop an idea from scratch, where does that put us in terms of creative rights, licensing, and even personal accountability in terms of authenticity?

Dr. Brandon Loureiro: Great question, and I’m glad this came up, since we had a discussion around the ownership of creative materials after my presentation. To start, we’re very likely to see court rulings and guild rules around AI change substantially in the next 10 years. I think where we are now in terms of rights, licensing, and accountability is going to change. 

So, what are the legal implications of AI? The influential case I like to point to is Zarya of the Dawn, in which the United States Copyright Office ruled on a partially AI-generated graphic novel. The Copyright Office specifically said you can copyright the human-generated text, and you can copyright the arrangement of images in the graphic novel, but you can’t copyright the AI-generated images themselves. This means that even though it took human input, and maybe even several rounds of inputs, to get those images right, they weren’t allowed to be copyrighted. There have since been other cases with similar results, though it’s worth noting that challenges and lawsuits are ongoing, so we haven’t settled the matter by any means. 

At the moment, you probably don’t have rights to that image you generated using AI. Formal licensing frameworks for AI-generated work have also yet to emerge. However, if AI authors are eventually granted rights when certain conditions are met, then the questions around personal accountability and authenticity certainly become more pressing.

Q6. I know you are a professor in the BA in Communications and Media program. How do you talk about AI, and how do you teach it in your communications classes? Is it a taboo subject, or something that’s part of your lessons and assignments?

Dr. Brandon Loureiro: As educators, to make any meaningful progress in our understanding of AI, we must talk about it. I can understand the perspective of not wanting to use AI or thinking it’s a net negative for the communications and media industry, but that doesn’t mean we should treat it as a taboo subject. 

When teaching AI in media education, I really try to contextualize AI rather than speaking too generally. For example, if I’m putting together a lesson on using multimedia tools to present ideas, the reality is that many of these programs have AI integration. I might point out that a student doesn’t necessarily need to go to a generative AI website to be using AI; they could be in Microsoft PowerPoint and take a suggestion from the Designer feature. The important thing is that a student who is interested in using AI should communicate with their instructor as much as possible. Asking for permission and guidance is so much easier than asking for forgiveness.

Q7. How do you personally feel about your students using AI, and what parameters do you set for them? Do you think AI benefits their work? 

Dr. Brandon Loureiro: I’m open to students using AI within certain contexts. I try to be clear on what is and is not allowed, but a good rule of thumb is that AI use is generally more acceptable for tasks which aren’t the main purpose of the activity. For example, if an assignment involves analyzing an advertisement, I’m fine if a student wants to use AI to come up with a sample advertisement. I also think AI can be useful for actions that fall outside the scope of the course. So, if an assignment centers on students proposing a new product related to media and communication, the scope of the course might cover writing a paper and making a presentation. However, if a student uses AI to create what that product would look like, then that’s additive to their work within the class, despite being beyond what I’m teaching. In that fashion, I can see how AI can benefit a student’s work by helping them think of real-world deliverables. 

Q8. How do your personal thoughts on AI translate to your own actions? Do you use AI? How do you think students would feel about their instructors using AI?

Dr. Brandon Loureiro: The phrase “practice what you preach” comes to mind here. If we as instructors are telling our students to be cautious but curious, we should take a similar approach. I spent a long time researching and talking about AI before I started using it with any particular purpose in mind, meaning I often was just seeing how inputs led to outputs and thinking about where information came from. 

In terms of my own use, there’s a lot of course development that can be augmented by AI, from creating animated videos that illustrate concepts, to summarizing key takeaways from my lessons, to developing quiz questions based on my content. This gives me more time to meet with students, stay up to date on industry news, and all these other tasks that ultimately advance my teaching. I would certainly hope students feel like they’ve benefitted from me focusing my attention on higher-level tasks that require more human intervention. 

Q9. What tips might you have for students who are either just starting to use AI, or who have been using it for a while?

Dr. Brandon Loureiro: For students who are just starting to use AI, the first tip is to look for university-vetted tools and understand how each differs. There are so many websites and platforms, but you want to make sure you’re using a tool that is ethical and sustainable. Another tip is to be careful of what you input into generative AI, since this can not only impact what responses you receive, but also how that platform is trained for others. A good guideline here is that any personal information is probably not something to share freely with an AI platform. 

For students who have been using AI for a while, you may very well be ahead of the curve regarding your understanding of AI. However, I would avoid a false sense of confidence over what instructors can or cannot detect. We’re gaining experience with AI and AI detection tools all the time, and if you’re using AI beyond what is allowed in a course, you could face considerable issues academically.

Q10. What do you think the workforce demands around AI will be in the coming years? What AI workforce skills will be most valuable for Communications graduates in the next 3-5 years? Which should students prioritize?

Dr. Brandon Loureiro: I suspect a lot of companies will be looking for employees who are AI literate in the coming years, so it’s absolutely becoming a job skill. This goes back to my first tip of understanding the different platforms and what they do, and then actually applying that knowledge to tasks in the workforce. If you can come in and show a company how to do something 10% more efficiently by using AI in an ethical manner, that’s going to make you a more appealing candidate. This is why teaching AI literacy to students is so important.

Now, I also think there will be companies, especially in the creative industries, that are strictly against AI usage. I’m fully on board with these companies wanting to maintain an AI-free workplace and even using this to help advertise the quality of their content or services. 

So, the next job skill is understanding when you can or should use AI, the same way that a company might want you to be comfortable presenting virtually or in-person. Preparation and adaptability are how graduates succeed, which is really something I’ve emphasized in the Communications and Media courses I teach.

Explore More at UMass Global

Thank you to Dr. Brandon Loureiro for these valuable insights into a future living and working with AI. Learning how and when to use AI in the classroom and the workplace, when to take a step back, and how to recognize when human intervention is necessary are key factors in using AI both ethically and effectively.

You can learn more about the BA in Communications and Media at UMass Global on our website.

Considering a liberal arts education?

Explore our resources for various degrees in liberal arts such as Applied Studies, Communications and Media, and General Education and learn about career options that fit your future.

