I recently earned a certificate in the Responsible Generative AI Specialization offered by the University of Michigan on Coursera. 🎓💡
Here’s the link to the full certificate: Responsible Generative AI
Why I Took This Course
Before I enrolled, I was already concerned about how generative AI was being misused—particularly in creative communities. One moment that pushed me toward this course happened during my time in the HOLOSTARS/vTuber fandom. There was growing frustration among artists over AI-generated fan art being passed off as original, without any disclosure or transparency. As someone who respects the time and talent of artists, I felt it was wrong to see others reap the credit while bypassing the effort.
But as I went through this specialization, I realized:
Responsible AI isn’t just about ethics in art; it goes much, much deeper.
What I Learned
While I initially expected the course to focus on copyright violations and unethical media like deepfakes, I was pleasantly surprised by the depth and breadth of topics covered. Some key highlights included:
- ✊ Bias and discrimination in training data and model outputs
- 🧠 Anthropomorphism and parasociality, especially when people project emotions onto AI (something I’ve seen both in fandom and now in AI adoption)
- 🚨 Prompt injection and malicious misuse of LLMs
- ⚠️ The future of labor and automation: how AI is already impacting workers
- 💬 Communication transparency, human-AI boundaries, and governance principles
- 🔍 The importance of model interpretability and explainability
Honestly, the course opened my eyes to how much of Responsible AI is an expanded version of what we already know: ethics, fairness, intellectual property, privacy, and security, recontextualized for the AI age.
What I Hope This Certificate Brings Me
This program wasn’t designed to teach you how to build with AI or write expert-level prompts—but that’s okay. Courses like these are foundational. They remind us that no matter how advanced our tools become, they must still align with human values.
It’s easy to chase technical skills and leave ethics for “later.” But as AI becomes more embedded in society, understanding responsible practices becomes just as critical as learning Python or writing prompts.
That’s what I hope to bring into the AI space:
a blend of thoughtful, human-centered design and practical prompt development.
What’s Next?
Although I want to dive deeper into the hands-on skills of prompt engineering, LLM tooling, and applied generative AI, I also want to continue strengthening my foundation in ethics and AI governance. This specialization gave me a strong start, but there’s still so much more I want to explore.
At the end of the program, the course offered a list of resources for further learning, which I plan to take seriously. I’m also considering more courses in:
- ✅ QA testing (especially for AI tools)
- 🔒 Cybersecurity and safety testing
- 🧭 AI for creatives
- 🛠️ AI Engineering
- 📊 AI data analytics … and maybe a little more.
Because being a prompt engineer isn’t just about clever prompts; it’s about responsibility, too.