The Trustworthy Artificial Intelligence Research Lab (TAILab) focuses on understanding trust in machine learning algorithms from an information security perspective. While we are interested in data security and privacy research in the classical sense, in TAILab we view data in AI systems as a dynamic entity whose characteristics change throughout its lifecycle: from sensing, collection, and use during the learning phase to inference time, when it turns into a decision product. Our main focus is on devising provable methods that provide algorithmic security and privacy guarantees for data across this lifecycle, including privacy guarantees and robustness certifications. We are also interested in quantifying machine learning model uncertainty and misclassification risk when security and robustness guarantees are not attainable. Our theoretical research on trustworthy machine learning and uncertainty quantification plays a pivotal role in ensuring responsible AI deployment, particularly in safety-critical systems such as diagnosis and health-outcome prediction models, which are the applied focus of our trustworthy AI research lab.
I am seeking to recruit highly qualified individuals for graduate degrees and postdoctoral positions. Please review my research interests before sending an email. Your email should include (as attachments, not URLs) your CV, transcripts (both undergraduate and graduate), and a research statement describing a research topic and how it relates to my research interests. Due to the volume of emails, only potential candidates will be contacted.
If you are currently an MEng student interested in Security and Machine Learning, I may be able to help you explore topics and projects that suit your background and supervise your project.
The TAILab Research Group is strongly committed to upholding the values of Equity, Diversity, and Inclusion (EDI). Consistent with the Tri-Agency Statement on EDI and the Dimensions Pilot Program at Toronto Metropolitan University, our group fosters an environment in which everyone feels comfortable, safe, supported, and free to speak their minds and pursue their research interests. We recognize that engineering culture can feel exclusionary to groups traditionally underrepresented in STEM fields. By acknowledging the EDI issues that exist in our field, we aim to validate the challenges faced by each group member and continually strive to improve our group's culture for all members.
We meet bi-weekly to discuss research topics on AI and machine learning security and privacy. Please see the meeting schedule and discussion topics here. If you are interested in attending, please contact Reza Samavi.
Security & Privacy
Security, Privacy & Trust
Trustworthy Machine Learning
Safe and Secure Machine Learning
Secure Machine Learning
Machine Learning Security
Machine Learning
Machine Learning Robustness
Optimization
Medical AI
Differential Privacy
Cryptography
Blockchain
LLM Privacy
LLM Robustness
Generative Adversarial Networks
Semantic Web
Social Networks
Social Good
Copyright © Reza Samavi. All rights reserved.