Seminars and Defenses

Jul. 4, 10A, ENG471
Saba Sedghizadeh • PhD Internal Defense
Subspace Predictive Control: Stability and Performance Enhancement
In the absence of prior knowledge of a system, control design relies heavily on the system identification procedure. In real applications, there is an increasing demand to combine the usually time-consuming system identification and modeling step with the control design procedure. Motivated by this demand, data-driven control approaches attempt to use input-output data to design the controller directly. Subspace Predictive Control (SPC) is one popular example of these algorithms; it combines Model Predictive Control (MPC) and Subspace Identification Methods (SIM). SPC instability and performance deterioration in closed-loop implementations are caused mainly by either poor tuning of the SPC horizons or changes in the dynamics of the system. Stability and performance analysis of SPC are the focus of this dissertation. We first provide the necessary and sufficient condition for SPC closed-loop stability. The results introduce SPC stability graphs that provide the feasible prediction horizon range. These stability constraints are then included in the SPC cost function optimization to provide a new method for determining the SPC horizons. The novel horizon selection effectively enhances closed-loop performance. Time-delay estimation in system modeling has long been a challenging step in applications and industry. Here we propose a new approach, denoted SPC RE-based TDE, that simultaneously and efficiently estimates the time delay within the SPC procedure. Moreover, we propose an artificial intelligence approach denoted Particle Swarm Optimization Based Fuzzy Gain-Scheduled SPC (PSO-based FGS-SPC). The method overcomes the issue of on-line adaptation of SPC gains for systems with variable dynamics in the presence of noisy data. The approach eliminates the existing problem of tuning controller gain ranges in FGS and updates the SPC gains without the need to apply any external persistent excitation signals.
As a result, PSO-based FGS-SPC provides a time-efficient control strategy with fast and robust tracking performance compared to conventional and state-of-the-art methods.
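For context, SPC inherits the standard receding-horizon quadratic cost of MPC; a generic sketch follows (conventional textbook symbols and weights, not necessarily the exact notation of the thesis):

```latex
J = \sum_{k=1}^{N_p} \left\| \hat{y}(t+k) - r(t+k) \right\|_{Q}^{2}
  + \sum_{k=0}^{N_c - 1} \left\| \Delta u(t+k) \right\|_{R}^{2}
```

where $N_p$ and $N_c$ are the prediction and control horizons whose feasible ranges the stability graphs constrain, $\hat{y}$ is the subspace-based output prediction, $r$ is the reference, and $Q$, $R$ are weighting matrices.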
Jun. 29, 1P, ENG460
Qiang Wei • PhD Final Thesis Defense
Current Source Converter-Based Offshore Wind Farm: Configuration, Modulation, and Control
Offshore wind power is attracting increased attention because of considerable wind resources, higher and steadier wind speeds, and smaller environmental impact. A current source converter (CSC)-based series-connected configuration was recently proposed and is considered a promising solution for offshore wind farms, as the offshore substation used in existing systems can be eliminated. However, such a CSC-based configuration has disadvantages in terms of size and weight, dynamic performance, cost, reliability, and efficiency. Therefore, this thesis proposes new configurations, a modulation scheme, and control schemes to improve the performance of the CSC-based offshore wind farm. First, a new configuration is proposed for the CSC-based offshore wind farm. Compared with existing CSC-based configurations, the new one is smaller in size and weight. Second, another concern with the CSC-based configuration is that conventional space vector modulation (SVM), despite its fast dynamic response, cannot be used for grid-side CSCs because of its high-magnitude low-order harmonics. To solve this issue, an advanced space vector modulation scheme with superior low-order harmonic performance is proposed. For example, the 5th harmonic under unity modulation index is reduced from around 10% (conventional SVM) to almost zero (proposed scheme). Third, the power balancing of series-connected CSCs is an important consideration for system reliability. The possible imbalance of power among series-connected CSCs is investigated and quantitatively defined, and a power balancing scheme is proposed that ensures an equal power distribution among the CSCs. Fourth, to lower the system insulation requirement, a bipolar operation with an optimized dc-link current control is proposed for the CSC-based configuration. Compared with the conventional monopolar mode, the bipolar mode features a lower insulation level, thus contributing to lower cost and higher reliability.
In addition, the optimized dc-link current control gives higher efficiency. Fifth, an optimized control strategy with reduced cost and improved efficiency is proposed for the CSC-based offshore wind farm. The nominal number of onshore CSCs is optimized to lower the cost, and an optimal control scheme is introduced for the onshore CSCs that improves efficiency under inconsistent wind speeds. Finally, simulation and experimental results are provided to verify the performance of the proposed configurations, modulation scheme, and control schemes.
Jun. 15, 11A, ENG460
Lei Gao • PhD Internal Defense
A Discriminative Analysis Framework for Multi-modal Information Fusion
Since multi-modal data contain rich information about the semantics present in sensory and media data, valid interpretation and integration of multi-modal information is recognized as a central issue for the successful utilization of multimedia in a wide range of applications. Thus, multi-modal information analysis is becoming an increasingly important research topic in the multimedia community. However, the effective integration of multi-modal information is a difficult problem, facing major challenges in the identification and extraction of complementary and discriminatory features, and in the effective fusion of information from multiple channels. To address these challenges, in this thesis we propose a discriminative analysis framework (DAF) for high-performance multi-modal information fusion.
The proposed framework has two realizations. We first introduce Discriminative Multiple Canonical Correlation Analysis (DMCCA) as the fusion component of the framework. DMCCA is capable of extracting more discriminative characteristics from multi-modal information. We demonstrate that the optimal performance of DMCCA can be analytically and graphically verified, and that Canonical Correlation Analysis (CCA), Multiple Canonical Correlation Analysis (MCCA) and Discriminative Canonical Correlation Analysis (DCCA) are special cases of DMCCA, thus establishing a unified framework for canonical correlation analysis.
To further enhance the performance of discriminative analysis in multi-modal information fusion, Kernel Entropy Component Analysis (KECA) is brought in to analyze the projected vectors in the DMCCA space, forming the second realization of the framework. By doing so, not only is the discriminative relation considered in the DMCCA space, but the inherent complementary representation of the input data is also revealed by entropy estimation, leading to better utilization of the multi-modal information and better pattern recognition performance.
Finally, we implement a prototype of the proposed DAF to demonstrate its performance in handwritten digit recognition, face recognition and human emotion recognition. Extensive experiments show that the proposed framework outperforms the existing methods based on similar principles, clearly demonstrating the generic nature of the framework. Furthermore, this work offers a promising direction to design advanced multi-modal information fusion systems with great potential to impact the development of intelligent human computer interaction systems.
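For readers unfamiliar with the baseline, classical two-view CCA, which DMCCA generalizes, seeks projection directions that maximize the correlation between the two views (standard formulation, sketched here for reference):

```latex
\max_{w_x, w_y} \;
\rho = \frac{w_x^{\top} C_{xy} w_y}
{\sqrt{w_x^{\top} C_{xx} w_x}\,\sqrt{w_y^{\top} C_{yy} w_y}}
```

where $C_{xx}$ and $C_{yy}$ are the within-view covariance matrices and $C_{xy}$ the cross-covariance; MCCA extends this objective to more than two views, while DCCA and DMCCA additionally incorporate class-label (discriminative) information.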
Jun. 7, 1P, ENG460
Farheen Fatima Khan • Ph.D. Final Defense
Towards Accurate FPGA Area Models For FPGA Architecture Evaluation
Field Programmable Gate Array (FPGA) devices are known for their high performance and post-fabrication re-programmability by the end user. With technology scaling and human innovativeness, FPGA architectures have evolved into heterogeneous System-on-Chip (SOC) devices in order to meet diverse market demands. Integrating reconfigurable fabrics into SOCs requires an accurate estimation of the layout area of the reconfigurable fabrics in order to properly accommodate early floor-planning. Hence, this work evaluates the accuracy of minimum width transistor area models in ranking the actual layout area of FPGA architectures. Both the original VPR area model and the newer COFFE area model are compared against actual layouts with up to 3 metal layers for the various FPGA building blocks. We found that both models have significant variations in the accuracy of their predictions across the building blocks. In particular, the original VPR model overestimates the layout area of larger buffers, full adders and multiplexers by as much as 38% while underestimating the layout area of smaller buffers and multiplexers by as much as 58%, for an overall prediction error variation of 96%. The newer COFFE model also significantly overestimates the layout area of full adders by 13% and underestimates the layout area of multiplexers by a maximum of 60%, for a prediction error variation of 73%. Such variations are particularly significant considering that sensitivity analyses are not routinely performed in FPGA architectural studies. Our results suggest that such analyses are extremely important in studies that employ minimum width area models, so that the tolerance of the architectural conclusions against the prediction error variations can be quantified. This work proposes a more accurate active area model to estimate the layout area of FPGA multiplexers by considering diffusion sharing and folding.
In addition, we found that, compared to the minimum width transistor area model, traditional metal-area-based stick diagrams, in lieu of actual layout, can provide much more accurate layout area estimations. In particular, the minimum width transistor area can underestimate the layout area of LUT multiplexers by as much as a factor of 2 to 3, while stick diagrams can achieve 85% to 95% accuracy in layout area estimation. Based on our work, we present correction factors for the commonly used FPGA building blocks, so that their actual layout area can be used to achieve a highly accurate ranking of the implementation area of FPGA architectures built upon these layouts.
Jun. 2, 10A, ENG 471
Dr. Trevor McKee • Image Analysis Core Manager, STTARR Innovation Centre, Princess Margaret Cancer Centre
Quantifying drug, gene, and oxygen transport within tumors using imaging and pathology analytics helps to design better cancer therapies
There is a growing need for better—and more personalized—cancer treatments, to provide oncologists with the tools they need to best treat their patients. Biomedical engineers are key players in this process, by using fundamental engineering principles and quantitative imaging tools to study biological processes, delivering insights that may translate to improved therapies.

In particular, transport phenomena play a critical role in many aspects of tumor biology and treatment. We have developed methods to study transport within tumors using intravital image analysis methods and quantitative digital pathology analytics. Using intravital microscopy and fluorescence recovery after photobleaching with spatial Fourier analysis, we have shown that diffusive transport of nanoparticles and gene vectors within the tumor is limited by the effective pore size of the extracellular matrix, in particular collagen. Degrading the tumor collagen enzymatically can improve delivery of oncolytic viruses, resulting in better therapeutic outcomes in preclinical models. In addition, inefficient tumor blood vessels combined with the metabolic demands of proliferating tumor cells result in transport limitations for oxygen within many solid tumors. Hypoxia (i.e., lack of oxygen) in the tumor leads to reduced radiation effectiveness and more aggressive disease. We have developed methods for quantitatively imaging tumor hypoxia in preclinical models, and shown that hypoxia is positively correlated with tumor proliferation; however, hypoxic tumors are also more sensitive to treatment with hypoxia-activated prodrugs. Additionally, we have combined preclinical imaging with quantitative flow cytometry and pathology to show that drugs that alter metabolic demand in tumor cells, such as metformin, can reduce tumor hypoxia and improve survival in preclinical models. Both of these studies have provided valuable preclinical rationale for extending these therapeutic strategies into clinical trials in patients.

Our group uses the tools of machine learning and multiplexed digital pathology to build a generalized analytical framework to perform “tissue cytometry”. This new technology can extract quantitative image-derived features in a reproducible and robust fashion, providing clinicians and biological scientists with tools to measure previously inaccessible phenomena, like measuring the hypoxic gradient directly within tumor sections, or comparing glucose uptake to lactic acid production in the same tumor sample. Future applications of this tissue cytometric approach include quantification of immune cell transport within cancer, to help improve the promising new treatment of cancer immunotherapy.
May. 30, 10A, ENG471
Timothy Liang • MASc Thesis Defense
Analysis of Electrooculogram (EOG) Signals in Studying Myasthenia Gravis
Myasthenia Gravis (MG) is a neuromuscular disorder which weakens the muscle system by affecting the neuromuscular junction. When the affected muscles are from a vital organ group, it can lead to fatal conditions. In Canada, over the course of 17 years (1996-2013), it has been reported that the crude prevalence of MG doubled. There are many forms of MG; however, the ocular form of MG (OMG) is commonly the initial precursor before patients progress into more severe forms of generalized MG. A majority of OMG patients progress into generalized MG within 2 years of being diagnosed with OMG. Early detection of MG, and thereby timely treatment, could save lives and preserve quality of life. In this thesis, we explore and present signal processing methodologies that could assist in the early detection of MG using electrooculography (EOG) signals as a non-invasive alternative approach.

A database consisting of 62 control and 16 MG (mild to moderate MG) data samples was obtained from Sunnybrook Health Sciences Centre, Toronto, Canada. Eye movement counts, pulse characteristics, and EOG signal morphologies were analyzed using time domain and wavelet domain techniques. A linear discriminant analysis (LDA) based classifier was used to quantify the discriminating ability of the features in separating MG from control samples. The results were cross-validated using the leave-one-out approach. An average overall classification accuracy of 82.5% (P < 0.001, AUC = 0.887) was achieved using the time domain features. Likewise, using the wavelet based features, an average overall classification accuracy of 83.8% (P < 0.001, AUC = 0.893) was achieved. The obtained results demonstrate strong potential for EOG based analysis as a viable and non-invasive alternative to the current methods for screening MG.
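The leave-one-out protocol mentioned above can be sketched as follows. This is an illustrative toy version using a nearest-class-mean rule (a simplified stand-in for the thesis's LDA classifier) on hypothetical one-dimensional feature values, not the study's data:

```python
# Sketch of leave-one-out (LOO) cross-validation with a nearest-class-mean
# rule, a simplified stand-in for an LDA classifier on a single feature.
# The feature values and labels below are illustrative, not study data.

def loo_accuracy(samples):
    """samples: list of (feature_value, label) pairs."""
    correct = 0
    for i in range(len(samples)):
        held_out = samples[i]
        train = samples[:i] + samples[i + 1:]
        # Class means are computed only from the training fold.
        sums, counts = {}, {}
        for value, label in train:
            sums[label] = sums.get(label, 0.0) + value
            counts[label] = counts.get(label, 0) + 1
        means = {label: sums[label] / counts[label] for label in sums}
        # Assign the held-out sample to the class with the nearest mean.
        x, true_label = held_out
        predicted = min(means, key=lambda label: abs(x - means[label]))
        if predicted == true_label:
            correct += 1
    return correct / len(samples)

# Hypothetical 1-D feature values for control and MG groups.
samples = [(0.20, "control"), (0.30, "control"), (0.25, "control"),
           (0.90, "MG"), (1.10, "MG"), (0.95, "MG")]
print(loo_accuracy(samples))  # fraction of held-out samples classified correctly
```

Each sample is held out exactly once and predicted by a model trained on the remainder, so the resulting accuracy is an unbiased-in-spirit estimate even for small cohorts like the one studied here.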
May. 26, 10A, ENG LG104
Dr. Larissa Schudlo • Post-Doctoral Fellow, Bloorview Research Institute, University of Toronto
Automatic Detection of Non-Invasive Diagnostic Markers Across the Lifespan
Functional neuroimaging can provide a direct assessment of the brain to detect changes in one’s brain state. This can be used for a variety of applications, including brain-computer interfaces (BCIs) and detecting/assessing neurological conditions. Near-infrared spectroscopy (NIRS) is a non-invasive, hemodynamic-based imaging modality that is practical for these applications as it is low-cost, portable, and can accommodate subject movement. NIRS is, however, an emerging imaging modality, and its consideration as a reliable clinical tool is in its infancy. Dr. Schudlo will discuss her past and current research exploring BCI and clinical applications of NIRS, and the potential of advancing the diagnostic utility of this modality through automatic detection of diagnostic markers.


Dr. Larissa Schudlo holds a Master of Applied Science and a doctorate in Biomedical Engineering from the University of Toronto. Her graduate work focused on brain-computer interface development as a means of communication using near-infrared spectroscopy (NIRS). Prior to her graduate work, Larissa completed her Bachelor of Engineering in Electrical and Biomedical Engineering at McMaster University. Currently, Larissa is a post-doctoral fellow in the Autism Research Centre at Holland Bloorview. Her current research focuses on physiological signal processing for ambulatory health-monitoring applications and technology development for children with Autism Spectrum Disorder (ASD), as well as exploring the utility of NIRS in assessing children with ASD or concussions.
May. 25, 10A, ENG101
Dr. Mahla Poudineh • Department of Pharmaceutical Science, University of Toronto
On-chip Phenotypic Profiling of Circulating Tumor Cells for Next-Generation Diagnostic Technologies
Cancer is a leading cause of death and disability. Early detection can significantly improve long-term survival in cancer patients. During cancer progression, tumors shed circulating tumor cells (CTCs) into the bloodstream. CTCs that originate from the same primary tumor can have heterogeneous phenotypes and, while some CTCs possess benign properties, others have high metastatic potential. Deconstructing the heterogeneity of CTCs is challenging, and new methods are urgently needed to characterize and sort CTCs according to their detailed phenotypic profiles so that the properties of invasive versus noninvasive cells can be identified. High levels of sensitivity and high resolution are required to generate profiles that will provide biological and clinical insights. Here we describe a powerful new capability for monitoring cancer progression. We developed a novel fluidic chip that selectively isolates rare cancer cells that exhibit different levels of phenotypic surface markers. We show that the device successfully profiles the surface expression of very small numbers of cells, and it accomplishes this directly from whole blood. We couple the surface marker profiling approach with a migration platform with single-cell resolution: this allows us to characterize more deeply, still on-chip, the biological behavior of invasive cancer cells. We deploy these new techniques to reveal the dynamic phenotypes of the rare cells. We prototype the system and prove it out using samples of unprocessed blood from mice. We characterize the samples as a function of tumor growth and aggressiveness, and show that the new profiling technology provides powerful and relevant information that correlates with tumor stage and aggressiveness. The strategies presented offer guidance for the development of sensitive and specific approaches to cancer diagnosis that provide new information not available using prior methods.


Dr. Mahla Poudineh completed a PhD degree in Electrical Engineering (with Biomedical focus) at the University of Toronto, where her research focus was on developing new diagnostic technologies for early cancer detection. Mahla’s research has made it possible to distinguish cancer cells that have high versus low metastatic potential. She is now pursuing a postdoctoral fellowship in the Department of Pharmaceutical Science at the University of Toronto.

May. 19, 10A, ENG LG04
Dr. Ali Sadeghi-Naini • Assistant Professor, Department of Medical Biophysics, University of Toronto
Imaging Innovations for Personalized Cancer Therapeutics
Cancer patients respond heterogeneously to identical treatments. As such, a predefined standard therapy is often not effective for all patients. Personalized medicine is predicated on changing ineffective therapies to more efficacious treatments on an individual patient basis. Therapy response monitoring is an important component of personalized medicine. In this seminar, emerging technologies based on three relatively inexpensive quantitative imaging techniques recently proposed for evaluating response to cancer-targeting therapies will be presented. These techniques, which do not rely on any exogenous contrast agents, include quantitative ultrasound (QUS) imaging to quantify tumour cell death, ultrasound elastography to measure changes in tumour biomechanical properties, and diffuse optical spectroscopy (DOS) to evaluate alterations in tumour perfusion and metabolism, all in response to treatment. Findings from preclinical investigations on animal tumour models will be presented, followed by results obtained through clinical studies on locally advanced breast cancer patients receiving chemotherapy. Effective signal and image processing strategies to derive non-invasive biomarkers of response with high sensitivity and specificity will be discussed. Also, future research directions to incorporate other modalities for therapy outcome prediction and monitoring, as well as to develop robust hybrid biomarkers of treatment response, will be presented.

Dr. Ali Sadeghi-Naini is an Assistant Professor in the Department of Medical Biophysics at the University of Toronto, and a Scientist within the Physical Sciences Platform and Odette Cancer Research Program at Sunnybrook Research Institute. He earned his PhD in biomedical engineering from the University of Western Ontario in 2011, enriched by his participation in the NSERC-CREATE program in Computer-Assisted Medical Interventions (CAMI). Dr. Sadeghi-Naini completed his postdoctoral fellowship in medical biophysics and radiation oncology at Sunnybrook Research Institute, University of Toronto. His postdoctoral research was supported by a Canadian Breast Cancer Foundation postdoctoral fellowship and a CIHR Banting postdoctoral fellowship. Dr. Sadeghi-Naini's areas of research interest include computer-aided image-guided theragnostics, quantitative multimodal imaging, inverse imaging techniques, medical image analysis and machine learning.
May. 17, 10A, ENG101
Dr. Parisa Shooshtari • Broad Institute, Yale University
Integrative genetic and epigenetic analysis uncovers regulatory mechanisms of autoimmune disease
Genome-wide association studies in autoimmune and inflammatory diseases (AID) have uncovered hundreds of genomic loci mediating risk of disease. These associations are preferentially located in regions of open chromatin marked by DNase I hypersensitivity sites (DHS). Whilst these analyses clearly demonstrate the overall enrichment of disease risk variants in gene regulatory regions, they are not designed to identify the individual regulatory regions mediating risk or the genes under their control, and thus cannot uncover the specific molecular events driving disease risk. In this talk, I will present my recently developed computational model that addresses this problem through the integration and analysis of several types of biological data. In this model, we first use genetic association data to identify a set of single-nucleotide polymorphisms (SNPs) likely to be causal in each risk locus. By overlapping these SNPs with DHS in the locus, we identify regulatory regions that may harbor risk. We then identify genes controlled by each risk-mediating regulatory region, and compute a pathogenicity factor for each gene, a parameter that we use to prioritize disease genes in each risk locus. We successfully applied our model to publicly available genetic association data for nine AID. We found substantial evidence of regulatory potential in 78/301 risk loci across these AID, and were able to prioritize individual genes likely to be pathogenic in 53/78 of such cases. Our method thus enables generating mechanistic hypotheses about molecular changes to the regulation of individual genes. As these genes are often at some distance from the region of maximal genetic association, these effects may be mediated through long-range DNA looping, and our study cautions against approaches where the closest gene is considered pathogenic. One major advantage of our model is that it is general and can be applied to all common complex diseases.
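The SNP/DHS overlap step described above amounts to an interval-containment test. A minimal sketch, with illustrative coordinates (real analyses use genome-wide DHS maps and fine-mapped credible-set SNPs):

```python
# Sketch of the SNP/DHS overlap step: keep the DHS intervals that contain
# at least one candidate causal SNP. Coordinates below are made up for
# illustration; they are not from the study.

def overlapping_regions(snps, dhs_sites):
    """Return DHS intervals containing at least one candidate SNP.

    snps: list of (chrom, pos) tuples.
    dhs_sites: list of (chrom, start, end) tuples, half-open [start, end).
    """
    hits = []
    for chrom, start, end in dhs_sites:
        if any(c == chrom and start <= p < end for c, p in snps):
            hits.append((chrom, start, end))
    return hits

# Hypothetical fine-mapped SNPs and DHS intervals.
snps = [("chr1", 1_200_450), ("chr1", 1_310_020), ("chr2", 44_780)]
dhs = [("chr1", 1_200_000, 1_201_000), ("chr1", 2_000_000, 2_000_600),
       ("chr2", 44_500, 45_100)]
print(overlapping_regions(snps, dhs))
# → [('chr1', 1200000, 1201000), ('chr2', 44500, 45100)]
```

DHS intervals that survive this filter are the candidate risk-mediating regulatory regions passed to the downstream gene-prioritization step.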

Dr. Parisa Shooshtari is a postdoctoral associate in Dr. Chris Cotsapas’ lab at Yale University and a postdoctoral scholar at the Broad Institute of MIT and Harvard.
May. 11, 10A, ENG101
Dr. Eric Strohm • Ted Rogers Centre for Heart Research, University of Toronto
Biomedical Applications of Ultrasound and Photoacoustic Imaging
Ultrasound is a versatile imaging modality that has a broad range of biomedical applications, where the image contrast depends on variations in the biomechanical properties of tissues. Photoacoustic imaging is an emerging imaging modality where the contrast predominantly depends on the optical absorption properties. Through an appropriate selection of laser wavelength, endogenous chromophores (blood, lipids, melanin) and exogenous contrast agents (dyes, nanoparticles) can be selectively imaged. A quantitative analysis of the ultrasound and photoacoustic frequency dependent signals can be used to extract information about the structural and biomechanical properties for tissue characterization applications. In this talk, I will discuss ultrasound and photoacoustic analysis techniques in the 20-1000 MHz frequency range, including imaging and classification of single cells, in vitro assessment of blood, tissues, and tumors, and translation to high throughput screening applications. Improved ultrasound and photoacoustic quantitative methods can help our understanding of cell and tissue based mechanisms in disease, and ultimately lead to better clinical diagnostic imaging techniques.

Eric Strohm received his B.Sc. degree in Physics from McMaster University in 1999. From 2002-2007, he was a member of the research staff at the Xerox Research Centre of Canada. He received his M.Sc. degree in 2009 and Ph.D. degree in 2013 in Biomedical Physics from Ryerson University, where he was supported through an NSERC postgraduate scholarship. He is currently a Postdoctoral Fellow in the Cellular Mechanobiology Laboratory at the University of Toronto, where his research interests focus on the development of quantitative ultrasound imaging techniques for high-throughput screening applications in tissue engineering.
May. 11, 11A, ENG460
Nafiul Hyder • MASc Thesis Defense
Minimizing the Layout Area of 2-Input Look-Up Tables
This work investigates the minimum layout area of multiplexers, a fundamental building block of Field-Programmable Gate Arrays (FPGAs). In particular, we investigate the minimum layout area of 4:1 multiplexers, which are the building blocks of 2-input Look-Up Tables (LUTs) and can be used recursively to build higher-order LUTs and multiplexer-based routing switches. We observe that previous work routes all four data inputs of 4:1 multiplexers on a single metal layer, resulting in a wiring-area-dominated layout. In this work, we explore the various transistor-level placement options for implementing 4:1 multiplexers while routing the multiplexer data inputs through multiple metal layers in order to reduce wiring area. Feasible placement options with their corresponding data input distributions are routed using an automated maze router, and the routing results are then further refined manually. Through this systematic approach, we identified three 4:1 multiplexer layouts that are 30% to 35% smaller than the previously proposed layouts. In particular, the two larger layouts of the three are only 33% to 45% larger than the layout area predicted by the two widely used active area models from previous FPGA architectural studies, and the smallest of the three layouts is 1% to 11% larger than the layout area predicted by these models.
May. 4, 2P, ENG460
Ghazal Zamani • MASc Thesis Defence
The Effects Of A Fault Management Architecture On The Performance Of A Cloud Based Application
Increasingly, application providers are using a separate fault management system that offers out-of-the-box monitoring and alarm support for application instances. A fault management system is usually distributed in nature and consists of a set of management components that perform fault detection and can trigger actions, for example, the automatic restart of monitored components. Such a distributed structure supports scalability and helps ensure that an application meets its quality requirements. However, successful recovery of an application now depends on the fault management architecture and the status of the management components. This thesis presents a model that accounts for the effect of management-architecture-based coverage on the mean throughput of an application. Such a model would help application providers choose the right fault management architecture for their applications.
May. 4, 10A, ENG101
Dr. Dafna Sussman • The Hospital for Sick Children (SickKids)
Decoding developmental physiology: A multidisciplinary engineering approach
At its core, my research aims to employ advanced engineering techniques to uncover relationships between maternal lifestyle choices, such as exercise and diet, and the well-being of the child. In the last decade, medical imaging and image processing have revolutionized health care by providing a window into the function of internal anatomy, and a non-invasive means to track fetal development. In this talk I will illustrate how I combine recent innovative tools, techniques, and algorithms from a range of engineering disciplines to bridge clinical experimentation and engineering research. I will describe my work on the use of both optical and magnetic resonance (MR) imaging to measure and accurately compare anatomical, blood flow, and blood oxygen changes in the growing fetus. I will discuss a range of techniques, from optical projection tomography (OPT), phase-contrast MRI, and MR-oximetry, to image registration and cardiac gating, and how I employ them in the context of my research. Results from this body of work are being used to push the boundaries of imaging research and clinical applications, and to define evidence-based recommendations for lifestyle during pregnancy so as to improve fetal health and development.


Dr. Dafna Sussman is a research fellow at the Hospital for Sick Children in Toronto. Dr. Sussman completed her undergraduate degree in Engineering Science with honours at the University of Toronto. She then pursued a Master's in Biophysics and Biophotonics at the University of Waterloo, and a PhD in Medical Biophysics and Clinical Imaging at the University of Toronto, in collaboration with Princess Margaret and Sunnybrook Hospitals. Since then, Dr. Sussman has completed a postdoctoral fellowship in Diagnostic Imaging at the Hospital for Sick Children, where she is currently a research fellow in the Translational Medicine program.
May. 3, 2P, ENG460
Young Jun Park • PhD Final Thesis Defence
Time-Interleaved Pulse-Shrinking and All-digital ∆Σ Time-to-Digital Converters
This dissertation deals with the design of sub-per-stage-delay time-to-digital converters (TDCs). Two classes of TDCs, namely pulse-shrinking TDCs and ∆Σ TDCs, are investigated. For pulse-shrinking TDCs, a two-step pulse-shrinking TDC consisting of a set of coarse and fine pulse-shrinking TDCs is proposed to increase the dynamic range without employing a large number of pulse-shrinking stages. A residual time extraction scheme capable of extracting the residual time of the coarse TDC is developed. Simulation and measurement results of the TDC, implemented in an IBM 130 nm 1.2 V CMOS technology, show that the TDC offers 1.4 ns conversion time and 1 LSB DNL and INL, and consumes 0.163 pJ/step. To further improve the conversion time, a time-interleaved scheme is developed to extract the residual time of the coarse TDC and is utilized in the design of a two-step pulse-shrinking TDC. Residual time extraction is carried out in parallel with digitization to minimize latency. Simulation and measurement results of this TDC show that it offers 0.85 ns conversion time, 0.285 LSB DNL, and 0.78 LSB INL. For ∆Σ TDCs, a 1-1 MASH ∆Σ TDC with a new differential cascode time integrator is proposed to suppress even-order harmonic tones and current-mismatch-induced timing errors. Simulation results show that the proposed TDC offers 1.9 ps time resolution over a 48-415 kHz signal band while consuming 502 µW. Finally, an all-digital first-order ∆Σ TDC utilizing a bi-directional gated delay line integrator is developed. Time integration is obtained via the accumulation of charge on the load capacitors of the gated delay stages and the logic states of the gated delay stages. The elimination of analog components allows the TDC to benefit fully from technology scaling. Simulation results show that the TDC offers first-order noise shaping and 10.8 ps time resolution while consuming 46 µW.
Apr. 28, 12:30P, ENG465
Maryam Rastgar-Jazi • MaSc Thesis Defense
Analytical and Experimental Solution for Heat Source Located Under Skin: Chest Tumor Detection via IR Camera
Infrared (IR) imaging is both a noninvasive and a nonionizing technology. Using an IR camera, it is possible to measure skin temperature with the aim of finding superficial tumors. Since tumors are highly vascular and usually have a higher temperature than the rest of the body, thermograms make it possible to assess various tumor parameters, such as depth, intensity, and radius. In this study, we have developed an analytical method to detect tumor parameters in both spherical and cubical tissues, representing female breast and male chest tissue. This includes the development of an analytical solution to the inverse bio-heat problem as well as a laboratory setup for further validation of the analytical results. The model was developed by solving Pennes' bioheat equation for each tissue under certain conditions. Two of the most important assumptions were:
1. The tumor was modeled as a separate heat source
2. The model does not change with time (steady-state condition)
The model was tested and optimized against a data library generated in COMSOL. To optimize the tumor parameters, an Artificial Neural Network (ANN) was used.
Finally, the analytical findings were validated using a laboratory setup containing an IR camera, a 1% agar solution (tissue phantom), and a heater of variable power. The models were tested by placing the heater (0.9 W) at various depths and imaging the tissue phantom. Comparing the analytically obtained results with the experimental results, it can be concluded that the method is able to detect small superficial tumors by measuring only the body surface temperature and the ambient temperature. The results also reinforce the view that infrared cameras can be used as a means of tumor detection.
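For reference, the steady-state form of the Pennes bioheat equation that such models solve (standard symbols; the tumor enters as a separate source term per assumption 1, and the time-derivative term is dropped per assumption 2) can be written as:

```latex
% Steady-state Pennes bioheat equation with a separate tumor source term
\[
0 = \nabla \cdot \left( k \, \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right)
  + q_m + q_s ,
\]
% k: tissue thermal conductivity; \rho_b, c_b, \omega_b: blood density,
% specific heat, and perfusion rate; T_a: arterial blood temperature;
% q_m: metabolic heat generation; q_s: tumor heat-source term
```

Solving the inverse problem then amounts to recovering the parameters of \(q_s\) (depth, intensity, radius) from the surface temperature field \(T\).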
Apr. 27, 10AM, KHW061
Professor Patrick M. Boyle • Johns Hopkins School of Medicine
Computational Cardiology: Engineering Radical New Approaches for the Treatment of Heart Rhythm Disorders
Cardiac arrhythmia is a leading cause of morbidity and mortality worldwide. Although great strides have been made towards effective clinical treatments, procedure success rates remain unacceptably low in large patient sub-populations. As such, there is an immediate need to develop new therapeutic paradigms that enable safe, effective, and permanent arrhythmia termination. In this seminar, we will discover how computational modelling of the heart is ideally poised to address this challenge. Simulations executed in image-based finite element models can reveal the mechanistic underpinnings of arrhythmia dynamics in each patient, enabling the development of personalized treatment strategies. Moreover, modelling can be used to assess the feasibility of novel anti-arrhythmic devices based on emerging technologies, helping to guide and accelerate the engineering design process. Three case studies will be presented to illustrate the potential power of this approach: (1) development of custom-tailored ablation plans for patients with persistent atrial fibrillation, (2) identification of arrhythmogenesis mechanisms in a child with a rare form of genetic mosaicism, and (3) exploration of light-based optogenetic defibrillation of lethal ventricular arrhythmia. To conclude, we will discuss the future of cardiac modelling and outline how it will influence a new generation of arrhythmia research and treatment.

Dr. Boyle is currently an Assistant Research Professor at the Institute for Computational Medicine at the Whiting School of Engineering, Johns Hopkins University. He received his Ph.D. in Biomedical Engineering from the University of Calgary in 2011. His research interests include computational modeling of cardiac arrhythmia, custom-tailoring of treatment plans for persistent atrial fibrillation ablation based on patient-specific simulations, and exploring the feasibility of optogenetics-based alternatives to shock-based anti-arrhythmia therapy.
Apr. 26, 9:30AM, ENG471
Norhan Mostafa Mansour • MASc Thesis Defence
Lightning Environment in the Vicinity of the CN Tower During Major Storms
In this thesis, based on North American Lightning Detection Network (NALDN) data and the return-stroke currents recorded at the CN Tower, the lightning environment within 100 km of the CN Tower is thoroughly investigated for two years (2005 and 2011) during which major storms took place at and in the vicinity of the tower. It was possible to time-match the tower's return-stroke current records with those detected by the network in order to exclude the tower's return strokes from the network data, allowing the lightning environment in the vicinity of the tower to be properly investigated. An extensive statistical analysis of non-CN Tower flash/stroke characteristics (e.g., flash and stroke density, monthly and hourly rates of occurrence, flash multiplicity, stroke location, polarity, and peak-current estimate) has been accomplished, especially for periods when the tower was heavily struck by lightning. On Aug. 24, 2011, video records showed that the tower was struck by 52 flashes within about 84 minutes, the most intense storm ever observed at the tower. Based on video and return-stroke current records, 20 of these flashes proved to contain only initial-stage currents. The remaining 32 flashes were found to contain a total of 161 return strokes, resulting in an average flash multiplicity (number of return strokes per flash) of 5, markedly higher than the average multiplicity of 2.8 for non-CN Tower flashes occurring in the vicinity of the tower. Furthermore, on Aug. 19, 2005, the tower was struck by 6 flashes, containing 38 strokes, within about 90 minutes, resulting in an average multiplicity of 6.3, substantially higher than the average multiplicity of 2 for flashes occurring in the vicinity of the tower.
Since the tower is repeatedly hit by lightning and its flashes produce a markedly higher number of strokes, it poses an electromagnetic (EM) interference risk to nearby sensitive installations, including those in downtown Toronto. One may ponder the possible lightning protection that the nearby CN Tower affords the Rogers Centre; however, the EM interference, which covers a large extended area, does not balance the possible benefit of protecting the Rogers Centre unless it is full of spectators.
Apr. 21, 12PM, ENG460
Md Atiqul Islam • M.Eng. Project Defense
Design and Implementation of Automatic Phase Changer
An Automatic Phase Changer (APC) automatically changes the phase, as the name suggests. In a three-phase power system, the three inputs of the APC circuit are connected to the three phases of the system, and its three outputs are connected to three different loads. These loads always need their normal rated voltage for proper operation; if the voltage of any phase drops below the nominal rating, the loads may malfunction. This is where the APC comes into action: when a phase voltage drops below its nominal rating, the APC provides the correct voltage level to the load connected to that phase. In the APC circuit, all phases are connected to each other through relays. Phase 1 is connected to phase 2, phase 2 to phase 3, and phase 3 to phase 1 when their respective relays are ON. A relay is turned ON when the voltage of the related phase drops below the standard value. The relays are operated by an op-amp and a BJT: the op-amp provides the voltage to switch the BJT ON or OFF, and when the BJT is ON the relay receives its operating voltage. The op-amp acts as a comparator, with its inverting and non-inverting terminals connected to a voltage-divider circuit. The op-amp and relays get their DC bias voltage from a step-down transformer rated 220 V AC to 12 V-0-12 V. The three inputs of the APC circuit are connected to these three points, which indicate that two phases have voltage below the nominal rating. The power rating of the loads must be considered, because in the worst case a single phase carries the loads of all three phases, which can create an overloading problem; hence, that phase must be capable of handling three loads. The impact of this circuit on the total system still needs to be examined; therefore, the circuit demands further improvement to make it more robust.
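The comparator-and-relay behaviour described above can be sketched in software (this is an illustrative model, not the project's circuit; the nominal voltage and trip margin are assumed values):

```python
NOMINAL = 220.0            # nominal phase voltage (V) -- assumed value
THRESHOLD = 0.9 * NOMINAL  # undervoltage trip point -- assumed margin

def route_loads(v):
    """v: voltages of phases 1..3; returns which phase feeds each load.

    Load i normally draws from phase i. If that phase sags below the
    threshold, its relay switches the load to the next phase
    (1 -> 2, 2 -> 3, 3 -> 1), mirroring the relay wiring described above.
    """
    source = []
    for i in range(3):
        if v[i] >= THRESHOLD:
            source.append(i + 1)            # relay OFF: load stays on its own phase
        else:
            source.append((i + 1) % 3 + 1)  # relay ON: load moves to the next phase
    return source

print(route_loads([230.0, 150.0, 228.0]))  # phase 2 sagged -> [1, 3, 3]
```

The overloading caveat in the abstract is visible here: if two phases sag, one healthy phase ends up feeding multiple loads.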
Apr 13, 12PM, ENG106
Prof. Hong Zhang • U of Alberta
Visual Place Recognition with Deep Convolutional Neural Networks
Visual place recognition (VPR) answers the question of whether the current view - of a robot or a mobile device - comes from a place or location that has been visited in the past. The ability to recognize a place visually is a crucial algorithmic component in solving many problems in robotics and content-based image retrieval, among others. VPR is challenging when the current camera view has changed significantly from that in previous visits to the same place, due to variation in the camera viewpoint, illumination, seasonal and weather conditions, etc. In this talk, I will describe our recent research that addresses VPR by exploiting the remarkable performance of deep convolutional neural networks (ConvNets). ConvNets have recently been shown to exhibit remarkable condition-invariance in solving object detection and recognition tasks. Building on this success, our work further shows how to use ConvNets to solve VPR accurately and efficiently. I will also highlight how to integrate VPR in a robot navigation system for effective robot localization.
Dr. Hong Zhang received his Ph.D. degree from Purdue University in 1986 in Electrical Engineering, with a thesis on robot manipulation and force control. Upon completing post-doctoral training at the University of Pennsylvania, he joined the Department of Computing Science, University of Alberta, Canada in 1988 where he is currently a Full Professor. Dr. Zhang’s research interests span robotics, computer vision, and image processing, and his current research focuses on visual robot navigation in its indoor and outdoor applications. Dr. Zhang holds an NSERC Industrial Research Chair in Intelligent Sensing, and is a member of the NSERC Canadian Strategic Network on Field Robotics (NCFRN). He is an associate editor of IEEE Transactions on Cybernetics, and the General Chair of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). He is a Fellow of the IEEE, and a Fellow of the Canadian Academy of Engineering.
Apr. 7, 4PM, ENG471
Muhammad Mohsin Babar • M.Eng. Project Defense
Design of an Algebraic Signature Analyzer for Mixed-Signal Systems Testing
While the design of signature analyzers for digital circuits has been well researched in the past, the common design technique for mixed-signal systems is based on the rules of an arithmetic finite field; applying that technique to systems with an arbitrary radix is a challenging task, and the resulting devices possess high hardware complexity. We offer a method for designing an algebraic signature analyzer that can be used for mixed-signal system testing. The analyzer does not contain carry-propagating circuitry, which improves its performance as well as its fault tolerance. The signatures possess an interesting property: if the input analog signal is imprecise within certain bounds (an inherent property of analog signals), then the generated signature is also imprecise within certain bounds. The proposed technique is simple and applicable to systems of any size and radix, with low hardware complexity. The technique can also be used in algebraic coding and cryptography.
Apr. 7, 3PM, ENG471
Mohammed Faruque Ahmed • M.Eng. Project Defense
Mixed Signal Testing System Technique with Algebraic Signature Analyzer without Carry Propagation
A signature analyzer is widely used for mixed-signal system testing, but its hardware is complex to implement when the design technique is based on the rules of an arithmetic finite field with arbitrary radix, which is a challenging task. To avoid this complexity, this project presents an algebraic signature analyzer that can be used for mixed-signal testing and does not contain carry-propagation circuitry, improving performance and fault tolerance. The technique is simple and applicable to a system of any size or radix. The hardware complexity is very low compared to conventional methods, and the approach can also be used in algebraic coding and cryptography.
Apr 06, 12PM, ENG106
Prof. Chintha Tellambura • U of Alberta
Wireless energy harvesting for underlay device-to-device (D2D) networks
Energy harvesting underlay device-to-device (D2D) networks are a promising solution to increase spectral and energy efficiency of wireless systems. However, to what extent is the performance of such networks affected by spatial randomness, temporal correlations, power control procedures, and channel uncertainties?
To answer this question, we consider two environments. First, we consider an environment with a multi-channel primary-user network whose nodes and D2D transmitters are spatially distributed as a homogeneous Poisson point process, and where wireless signals are subject to log-distance path loss, Rayleigh fading, and path-loss-inversion-based power control. We derive expressions for the ambient radio-frequency power available for harvesting at a D2D transmitter. Furthermore, we derive the probability of a successful energy harvest for single-slot and multi-slot harvesting schemes, and the coverage performance of a D2D receiver when a D2D transmitter is assigned to a sub-band randomly. We determine suitable parameters for energy harvesting.
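The probability of a successful energy harvest in such a setting can be approximated numerically. The sketch below is an illustrative Monte Carlo simulation under simplified assumptions (no fading, no power control, arbitrary parameter values), not the talk's analytical derivation:

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson count (Knuth's method; adequate for moderate lambda)."""
    l = math.exp(-lam)
    k, p = 0, 1.0
    while p > l:
        k += 1
        p *= rng.random()
    return k - 1

def harvest_probability(density, radius, p_tx, alpha, threshold,
                        trials=2000, seed=1):
    """Estimate P(aggregate harvested RF power >= threshold) at the origin.

    Transmitters form a homogeneous PPP with the given density (nodes/m^2)
    in a disk of the given radius (m); each contributes p_tx * d**(-alpha)
    (log-distance path loss; fading omitted for brevity).
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        n = poisson(rng, density * math.pi * radius ** 2)
        power = 0.0
        for _ in range(n):
            # distance of a uniform point in the disk, with a 1 m exclusion zone
            d = max(radius * math.sqrt(rng.random()), 1.0)
            power += p_tx * d ** (-alpha)
        successes += power >= threshold
    return successes / trials

print(harvest_probability(0.001, 100.0, 1.0, 3.0, 1e-6))
```

Sweeping the density or threshold in such a simulation is a common way to sanity-check closed-form PPP results of the kind derived in the talk.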
In the second case, we similarly consider primary and underlay nodes distributed randomly in the 2-D plane as homogeneous Poisson point processes (PPPs). However, underlay transmitters scavenge power from primary transmitters only when they are within specific harvesting regions, and are able to transmit as long as they are outside a guard region surrounding a primary receiver. The primary and underlay systems are assumed to perform power control based on path-loss inversion, and an underlay transmitter requires N charging slots to fully charge its batteries after depletion. We consider two cases of power depletion after an underlay transmission: 1) full power depletion, and 2) partial power depletion based on distances to the intended receivers. We derive the probability of a successful charge by mapping the PPP of primary transmitters to an equivalent PPP incorporating random transmit powers, and use a Markov chain to derive the probability of a successful transmission while incorporating temporal effects for the two aforementioned power-depletion scenarios. We determine suitable conditions for energy harvesting.
Mar. 17, 2017, 10AM, ENG460
Qiang Wei • Internal Ph.D. Defense
Current Source Converter-Based Offshore Wind Farm: Configuration, Modulation, and Control
Offshore wind power has been attracting great attention because of its considerable resources, higher and steadier wind speed, and smaller environmental impact. Recently, a current source converter (CSC)-based configuration is considered a highly promising solution for offshore wind farms as the offshore substation used in existing systems can be eliminated. This thesis develops new configurations, modulation schemes, and control schemes to improve the performance of the CSC-based series-connected offshore wind farm.
Modular medium-frequency transformer (MFT)-based configurations are proposed for medium-voltage (MV) and low-voltage (LV) turbine systems. Voltage balance control and current balance control are developed for the two configurations, respectively. Compared with existing series-connected configurations, the proposed ones feature smaller size and weight. A natural-sampling space vector modulation (NS-SVM) is proposed for grid-side CSCs. Compared with conventional SVM, NS-SVM preserves high dynamic performance and low switching/sampling frequency while featuring superior low-order harmonic performance. In addition, the best space vector sequence in terms of low-order harmonic performance under NS-SVM is investigated and selected for grid-side CSCs.
A power balance control is proposed for grid-side series-connected CSCs. The possible imbalance of power among series-connected CSCs is investigated and quantitatively defined. The proposed balancing scheme enables equal power distribution among series-connected CSCs in the full operation range.
An optimal dc-link current control is proposed for the CSC-based offshore wind farm under bipolar operation. Bipolar operation gives a lower system insulation level and higher reliability, but poses a challenge for the dc-link current control of the offshore wind farm. To solve this issue, an optimal dc-link current control is developed, through which higher efficiency and flexibility are obtained.
An optimized control strategy is proposed for the CSC-based offshore wind farm with reduced cost and improved efficiency. The nominal number of onshore CSCs is optimized, which gives lower cost and higher efficiency to the system. In addition, an optimal control scheme is developed for onshore CSCs, which enables a further reduction in operating loss under the condition of inconsistent wind speeds at the turbines.
Simulation and experimental verifications are provided to verify the performance of the proposed configurations, modulation schemes, and control schemes.
Mar. 15, 2017, 10AM, ENG460
Ramyar Rashed Mohassel • Ph.D. Defense
Novel Adaptation of Optimization Algorithms for Electricity Consumption Management Using Load Moderation Centers in Smart Grids
With the introduction of new technologies, concepts, and approaches in power transmission, distribution, and utilization, such as Smart Grids (SG), Advanced Metering Infrastructure (AMI), Distributed Energy Resources (DER), and Demand Side Management (DSM), new capabilities have emerged that enable efficient use and management of power consumption at the micro level in households and building complexes. At the same time, the integration of Information Technology (IT) and instrumentation has brought Building Management Systems (BMS) to our homes, to better plan and utilize available sources while considering residents' preferences. The idea of combining the capabilities and advantages offered by SG, smart meters, DERs, DSM, and BMS is the backbone of this thesis, and has resulted in a unique power management unit called the Load Moderation Center (LMC), an integrated part of the BMS. This device, upon successful completion, will be able to plan consumption, effectively utilize available sources, including the grid, renewable energies, and storage, and significantly reduce costs for end users as well as utility providers. To combine these technologies and capabilities, a solid mathematical tool that ensures optimal operation of such a complex system is required. Game-theoretic methods, along with other optimization techniques known for similar applications, are therefore developed and utilized in this work. The aim of this PhD research is to apply and embed optimization techniques in the LMC at the residential household level, take the results to the community level, and apply Game Theoretic Optimization (GTO) at the community level for effective allocation of grid power to the demanding households in that community. GTO has been adopted for this work due to the satisfactory results it has shown in supply-demand optimization and matching applications when rational decision makers are involved.
The road map for this project comprises a comprehensive literature review of different game-theoretic optimization methods for SG, DSM, and BMS applications; identification of the best approach based on simulation results; and adaptation and modification of that technique, including complete revamping if needed, to best fit the specific application of the LMC. The optimization task can be defined as finding the best scenario for allocating available power sources to the demanding loads within the household, based on real-time constraints, at a reduced price at both ends of the supply-demand chain.
March 09, 12PM, ENG LG06
Prof. Manoj Sachdev • Past Chair, Elect and Comp Eng. Dept., U of Waterloo
Challenges and Opportunities for Engineers
Engineers are at the forefront of the technology revolution; evidence of disruptive technologies surrounds us. Future technological innovations and developments will significantly impact our lives in an increasingly interconnected, globalized world. In this talk we will discuss some of the challenges and opportunities ahead and what role engineers can play in improving quality of life for all.

Manoj Sachdev is a professor in the electrical and computer engineering department at the University of Waterloo. His research interests include low-power and high-performance digital circuit design, mixed-signal circuit design, and reliability and manufacturing issues of nano-metric integrated circuits. He has written five books and has contributed over 200 technical articles to conferences and journals. He holds more than 35 granted and pending US patents in the broad area of VLSI circuit design and test.

He, his students, and his colleagues have received several international awards. In 1997, he received the best paper award at the IEEE European Design and Test Conference. In 1998, he was a co-recipient of the honorable mention award at the IEEE International Test Conference. He received the best panel award at the 2004 IEEE VLSI Test Symposium. In 2011, he was a co-recipient of the best paper award at the IEEE International Symposium on Quality Electronic Design. In 2015, he was a co-recipient of the best poster award at the IEEE Custom Integrated Circuits Conference. He is a Fellow of the IEEE, a Fellow of the Engineering Institute of Canada, and a Fellow of the Canadian Academy of Engineering.
March 06, 1PM, ENG 106
Tom Murad, Ph.D., P.Eng., FEC, SMIEEE • Director, Head of Siemens Engineering and Technology Academy
Digitization - Siemens Vision
Dr. Tom Murad has over 35 years of professional engineering and technical operations executive management experience, including more than 10 years of academic and R&D work in industrial controls and automation. For the last five years, he has been the Head of the Expert House and Engineering Director in the Industry Sector of Siemens Canada. Prior to joining Siemens Canada, Tom was the Senior Vice President and COO of AZZ Blenkhorn & Sawle, an engineering systems integrator and technical solutions provider in Ontario specialized in power distribution and controls for various industrial, utilities, and infrastructure applications. He previously held various VP and director positions in a number of engineering and industrial organizations internationally, and contributed to many large global industrial projects. Dr. Murad is a Fellow of Engineers Canada and a P.Eng. member of Professional Engineers Ontario (PEO), APEGA in Alberta, and NAPEG in the Northwest Territories, as well as a Senior Member of the IEEE in various technical societies. Tom earned a Bachelor of Engineering and a Doctorate (Ph.D.) in Power Electronics and Industrial Controls from Loughborough University of Technology in the UK. He also received a Leadership Program Certificate from the Schulich School of Business, York University, in Ontario, Canada. Currently, Dr. Murad is the Chair of the IEEE Toronto Section Executive Committee (2016-2017), has been an active member of the PEO licensing "Engineering Experience Review" Committee for the last 14 years, and serves on a number of advisory boards in industry and academia.
March 02, 1PM, ENG 106
Hassan Kojori, Ph.D., FIEEE • Senior Principal Engineer, Honeywell Aerospace
More Electric Aircraft - A Closer Look at Modern Aircraft Design
Abstract: The More Electric Aircraft (MEA) is based on the concept of using electrical power to drive aircraft subsystems currently powered by hydraulic, pneumatic, or mechanical means, including utility and flight-control actuation, the environmental control system, lubrication and fuel pumps, and numerous other utility functions. In this seminar, Dr. Kojori presents an overview of the More Electric Aircraft, discusses related challenges and potential opportunities, and highlights the importance of power electronics and various specialized magnetic components. These advanced technologies enable the more electric aircraft, significantly reduce the size, weight, and life-cycle cost of the overall system, improve reliability, and ease manufacturing and maintenance; they are applicable not only in aerospace but also have wide application in general industry.
Bio: Dr. Hassan Kojori has over 30 years of experience in the fields of power conversion, power distribution, energy optimization, and related advanced systems control. Currently a Senior Principal Engineer with Honeywell, he is the Conversion Portfolio Leader for Aero Advanced Technologies, responsible for research, development, and technology demonstration of advanced electric power systems for More Electric Aircraft and tactical vehicles. His original work on numerous technology firsts has resulted in more than 45 patent disclosures (26 granted), several trade secrets, and more than 50 technical papers and proprietary industry reports. Dr. Kojori has been actively engaged in collaborative research in the general area of energy optimization and systems control, and in teaching and supervising graduate students with several leading local and international universities for the past 20 years. He was an adjunct professor in the Departments of Electrical and Computer Engineering at the University of Toronto and Ryerson University for over 10 years starting in 2000. He is currently an industry professor in the Institute for Automotive Research and Technology at McMaster University.
Feb. 16, 2017, 12P, ENG106
Thia Kiruba • Distinguished Engineering Professor and Canada Research Chair in Information Fusion
Object tracking, Sensor Fusion and Situational Awareness for Assisted- and Self-Driving Vehicles
Abstract: The automotive industry has been undergoing a major revolution in the last few years. Rapid advances have been made in assisted- and self-driving vehicles. As a result, vehicles have become more efficient and more automated. A number of automotive as well as technology companies are in the process of developing smart cars that can drive themselves. While totally self-driving cars are still in their infancy, some features like self-parking, proximity detection and lane identification have already made it into production in high-end vehicles. In spite of these recent developments, significantly more research is needed in order to perfect these nascent technologies and to make them ready for mass production.
In this talk, we aim to discuss a number of problems related to assisted- and self-driving vehicles, potential solutions, and directions for research & development. The issues discussed will span multitarget tracking, multisensor fusion, and situational awareness within the context of smart cars. We will also present some of the algorithms available in the open literature as well as those we have developed recently. In addition, we will discuss related computational issues and sensor technologies. Finally, we will present some results on real data.
Bio: Professor T. Kirubarajan (Kiruba) holds the title of Distinguished Engineering Professor and the Canada Research Chair in Information Fusion at McMaster University, Canada. He has published about 350 research articles, 11 book chapters, one standard textbook on target tracking, and four edited volumes. In addition to conducting research, he has worked extensively with government departments and companies to process real data and to transition his research to the real world through his company TrackGen. Currently, he is working with a major auto manufacturer on developing tracking, fusion, and situational-awareness algorithms for assisted- and self-driving vehicles.
Jan. 25, 2017, 11AM, ENG460
Praneeth Sakhamuri • M.A.Sc. Defence
Deployment of Virtual Machines for Tiered Applications in Cloud Systems with Optimized Resource Allocation Based on Availability SLAs
Deploying and managing a high-availability tiered application in the cloud is challenging, as it requires provisioning the VMs necessary to keep the application available. An application is available if it works and responds in a timely manner under varying workloads. Application service providers (ASPs) need to allocate a specified number of working VM copies for each server, each with at least a given minimum computing power, to meet the response-time requirement; otherwise, response-time failures may occur. This thesis formulates an optimization problem that determines the number and type of VMs needed for each server so as to minimize cost while guaranteeing the availability SLA (Service-Level Agreement) for different workloads. The results demonstrate that having a mixture of different VM types is more cost-effective than restricting an application to a single VM type, and that buying only the cheapest VMs for an application is not always better cost-wise.
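A toy version of this optimization can be brute-forced for a single server tier: choose VM counts that meet a required aggregate computing power at minimum cost. The VM catalog below is hypothetical (illustrative prices and power units, not the thesis's model, which also incorporates the availability SLA):

```python
from itertools import product

# Hypothetical VM catalog: name -> (hourly cost, computing-power units)
VM_TYPES = {"small": (0.05, 1), "medium": (0.09, 2), "large": (0.16, 4)}

def cheapest_mix(required_power, max_per_type=8):
    """Exhaustively search VM-count mixes meeting the power requirement.

    Returns (cost, {vm_type: count}) for the cheapest feasible mix.
    """
    best = None
    for counts in product(range(max_per_type + 1), repeat=len(VM_TYPES)):
        cost = power = 0.0
        for n, (c, p) in zip(counts, VM_TYPES.values()):
            cost += n * c
            power += n * p
        if power >= required_power and (best is None or cost < best[0]):
            best = (cost, dict(zip(VM_TYPES, counts)))
    return best

print(cheapest_mix(7))  # the optimum here mixes all three types
```

Even this toy instance reproduces the thesis's qualitative finding: for a requirement of 7 power units, the cheapest solution mixes types rather than buying only the cheapest VM.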
Jan. 25, 2017, 10AM, ENG471
Udaya Regmi • M.A.Sc. Defence
Energy-efficient operation of a base transceiver station using a Markov decision process
The Information and Communication Technology (ICT) sector accounts for around 3% of total world energy consumption. The sector was expected to grow by 4.9% last year, a trend that will further increase the ICT sector's present contribution of around 2% of global greenhouse gas emissions.
The base transceiver station (BTS), an important but energy-hungry component of the access network in a cellular communication system, is usually resourced to serve busy-hour traffic but remains under-utilized for most of the 24-hour period, irrespective of the traffic load. Hence, self-organizing networks (SON) that react to the variable traffic load are being studied to minimize energy consumption without compromising the QoS of the network.
A discrete-time Markov decision process (DTMDP) is investigated in this thesis as an optimization tool to manage the operation of a BTS. An MDP finds an optimal policy that takes state-specific optimal decisions and collects immediate rewards so as to maximize the long-term expected reward. BTS operation has therefore been formulated as an MDP problem in which channel occupancy levels are grouped to form the states of a birth-death process, which are mapped to different modes of operation through actions. The modes are defined as the configuration of the BTS site, depending on how many sectors are turned ON or OFF. The rewards are related to the energy savings when the BTS operates as a SON, by means of dynamic sectorization, relative to operating the BTS in the uppermost mode irrespective of the traffic load. Further, a transition cost to address mode-switching cost and a delay cost to address QoS are also discussed and elaborated through appropriate simulations to realize the actual energy savings.
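The MDP formulation above can be sketched with value iteration on a toy instance (all numbers are illustrative, not from the thesis; the switching-cost term is omitted for brevity): states are occupancy levels, actions are sector modes, and the reward is energy savings minus a QoS (delay) penalty:

```python
# Toy MDP: 3 occupancy states (low, med, high), 2 modes of operation.
# Action 0: one sector ON (saves energy); action 1: all sectors ON.
STATES, ACTIONS = range(3), range(2)
GAMMA = 0.9  # discount factor

# P[a][s][t]: birth-death-style occupancy transitions
# (taken as mode-independent here for simplicity)
P = [[[0.7, 0.3, 0.0],
      [0.2, 0.6, 0.2],
      [0.0, 0.3, 0.7]]] * 2

def reward(s, a):
    saving = 2.0 if a == 0 else 0.0                   # energy saved with sectors OFF
    delay_cost = 3.0 if (a == 0 and s == 2) else 0.0  # QoS penalty at high load
    return saving - delay_cost

def value_iteration(eps=1e-8):
    """Return the optimal value function and greedy policy."""
    V = [0.0] * len(STATES)
    while True:
        Q = [[reward(s, a) + GAMMA * sum(P[a][s][t] * V[t] for t in STATES)
              for a in ACTIONS] for s in STATES]
        newV = [max(q) for q in Q]
        if max(abs(newV[s] - V[s]) for s in STATES) < eps:
            return newV, [q.index(max(q)) for q in Q]
        V = newV

V, policy = value_iteration()
print(policy)  # [0, 0, 1]: turn all sectors ON only under high load
```

The resulting policy mirrors the dynamic-sectorization idea: sectors are switched off at low and medium occupancy and fully on only when the delay penalty at high load outweighs the energy saving.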
Jan. 20, 2017, 2PM, ENG460
Roozbeh Manshaei • Ph.D. Defence
Tangible Visual Analytics: the integration of tangible interactions and computational techniques for biological data visualization and modelling with experts-in-the-loop
Understanding and interpreting the inherently uncertain nature of complex biological systems, as well as the time to an event in these systems, are notable challenges in the field of bioinformatics. Overcoming these challenges could potentially lead to scientific discoveries, for example, paving the way for the design of new drugs to target specific diseases such as cancer, or helping to apply more effective treatment for these diseases. In general, reverse engineering of these types of biological systems using online datasets is difficult. In particular, finding a unique solution to these systems is hard due to their complexity and the small sample size of datasets. This remains an unsolved problem due to such uncertainty, and the often intractable solution space of these systems. The term "uncertainty" describes the application-based margin of significance, validity, and efficiency of inferred or predictive models in their ability to extract characteristic properties and features describing the observed state of a given biological system. In this work, uncertainties within two specific bioinformatics domains are considered, namely "gene regulatory network reconstruction" (in which gene interactions/relationships within a biological entity are inferred from gene expression data) and "cancer survivorship prediction" (in which patient survival rates are predicted based on clinical factors and treatment outcomes). One approach to reducing uncertainty is to apply different constraints that have particular relevance to each application domain. In gene network reconstruction, for instance, the consideration of constraints such as sparsity, stability and modularity can inform and reduce uncertainty in the inferred reconstructions. In cancer survival prediction, meanwhile, there is uncertainty in determining which clinical features (or feature aggregates) can improve the associated prediction models.
The inherent lack of understanding of how, why and when such constraints should be applied, however, prompts the need for a radically new approach. In this dissertation, a new approach is thus considered to aid human expert users in understanding and exploring the inherent uncertainties associated with these two bioinformatics domains. Specifically, a novel set of tools is introduced and developed to assist in evidence gathering, constraint definition, and refinement of models toward the discovery of better solutions. This dissertation employs computational approaches, including convex optimization and feature selection/aggregation, in order to increase the chances of finding a unique solution. These approaches are realized through three novel interactive tools that employ tangible interaction in combination with graphical visualization to enable experts to query and manipulate the data. Tangible interaction provides physical embodiments of data and computational functions in support of learning and collaboration. Using these approaches, the dissertation demonstrates: (1) a modified stability constraint for reconstructing gene regulatory networks that improves the accuracy of the predicted networks, (2) a novel modularity constraint (neighbor norm) for extracting available structures in the data, validated against the Laplacian eigenvalue spectrum, and (3) a hybrid method for estimating overall survival and inferring effective prognosis factors for patients with advanced prostate cancer that improves the accuracy of survival analysis.
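To make the sparsity constraint concrete, here is a minimal sketch on synthetic data (the generative model, penalty weight and solver choice are all illustrative, not the dissertation's method): a sparse gene-interaction matrix A is inferred from expression snapshots by L1-regularized least squares, solved with the ISTA proximal-gradient method, a standard convex-optimization approach.

```python
import numpy as np

# Synthetic data: x_{t+1} ~ A x_t with a sparse true interaction matrix.
rng = np.random.default_rng(0)
n_genes, n_samples = 5, 40
A_true = np.diag(np.full(n_genes, 0.5))
A_true[0, 3] = 0.4                      # one cross-gene interaction
X = rng.normal(size=(n_samples, n_genes))
Y = X @ A_true.T + 0.01 * rng.normal(size=(n_samples, n_genes))

def ista(X, Y, lam=0.05, iters=500):
    """Solve min_A ||Y - X A^T||_F^2 / (2m) + lam * ||A||_1 by ISTA."""
    m = X.shape[0]
    step = m / np.linalg.norm(X.T @ X, 2)   # 1 / Lipschitz constant
    A = np.zeros((Y.shape[1], X.shape[1]))
    for _ in range(iters):
        grad = (X @ A.T - Y).T @ X / m       # gradient of the smooth term
        Z = A - step * grad
        A = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # shrink
    return A

A_hat = ista(X, Y)                           # sparse reconstruction
sparsity = np.mean(np.abs(A_hat) < 1e-3)     # fraction of pruned entries
```

The soft-thresholding step is what enforces sparsity: interactions whose evidence in the data falls below the penalty are driven exactly to zero, which is one way such a constraint narrows the otherwise intractable solution space.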
Jan. 18, 2017, 2PM, ENG471
Alejandro Emerio Alfonso Oviedo • M.Eng. Project Defense
Stereo Vision System for Depth Computation of Moving Object
This work targets one real-world application of stereo vision technology: the computation of the depth of a moving object in a scene. It uses a stereo camera set that captures the stereoscopic view of the scene. A background subtraction algorithm, with a first-order recursive filter as the background updating method, is used to detect the moving object. A mean filter is applied in the pre-processing stage, combined with frame downscaling to reduce the background storage. After thresholding the background subtraction result, the binary image is sent to the software processing unit, which computes the centroid of the moving area and the measured disparity, estimates the disparity with a Kalman filter, and finally calculates the depth from the estimated disparity. The implementation successfully achieves the objectives of 720p resolution at 28.68 fps with a maximum permissible depth error of 4 cm (1.066%) over a depth measuring range from 25 cm to 375 cm.
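The final two stages can be sketched as follows. The focal length, baseline and disparity samples are hypothetical, not the project's calibration: a scalar Kalman filter smooths the measured disparity of the tracked centroid, and depth then follows from the standard stereo relation Z = f·B/d.

```python
FOCAL_PX = 1000.0    # focal length in pixels (assumed)
BASELINE_CM = 12.0   # distance between the two cameras (assumed)

def kalman_1d(measurements, q=1e-3, r=1.0):
    """Constant-state scalar Kalman filter over a disparity track."""
    x, p = measurements[0], 1.0     # initial state estimate and variance
    out = []
    for z in measurements:
        p += q                      # predict: disparity assumed ~constant
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with the new measurement
        p *= 1.0 - k
        out.append(x)
    return out

def depth_cm(disparity_px):
    """Stereo triangulation: Z = f * B / d."""
    return FOCAL_PX * BASELINE_CM / disparity_px

noisy = [40.0, 41.2, 39.1, 40.4, 39.8, 40.6]   # disparities in pixels
filtered = kalman_1d(noisy)
z = depth_cm(filtered[-1])    # estimated depth of the moving object, in cm
```

With these assumed parameters a ~40 px disparity lands near 3 m, i.e. inside the 25-375 cm measuring range quoted above; in practice f and B come from stereo calibration.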
Jan. 18, 2017, 12N, ENG460
Ghassem Tofighi • Final Ph.D. Defence
DAIA: ENGAGEMENT DETECTION FRAMEWORK FOR HAND GESTURE AND POSTURE RECOGNITION
Hand gesture and posture recognition plays an important role in Human Computer Interaction (HCI) applications. Gestures and postures are the main attributes in object or environment manipulation using vision-based interfaces. However, before interpreting them as operational activities, a meaningful involvement with the target object should be detected. This meaningful involvement is called engagement. Upper-body posture gives significant information about the user's engagement. In this research, as our first contribution, a novel multi-modal model for engagement detection, called the Disengagement, Attention, Intention, Action (DAIA) framework, is presented. Disengagement happens when the user is disengaged from the target object. Attention occurs when the user pays attention to the target but does not yet intend to take any action. In the Intention state, the user intends to perform an action but has not yet done so. Finally, in the Action state, the user is performing an action with his/her hand. Using DAIA, the spectrum of mental states involved in performing a manipulative action is quantized into a finite number of engagement states. The second contribution of this research is the design of multiple binary classifiers based on upper-body postures for state detection. 3D skeleton data are extracted from depth images and used to extract body posture information. One of these binary classifiers, the Facing classifier, is designed based on the body's direction relative to the target object and is used to detect the transition between the Disengagement and Attention states. In addition, by combining the outputs of all binary classifiers, an engagement feature vector is created. This feature vector could be extended using other channels of biometric information such as voice or gaze. Using the engagement feature vector, an SVM classifier is trained.
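A "Facing"-style classifier might look like the following minimal sketch. The coordinate convention, joint layout and 30° threshold are assumptions for illustration, not the thesis's values: viewed from above, the forward normal of the shoulder line is compared with the direction from the torso centre to the target.

```python
import math

def facing(left_shoulder, right_shoulder, target, max_angle_deg=30.0):
    """True if the body direction points at the target (top-down view).

    Joints are (x, y, z) in metres; convention: the user faces -z when
    the left shoulder has the smaller x coordinate.
    """
    lx, _, lz = left_shoulder
    rx, _, rz = right_shoulder
    cx, cz = (lx + rx) / 2.0, (lz + rz) / 2.0   # torso centre, top view
    sx, sz = rx - lx, rz - lz                   # shoulder line
    nx, nz = sz, -sx                            # forward normal of the body
    tx, tz = target[0] - cx, target[2] - cz     # direction to the target
    cosang = (nx * tx + nz * tz) / (math.hypot(nx, nz) * math.hypot(tx, tz))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return angle <= max_angle_deg
```

The binary output of this and the other per-state classifiers would then be stacked into the engagement feature vector that the SVM stage consumes.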
Jan. 16, 2017, 10AM, ENG460
Nipu Barai • M.A.Sc. Defence
Human Visual System Inspired Saliency Guided Edge Preserving Tone-Mapping For High Dynamic Range Imaging
With the growing popularity of High Dynamic Range Imaging (HDRI), the necessity for advanced tone-mapping techniques has greatly increased. In this thesis, I propose a novel saliency guided edge-preserving tone-mapping method that uses saliency region information of an HDR image as input to a guided filter for base and detail image layer separation. Both high resolution and low resolution saliency maps were used for the performance evaluation of the proposed method. After detail layer enhancement and base layer compression with constant weights, a new edge-preserved tone-mapped image was composed by adding the layers back together with saturation and exposure adjustments. The filter operation is fast due to the use of the guided filter, which runs in O(N) time for N pixels. Both objective and subjective quality assessment results demonstrated that the proposed method has higher edge and naturalness preserving capability, which is homologous to the Human Visual System (HVS), as compared to other state-of-the-art tone-mapping approaches.
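The base/detail pipeline can be sketched as below. This simplification lets the log-luminance guide itself and uses made-up compression weights, whereas the thesis feeds saliency maps into the guided filter; the O(N) cost comes from implementing every window mean with integral images.

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window via integral images (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    S = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    S[1:, 1:] = pad.cumsum(0).cumsum(1)
    k, (h, w) = 2 * r + 1, img.shape
    return (S[k:k+h, k:k+w] - S[:h, k:k+w]
            - S[k:k+h, :w] + S[:h, :w]) / k**2

def guided_filter(p, r=4, eps=1e-2):
    """Self-guided edge-preserving smoothing (He et al.'s guided filter)."""
    m = box(p, r)
    v = box(p * p, r) - m * m      # local variance
    a = v / (v + eps)              # ~1 near strong edges, ~0 in flat areas
    b = (1.0 - a) * m
    return box(a, r) * p + box(b, r)

# Hypothetical HDR luminance: a smooth gradient spanning three decades.
lum = np.outer(np.linspace(0.1, 100.0, 64), np.ones(64))
log_lum = np.log1p(lum)
base = guided_filter(log_lum)      # base layer: large-scale illumination
detail = log_lum - base            # detail layer: fine structure
tone_mapped = np.expm1(0.4 * base + 1.5 * detail)  # compress base, keep detail
```

Because only the base layer is compressed, edges and texture carried by the detail layer survive the dynamic-range reduction, which is the edge-preserving property the abstract refers to.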
Jan. 12, 2017, 10AM, ENG471
Maryam Nematollahi Arani • Ph.D. Defence
Robust Image labeling using Conditional Random Fields
Object recognition has become a central topic in computer vision applications such as image search, robotics and vehicle safety systems. However, it is a challenging task due to the limited discriminative power of low-level visual features in describing the considerably diverse range of high-level visual semantics of objects. The semantic gap between low-level visual features and high-level concepts is a bottleneck in most systems, and new content analysis models need to be developed to bridge it. In this thesis, algorithms based on conditional random fields (CRF), from the class of probabilistic graphical models, are developed to tackle the problem of multiclass image labeling for object recognition. Image labeling assigns a specific semantic category, from a predefined set of object classes, to each pixel in the image. By capturing the spatial interactions of visual concepts, CRF modeling has proved to be a successful tool for image labeling. This thesis proposes novel approaches to empowering CRF modeling for robust image labeling. Our primary contributions are twofold. To better represent the feature distributions of CRF potentials, new feature functions based on generalized Gaussian mixture models (GGMM) are designed and their efficacy is investigated. This new model proves more successful than Gaussian and Laplacian mixture models: thanks to its shape parameter, the GGMM can properly fit the multi-modal and skewed distributions of data in natural images. Further in this thesis, we apply scene-level contextual information to integrate the global visual semantics of the image with the pixel-wise dense inference of a fully-connected CRF, both to preserve small objects of foreground classes and to make dense inference robust to initial misclassifications of the unary classifier.
The proposed inference algorithm factorizes the joint probability of the labeling configuration and the image scene type to obtain prediction update equations for labeling individual image pixels as well as the overall scene type of the image.
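As background for the GGMM feature functions, here is a minimal sketch of the density family (the mixture parameters below are illustrative only): the shape parameter β recovers the Laplacian at β = 1 and the Gaussian at β = 2, and intermediate or larger values trade off peakedness against tail weight, which is what lets the mixture fit skewed, heavy-tailed feature distributions.

```python
import math

def gen_gaussian_pdf(x, mu, alpha, beta):
    """Generalized Gaussian:
    p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu| / alpha)^beta)
    alpha is the scale, beta the shape parameter."""
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * math.exp(-((abs(x - mu) / alpha) ** beta))

def ggmm_pdf(x, weights, components):
    """Mixture density; components is a list of (mu, alpha, beta) triples."""
    return sum(w * gen_gaussian_pdf(x, *c) for w, c in zip(weights, components))

# Illustrative two-component mixture: one Gaussian-shaped, one Laplacian-shaped.
p = ggmm_pdf(0.3, [0.6, 0.4], [(0.0, 1.0, 2.0), (1.0, 0.5, 1.0)])
```

In a CRF, per-class densities of this form would score how well a pixel's features match each semantic category, i.e. serve as the unary feature functions the abstract describes.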
Jan. 12, 2017, 10AM, ENG460
Ramyar Rashed Mohassel • Ph.D. Defence
NOVEL ADAPTATION OF OPTIMIZATION ALGORITHMS FOR ELECTRICITY CONSUMPTION MANAGEMENT USING LOAD MODERATION CENTERS IN SMART GRIDS
With the introduction of new technologies, concepts and approaches in power transmission, distribution and utilization, such as Smart Grids (SG), Advanced Metering Infrastructures (AMI), Distributed Energy Resources (DER) and Demand Side Management (DSM), new capabilities have emerged that enable efficient use and management of power consumption at the micro level in households and building complexes. On the other hand, the integration of Information Technology (IT) and instrumentation has brought Building Management Systems (BMS) to our homes to better plan and utilize available sources while considering residents' preferences. The idea of combining the capabilities and advantages offered by SG, smart meters, DERs, DSM and BMS is the backbone of this thesis and has resulted in the introduction of a unique power management unit, called a Load Moderation Center (LMC), as an integrated part of the BMS. This device, upon successful completion, will be able to plan consumption; effectively utilize available sources including the grid, renewable energies and storage; and significantly reduce costs for end users as well as utility providers. To combine these technologies and capabilities, a solid mathematical framework that ensures optimal operation of such a complex system is required. Game-theoretic methods, along with other optimization techniques known for similar applications, will therefore be developed and utilized in this proposed work.
The aim of this PhD research is to apply and embed optimization techniques in the LMC at the residential household level, take the results to the community level, and apply a Game Theoretic Optimization (GTO) method at the community level for effective allocation of grid power to the demanding households in that community. GTO has been adopted for this work due to the satisfactory results it has shown in supply-demand optimization and matching applications when rational decision makers are involved. The road map for this project comprises a comprehensive literature review of different game-theoretic optimization methods for SG, DSM and BMS applications; identification of the best approach based on the results of simulations; and adaptation and modification of that technique, including complete revamping if needed, to best benefit the specific application of the LMC. The optimization task can be defined as finding the best scenario for allocating available power sources to the demanding loads within the household, based on real-time constraints, at a reduced price at both ends of the supply-demand chain.