Seminars and Defenses
- Nov. 16, 4P, ENG460
- Ekaterina Korolkova • Senior Teacher, Siberian Federal University, Department of Radio Electronics
- Through-the-Earth Mine Communications - Theory and Practice
Safety is an important factor in the mining industry. Through-the-Earth
(TTE) technology can provide communication both in everyday use and in
emergencies. Developing TTE communication systems raises many questions
about the main system specifications, such as the transmitting frequency,
current, and antenna geometry. This talk covers a TTE system experiment at
the Irtishskaya mine in the Republic of Kazakhstan, research into finding
appropriate system specifications and antenna geometry, and finally the
deployment of the designed system in October 2017. We will also discuss
propagation difficulties in a real mine and engineering difficulties
arising from the mine's electromagnetic environment. We will present
experimental and numerical modeling results, comparing the widely used
loop antenna with the grounded dipole antenna. One of the most interesting
novel results is the influence of the antenna grounding depth on the
signal level in the mine.
- Nov. 16, 9A, ENG471
- Pooya SobheBidari • PhD final defense
- Ultrasound Shear Wave Elastography: Numerical Modeling and Time Frequency Analysis with an Application in HIFU Thermal Lesion Detection
In this work, a new numerical framework is proposed and implemented to simulate acoustic wave propagation in 3D viscoelastic heterogeneous media. The framework is based on the elastodynamic wave equation in which a 3D second-order time-domain perfectly matched layer (PML) formulation is developed to model unbounded media. The numerical framework is discretized by a finite difference formulation and its stability analysis is discussed.
The proposed numerical method is capable of simulating 3D shear and longitudinal acoustic waves for arbitrary source geometries and excitations, together with arbitrary initial and boundary conditions. After validation of the framework, it was used to simulate the propagation of ultrasound shear wave in high intensity focused ultrasound (HIFU) induced thermal lesions located within soft tissue. The parameters in these simulations were obtained from standard double-indentation measurements of the viscoelastic parameters of normal and thermally
coagulated chicken breast tissue samples. A HIFU system was used to induce thermal lesions in tissue.
In this study, a new elastography procedure was also introduced to differentiate between normal tissue and HIFU-induced thermal lesions. This method is based on time-frequency analysis of shear wave propagation within the tissue. In the proposed method, the Wigner-Ville distribution is used as a time-frequency analysis technique to detect the location of the shear wave propagating within the tissue, and to estimate the shear wave speed as well as its center frequency and attenuation coefficient. This method was applied to the acoustic wave propagation simulation results of the HIFU thermal lesion. It was finally used to estimate the local viscoelastic parameters of the medium. It was demonstrated that the proposed method is capable of differentiating the thermal lesions from normal tissue based on their viscoelastic parameters.
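As a concrete illustration of the time-frequency step, a minimal discrete Wigner-Ville distribution can be written as follows; this is a textbook sketch, not the thesis implementation:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal.
    Row n is the spectrum of the instantaneous autocorrelation at time n;
    with this lag convention a tone at normalized frequency f peaks at bin 2*f*N."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        lmax = min(n, N - 1 - n)                 # largest in-bounds half-lag
        taus = np.arange(-lmax, lmax + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n] = np.fft.fft(kernel).real           # kernel is conjugate-symmetric => real spectrum
    return W
```

Tracking the time-frequency ridge of such a distribution is one standard way to read off a propagating wave's arrival time, center frequency, and decay.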
- Nov. 14, 11A, ENG460
- Kamran Masteri Farahani • PhD final defense
- Optimization Models for Distribution Planning and Operation
Smart grid technologies, renewables, energy storage devices, and
electric vehicles will characterize the next generation of distribution
systems. It is important to note that the inclusion of electric vehicles
and renewables, due to their natural power profiles, results in
distribution systems with a peaky load profile and lower asset-utilization
factors. Optimal planning and operation of distribution systems are
important aspects and should account for this changing paradigm.
This thesis aims to develop new solutions for optimal planning and
operation of distribution systems considering these new technologies and
their implications. The thesis specifically aims to use new techniques such
as complementarity in conjunction with classical optimization techniques to
develop new algorithms for optimal planning and operation of distribution
systems. The proposed work includes the following.
Two new distribution planning algorithms are proposed that include the
installation and optimal sizing of Battery Energy Storage System units in
addition to traditional assets such as feeders and transformers. They
incorporate plan and asset lifetimes as a means of establishing the
minimum total annualized costs of new and replacement assets, operation and
maintenance, and customer interruptions. For a fair comparison, all costs
reflect the current year and are annualized over a specific study period.
Although the second technique shares the same basis as the first, it is a
multi-objective algorithm that uses a fuzzy optimization technique to
handle multiple conflicting objectives that cannot be combined into a
single objective because they are expressed in different units. This method
was developed because of the uncertainty in the literature about how to
calculate customer interruption cost.
Then, in order to realize the Smart Radial Distribution System (SRDS) of
the future, a real-time optimal reconfiguration algorithm is proposed that
uses a classical nonlinear optimization technique and guarantees an optimal
solution in the least time. The method minimizes system losses and is
based upon a complementarity technique that transforms a set of
discontinuous solution spaces into a single continuously differentiable
solution space, thus enabling the use of classical nonlinear optimization
techniques without resorting to heuristics.
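The complementarity idea can be illustrated with a toy feeder-selection problem; the numbers are invented and SciPy's SLSQP stands in for the solver used in the thesis. A binary switch state s is relaxed to a continuous variable, and the smooth constraint s(1 - s) = 0 restores integrality while keeping the feasible set describable by differentiable functions only:

```python
from scipy.optimize import minimize

# Toy reconfiguration: choose which of two feeders supplies a load current I.
# Losses are r * I^2 on the chosen feeder; r1, r2 and I are invented values.
r1, r2, I = 0.4, 0.1, 10.0

loss = lambda s: s[0] * r1 * I**2 + (1.0 - s[0]) * r2 * I**2
# Complementarity constraint: every feasible point has s = 0 or s = 1,
# yet no integer variables appear, so a classical NLP solver applies.
comp = {"type": "eq", "fun": lambda s: s[0] * (1.0 - s[0])}

res = minimize(loss, x0=[0.2], bounds=[(0.0, 1.0)], constraints=[comp])
print(res.x[0], res.fun)  # selected switch state and minimum loss
```

Here the solver lands on s = 0 (the lower-resistance feeder), without any combinatorial enumeration of switch states.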
- Nov. 13, 12:30P, ENG460
- Azar Tolouee • PhD internal defense
- Efficient Compressed Sensing Reconstruction Frameworks for Accelerated Cardiac Magnetic Resonance Imaging
Dynamic magnetic resonance imaging (MRI) requires rapid data
acquisition to provide an appropriate combination of spatial and temporal
resolution and volumetric coverage for clinical studies. In the most
challenging clinical situations, conventional dynamic MR scanners are often
incapable of simultaneously providing images with sufficient temporal
resolution and high spatial resolution, and in practice clinicians are
often forced to compromise between these parameters.
Cardiac MRI is the most challenging and inspiring dynamic MRI application.
In cardiac MRI, the main challenge is the sensitivity of reconstruction
methods to large inter-frame motion. The reconstructions often suffer from
temporal blurring and motion-related artifacts at high acceleration factors.
In this dissertation, three novel approaches are proposed to minimize the
sensitivity of the reconstructions to inter-frame motion. First, a
compressed sensing (CS) based image reconstruction method in conjunction
with spiral sampling is developed for the reconstruction of dynamic MRI
data from highly accelerated/under-sampled Fourier measurements. In the
second algorithm, the problem of motion artifacts including respiratory
motion and cardiac motion in compressed sensing reconstructions is
addressed. A motion estimation/motion compensation algorithm based on a
modified search that aids block matching and results in improved residual
reconstruction is incorporated into the CS reconstruction for dynamic MRI.
In the third algorithm, a novel formulation for the joint estimation of the
deformation and the dynamic images in cardiac cine MR imaging is
introduced. The motion estimation algorithm estimates the deformation by
registering the dynamic data to a reference dataset that is free of
respiratory motion, which is derived from the measurements themselves. A
variable splitting framework is used to minimize the objective function,
and thus derive the deformation and the dynamic images.
The validation of the proposed algorithms is illustrated using a numerical
phantom and in-vivo cine MRI data, demonstrating the feasibility of
precisely recovering cardiac MRI data from extensively under-sampled
measurements.
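As a reference point for the compressed-sensing machinery used throughout, the classical iterative soft-thresholding algorithm (ISTA) below solves the basic sparse-recovery problem; it is a generic textbook sketch, not any of the three proposed reconstructions:

```python
import numpy as np

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the canonical compressed-sensing reconstruction problem."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

With far fewer measurements than unknowns, such an l1-regularized solver recovers a sparse signal exactly in the noiseless case, which is the property the accelerated-MRI reconstructions build on.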
- Sep. 25, 4P, KHE220
- Lin Cai • Professor
- Connected Vehicles for Intelligent and Green Transportation
Electric vehicles (EVs) are a key to future clean transportation systems. Despite various incentives, the rollout of EVs has been slow, mainly due to the limited cruising range and lack of convenient charging services. The good news is that EVs are now hitting a critical mass on the market at the same time as vehicle-to-vehicle and vehicle-to-infrastructure communication technologies are maturing, and electric utilities around the globe are racing to make their power grids more intelligent by adopting information and communication technologies. In addition to solving the above range and charging problems, the nexus of the Internet, EVs, charging stations, and smart grid forms a perfect storm of opportunities for future green and intelligent transportation systems. In this new paradigm, reliable and efficient information exchanges between EVs, meters, charging stations and power grid, as well as intelligent charging services, are key issues. However, there are many research issues and
challenges remaining unsolved and beckoning further investigation. In this talk, we focus on vehicle communication networking problems, including theoretical breakthroughs in vehicle network connectivity and delay analysis. We hope they will inspire more research efforts and advances, contributing to a new era of connected EVs that may revolutionize not only how people and goods move but also how energy flows, leading to future green and intelligent transportation systems of both things and energy.
Lin Cai received her M.A.Sc and PhD degrees in electrical and computer engineering from the University of Waterloo, Waterloo, Canada, in 2002 and 2005, respectively. Since 2005, she has been with the Department of Electrical & Computer Engineering at the University of Victoria, and she is currently a Professor. Her research interests span several areas in communications and networking, with a focus on network protocol and architecture design supporting emerging multimedia traffic over wireless, mobile, ad hoc, and sensor networks. She has been a recipient of the NSERC Discovery Accelerator Supplement Grants in 2010 and 2015, respectively, and the best paper awards of IEEE ICC 2008 and IEEE WCNC 2011. She has served as a TPC symposium co-chair for IEEE Globecom'10 and Globecom'13, an Associate Editor for IEEE Transactions on Wireless Communications, IEEE Transactions on Vehicular Technology, EURASIP Journal on Wireless Communications and Networking, International Journal of
Sensor Networks, and the Journal of Communications and Networks (JCN), and as a Distinguished Lecturer (DL) of the IEEE Vehicular Technology Society.
- Sep. 27, 2P, ENG460
- Pooya SobheBidari • PhD internal defense
- Ultrasound Shear Wave Elastography: Numerical Modeling and Simulation with an Application in HIFU Thermal Lesion Detection
In this work, a new numerical framework is proposed and implemented to simulate acoustic wave propagation in 3D viscoelastic heterogeneous media. The framework is based on the elastodynamic wave equation in which a 3D second-order time-domain perfectly matched layer (PML) formulation is developed to model unbounded media. The numerical framework is discretized by a finite difference formulation, and its stability analysis is discussed.
The proposed numerical method is capable of simulating 3D shear and longitudinal acoustic waves for arbitrary source geometries and excitations, together with arbitrary initial and boundary conditions. After validation of the framework, it was used to simulate the propagation of ultrasound shear wave in high intensity focused ultrasound (HIFU) induced thermal lesions located within soft tissue. The parameters in these simulations were obtained from standard double-indentation measurements of the viscoelastic parameters of normal and thermally coagulated
chicken breast tissue samples. A HIFU system was used to induce thermal lesions in tissue.
In this study, a new elastography procedure was also introduced to differentiate between normal tissue and HIFU-induced thermal lesions. This method is based on time-frequency analysis of shear wave propagation within the tissue. In the proposed method, the Wigner-Ville distribution is used as a time-frequency analysis technique to detect the location of the shear wave propagating within the tissue, and to estimate the shear wave speed as well as its frequency and attenuation coefficient. This method was applied to the acoustic wave propagation simulation results of the HIFU thermal lesion. It was finally used to estimate the local viscoelastic parameters of the medium. It was demonstrated that the proposed method is capable of differentiating normal tissue from the thermal lesions based on their viscoelastic parameters.
- Sep. 12, 12P, ENG460
- Sheraz Siddique • MEng Project Defense
- A New Lightweight Block Cipher for IoT
With the rapid evolution of the Internet of Things (IoT), the number of connected devices is expected to grow from the current 7 billion to over 50 billion by 2020. The security industry is seeing a paradigm shift: it must manage not only identity and access management (IAM) for people and financial transactions, but also hundreds of thousands of devices that may be connected to a network. With the adoption of the Advanced Encryption Standard (AES) in 2001, AES became the preferred choice for block cipher applications, and the need for new block ciphers was greatly reduced. However, for constrained environments such as RFID tags or sensor networks, AES becomes prohibitively expensive. The rapid increase in the number of IoT devices poses an unprecedented challenge to industry and the security community: to develop smart, efficient, and compact security algorithms matching the compact architecture of IoT devices. This paper presents INFLEX, a 32-bit lightweight block cipher supporting a 64-bit key. Like other leading lightweight block ciphers, INFLEX is designed for a very small hardware implementation. This characteristic is obtained by the use of a generalized Feistel structure combined with an improved block inflation feature. INFLEX follows a typical ARX (Add, Rotate, XOR) architecture, with the distinguishing feature of block expansion and collapse according to a user-selected control string, which makes INFLEX a tweakable cipher. We compare the robustness and immunity of INFLEX against linear and differential attacks and demonstrate that it outperforms one of the benchmark block ciphers, Speck32/64, proposed by the National Security Agency (NSA).
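INFLEX itself is not publicly specified, so a sketch of the benchmark it is compared against, Speck32/64, illustrates the ARX round structure (modular add, rotate, XOR) that such ciphers share; the key and plaintext below are the published test vector for this variant:

```python
ROUNDS, WORD, MASK = 22, 16, 0xFFFF   # Speck32/64: 16-bit words, 22 rounds

def rol(v, r): return ((v << r) | (v >> (WORD - r))) & MASK
def ror(v, r): return ((v >> r) | (v << (WORD - r))) & MASK

def expand_key(key_words):
    # key_words as printed in the spec: (l2, l1, l0, k0)
    l = list(key_words[:3])[::-1]                # l0, l1, l2
    k = [key_words[3]]
    for i in range(ROUNDS - 1):
        l.append(((k[i] + ror(l[i], 7)) & MASK) ^ i)
        k.append(rol(k[i], 2) ^ l[i + 3])
    return k

def encrypt(pt, ks):
    x, y = pt
    for rk in ks:
        x = ((ror(x, 7) + y) & MASK) ^ rk        # Rotate, Add (mod 2^16), XOR
        y = rol(y, 2) ^ x
    return x, y

ks = expand_key((0x1918, 0x1110, 0x0908, 0x0100))
ct = encrypt((0x6574, 0x694C), ks)
print([hex(w) for w in ct])
```

The entire cipher uses only word-sized rotations, additions, and XORs, which is what makes ARX designs attractive for constrained hardware.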
- Sep. 11, 1P, ENG471
- Negar Taherian • MASc Thesis Final Defense
- K-means Clustering Based Tone-Mapping Operator for High Dynamic Range Video and Image
The field of high dynamic range (HDR) imaging deals with capturing the luminance of a natural scene, usually varying between 10^-3 and 10^5, and displaying it on digital devices with a much lower dynamic range. Here, we present an approach that uses the K-means clustering algorithm for tone-mapping HDR images, and we show how to extend the method to handle video input. Our algorithm gives results comparable to state-of-the-art tone-mapping algorithms. We test it on a number of standard high dynamic range images and video sequences and provide qualitative and quantitative comparisons to several state-of-the-art tone-mapping algorithms for videos.
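The clustering idea can be sketched in a few lines; this toy mapper (not the thesis operator) clusters log-luminances with 1D K-means and gives each cluster an equal slice of the display range:

```python
import numpy as np

def kmeans_tonemap(lum, k=8, iters=20, seed=0):
    """Toy K-means tone mapper: cluster log-luminances, then assign each
    cluster an equal share of the [0, 1] display range (illustrative only)."""
    logl = np.log10(lum.ravel() + 1e-6)
    rng = np.random.default_rng(seed)
    centers = rng.choice(logl, size=k, replace=False)        # init from data points
    for _ in range(iters):
        labels = np.argmin(np.abs(logl[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = logl[labels == j].mean()        # 1D Lloyd update
    labels = np.argmin(np.abs(logl[:, None] - centers[None, :]), axis=1)
    rank = np.empty(k)
    rank[np.argsort(centers)] = np.arange(k)                 # order clusters by brightness
    return (rank[labels] / (k - 1)).reshape(lum.shape)
```

Because nearest-center assignment on a line is monotone, brighter scene pixels never map below dimmer ones, which is the basic sanity requirement for a tone-mapping operator.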
- Sep. 11, 10A, ENG460
- SeyyedOmid Badretaleh • MASc Thesis Defense
- Design and Implementation of Convolutional Neural
Networks for Low-Dose CT Image Noise Reduction
An essential objective in medical low-dose computed tomography (CT) imaging is how best to preserve image quality. Since image quality generally degrades as the X-ray radiation dose is reduced, improving image quality is crucial for diagnosis, and also challenging. Therefore, novel methods to denoise low-dose CT images are presented in this thesis. Unlike the prevalent traditional algorithms, which exploit shared features of CT images in the spatial or transform domain, deep learning approaches are proposed for low-dose CT denoising. The proposed algorithms learn an end-to-end mapping from low-dose CT images to denoised images of normal-dose quality. The first method is based on a fully convolutional neural network with rectified linear units. By learning low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. The second approach is a deep convolutional neural network architecture consisting of five parts: feature extraction, compressing, mapping, enlarging, and assembling. The results of the two proposed frameworks are compared with state-of-the-art methods. Several image-quality evaluation metrics are applied in this thesis to highlight the superiority of the proposed method.
- Sep. 8, 10A, ENG471
- Brien East • MASc Thesis Defense
- Actibles: Design and Development of a New Platform for Active Tangibles
In this thesis, we present a new kind of active tangible called an Actible. Actibles are an open-source hardware/software platform for designing and implementing active tangibles for Tangible Embodied Interaction (TEI) applications. Web technologies and a smartwatch core are leveraged for ease of development, enabling the active tangibles to operate coupled to a server or to act as their own server. This connectivity enables Actibles to be used independently or in combination with other devices and displays, ubiquitously. We derived an expanded set of input and output interactions based on previous work on active tangibles, including tilting, shaking, neighboring, stacking, on-screen gestures, and integrated LED feedback. We describe the design and technical implementation of these interactions and demonstrate their use in a number of published example applications.
- Sep. 7, 1P, ENG460
- Kaveh Khorramnejad • MASc Final Defense
- Time and Cost efficient Intelligent Data Pre-fetching on
Mobile Cloud Computing
Many methods and algorithms have recently been proposed in the caching and pre-fetching area. However, pre-fetching approaches have rarely been combined and investigated together with workload scheduling. In this thesis, different approaches in this area are analysed, compared, and discussed. Initially, the study extensively reviews the principles of existing pre-fetching and caching strategies, with latency and cost as the primary objectives. It then focuses on an integrated workload-scheduling and pre-fetching model to improve response time and reduce cost. Furthermore, the response-time and cost (processing-delay) problems are formulated, and a heuristic approach is proposed to overcome them. Integrated workload scheduling and pre-fetching are considered in order to study the effects of various parameters, such as processing speed and the pre-fetcher's utilization. Finally, the results are provided, and the achievements, pros, and cons are discussed. Based on the performance results and analysis, this thesis contributes new solutions for web caching and integrated workload scheduling and pre-fetching.
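The interplay between caching and pre-fetching can be made concrete with a toy cache that prefetches the next sequential block on every access; this is an illustrative model, not the thesis's integrated scheduler:

```python
from collections import OrderedDict

class PrefetchLRU:
    """Toy LRU cache with one-block-ahead sequential prefetching."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
        self.hits = self.misses = 0

    def _insert(self, block):
        self.store[block] = True
        self.store.move_to_end(block)            # mark as most recently used
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least recently used

    def access(self, block):
        if block in self.store:
            self.hits += 1
            self.store.move_to_end(block)
        else:
            self.misses += 1
            self._insert(block)
        self._insert(block + 1)                  # prefetch the next sequential block
```

On a purely sequential workload every access after the first is a hit, which is exactly the latency gain that motivates combining pre-fetching with scheduling.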
- Sep. 7, 10A, ENG460
- Faizan Rahman • MASc Thesis Defense
- Kernel k-MACE
Several unsupervised clustering methods exist that cluster data in input space. Transforming data to feature space using a kernel function can expose features of the data that are hidden in input space, resulting in better separability for some datasets. The parameters of the kernel function govern the structure of the data in feature space and need to be optimized while simultaneously estimating the number of clusters in a dataset. The proposed method, kernel k-Minimum ACE (kernel k-MACE), estimates the number of clusters in a dataset while simultaneously clustering it in feature space by finding the optimum value of the Gaussian kernel parameter. A cluster initialization technique is also proposed, based on an existing method for k-means clustering. Simulations show that the proposed method outperforms other unsupervised methods such as DBSCAN and G-means, as well as index validation methods using the Gap, Calinski-Harabasz, Davies-Bouldin, and Silhouette indices. The proposed method also outperforms k-MACE, the clustering scheme that inspired kernel k-MACE.
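The kernel trick underlying such methods can be sketched with plain kernel k-means: feature-space coordinates are never formed, and distances to cluster means are computed from kernel evaluations alone. This is a generic sketch; kernel k-MACE additionally estimates the number of clusters and the kernel parameter, which is omitted here:

```python
import numpy as np

def kernel_kmeans(X, k, sigma, iters=30, seed=0):
    """Kernel k-means with a Gaussian kernel, entirely in terms of the Gram matrix."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))             # Gram matrix
    labels = np.random.default_rng(seed).integers(0, k, n)
    for _ in range(iters):
        D = np.zeros((n, k))
        for j in range(k):
            m = labels == j
            nj = max(m.sum(), 1)
            # ||phi(x) - mu_j||^2 = K(x,x) - (2/nj)*sum_c K(x,c) + (1/nj^2)*sum_{c,c'} K(c,c')
            D[:, j] = np.diag(K) - 2.0 * K[:, m].sum(axis=1) / nj + K[np.ix_(m, m)].sum() / nj**2
        labels = np.argmin(D, axis=1)
    return labels
```

The Gaussian kernel width sigma plays the role of the parameter that, in kernel k-MACE, is optimized jointly with the number of clusters.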
- Sep. 6, 1P, ENG471
- Isuru Dasanayake • MASc Thesis Defense
- Scheduling of Electric Vehicle Charging for a Charging Facility with a Single Charger and Multiple Chargers
Merchant-owned charging stations may replace gasoline stations in the near future. Since the charging times of electric vehicles (EVs) can be significant, without optimization customers would wait to be charged without knowing the actual charging period. In this thesis, two optimal scheduling methods for charging electric vehicles were developed for merchant-owned charging facilities: the first with a single charger and the second with multiple chargers. In the mathematical model for the single-charger station, the problem is formulated as a hybrid nonlinear optimization model and solved using a backward recursive algorithm with nonlinear optimization solvers; a hybrid system framework captures the trade-off between demand charges and charging speed.
For the multiple-charger case, the problem is formulated as a mixed-integer linear optimization problem with three-dimensional matrices characterizing the solution space, and it was solved using the MOSEK optimization toolbox in MATLAB. The proposed algorithms were analyzed for different penalty factors imposed on the total waiting time of each EV. Final results are analyzed and discussed.
- Sep. 1, 10:30A, ENG460
- Lei Gao • PhD Thesis Defense
- A Discriminative Analysis Framework for Multi-Modal Information Fusion
Since multi-modal data contain rich information about the semantics presented in the sensory and media data, valid interpretation and integration of multi-modal information is recognized as a central issue for the successful utilization of multimedia in a wide range of applications. Thus, multi-modal information analysis is becoming an increasingly important research topic in the multimedia community. However, the effective integration of multi-modal information is a difficult problem, facing major challenges in the identification and extraction of complementary and discriminatory features, and the impactful fusion of information from multiple channels. In order to address the challenges, in this thesis, we propose a discriminative analysis framework (DAF) for high performance multi-modal information fusion. The proposed framework has two realizations. We first introduce Discriminative Multiple Canonical Correlation Analysis (DMCCA) as the fusion component of the framework. DMCCA is
capable of extracting more discriminative characteristics from multi-modal information. We demonstrate that optimal performance by DMCCA can be analytically and graphically verified, and that Canonical Correlation Analysis (CCA), Multiple Canonical Correlation Analysis (MCCA) and Discriminative Canonical Correlation Analysis (DCCA) are special cases of DMCCA, thus establishing a unified framework for canonical correlation analysis. To further enhance the performance of discriminative analysis in multi-modal information fusion, Kernel Entropy Component Analysis (KECA) is brought in to analyze the projected vectors in DMCCA space, thus forming the second realization of the framework. By doing so, not only is the discriminative relation considered in DMCCA space, but the inherent complementary representation of the input data is also revealed by entropy estimation, leading to better utilization of the multi-modal information and better pattern recognition performance. Finally, we
implement a prototype of the proposed DAF to demonstrate its performance in handwritten digit recognition, face recognition and human emotion recognition. Extensive experiments show that the proposed framework outperforms the existing methods based on similar principles, clearly demonstrating the generic nature of the framework. Furthermore, this work offers a promising direction to design advanced multi-modal information fusion systems with great potential to impact the development of intelligent human computer interaction systems.
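For reference, the classical two-view CCA that DMCCA generalizes can be computed with a standard QR-plus-SVD routine; this is a textbook sketch of the base case, not the DMCCA algorithm itself:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlations of two data views (rows = samples) via the
    Bjorck-Golub QR + SVD approach."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))   # orthonormal basis of centered view 1
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))   # orthonormal basis of centered view 2
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)
```

When the two views are linearly related, all canonical correlations equal one; DMCCA extends this pairwise notion to more than two modalities while adding class-discriminative structure.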
- Aug. 31, 6P, ENG460
- Liaqat Ali • MEng Final Thesis Defense
- Three Phase Digital Earth Leakage Detection
In any electrical system, protection is the most important requirement, securing both human lives and appliances from damage. The TDELD (Three-Phase Digital Earth Leakage Detection) is a design that can be implemented in a three-phase electrical environment to protect users as well as equipment against earth-leakage faults. Furthermore, as a microcontroller-based solution, it provides convenience at the user end through its auto-reset and display features. The system monitors the incoming and return currents of each phase line with an advanced Hall-effect current sensor and detects a fault if any difference between their values is found. Such a solution provides higher speed and accuracy than conventional ELCB systems. Since industry requires stronger equipment protection, the TDELD could serve as a protector for industrial systems. This research attempts to improve the existing ELCB design using a PIC microcontroller: when the TDELD trips during an electric shock or a temporary earth leakage, the system automatically switches back to its normal mode, while for a permanent leakage fault it provides an input control to bring the system back to normal operation. This is convenient for homeowners, especially when they are away from home. In this research, a PIC 16F877A microcontroller is used to control the overall process. Test results show that the average sensitivity of the TDELD to leakage current is better than that of a conventional ELCB.
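The trip/auto-reset behaviour described above can be modelled schematically; the 30 mA threshold and retry limit below are illustrative assumptions (30 mA is a typical residual-current trip level), not values from the report:

```python
class TDELDModel:
    """Toy model of residual-current trip and auto-reset logic."""
    def __init__(self, threshold=0.03, max_retries=3):
        self.threshold, self.max_retries = threshold, max_retries
        self.tripped, self.retries = False, 0

    def step(self, phases):
        # phases: list of (incoming, return) currents in amperes, one per phase
        fault = any(abs(i_in - i_ret) > self.threshold for i_in, i_ret in phases)
        if fault:
            self.tripped = True                  # residual current detected: trip
        elif self.tripped and self.retries < self.max_retries:
            self.retries += 1                    # transient fault cleared: auto-reset
            self.tripped = False
        return self.tripped
```

A transient fault trips the model and is then cleared automatically, whereas a persistent fault keeps it tripped, mirroring the temporary-versus-permanent distinction in the design.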
- Aug. 31, 4:30P, ENG460
- Mohammed Nabeel Ahmed • MEng Project Defense
- Deep Vision Pipeline for Self-Driving Cars based on Machine Learning Methods
Machine vision and deep learning are increasingly inter-related topics with important applications to self-driving car design and research. A detailed design and implementation of a vision pipeline for self-driving cars is explored using computer vision techniques, deep neural networks (DNNs), and convolutional neural networks (CNNs). The vision pipeline architecture is designed to recognize road lanes, traffic signs, and other vehicles present in a view. Furthermore, view images and steering data are used to train a vehicle to drive itself in a simulator. The mathematical background is developed, as well as the Python code used to implement each component of the vision pipeline. The implementation relies heavily on the industry-proven TensorFlow and OpenCV libraries. Robust academia- and industry-proven networks such as LeNet and VGG-16 are explored and implemented for the pipeline's learning and classification tasks. Finally, an exploration of the hardware implementation of the vision pipeline provides a complete module that can be transplanted into a self-driving vehicle design.
- Aug. 31, 1P, ENG460
- Shuo Yu • MASc Final Thesis Defense
- Resource Allocation for Energy Harvesting Assisted D2D Communications Underlaying OFDMA Cellular Networks
D2D communication underlaying cellular networks was first proposed in 3GPP and has since developed into a bedrock of the Internet of Things. Interference scenarios are usually convoluted in D2D communications, as cellular users and D2D pairs sharing the same spectrum resource impose interference on each other. Therefore, efficient resource allocation schemes are needed to facilitate fast and smooth communications for both cellular and D2D users. Another challenging issue in D2D systems traces back to limited battery life. This problem is all the more critical because energy consumption has already taken a toll on our climate, while the exponential increase in devices driven by next-generation communications has only just begun. In this thesis, we address these two issues together by introducing an energy-harvesting-assisted D2D model. Ours is a deterministic model in which D2D users harvest energy only when they need to. In our simulation, we divide the whole transmission process into multiple time slots of equal duration. At the beginning of every time slot, D2D users update the remaining energy level in their sensors' batteries and decide either to harvest energy or to transmit as underlay users. The objective is to maximize the sum throughput of all D2D users over multiple time slots via joint allocation of resource blocks and transmit power, under constraints on the QoS of cellular users, limited transmit power, and complex mutual interference. To solve the problem, we first use the NOMAD algorithm in the OPTI toolbox and then provide a less time-consuming heuristic algorithm. Numerical results show that our model effectively saves energy and that the heuristic achieves almost the same sum throughput as the NOMAD algorithm at a significantly smaller cost in running time.
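The per-slot harvest-or-transmit decision can be caricatured as follows; this is a deliberately simplified single-user model with invented energy numbers, whereas the thesis jointly optimizes resource blocks and transmit power rather than using a fixed threshold:

```python
def harvest_or_transmit(slots, e_tx, e_harvest, battery=0.0, rate=1.0):
    """One D2D pair over equal-duration slots: transmit when the battery can
    cover the slot's transmit energy, otherwise spend the slot harvesting."""
    throughput = 0.0
    for _ in range(slots):
        if battery >= e_tx:
            battery -= e_tx        # transmit as an underlay user this slot
            throughput += rate
        else:
            battery += e_harvest   # harvest energy this slot
    return throughput
```

With a transmit cost twice the per-slot harvest, the pair settles into a harvest-harvest-transmit cycle, i.e., throughput in one of every three slots.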
- Aug. 31, 10:30A, ENG471
- Dipak Patel • MEng Project Defense
- Design and Implementation of Intelligent Building / Smart Building
The intelligent building is supposed to provide the environment and means for an optimal utilization of the building, according to its designation. This extended function of a building can be achieved only by means of an extensive use of building service systems, such as HVAC, electric power, communication, safety and security, transportation, sanitation, etc. Building intelligence is not related to the sophistication of service systems in a building, but rather to the integration among the various service systems, and between the systems and the building structure. Systems' integration can be accomplished through teamwork planning of the building, starting at the initial design stages of the building. This work examines some existing buildings claimed to be “intelligent”, according to their level of systems' integration. Intelligent buildings respond to the needs of occupants and society, promoting the well-being of those living and working in them and providing value through
increasing staff productivity and reducing operational costs. Intelligent Buildings consider cultural changes affecting the way people live and work, the importance of an integrated approach to design and management and the benefits technological developments can bring in developing sustainable buildings that meet users' needs.
- Aug. 31, 10A, ENG460
- Mehak Basharat • MASc Final Thesis Defense
- Joint User Grouping and Time Allocation for NOMA with Wireless Power Transfer
Non-Orthogonal Multiple Access (NOMA) has recently been explored to address challenges in 5G networks, such as spectral efficiency and accommodating a large number of devices. Furthermore, energy harvesting is a promising solution to the energy-efficiency challenges that the large number of wireless devices poses in 5G networks. In this thesis, joint user grouping, power allocation, and time allocation for NOMA with RF energy harvesting are investigated. We mathematically model a framework to optimize user grouping, power allocation, and time allocation for energy harvesting and information transfer. The objective is to maximize the data rate of a cell while satisfying constraints on the minimum data rate requirement of each user and on transmit power. The proposed mathematical framework is a mixed-integer non-linear programming (MINLP) problem. We adopt the mesh adaptive direct search (MADS) algorithm to solve the formulated problem, which provides an epsilon-optimal solution. The thesis is
- Aug. 31, 12N, ENG460
- Anita Tino • PhD Final Thesis Defense
- Configurable Simultaneously Single-Threaded (Multi-)Engine Processor
As the multi-core computing era continues to progress, the need to increase single-thread performance and throughput, and to adapt seamlessly to thread-level parallelism (TLP), remains an important issue. Though the number of cores per processor continues to increase, expected performance gains have lagged. Accordingly, computing systems often include Simultaneously Multi-Threaded (SMT) processors as a compromise between sequential and parallel performance on a single core. These processors effectively improve the throughput and utilization of a core, however often at the expense of single-thread performance as the number of threads per core scales. Accordingly, applications that require higher single-thread performance must often resort to single-threaded-core multiprocessor systems, which incur additional area overhead and power dissipation. In an attempt to improve single- and multi-thread core efficiency, this work introduces the concept of a Configurable Simultaneously Single-Threaded (Multi-)Engine Processor (ConSSTEP). ConSSTEP is a nuanced approach to multi-threaded processors, achieving performance gains and energy efficiency by invoking low-overhead reconfigurable properties with full software compatibility. Experimental results demonstrate that ConSSTEP increases single-thread Instructions Per Cycle (IPC) by up to 1.39x and 2.4x for 2-thread and 4-thread workloads, respectively, improving throughput and providing up to 2x energy efficiency compared to a conventional SMT processor.
- Aug. 29, 1P, ENG471
- Muhammad Obaidullah • MASc Final Thesis Defense
- Application Mapping and NoC Configuration using Hybrid Particle Swarm Optimization
Network-on-Chip (NoC) has been proposed as an interconnection framework for connecting the large number of cores in a System-on-Chip (SoC). Assuming a mesh-based NoC, we investigate application mapping and NoC synthesis techniques. A hybrid optimization scheme is presented that combines Tabu search, communication-volume-based core swapping, and Discrete Particle Swarm Optimization (DPSO) for NoC mapping. The main goal of the optimization is to map an application core-graph such that the overall communication latency of the NoC is minimal. It is assumed that the target NoC has a 2D-mesh topology. DPSO is used as the main optimization technique, where each swarm particle's move is influenced by the global and local bests, previously visited search-space locations, and a deterministic methodology for reducing the communication volume of the existing mapping. We employ a Tabu list to discourage swarm particles from revisiting explored search space and to propose an alternative route toward the intended movement direction. The application mapping technique is tested on several multimedia application core-graphs as well as large, randomly generated synthetic core-graphs. On average, our hybrid scheme generates higher-quality NoC mapping solutions than some existing DPSO techniques.
We also explore the assignment of cores to cross-points and produce a NoC configuration with minimal average communication traffic, power consumption, and chip area. We use pre-synthesized NoC component data to estimate the power and chip area of the on-chip network. The NoC configuration and mapping problem is NP-hard, so we employ a hybrid swarm-optimization scheme that combines Tabu search, force-directed swapping, sub-swarms, and DPSO. The main goal of the optimization is to configure the interconnection network such that the overall NoC latency, power consumption, and occupied area are minimal. The methodology is tested on real-world and synthetic application core-graphs. We determine that our hybrid optimization technique requires fewer iterations and less time to reach an optimal solution than past NoC synthesis algorithms.
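As an illustrative sketch only (not the thesis code), a minimal swap-based discrete PSO for mapping a core-graph onto a 2D mesh might look like the following. The cost function, particle counts, and move rule are simplified assumptions, and the Tabu list and force-directed swapping of the hybrid scheme are omitted:

```python
import random

def cost(mapping, edges, mesh_w):
    # Hop-weighted communication cost on a 2D mesh: sum over core-graph
    # edges of traffic volume times Manhattan distance between tiles.
    total = 0
    for (a, b), vol in edges.items():
        xa, ya = mapping[a] % mesh_w, mapping[a] // mesh_w
        xb, yb = mapping[b] % mesh_w, mapping[b] // mesh_w
        total += vol * (abs(xa - xb) + abs(ya - yb))
    return total

def dpso_map(edges, n_cores, mesh_w, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    # A particle is a permutation: particle[core] = mesh tile index.
    particles = [rng.sample(range(n_cores), n_cores) for _ in range(n_particles)]
    pbest = [p[:] for p in particles]
    gbest = min(particles, key=lambda p: cost(p, edges, mesh_w))[:]
    for _ in range(iters):
        for i, p in enumerate(particles):
            # Discrete "velocity": one swap toward the personal best and
            # one toward the global best keeps the mapping a permutation.
            for guide in (pbest[i], gbest):
                j = rng.randrange(n_cores)
                k = p.index(guide[j])
                p[j], p[k] = p[k], p[j]
            if cost(p, edges, mesh_w) < cost(pbest[i], edges, mesh_w):
                pbest[i] = p[:]
        gbest = min(pbest + [gbest], key=lambda p: cost(p, edges, mesh_w))[:]
    return gbest
```

On a toy 2x2 mesh with two communicating core pairs, the swarm quickly settles on a mapping that places each pair on adjacent tiles.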
- Aug. 29, 11A, ENG471
- Durand Jarrett-Amor • MASc Final Thesis Defense
- Frequency Calibration of System Clock of Passive Wireless Microsystems
This thesis proposes a new ultra-low-power remote frequency calibration technique that can yield very accurate tuning of the frequency of a local oscillator in passive wireless microsystems by using an ultra-low-power, fast-locking frequency-locked loop (FLL). A new ultra-low-power integrating frequency difference detector (iFDD) is also proposed; it senses the frequency difference between the local oscillator and the reference clock and generates an output voltage proportional to the frequency error between the two signals. The iFDD is implemented using a switched-capacitor network with two integrating paths; it utilizes a single current source to minimize the mismatch between these paths and can operate at supply voltages as low as 0.5 V, thus realizing ultra-low power consumption. The FLL is composed of a logic-control block (LCB) for generation of clock signals, the iFDD, and a relaxation voltage-controlled oscillator (VCO). The relaxation oscillator is implemented with a current-starved inverter pair with a PMOS load and can also operate at supply voltages as low as 0.5 V. Power consumption of the FLL is thus minimized by operating the LCB, iFDD, and VCO in sub-threshold. A detailed analysis of the characteristics of the iFDD in both the time and frequency domains is presented. The loop dynamics of the FLL are also investigated and used for its design. The proposed FLL is implemented in IBM 0.13-µm, 1.2 V CMOS technology with BSIM4v4 device models. The theoretical results and the performance of the iFDD and FLL are validated through simulations performed in SpectreRF from Cadence Design Systems (CDS). The proposed remote frequency calibration technique achieves a low power consumption of 1.27 µW, a calibration time of 13.53 µs, a lock range of 1.2 MHz, and a frequency accuracy <0.01% in the lock state. Of the 1.27 µW, the proposed iFDD consumes only 90 nW, thus accounting for only 7% of the total power. The FLL also exhibits excellent robustness in the
presence of process, voltage, and temperature (PVT) variations.
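A loop of this kind can be illustrated with a simple first-order behavioral model (an idealized sketch, not the circuit described above): the frequency-difference detector's output is integrated onto the VCO control voltage, pulling the oscillator toward the reference. All gains and frequencies below are arbitrary assumptions:

```python
def fll_settle(f_ref, f0, kvco=0.5, gain=0.1, steps=200):
    """Idealized first-order FLL: a frequency-difference detector (the
    iFDD's role) outputs the error, which is integrated onto the VCO
    control voltage; the VCO follows a linear tuning law."""
    v, f, hist = 0.0, f0, []
    for _ in range(steps):
        err = f_ref - f      # frequency difference seen by the detector
        v += gain * err      # integrate the error onto the control node
        f = f0 + kvco * v    # linear VCO model: f = f0 + Kvco * Vctrl
        hist.append(f)
    return hist
```

Each iteration shrinks the frequency error by the factor (1 - Kvco·gain), so the loop converges geometrically to the reference; real lock time then depends on the detector update rate.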
- Aug. 29, 10A, ENG460
- Saba Sedghizadeh • PhD Final Thesis Defense
- Subspace Predictive Control: Stability And Performance Enhancement
In the absence of prior knowledge of a system, control design relies heavily on the system identification procedure. In real applications, there is an increasing demand to combine the usually time-consuming system identification and modeling step with the control design procedure. Motivated by this demand, data-driven control approaches attempt to use input-output data to design the controller directly. Subspace Predictive Control (SPC) is one popular example of these algorithms, combining Model Predictive Control (MPC) and Subspace Identification Methods (SIM). SPC instability and performance deterioration in closed-loop implementations are mainly caused either by poor tuning of the SPC horizons or by changes in the dynamics of the system. Stability and performance analysis of SPC are the focus of this dissertation. We first provide the necessary and sufficient condition for SPC closed-loop stability. The results introduce SPC stability graphs that provide the feasible prediction horizon range. These stability constraints are then included in the SPC cost-function optimization to provide a new method for determining the SPC horizons. The novel horizon selection effectively enhances closed-loop performance. Note that time-delay estimation and order selection in system modeling have long been challenging steps in applications and industry. Here, we propose a new approach, denoted RE-based TDE, that simultaneously and efficiently estimates the time delay within the SIM framework. In addition, we use the recently developed MSEE approach to estimate the system order. Moreover, we propose an artificial-intelligence approach denoted Particle Swarm Optimization Based Fuzzy Gain-Scheduled SPC (PSO-based FGS-SPC). The method overcomes the issue of on-line adaptation of SPC gains for systems with variable dynamics in the presence of noisy data. The approach eliminates the existing problem of tuning controller gain ranges in FGS and updates the SPC gains without applying any external persistently exciting signals. As a result, PSO-based FGS-SPC provides a time-efficient control strategy with fast and robust tracking performance compared to conventional and state-of-the-art methods.
- Aug. 24, 3P, ENG460
- Parth Parekh • MEng Project Defense
- All-Digital ΔΣ Time-to-Digital Converter with Bi-Directional Gated Delay Line Time Integrator
This report presents a low-power time integrator and its applications in an all-digital first-order ΔΣ time-to-digital converter (TDC). A TDC, which maps a time variable to a digital code, is one of the most important building blocks of time-mode circuits. The time integrator is realized using a bi-directional gated delay line (BD-GDL), with the time variable to be integrated serving as the gating signal. The integration of the time variable is obtained via the accumulation of the charge of the load capacitor and the logic state of the gated delay stages. Issues affecting the performance of the time integrator and TDC are examined. The all-digital first-order ΔΣ TDC utilizing the time integrator was designed in IBM 130 nm 1.2 V CMOS technology and analyzed using Spectre APS from Cadence Design Systems with BSIM4 models. A sinusoidal time input of 333 ps amplitude and 231 kHz frequency with an oversampling ratio of 68 was digitized by the modulator. The TDC provides first-order noise-shaping and an SNR of 34.64 dB over the signal band 48.27 ~ 231 kHz while consuming 293.8 μW.
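The first-order ΔΣ principle can be sketched behaviorally in the amplitude domain (a rough analogy only: in the TDC above the integrator is the gated delay line and the variable is time, not voltage). The loop integrates the input-minus-feedback error and quantizes to one bit, so the bitstream average tracks the input while quantization noise is pushed to high frequencies:

```python
def dsm1(x):
    """First-order delta-sigma modulator (behavioral): integrate the
    input-minus-feedback error, quantize to +/-1, feed the bit back."""
    integ, fb, bits = 0.0, 0.0, []
    for s in x:
        integ += s - fb                     # error integration
        fb = 1.0 if integ >= 0 else -1.0    # 1-bit quantizer / feedback DAC
        bits.append(fb)
    return bits

# For a DC input the bitstream mean converges to the input value,
# because the loop forces the accumulated quantization error to stay bounded.
stream = dsm1([0.25] * 4096)
```

The integrator state stays bounded, so the average of the ±1 bitstream differs from the DC input only by (bounded state)/(number of samples), which is the essence of first-order noise shaping.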
- Aug. 23, 10:30A, ENG460
- Mushu Li • MASc Thesis Defense
- Load Balancing for Smart Grid: Centralized and Distributed Approaches
Load balancing, one of the greatest concerns in the smart grid context, is addressed by improving electrical power efficiency and stability through the scheduling of power loads, thereby shaping the power demand into a desired pattern. This research explores load balancing strategies to reduce demand fluctuations in the smart grid system. Centralized and decentralized load balancing methodologies are discussed. For the centralized approaches, an offline exact power allocation method is investigated using the geometric water-filling (GWF) approach, with upper-bound constraints highlighted to avoid overload. Building on the offline solution, two dynamic centralized load balancing algorithms are developed to balance loads in real time without sacrificing user satisfaction. Furthermore, the decentralized load balancing problem is discussed at the microgrid level. Electric vehicle (EV) fleet movement among neighbouring charging stations is regarded as a mechanism to shape the demand in multiple microgrids. With our approach, load balancing for the whole grid is achieved by local optimization processes via the Proximal Jacobian Alternating Direction Method of Multipliers (ADMM) technique, which eliminates the requirement for global information exchange. The corresponding performance evaluations show that the proposed approaches flatten the power demand significantly. Overall, facilitated by our proposed strategies, the reliability of the electric grid can be enhanced through smoothed power demand in various control environments.
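Water-filling-style load balancing can be illustrated with a small sketch. GWF itself computes the allocation recursively without searching for a water level; the version below uses a simple bisection substitute, and all data are made up for illustration:

```python
def valley_fill(base, energy, iters=100):
    """Water-filling: spread `energy` across time slots so that
    base + allocation is as flat as possible (fill the deepest valleys
    first).  The water level is found here by bisection; GWF reaches
    the same allocation recursively without this search."""
    lo, hi = min(base), max(base) + energy
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        # Total flexible load absorbed if the water level were `mid`.
        if sum(max(0.0, mid - b) for b in base) < energy:
            lo = mid
        else:
            hi = mid
    return [max(0.0, hi - b) for b in base]
```

With base demand [3, 1, 2] and 3 units of schedulable energy, the allocation fills the two valleys up to the peak, yielding a flat profile of 3 in every slot.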
- Aug. 18, 1P, ENG460
- Salam Al-Juboori • PhD Final Thesis Defense
- Multichannel Spectrum Sensing over Correlated Fading Channels with Diversity Reception
Accurate detection of white spaces is crucial to protect the primary user against interference from the secondary user. Multipath fading and correlation among diversity branches represent essential challenges in Cognitive Radio Network Spectrum Sensing (CRNSS). This dissertation investigates the problem of correlation among multiple diversity receivers in wireless communications in the presence of multipath fading. The work of this dissertation is two-fold: analysis and solution. In the analysis part, this dissertation implements a unified approach to performance analysis for cognitive spectrum sensing. It considers a more realistic sensing scenario in which non-independent multipath fading channels with diversity combining are assumed. Maximum Ratio Combining (MRC), Equal Gain Combining (EGC), Selection Combining (SC), and Selection and Stay Combining (SSC) techniques are employed. Dual, triple, and L-branch Nakagami-m correlated fading with arbitrary, constant, and exponential correlation models is investigated. We derive novel closed-form expressions for the average detection probability for each sensing scenario, along with simpler and more general alternative expressions. Our numerical analysis reveals the deterioration in detection probability due to correlation, especially in deep fading. Consequently, the interference rate between the primary and secondary users is observed to increase to three times the rate obtained when independent fading branches are assumed. However, results also show that this effect can be compensated for by employing the appropriate diversity technique and by increasing the number of diversity branches. We therefore conclude that correlation cannot be overlooked in deep fading, whereas in light fading it can be ignored to reduce complexity and computation. Furthermore, at low fading, low false-alarm probability, and high correlation, EGC, a simpler scheme, performs as well as the more complex MRC. Similar results are observed for SC and SSC. For the solution part, and toward combating the impact of correlation on wireless systems, a decorrelator implemented at the receiver is very beneficial. We propose such a decorrelator scheme, which significantly alleviates the correlation effect. We derive closed-form expressions for the decorrelator receiver detection statistics, including the Probability Density Function (PDF), from fundamental principles, considering a dual-antenna SC receiver in Nakagami-m fading channels. Numerical results show that the PDF of the bivariate difference can be perfectly represented by a semi-standard normal distribution with zero mean and a constant variance depending on the bivariate's parameters. This observation significantly helps simplify the design of the decorrelator receiver. The derived statistics can be used in the problem of self-interference for multi-carrier systems. Results also show the outage probability has been improved by the
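The qualitative effect of branch correlation can be reproduced with a small Monte Carlo sketch (an illustrative proxy, not the dissertation's closed-form Nakagami-m analysis). Here a missed detection is simply modeled as the MRC-combined power of correlated Rayleigh branches falling below a threshold; all parameters are made up:

```python
import random

def miss_rate(rho, thresh=0.4, branches=2, trials=20000, seed=7):
    """Monte Carlo proxy for missed detection: fraction of trials where
    the MRC-combined power of `branches` correlated Rayleigh branches
    falls below `thresh`.  Branch gains share a common Gaussian
    component with weight sqrt(rho), which models branch correlation."""
    rng = random.Random(seed)
    a, b = rho ** 0.5, (1.0 - rho) ** 0.5
    std = 0.5 ** 0.5          # gives unit-mean exponential branch powers
    miss = 0
    for _ in range(trials):
        cr, ci = rng.gauss(0.0, std), rng.gauss(0.0, std)  # common part
        combined = 0.0
        for _ in range(branches):
            wr, wi = rng.gauss(0.0, std), rng.gauss(0.0, std)
            hr, hi = a * cr + b * wr, a * ci + b * wi
            combined += hr * hr + hi * hi   # MRC adds branch powers
        if combined < thresh:
            miss += 1
    return miss / trials
```

Running this shows the two trends the abstract describes: correlation (rho close to 1) inflates the deep-fade miss rate toward the single-branch case, while an extra independent branch reduces it.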
- Aug. 15, 2P, ENG460
- Randy Tan • MASc Thesis Defense
- Real Time System for Human Action Evaluation
This thesis presents a real-time human activity analysis system in which a user's activity can be quantitatively evaluated with respect to a ground-truth recording. Multiple Kinects are used to solve the problem of self-occlusion while performing an activity. The Kinects are placed in locations with different perspectives, and the optimal joint positions of a user are extracted using Singular Value Decomposition (SVD) and Sequential Quadratic Programming (SQP). The extracted joint positions are then fed through our Incremental Dynamic Time Warping (IDTW) algorithm so that an incomplete sequence from a user can be optimally compared against the complete sequence from an expert (the ground truth). Furthermore, the user's performance is communicated through a visual feedback system, where colors on the skeleton indicate the user's level of performance. Experimental results demonstrate the impact of the system: through elaborate user testing, we show that the IDTW algorithm combined with visual feedback quantitatively improves the user's performance.
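The core of the comparison step can be sketched with classic dynamic time warping, shown here on 1-D sequences for simplicity (real joint data is multi-dimensional). IDTW, as described in the abstract, fills the same table incrementally as each new user frame arrives, so a partial sequence can be scored online:

```python
def dtw(ref, seq):
    """Classic dynamic time warping distance between 1-D sequences.
    IDTW extends this by adding one column per newly observed user
    frame, reusing previous columns instead of recomputing the table."""
    INF = float("inf")
    n, m = len(ref), len(seq)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(ref[i - 1] - seq[j - 1])      # local match cost
            D[i][j] = step + min(D[i - 1][j],        # skip a ref frame
                                 D[i][j - 1],        # skip a user frame
                                 D[i - 1][j - 1])    # match both
    return D[n][m]
```

A user sequence that merely lingers on a pose (repeated samples) still scores a perfect match against the expert, which is exactly the time-invariance DTW provides.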
- Jun. 2, 10A, ENG471
- Dr. Trevor McKee • Image Analysis Core Manager, STTARR Innovation Centre, Princess Margaret Cancer Centre
- Quantifying drug, gene, and oxygen transport within tumors using imaging and pathology analytics helps to design better cancer therapies
There is a growing need for better—and more personalized—cancer treatments, to provide oncologists with the tools they need to best treat their patients. Biomedical engineers are key players in this process, by using fundamental engineering principles and quantitative imaging tools to study biological processes, delivering insights that may translate to improved therapies.
In particular, transport phenomena play a critical role in many aspects of tumor biology and treatment. We have developed methods to study transport within tumors using intravital image analysis methods and quantitative digital pathology analytics. Using intravital microscopy and fluorescence recovery after photobleaching with spatial Fourier analysis, we have shown that diffusive transport of nanoparticles and gene vectors within the tumor is limited by the effective pore size of the extracellular matrix, in particular collagen. Degrading the tumor collagen enzymatically can improve delivery of oncolytic viruses, resulting in better therapeutic outcomes in preclinical models. In addition, inefficient cancer blood vessels combined with the metabolic demands of proliferating tumor cells result in transport limitations for oxygen within many solid tumors. Hypoxia (i.e., lack of oxygen) in the tumor leads to reduced radiation effectiveness and more aggressive disease. We have
developed methods for quantitatively imaging tumor hypoxia in preclinical models, and shown that hypoxia is positively correlated with tumor proliferation; however, hypoxic tumors are also more sensitive to treatment with hypoxia-activated prodrugs. Additionally, we have combined preclinical imaging with quantitative flow cytometry and pathology to show that drugs that alter metabolic demand in tumor cells, such as metformin, can reduce tumor hypoxia, and improve survival in preclinical models. Both of these studies have provided valuable preclinical rationale for extending these therapeutic strategies into clinical trials in patients.
Our group uses the tools of machine learning and multiplexed digital pathology to build a generalized analytical framework to perform “tissue cytometry”. This new technology can extract quantitative image-derived features in a reproducible and robust fashion, providing clinicians and biological scientists with tools to measure previously inaccessible phenomena, like measuring the hypoxic gradient directly within tumor sections, or comparing glucose uptake to lactic acid production in the same tumor sample. Future applications of this tissue cytometric approach include quantification of immune cell transport within cancer, to help improve the promising new treatment of cancer immunotherapy.