Ji, Yiming Alex

### Description

Localization, also called positioning, is the task of accurately determining the position of a target on the surface of the earth. It is an important problem with a long history: the concept of localization can be traced back to ancient times, when humans used visual and auditory information to localize an object such as prey or an enemy. With the development of science and technology, various technologies have been invented for localization, applying diverse types of sensing and information processing. Before and during World War II, the Radio Detection And Ranging (RADAR) technology was secretly developed by several nations to track enemy ships and aeroplanes. Later, the localization problem was studied using what today would be called a multi-agent system: a system composed of multiple interacting intelligent agents within an environment. To carry out reconnaissance and surveillance tasks, multiple transmitters and receivers were used. Since World War II, many localization algorithms have been developed to obtain accurate position estimates for different applications. However, almost all existing localization algorithms have limitations, and one of the most common arises from errors in the measurements. Once errors are present in the measurements, the generally nonlinear processing required to achieve localization virtually guarantees that bias will appear in the estimated positions, significantly degrading localization accuracy. To ameliorate this problem, this thesis proposes a generic bias correction method that combines Taylor series expansions and Jacobian matrices to determine the bias when the number of measurements equals the number of variables being estimated, leading to an easily calculated analytical (though approximate) bias expression.
Using the obtained expression, one can compute the (approximate) bias and simply subtract it from the original estimated positions to improve the localization accuracy. When the number of usable measurements exceeds the number of variables being estimated, the calculation is more involved: a further step is required to obtain the analytical expression of the bias. The proposed method is generic in that it can be applied to different types of localization algorithms (range-based, bearing-only, scan-based, etc.). To demonstrate that the method is also applicable with mobile anchors and targets, we analyze it in two mobile scenarios. Monte Carlo simulation results verify that, when the underlying geometry is good (i.e. it allows the location of a target to be obtained with acceptable mean square error), the proposed approach corrects the bias effectively for an arbitrary number of independent usable measurements. In addition, the method is applicable irrespective of the type of measurement (range, bearing, time difference of arrival (TDOA), etc.). Moreover, a particular geometric problem known as the collinearity problem, which may prevent effective use of localization algorithms and of the proposed bias reduction method, is analyzed in detail. To deal with the collinearity problem, a novel approach that takes the level of measurement noise into consideration is proposed. Monte Carlo simulation results demonstrate the performance of the proposed method, and also illustrate the influence of two factors on the effectiveness of the bias correction: the distance between anchors and the level of noise.
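The mechanism behind this correction can be illustrated in the simplest determined case: two range measurements, two unknown coordinates. The sketch below is not the thesis's derivation; the anchor positions, noise level, Newton-based inversion, and the finite-difference Hessians are illustrative assumptions. It shows that second-order Taylor analysis of the (nonlinear) inverse measurement map predicts a bias of roughly half the trace of each component's Hessian times the noise covariance, and that subtracting this prediction shrinks the average error.

```python
import numpy as np

# Illustrative 2-D range-only setup: 2 anchors, 2 measurements,
# 2 unknowns (the determined case).  All values are assumptions,
# not taken from the thesis.
ANCHORS = np.array([[0.0, 0.0], [10.0, 0.0]])
P_TRUE = np.array([3.0, 7.0])
SIGMA = 0.5  # std dev of the range noise

def ranges(p):
    """Measurement map h(p): distances from p to each anchor."""
    return np.linalg.norm(p - ANCHORS, axis=-1)

def solve(z, n_iter=25):
    """Invert h by Newton's method, vectorised over rows of z.

    Starting at the true position keeps the sketch short; a real
    implementation would need a coarse initial guess.
    """
    z = np.atleast_2d(z)
    p = np.tile(P_TRUE, (z.shape[0], 1)).astype(float)
    for _ in range(n_iter):
        r1, r2 = p - ANCHORS[0], p - ANCHORS[1]
        d1 = np.linalg.norm(r1, axis=1)
        d2 = np.linalg.norm(r2, axis=1)
        f1, f2 = d1 - z[:, 0], d2 - z[:, 1]
        # Jacobian of h: rows are unit vectors from anchors to p.
        j11, j12 = r1[:, 0] / d1, r1[:, 1] / d1
        j21, j22 = r2[:, 0] / d2, r2[:, 1] / d2
        det = j11 * j22 - j12 * j21
        p[:, 0] -= (j22 * f1 - j12 * f2) / det
        p[:, 1] -= (-j21 * f1 + j11 * f2) / det
    return p

# Second-order Taylor expansion of the inverse map g = h^{-1}:
# bias_i ~= 0.5 * trace(H_i @ Cov), with H_i the Hessian of g_i,
# here obtained by finite differences of the Newton solver.
z0 = ranges(P_TRUE)
d = 1e-3
g0 = solve(z0)[0]
bias_pred = np.zeros(2)
for i in range(2):          # output components of g
    H = np.zeros((2, 2))
    for j in range(2):      # measurement perturbation directions
        for k in range(2):
            ej, ek = np.eye(2)[j] * d, np.eye(2)[k] * d
            H[j, k] = (solve(z0 + ej + ek)[0, i] - solve(z0 + ej)[0, i]
                       - solve(z0 + ek)[0, i] + g0[i]) / d**2
    bias_pred[i] = 0.5 * SIGMA**2 * np.trace(H)  # Cov = sigma^2 * I

# Monte Carlo check: subtracting the predicted bias shrinks the
# average estimation error of the nonlinear estimator.
rng = np.random.default_rng(0)
z = z0 + SIGMA * rng.standard_normal((50000, 2))
p_hat = solve(z)
raw_bias = p_hat.mean(axis=0) - P_TRUE
corrected_bias = raw_bias - bias_pred
print("raw bias:      ", raw_bias)
print("corrected bias:", corrected_bias)
```

The point of the determined case is visible here: because h is square and invertible near the target, the bias expression needs only derivatives of the inverse map; the overdetermined case discussed above requires the extra step because no exact inverse exists.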
Apart from RADAR and multi-agent systems, localization research has recently been extended to wireless sensor networks (WSNs), which have developed rapidly with innovation and evolution in micro-electro-mechanical systems technology. WSNs have gained worldwide attention in many application areas that are important, or even essential, to our economy and daily life. From industrial process control to environmental monitoring and battlefield surveillance, wireless sensor networks can play an important role. In almost every application the physical locations of the sensing nodes matter, so localization is necessary for the sensors themselves. As in other areas, current localization algorithms for wireless sensor networks also suffer from localization bias. Unlike other settings, however, in a wireless sensor network the geometric layout of the whole network is important, since it can significantly influence the localization accuracy. We therefore propose a second bias correction method that incorporates the network's geometric information. This second bias reduction method combines Taylor series expansions with a maximum likelihood estimate, leading to an easily calculated analytical (though approximate) bias expression in terms of a known maximum likelihood cost function. In contrast to existing contributions, this work considers the network as a whole when the bias is investigated, by introducing the geometric structure of the network into the bias reduction method via the rigidity matrix, a concept drawn from graph theory. The maximum likelihood cost function is related to the rigidity matrix, resulting in a final analytical expression for the localization bias in terms of the rigidity matrix. This is a major contribution of our work and appears better suited to wireless sensor networks than other bias correction methods.
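For readers unfamiliar with the rigidity matrix, a minimal sketch of its construction follows. This is the standard graph-theoretic definition, not code from the thesis: each row corresponds to an edge and is the gradient of half the squared edge length, so the matrix is the Jacobian of the network's edge-length map, which is what ties the geometry of the whole network to a distance-based cost function.

```python
import numpy as np

def rigidity_matrix(points, edges):
    """Rigidity matrix of a 2-D framework (points, edges).

    Row e, for edge (i, j), is the gradient with respect to all node
    positions of 0.5 * ||p_i - p_j||^2: the block (p_i - p_j) in
    node i's columns and (p_j - p_i) in node j's columns.
    """
    points = np.asarray(points, dtype=float)
    R = np.zeros((len(edges), 2 * len(points)))
    for e, (i, j) in enumerate(edges):
        diff = points[i] - points[j]
        R[e, 2 * i:2 * i + 2] = diff
        R[e, 2 * j:2 * j + 2] = -diff
    return R

# A triangle in the plane is generically rigid: its rigidity matrix
# attains the maximal rank 2n - 3 = 3.  Removing one edge leaves a
# flexible chain whose rigidity matrix has rank 2.
tri_pts = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
R_tri = rigidity_matrix(tri_pts, [(0, 1), (1, 2), (0, 2)])
R_chain = rigidity_matrix(tri_pts, [(0, 1), (1, 2)])
print(np.linalg.matrix_rank(R_tri))    # 3 -> rigid
print(np.linalg.matrix_rank(R_chain))  # 2 -> flexible
```

The rank test is the useful design property: a network whose rigidity matrix has full rank (up to the trivial rotations and translations) is one whose geometry pins down inter-node distances locally, which is why the matrix is a natural carrier of whole-network geometric information.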
In addition, we extend the bias correction method to mobile networks in which the anchors remain static while the sensors at unknown positions are all mobile. The performance of the proposed bias reduction method is demonstrated via Monte Carlo simulation results in networks with different numbers of nodes. Another important topology-related problem is that of consensus. Consensus means reaching agreement on a quantity of interest that depends on the states of all objects, and it has been investigated in many areas for a long time. In computer science, consensus has been studied as an important challenge in the field of distributed computing [1]; in management science and statistics, consensus was considered as long ago as the 1960s [44]. Recently, consensus has also begun to be studied in the wireless sensor networks area, and many consensus algorithms have been proposed. A consensus algorithm (or protocol) is an interaction rule specifying the information exchange between an object and its neighbors by which consensus is achieved. One always hopes that, when a consensus algorithm is applied, agreement is reached as soon as possible. The convergence rate, which denotes the speed at which a network achieves consensus, can therefore be used to evaluate the performance of a consensus algorithm. We investigate the influence of the network topology on the convergence rate in distributed average consensus problems. Unlike existing work, we aim not only to improve the convergence rate but also to minimize the communication cost, which is important in power-limited wireless sensor networks. We analyze the relationship among the number of edges, the convergence rate, and the total communication cost.
Based on the theoretical analysis and simulation results, we define a kind of Magic Number to help construct a network that achieves minimal communication cost while maintaining a satisfactory convergence rate.
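The edges-versus-convergence trade-off described above can be sketched with the standard linear average-consensus iteration x(k+1) = x(k) - eps*L*x(k), where L is the graph Laplacian. The graphs, step size, and tolerance below are illustrative assumptions, not the thesis's construction; the sketch shows that adding edges raises the algebraic connectivity lambda_2(L) and cuts the number of iterations, at the price of more links (communication) per round.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def consensus_steps(L, x0, tol=1e-6, max_steps=100000):
    """Iterate x <- x - eps * L @ x until all states are within tol
    of the initial average; return the number of steps taken.

    eps = 1/(d_max + 1) keeps eps * lambda_max(L) < 2, which
    guarantees convergence on a connected graph; the asymptotic
    speed is governed by lambda_2(L), the algebraic connectivity.
    """
    eps = 1.0 / (np.max(np.diag(L)) + 1.0)
    x = np.asarray(x0, dtype=float).copy()
    avg = x.mean()
    for k in range(max_steps):
        if np.max(np.abs(x - avg)) < tol:
            return k
        x = x - eps * (L @ x)
    return max_steps

rng = np.random.default_rng(1)
n = 12
x0 = rng.standard_normal(n)
ring = [(i, (i + 1) % n) for i in range(n)]              # sparse: n edges
chords = [(i, (i + n // 2) % n) for i in range(n // 2)]  # extra long links

L_ring = laplacian(n, ring)
L_dense = laplacian(n, ring + chords)
print("lambda_2 ring:    ", np.sort(np.linalg.eigvalsh(L_ring))[1])
print("lambda_2 + chords:", np.sort(np.linalg.eigvalsh(L_dense))[1])
print("steps ring:       ", consensus_steps(L_ring, x0))
print("steps + chords:   ", consensus_steps(L_dense, x0))
```

Each extra edge here buys convergence speed but costs one more transmission per neighbor per round; balancing the two is exactly the edge-count question the analysis above addresses.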
