DSpace Collection: http://hdl.handle.net/10174/905 (retrieved 2019-12-05T20:30:51Z)

Handle: http://hdl.handle.net/10174/24437
Title: Digital Filter Performance for Zero Crossing Detection in Power Quality Embedded Measurement Systems
Authors: Rodrigues, Nuno; Janeiro, Fernando; Ramos, Pedro
Abstract: Power quality analysis involves the measurement of
quantities that characterize the power supply waveforms such as
RMS value, frequency, harmonic and inter-harmonic distortion or
the presence of transients. The measurement of these quantities is
regulated by internationally accepted standards from IEEE or
IEC. In particular, for the evaluation of frequency and RMS
values, these standards suggest the detection of the waveform zero
crossings. Thus, accurate zero crossing detection is of utmost
importance in power quality measurements. The undesirable
effects of noise, harmonics and transients can be mitigated by
filtering of the acquired waveforms, which induces a frequency
dependent delay in the processed waveforms. This paper presents
a comparative analysis between two types of filters, and between
the use of a fixed nominal delay and a frequency-corrected delay.
Additionally, the effect of the
presence of noise and harmonics in the delay compensation is
presented.
Date: 2018-04-30T23:00:00Z

Handle: http://hdl.handle.net/10174/22550
Title: Low Pass Digital Filter Delay Compensation for Accurate Zero Cross Detection in Power Quality
Authors: Rodrigues, Nuno; Janeiro, Fernando; Ramos, Pedro
Abstract: Zero crossing detection with a low pass filter
is a common method used to estimate a signal
frequency or period when the signal is affected by
noise, harmonics or other parasitic frequencies. This
method is often used in power quality measurements
due to the typical disturbances of the electrical power
grid signals and is, for example, suggested in IEC
61000-4-30 for power quality frequency estimation.
Zero crossing is also needed to evaluate RMS events
since RMS evaluation must be synchronized with the
input signal zero crossings. However, the filter
introduces a delay which affects the accuracy of the
RMS evaluation timestamp.
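The delay problem and its compensation can be illustrated with a small numerical sketch. This is illustrative only: the sampling rate, moving-average filter choice, filter length, and noise level below are assumptions, not the setup used in the paper.

```python
import numpy as np

def zero_crossings(signal, fs):
    """Rising-edge zero-crossing instants, refined by linear interpolation."""
    idx = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    frac = -signal[idx] / (signal[idx + 1] - signal[idx])
    return (idx + frac) / fs

fs = 10_000.0                          # sampling rate (assumed), Hz
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)

N = 51                                 # moving-average length (assumed)
y = np.convolve(x, np.ones(N) / N, mode="full")[:t.size]
group_delay = (N - 1) / 2 / fs         # constant FIR group delay, in seconds

zc = zero_crossings(y, fs)
zc = zc[zc > N / fs] - group_delay     # drop start-up transient, compensate
period = np.mean(np.diff(zc))          # close to 0.02 s for a 50 Hz input
```

Because a linear-phase FIR filter such as the moving average has a constant group delay of (N − 1)/2 samples, subtracting that single nominal value recovers the crossing instants; a filter whose delay varies with frequency would instead need a frequency-dependent correction.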
High accuracy power quality measurement systems are
expensive but their usage has significantly increased in
recent years. To create low cost power quality
analysers, a solution based on a digital low pass filter to
detect zero crossings and measure signal
characteristics such as the RMS value or distortions
with accurate timestamps is analysed in this paper.
Date: 2017-08-31T23:00:00Z

Handle: http://hdl.handle.net/10174/22525
Title: Artificial Bee Colony Algorithm for Peak-to-Peak Factor Minimization in Periodic Signals
Authors: Hu, Yingyue; Ramos, Pedro; Janeiro, Fernando
Abstract: Minimization of the peak-to-peak factor in
multiharmonic signals is highly desirable in many
signal processing applications such as impedance
spectroscopy and acoustic signal generation. This
minimization allows an optimized signal-to-noise ratio
within a given signal amplitude range. Careful
selection of the phases of each harmonic of a
multiharmonic signal may lead to much better
peak-to-peak factors when compared to zero or random
phases. This paper presents the use of an Artificial Bee
Colony algorithm in the minimization of the
peak-to-peak factor by phase optimization of multiharmonic
signals. The obtained results are compared with the
Schroeder formula and the clipping algorithm. The
computation time of each algorithm is also presented
for comparison.
Date: 2017-08-31T23:00:00Z

Handle: http://hdl.handle.net/10174/21424
Title: A Genetic Algorithm tweak for result improvement in inverse optimization problems
Authors: Cavaleiro Costa, Sérgio; Malico, Isabel; Janeiro, Fernando M.
Abstract: The decrease of global greenhouse gas (GHG) emissions is one of the ways to contain global warming. Through Anaerobic Digestion (AD), organic effluents are transformed into biomass and, in the process, biogas (methane and carbon dioxide) is released. Methane, with a higher GHG potential than CO2, is an important contributor to climate change. Therefore, the controlled use of microbes to synthesize organic material and minimize methane release to the atmosphere, with subsequent methane capture and reutilization, is an attractive choice in industries with large organic waste production.
Different models were developed to simulate AD. The most common are nonlinear dynamic systems composed of a set of ordinary differential equations. They differ in the number of processes considered.
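The class of models referred to above (a set of coupled nonlinear ordinary differential equations) can be illustrated with a deliberately minimal two-state sketch. The equations, Monod kinetics, and parameter values here are generic illustrative choices, not the AD model calibrated in this work.

```python
# Minimal two-state sketch of an AD-style nonlinear ODE model
# (illustrative only; generic Monod-kinetics choices, not this work's model):
#   dS/dt = -k * mu(S) * X          substrate consumption
#   dX/dt = (mu(S) - kd) * X        biomass growth minus decay
mu_max, Ks, k, kd = 0.4, 2.0, 5.0, 0.02   # generic parameter values

def step(S, X, dt):
    mu = mu_max * S / (Ks + S)      # Monod growth rate
    return max(S - dt * k * mu * X, 0.0), X + dt * (mu - kd) * X

S, X = 10.0, 0.1                    # initial substrate and biomass
for _ in range(2000):               # forward-Euler integration, dt = 0.01
    S, X = step(S, X, 0.01)
```

Calibrating such a model means estimating parameters like mu_max, Ks, k and kd from measured outputs, which is exactly the inverse optimization problem addressed below.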
In order to have a valid model that can be used, for example, for control purposes, the dynamic model parameters must be estimated. For that reason, an Inverse Optimization (IO) must be performed. Due to their simplicity, flexibility and global search efficiency, Genetic Algorithms (GA) are widely used in different research areas. However, the conventional implementation of Genetic Algorithms, called here Basic Genetic Algorithms (BGA), faces some difficulties in solving this kind of IO problem. To deal with these problems, a tweak to the BGA is proposed: the Neighbored Genetic Algorithm (NGA).
In the newly proposed NGA method, one or more subjects within the population are selected for use in an inner loop of the algorithm. In this loop, each selected subject randomly generates a subpopulation, with a specific number of individuals, from a normal distribution (or another distribution) whose mean is the value of the selected subject. The best subject of this subpopulation replaces the one that generated it, if it is fitter. This will, in principle, enhance the chances of getting new subpopulations closer to the solution.
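The inner loop described above can be sketched as follows. This is a minimal interpretation: the population representation, the subpopulation size, the standard deviation of the normal distribution, and the choice of the fittest subject as the seed are assumptions, not details fixed by the work.

```python
import numpy as np

def nga_inner_loop(population, fitness, n_sub=20, sigma=0.1, rng=None):
    """One NGA-style inner-loop pass (sketch): sample a normal
    subpopulation around the fittest subject and keep the best
    sample only if it improves the fitness (minimization)."""
    rng = rng or np.random.default_rng()
    scores = np.array([fitness(p) for p in population])
    i = np.argmin(scores)                 # selected (fittest) subject
    # Subpopulation drawn from a normal distribution centered on it
    sub = rng.normal(loc=population[i], scale=sigma,
                     size=(n_sub,) + np.shape(population[i]))
    sub_scores = np.array([fitness(s) for s in sub])
    j = np.argmin(sub_scores)
    if sub_scores[j] < scores[i]:         # replace only if fitter
        population[i] = sub[j]
    return population

# Usage: one pass on a toy population for f(x) = sum(x**2)
f = lambda x: float(np.sum(x ** 2))
pop = [np.array([1.0, -2.0]), np.array([0.5, 0.5])]
pop = nga_inner_loop(pop, f, rng=np.random.default_rng(1))
```

Because the replacement is conditional, the best fitness in the population can only stay the same or improve after each inner-loop pass.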
To validate and test the proposed method, a benchmark function (Goldstein-Price) was used. In this case, the NGA method converged in 99% of the runs, while the BGA method only converged in 38% of the cases.
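For reference, the Goldstein-Price benchmark used above is a standard two-variable test function with global minimum f(0, −1) = 3; a direct transcription:

```python
def goldstein_price(x, y):
    """Goldstein-Price benchmark; global minimum is 3 at (0, -1)."""
    a = 1 + (x + y + 1) ** 2 * (19 - 14 * x + 3 * x ** 2
                                - 14 * y + 6 * x * y + 3 * y ** 2)
    b = 30 + (2 * x - 3 * y) ** 2 * (18 - 32 * x + 12 * x ** 2
                                     + 48 * y - 36 * x * y + 27 * y ** 2)
    return a * b
```

Its many local minima and steep valleys are what make convergence rates like the 99% vs. 38% reported above a meaningful comparison between optimizers.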
Finally, simulated data for methane production were used in the calibration of the AD model. In this IO problem, both BGA and NGA were run 100 times in order to compare their performance. After 10 000 iterations, the cost function values for the BGA and NGA models were 8×10⁻³ and 1×10⁻⁴, respectively. Even though the new approach has proved to be computationally more expensive per iteration, a lower cost function value with less computational time was consistently found in the 100 tests performed when the NGA method was used.
Date: 2017-10-26T23:00:00Z