Please help me with a few questions about EMD ~

Questions:

1. "The components c1, c2, c3, ..., cn contain frequency bands of the signal ordered from high to low; the frequency content of each component is different and changes as the signal x(t) itself changes, and rn represents the central trend of the signal x(t)."

I'm a little confused by this sentence. It says that c1, c2, ..., cn are arranged strictly from high to low frequency and that rn represents the central trend, but I think there may be some errors here depending on the situation.

2. I haven't studied Huang's program carefully yet, so I don't know what he did to address the shortcomings of EMD. How does this program improve EMD? More specifically, what kind of signal is the program better suited to analyzing?

From the definition of an IMF, the method looks well suited to processing symmetric AM-FM signals. But real signals, such as the time-domain waveform of a seismic signal, are distorted and are not standard sine or cosine waves, which is why the examples people give tend to be a standard sine wave or an AM-FM signal. What happens if the example is noise?

3. The groups now working on EMD are all improving it, and as a result many papers have been published. In the field of fault diagnosis, Mr. Yu has done good work and has published three or four papers in Mechanical Systems and Signal Processing with improved algorithms, mainly aimed at gear and bearing faults. As everyone knows, such signals are very likely to show amplitude and frequency modulation, so the processing results should be fine, but I don't know whether you have tried other kinds of faults. If the rotating speed changes a lot and the collected waveform fluctuates strongly, will the method still work well? My feeling is still that the more nearly stationary the signal is, the better the processing result. I look forward to your discussion.

Answer 1. c1, c2, ..., cn are indeed generated strictly in order from high to low frequency, but there is a common misunderstanding here: it does not mean that the frequency of c1 must everywhere be higher than that of c2. The correct understanding is that, at any given position in the signal, the frequency of that part of c1 is higher than the frequency of the same part of c2, which reflects the strong locality of the EMD algorithm. This is also consistent with Huang's statement that "adjacent components may contain oscillations of the same time scale, but oscillations of the same time scale never appear at the same position in two different IMF components." As for the errors produced during the decomposition (mainly from the choice of envelope interpolation, the treatment of the end effect, and the design of the sifting stop criterion), they keep accumulating into the subsequent decomposition levels, not necessarily only into the final residue (the trend term).
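To make the "locally fastest oscillation comes out first" behaviour concrete, here is a minimal sketch of the sifting idea in Python (assuming numpy and scipy). This is not Huang's patented code nor Flandrin's emd.m: it ignores the end effect and uses a fixed number of sifting passes instead of a proper stop criterion, which are precisely the error sources mentioned above.

import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(h, t):
    """One sifting pass: subtract the mean of the upper/lower cubic-spline envelopes."""
    maxima = argrelextrema(h, np.greater)[0]
    minima = argrelextrema(h, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None                               # too few extrema: nothing oscillatory left
    upper = CubicSpline(t[maxima], h[maxima])(t)  # upper envelope (no end-effect handling)
    lower = CubicSpline(t[minima], h[minima])(t)  # lower envelope
    return h - 0.5 * (upper + lower)              # remove the local mean

def crude_emd(x, t, n_sift=10, max_imf=8):
    """Crude EMD: each IMF is the locally fastest oscillation left in the residue."""
    imfs, residue = [], x.astype(float).copy()
    for _ in range(max_imf):
        h = residue.copy()
        for _ in range(n_sift):                   # fixed sift count stands in for a stop criterion
            h_new = sift_once(h, t)
            if h_new is None:
                return imfs, residue              # what is left is the residue/trend rn
            h = h_new
        imfs.append(h)
        residue = residue - h
    return imfs, residue

# A 4 Hz tone plus a 40 Hz burst that exists only on the second half of the record:
t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 4 * t) + (t > 0.5) * 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, rn = crude_emd(x, t)
# imfs[0] typically carries the 40 Hz burst in the second half but (roughly) the 4 Hz
# oscillation in the first half, so c1 is not globally "higher frequency" than c2 --
# the high-to-low ordering holds locally, exactly as described above.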

2.

A) Actually, we never got Huang's source program (it is not free, since Huang has applied for a patent). Most people use the source code provided by Flandrin, i.e. the method of G. Rilling mentioned by the poster above. (There are two different attributions because the website that provides the code is Flandrin's, while the paper cited in emd.m has G. Rilling as first author; perhaps they simply don't rank contributions by author order the way we do.) The program is basically reliable and can be used to analyze all kinds of data, but whether the result is any good depends on whether it meets your needs. As for what kind of data is suitable, there is still no conclusion. First, the EMD algorithm has no proper mathematical model, so it lacks a rigorous mathematical foundation: questions such as convergence, uniqueness and orthogonality cannot really be studied at all, and at present even "what signals can EMD analyze" cannot be answered. Second, the algorithm is so far purely operational and empirical (as its name suggests), so these questions can only be tackled once its theoretical underpinnings are found. Third, no algorithm can be effective for every signal, so don't expect EMD to handle arbitrary signals.

B) The definition of an IMF does indeed require the IMF itself to be symmetric, but this does not mean the signal has to have that property, nor does it require the signal to be a superposition of sines and cosines. I think the reason EMD has attracted so much attention is not only the so-called "pyramid scheme" effect, but also its performance in practice. If it could only handle regular signals, its influence (for good or ill) would be nowhere near what it is.
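For reference, the two defining conditions of an IMF in Huang's original paper are: (i) the numbers of extrema and of zero crossings differ by at most one, and (ii) the mean of the upper and lower envelopes is (locally) zero. Below is a small numerical check, a sketch assuming numpy/scipy; the tolerance value is my own arbitrary choice, not something from this thread or the papers.

import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def check_imf(c, t, tol=0.1):
    """Rough numerical check of the two IMF conditions for a candidate component c(t)."""
    maxima = argrelextrema(c, np.greater)[0]
    minima = argrelextrema(c, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return False, False                          # too few extrema to build envelopes
    # Condition (i): number of extrema and number of zero crossings differ by at most one.
    zero_crossings = int(np.sum(np.signbit(c[:-1]) != np.signbit(c[1:])))
    cond1 = abs((len(maxima) + len(minima)) - zero_crossings) <= 1
    # Condition (ii): the envelope mean is small compared with the envelope amplitude.
    upper = CubicSpline(t[maxima], c[maxima])(t)
    lower = CubicSpline(t[minima], c[minima])(t)
    cond2 = np.max(np.abs(0.5 * (upper + lower))) <= tol * np.max(np.abs(0.5 * (upper - lower)))
    return cond1, cond2

# A clean symmetric AM-FM component should roughly satisfy both conditions,
# whereas raw noise or a strongly distorted waveform generally will not:
t = np.linspace(0, 1, 2000)
c = (1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 20 * t)
print(check_imf(c, t))

The point of B) is exactly this: the conditions constrain each extracted component, not the raw signal you feed in.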

C) EMD produces the IMFs in order from high to low frequency, which means it can itself be used for denoising, rather than having to remove the noise with some other method before applying EMD. For example, the detection of brain functional activation areas that I have been working on recently is essentially a process of removing noise from the signal and recovering the original stimulus. The results are very good both for additive random noise following a regular distribution (e.g. Gaussian or uniform) and for multiplicative random noise following a regular distribution (I have only tested the Poisson distribution). Of course, the results in the latter case are not as good as in the former, but they are still enough to beat the traditional detection methods. Personally, I think EMD is so effective in practice because it can handle non-stationary and nonlinear time series.
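As an illustration of "denoising with EMD itself" (this is not the brain-activation code mentioned above, which I am not posting here), the simplest possible scheme is partial reconstruction: decompose, drop the first few noise-dominated high-frequency IMFs, and rebuild the signal from the rest. The sketch below assumes the third-party PyEMD package (the EMD-signal distribution on PyPI); the call names follow its documentation but should be treated as assumptions, and dropping exactly two IMFs is an arbitrary choice.

import numpy as np
from PyEMD import EMD   # third-party package "EMD-signal"; API assumed from its documentation

def emd_denoise(x, n_drop=2):
    """Crude EMD denoising: discard the first n_drop (highest-frequency, noise-dominated)
    IMFs and rebuild the signal from the remaining IMFs plus the residue."""
    decomposer = EMD()
    decomposer.emd(x)                                  # run the decomposition
    imfs, residue = decomposer.get_imfs_and_residue()  # assumed helper from PyEMD's examples
    return imfs[n_drop:].sum(axis=0) + residue         # partial reconstruction

# Toy example: a 5 Hz tone buried in additive Gaussian noise.
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
denoised = emd_denoise(noisy, n_drop=2)
print(np.std(noisy - clean), np.std(denoised - clean))   # the second number is usually smaller

One can also threshold each IMF instead of discarding it outright; the point here is just that the high-to-low ordering is what makes EMD usable as a denoiser.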

3. At present, improvements to the EMD method fall into two categories, experimental and theoretical; relatively speaking, the latter is much rarer.

A) The former mainly involves two aspects; in essence these are the subjective rules people apply when using EMD to decompose a signal. First, based on one's own interpretation of the zero-mean condition, different criteria are adopted as the stopping condition for IMF sifting. Second, when cubic splines are used to compute the upper and lower envelopes of the signal, a particular endpoint-extension method is chosen according to the trend at the two ends of the signal. When EMD is applied to non-stationary, nonlinear signals, different rules on these two points lead to different decomposition results. The improvement G. Rilling and co-authors made to Huang's EMD algorithm in 2003 belongs to the first kind; personally, I find their condition more reasonable than Huang's original one. The neural-network method proposed by the domestic scholar Deng Yongjun and others in 2001, the mirror-closure and extreme-point extension methods proposed by Huang Daji in 2003, and the polynomial-fitting algorithm proposed by Liu Huiting in 2004 all belong to the second category. As for the results of the last couple of years, I haven't sorted them out yet, hehe.
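To make the two categories concrete, here is a Python sketch of (first kind) the sort of sifting stop test introduced in the 2003 paper and (second kind) a very crude mirror-style endpoint extension. This is my own paraphrase, not any of the cited authors' code; the threshold values are the ones commonly quoted for that kind of criterion and are assumptions here.

import numpy as np

def rilling_style_stop(mean_env, upper_env, lower_env,
                       theta1=0.05, theta2=0.5, alpha=0.05):
    """Stop sifting when |envelope mean| / envelope amplitude is below theta1 on
    (1 - alpha) of the record and below theta2 everywhere (commonly quoted defaults)."""
    amp = 0.5 * (upper_env - lower_env)
    sigma = np.abs(mean_env) / np.maximum(amp, 1e-12)
    return (np.mean(sigma > theta1) < alpha) and np.all(sigma < theta2)

def mirror_extend(t, x, n=10):
    """Crude stand-in for mirror closure: reflect the first/last n samples about the
    end points before the envelopes are fitted, to tame the end effect."""
    t_ext = np.concatenate((2 * t[0] - t[n:0:-1], t, 2 * t[-1] - t[-2:-n - 2:-1]))
    x_ext = np.concatenate((x[n:0:-1], x, x[-2:-n - 2:-1]))
    return t_ext, x_ext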

B) The latter mainly refers to the 2004 proposal by Chen et al. to replace the traditional "envelope mean" with a "moving average" when extracting the low-frequency part of the signal; using the good properties of B-spline functions, they tried to make further progress toward a mathematical foundation for empirical mode decomposition. In addition, at the beginning of 2006 Huang proposed an algorithm for post-processing the IMFs obtained from EMD (essentially normalizing the IMFs) in order to obtain the instantaneous frequency and amplitude more accurately (personally, I think this gives the true envelope and instantaneous frequency). Before I came to Beijing I tried to prove the convergence of this algorithm in a local sense, but only obtained partial results; I recently heard that a junior student in my group has basically proved it in the global sense. I'll look at the details when I get back, hehe. This post-processing abandons the Hilbert transform entirely, making the instantaneous frequency and instantaneous amplitude more accurate and meaningful.
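A sketch of the normalization idea (my own paraphrase, not Huang's code): repeatedly divide the IMF by the spline envelope through the maxima of its absolute value until the carrier is bounded by one. The accumulated envelope is then the instantaneous amplitude, and the instantaneous frequency can afterwards be computed from the unit-amplitude carrier directly, with no Hilbert transform. Assumes numpy/scipy; the iteration count and the small floor value are arbitrary choices.

import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def normalize_imf(c, t, n_iter=5):
    """Split an IMF into an AM part (empirical envelope) and a unit-amplitude FM carrier
    by repeatedly dividing by the spline envelope of |carrier|'s maxima."""
    carrier = c.astype(float).copy()
    envelope = np.ones_like(carrier)
    for _ in range(n_iter):
        peaks = argrelextrema(np.abs(carrier), np.greater)[0]
        if len(peaks) < 2:
            break
        env = CubicSpline(t[peaks], np.abs(carrier)[peaks])(t)
        env = np.maximum(env, 1e-12)          # guard against division by (almost) zero; end effects ignored
        carrier /= env
        envelope *= env
        if np.max(np.abs(carrier)) <= 1.0 + 1e-6:
            break                             # carrier is now (numerically) bounded by one
    return envelope, carrier                  # c is approximately envelope * carrier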

Generally speaking, EMD, and HHT more broadly, have many shortcomings, but they are not without merit. The theoretical proofs and further improvements deserve more attention, and how useful they are in practice depends on your needs and on how well you exploit their potential.