
6. Homework

In many of the help files associated with the various menus of Fraclab, you'll find a "homework" section that describes an example application of the corresponding tools. This is intended to help you get started with Fraclab and to show the possibilities of a fractal approach to signal processing.

In this general help, we highlight three examples taken from the following menus: 1D exponents estimation, Denoising and Segmentation. See the corresponding help files for more details.

6.1 Analysis of a stock market log

The stock market is a fascinating area for fractal analysis. Many authors have argued that models based on stochastic processes exhibiting long-range dependence and/or infinite variance are relevant in this area. Fraclab can compute both long-range dependence exponents and various parameters that characterize processes without a second moment. However, the main focus in Fraclab is on the measure of local regularity: independently of any assumption of long-range dependence and/or infinite variance, it is certainly true that stock market logs are very irregular. Moreover, this irregularity is a function of time, and we expect that, for instance, in "quiet" periods the market should evolve smoothly, while crashes translate into sudden changes in the regularity.

A nice illustration of the above intuition is provided by the analysis of the Nikkei225 index during the period 01/01/80 to 05/11/2000. The log consists of 5313 daily values corresponding to that period. First load these data into Fraclab: Press the Load button in the main window. A new window appears, showing the files of your current directory. Change directory to the DATA directory that comes with the Fraclab release. Choose the file called nikkei225.txt by clicking on it. Its name is then displayed at the top of the window, in the Name: box. Since this file is plain text, click on the button to the right of Load as:, and select the item ASCII. Then press Load, and Close the loading window. The nikkei225 file should appear in your Variables list of the main window, under the name fnikkei225. View this signal: Open the View window by pressing the View button. In the View window, click on View in new. This will open a window displaying the stock market log. Like most data of this type, this signal is quite erratic. Other obvious features include a steady increase at the beginning of the log, and strong discontinuities around the points 1780, 2040, 2650, 2760 or 3200. Let us see if we can highlight these and other significant events with a local regularity analysis.

Financial analysts do not work directly on the prices, but on their logarithms, so we'll first type lnikkei = log(fnikkei225); in the matlab window, and import lnikkei into Fraclab. To do this, press the Scan Workspace button in the main window. In the new window that appears, titled Import Data from MATLAB Workspace, locate the signal lnikkei, select it by clicking on it, and hit Import, then Close this window. lnikkei will appear in the Variables list of the main window, under the same name.

We will now estimate the local regularity of lnikkei: Click on 1D Exponents Estimation and choose Local Hölder Exponent then oscillation based method. In the window that appears, check that the Input data is lnikkei. Otherwise, select lnikkei by clicking on it in the Variables list of the main window, and hit Refresh in the Local Hölder Exponent window. Set the parameters as follows: Nmin = 1, Nmax = 8, Neighbourhood size = 16, and regression type = Least Square (see the help file corresponding to this menu for details on the meaning of these parameters). Hit Compute, and wait for less than a minute. The output signal appears in the Variables list of the main window, and is called pht_lnikkei0. View this signal, by pressing View in new in the View menu (check that pht_lnikkei0 is selected before doing so). As you see, most values of the local Hölder exponent are between 0 and 1, with a few peaks above 1 and up to more than 6. Recall that a Hölder exponent between 0 and 1 means that the signal is continuous but not differentiable at the considered point. In addition, the lower the exponent, the more irregular the signal. Looking at the original signal, it appears obvious that the log is almost nowhere smooth, which is consistent with the values of pht_lnikkei0. What is more interesting is that important events in the log have a specific signature in pht_lnikkei0: periods where "things happen" are characterized by a sudden increase in regularity, which passes above 1, followed by very small values, e.g. below 0.2, which correspond to low regularity. Let us take some examples. The most prominent feature of pht_lnikkei0 is the peak at abscissa 2018 with amplitude larger than 6. Note also that the points with the lowest regularity values of the whole log are located just after this peak: The Hölder exponent is around 0.2 at abscissa roughly between 2020 and 2050, and 0.05 at abscissa between 2075 and 2100.
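To make the oscillation-based estimation concrete, here is a minimal sketch of a pointwise Hölder exponent estimator. It is written in Python rather than in the MATLAB environment Fraclab runs in, and the function name and details (window handling, regression) are illustrative assumptions, not Fraclab's actual routine: for each point, one measures the oscillation (maximum minus minimum) of the signal over neighbourhoods of increasing size, then regresses log-oscillation against log-size.

```python
import numpy as np

def local_holder_oscillation(x, nmin=1, nmax=8):
    """Estimate a pointwise Holder exponent at each sample of x from
    the scaling law osc(r) ~ r**alpha, where osc(r) is the oscillation
    (max - min) of x over a neighbourhood of radius r.
    Illustrative sketch only, not Fraclab's implementation."""
    x = np.asarray(x, float)
    n = len(x)
    radii = np.arange(nmin, nmax + 1)
    log_r = np.log(radii)
    alpha = np.empty(n)
    for t in range(n):
        osc = []
        for r in radii:
            lo, hi = max(0, t - r), min(n, t + r + 1)
            w = x[lo:hi]
            osc.append(w.max() - w.min())
        # least-squares slope of log(osc) vs log(r);
        # the floor guards against zero oscillation in flat regions
        log_osc = np.log(np.maximum(osc, 1e-12))
        alpha[t] = np.polyfit(log_r, log_osc, 1)[0]
    return alpha
```

On a straight line, the oscillation grows exactly linearly with the window radius, so the estimated exponent is 1, matching the intuition that a differentiable signal has Hölder exponent at least 1.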
Both values are well below the mean of pht_lnikkei0, which is 0.4 (its variance is 0.036). As a matter of fact, only 10 percent of the points of the signal have an exponent smaller than 0.2. Now the famous October 19, 1987 crash corresponds to abscissa 2036, right in the middle of the first low-regularity period after the peak. The days with smallest regularity in the whole log are thus logically located in the weeks following the crash, and one can assess precisely which days were more erratic. However, if you go back to the original fnikkei225 signal, things are not so clear: although the crash is easily seen as a downward discontinuity at abscissa 2036, the area around this point does not appear to be more "special" than, for instance, the last part of the log (you may zoom on the different areas for easier visualization).
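The summary statistics quoted above are easy to recompute from any exponent signal. A small helper, sketched here in Python with an illustrative name (not a Fraclab function), returns the mean, the variance and the fraction of points whose exponent falls below a given regularity threshold:

```python
import numpy as np

def exponent_stats(alpha, thresh=0.2):
    """Mean, variance, and fraction of points with Holder exponent
    below `thresh`, for an array of pointwise exponents.
    Illustrative helper, not part of Fraclab."""
    alpha = np.asarray(alpha, float)
    return alpha.mean(), alpha.var(), (alpha < thresh).mean()
```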

Consider now another region which contains many points with low Hölder exponents along with a few isolated very regular points (i.e. with exponent larger than 1). Look at the area between abscissa 4450 and 4800: This roughly corresponds to the "Asian crisis" period, which approximately took place between January 1997 and June 1998 (there are no exact dates for the beginning and end of the crisis; some authors place the beginning of the crisis mid-1997, and the end by late 1999, or even later). On the graph of the original log of the Nikkei225, you can see that this period is quite erratic, with some discontinuities and pseudo-cycles (this behaviour arguably extends from around point 3500 to maybe the end of the trace). Looking now at pht_lnikkei0, we notice that there are two peaks with exponents larger than one in the considered period (there is an additional such point around abscissa 4300, which, however, is not followed by points with low values of regularity, e.g. smaller than 0.15, but is preceded by such points, between abscissa 4255 and 4285). The first peak is around 4455, and is followed by irregular points between 4465 and 4475. The second is around 4730. This region, between abscissa 4450 and 4800, has a large proportion of irregular points: 12 percent of its points have an exponent smaller than 0.15. This is three times the proportion observed in the whole log. In addition, this area is the one with the highest density of points with exponent smaller than 0.15 (we exclude from these calculations the first and last points of the log, because of border effects). A nice way of seeing this is to zoom on the graph of pht_lnikkei0 to display only the ordinates between, say, 0.05 and 0.2. This can easily be done using the axes control facility in the View menu of Fraclab by selecting the appropriate Y range (don't forget to hit Apply so that your settings take effect).

Although the discussion above is overly simplistic, it shows that strong perturbations in this particular financial log generally correspond to regions with very low values of the local regularity, often together with one or a few extremely regular points. Such a behaviour has been observed in a large number of other logs. You may care to try the same kind of analysis on your own signals. Chances are that "interesting" regions, or points, will have a specific signature in the regularity graph: The evolution of the Hölder exponents conveys information that is, in some situations, perhaps more intrinsic than the amplitude of the original signal.

Before leaving this signal, let us compute its "long-range dependence" exponent. More precisely, Fraclab allows you to compute an exponent that describes the power law behaviour of the frequency spectrum of the increments of the signal around the origin (i.e. at low frequencies), of course assuming that such a power law holds. If this is the case and if the exponent is larger than 1/2, one says that the data display long range dependence (LRD), in the sense that the correlations decay "slowly" when the time lag increases. The LRD exponent estimator in Fraclab is a wavelet-based one. First select lnikkei in the Variables list, then go to 1D Exponents Estimation and choose LRD Exponent. In the window that appears, choose Advanced Compute. A new window pops up, titled Long Range Dependence Parameter. Check that the Input Signal is lnikkei, and modify the Voices parameter from its default 128 to 64, just to speed up the computations a little. Then hit Compute WT. This will launch the computation of the continuous wavelet transform of lnikkei, using the complex Morlet wavelet as an analyzing wavelet, and with the various parameters specified in the window (see the help corresponding to this area of Fraclab for more). When the computation is over, you should see a new structure in the Variables list, called cwt_lnikkei0. This is the continuous wavelet transform of lnikkei, which you may care to visualize in the usual way: hit View in new in the View menu (if the View menu is not opened, hit View in the main window). cwt_lnikkei0 should also appear in the box facing Input CWT in the lower part of the Long Range Dependence Parameter window. Now hit Compute at the bottom of this window. A new window appears, showing a graph where the abscissae represent the logarithms of the scales in the wavelet transform, and the ordinates are estimates of the logarithms of the energy in the signal at the corresponding scale.
The red line is the least square regression line corresponding to the displayed circles. You'll see that the linear fit is poor when the whole graph is considered, as it is here. Since we are interested in LRD, we should however restrict our attention to large scales. Using the black cross that appears when you point inside the graphic window, select the region between abscissa 3 and 7: put the pointer at any point above abscissa 3, click, then put the pointer at any point above abscissa 7, and click again. The red line should now approximately fit the part of the graph between these abscissae. The Estimated Global Scaling Exponent displayed above the graph should be around 0.56. You may try to compute the least square regression line over other parts of the graph by repeating the same steps. When you're finished, hit Return on your keyboard, as indicated on the lower right part of the graphic window. This will make the cross disappear and will display the last estimated value of the exponent at the bottom of the Long Range Dependence Parameter window, in the box facing Scaling Exponent. To make the graphic window go away, close it manually in the usual way (i.e. not with the help of the View menu). According to this estimate, then, our financial log exhibits slight long-range dependence, since the exponent is a bit above 0.5.
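The regression underlying the Estimated Global Scaling Exponent is an ordinary least-squares fit in log-log coordinates, restricted to a chosen range of scales. A minimal Python sketch follows; the helper name and the way the range is specified are illustrative assumptions, not Fraclab code, but the mechanism — fit a line to log-energy versus log-scale over the selected abscissae — is the same:

```python
import numpy as np

def fit_slope(log_scales, log_energy, lo=None, hi=None):
    """Least-squares regression line of log-energy against log-scale,
    optionally restricted to abscissae in [lo, hi] -- the analogue of
    selecting a sub-range with the cross in the graphic window.
    Returns (slope, intercept)."""
    log_scales = np.asarray(log_scales, float)
    log_energy = np.asarray(log_energy, float)
    mask = np.ones(len(log_scales), bool)
    if lo is not None:
        mask &= log_scales >= lo
    if hi is not None:
        mask &= log_scales <= hi
    slope, intercept = np.polyfit(log_scales[mask], log_energy[mask], 1)
    return slope, intercept
```

Restricting the fit to large scales, as done above with the cross, simply changes which circles enter the regression; on data that follow a clean power law the slope is the same over any sub-range.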

6.2 Synthetic Aperture Radar image denoising

SAR images are generally difficult to read and to analyze because they contain a large amount of a specific noise, called speckle. Dozens of methods have been proposed to enhance their quality. Some use precise knowledge about, e.g., the statistics of the noise, while others are rather generic. The fractal denoising method is based on the following simple observation: consider an image I, and its noisy version J. Pick a particular location (x,y) at random in the image. Then, chances are that the local regularity of I around (x,y) will be larger than that of J. Of course, this statement is rather imprecise if we do not define how we measure regularity. We will however content ourselves here with the intuitive fact that adding noise decreases the local regularity at all points. Denoising can then be performed by increasing in a controlled way the local regularity. This is exactly how the fractal method works.
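One common way to shift regularity in a controlled way is to rescale wavelet detail coefficients across scales: multiplying the coefficients at depth j by 2^(-delta*j) raises the local Hölder exponents by roughly delta. The Python sketch below uses a 1D Haar transform for simplicity; it is an illustration of the principle under that assumption, not Fraclab's Multifractal Pumping implementation. Note that the operation is exactly invertible: pumping with -delta undoes pumping with delta, which is the property exploited in the experiment further down.

```python
import numpy as np

def haar_fwd(x):
    """Orthonormal Haar DWT. Returns the final approximation and the
    list of detail bands, coarsest first. len(x) must be a power of 2."""
    details = []
    a = np.asarray(x, float)
    while len(a) > 1:
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))
        a = (even + odd) / np.sqrt(2)
    return a, details[::-1]          # coarsest detail first

def haar_inv(a, details):
    """Inverse of haar_fwd."""
    for d in details:                # rebuild from coarsest to finest
        even = (a + d) / np.sqrt(2)
        odd = (a - d) / np.sqrt(2)
        a = np.empty(2 * len(d))
        a[0::2], a[1::2] = even, odd
    return a

def pump(x, delta):
    """'Spectrum shift' sketch: attenuate the detail coefficients at
    depth j (finest scales attenuated most) by 2**(-delta*j), which
    increases every local Holder exponent by about delta.
    Exactly invertible: pump(pump(x, delta), -delta) == x."""
    a, details = haar_fwd(x)
    shifted = [d * 2.0 ** (-delta * (j + 1)) for j, d in enumerate(details)]
    return haar_inv(a, shifted)
```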

To see a practical example of this, first load a SAR image into Fraclab by following these steps: Press the Load button in the main window. A new window appears, showing the files of your current directory. Change directory to the DATA directory that comes with the Fraclab release. Choose the file called sar.tif by clicking on it. Its name is then displayed at the top of the window, in the Name: box. Check that the Load as: box displays the item image, and press Load. Then Close the loading window. The sar image should appear in your Variables list of the main window, under the name im2d_0 (or im2d_1, etc...). View this image: Open the View window by pressing on the View button. In the View window, click on View in new. This will open a window displaying the SAR image. As you'll see, this image appears very noisy, and does not seem to hold any useful information. However, this is not quite true, as this scene does contain a river flowing from North to South. Our aim here is to perform a pre-processing that will enhance the image so that it will be possible to detect automatically the river. Such a procedure is used by the IRD, a French agency, which, in this particular application, is interested in monitoring water resources in this region of Africa.

Go to Denoising, and choose Multifractal Pumping. In the new window that appears, check that the name appearing in the Analyzed signal box is im2d_0. Then choose a Spectrum shift value, either by using the sliders or by entering a value directly. A value of 1.5 will give you an interesting result at this stage. Press Compute. The processing is fast (probably less than a second). You should see a new signal in the Variables list, called den_im2d_00. Display this image by clicking View in new in the View menu. If everything went right, you should be able to distinguish some structures on the processed image. Most prominently, the river now appears, flowing from the top of the image and assuming roughly an inverted "Y" shape. Other values of the Spectrum shift value around 1.5 may give more visually pleasing results.

Here are some other tests worth trying. A characteristic feature of the Multifractal Pumping is that it is invertible. A striking illustration is to denoise the SAR image with a large Spectrum shift value, say 5. You obtain as an output the "enhanced" image den_im2d_01, say. View den_im2d_01, and notice that it is very blurred, and thus seems to contain even less information than the original data. Now, with den_im2d_01 selected in the Variables list, hit Refresh in the denoising window, so that Fraclab knows that you want to process this new signal. Enter -5 as Spectrum shift value (that is, instead of denoising, you "increase the noise"). View the output, called den_den_im2d_010. You'll see that this last image exactly coincides with im2d_0.

A final test is to compare this Multifractal Pumping with the classical wavelet shrinkage method. Wavelet shrinkage is a denoising procedure that gives excellent results for data corrupted with independent additive noise, provided the original signal has some minimum regularity. In our case, the noise is non-additive and strongly correlated with the signal, which, furthermore, has no a priori regularity. Thus, we do not expect this method to behave well here. Go to Denoising, and open the Wavelet shrinkage window. Check that the name appearing in the Analyzed signal box is im2d_0. Otherwise, select the signal im2d_0 in the Variables list and hit Refresh in the Wavelet shrinkage window. Choose a threshold value and hit Compute. No matter which value you choose for the threshold, the output signal never appears really "denoised".
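Wavelet shrinkage itself is simple to sketch: decompose, soft-threshold the detail coefficients, reconstruct. Here is a one-level Haar version in Python, an illustrative sketch only (practical shrinkage uses several decomposition levels and smoother wavelets):

```python
import numpy as np

def soft(c, t):
    """Soft threshold: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_shrink(x, t):
    """One-level Haar wavelet shrinkage sketch: keep the approximation
    coefficients, soft-threshold the details, reconstruct.
    len(x) must be even."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail band
    d = soft(d, t)
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

Unlike the Multifractal Pumping above, shrinkage is not invertible: the information carried by the thresholded coefficients is discarded, so there is no shift value that restores the original image.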

6.3 Optical image segmentation

This is intended to show how multifractal analysis may be used for edge detection. Very roughly speaking, the multifractal analysis of a signal or an image consists of two steps: One first computes the Hölder exponent of each point in the signal. Second, one groups all points with the same exponent, say t, to form a set E(t). The multifractal spectrum is the function that associates to each t the "dimension" of the set E(t). In other words, multifractal analysis computes, for each singularity exponent, the "size" of the set of points in the image where this exponent is found.

To apply multifractal analysis to edge detection, we thus start by characterizing the local regularity of the image around each point by its Hölder exponent. Edge points are usually irregular points, so we expect them to have low Hölder exponents. This is true, however, only when one measures the local regularity in the "usual way", i.e. by comparing the grey levels in a given zone with the size of this zone. In this experiment, we use a different measure of regularity: We associate to each region in the image the maximum of its grey levels, and we record the regularity of this quantity. More precisely, we do the following: Around each point in the image, we center windows of increasing size. We "measure" the content of each window by the maximum of the grey levels in the window. The regularity exponent is then obtained by evaluating the scaling law between the logarithms of the maxima and those of the window sizes. It is easy to see that smooth regions will now have a low regularity exponent, while textured zones have a large exponent. For instance, in a zone with constant grey levels, the maximum will not depend on the window size, thus the scaling exponent is 0 (see the references given in the help file of the Segmentation menu for more).
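The max-measure exponent just described can be sketched directly. The Python function below is an illustrative sketch (function name, window sizes and regression details are assumptions, not Fraclab's implementation): it centres square windows of increasing size at each pixel, measures each window by its maximum grey level, and regresses log-measure against log-window-size.

```python
import numpy as np

def max_capacity_exponent(img, sizes=(1, 2, 3)):
    """Pointwise exponent from the 'maximum' capacity: around each
    pixel, take square windows of half-width r (clipped at the image
    border), measure each window by its maximum grey level, and take
    the least-squares slope of log(measure) vs log(window side).
    Illustrative sketch only."""
    img = np.asarray(img, float)
    h, w = img.shape
    log_s = np.log([2 * r + 1 for r in sizes])
    alpha = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            m = []
            for r in sizes:
                win = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                m.append(win.max())
            # floor guards against log(0) on all-black windows
            alpha[i, j] = np.polyfit(log_s, np.log(np.maximum(m, 1e-12)), 1)[0]
    return alpha
```

On a constant image the maxima do not depend on the window size, so the estimated exponent is 0 everywhere, exactly as stated above.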

Let us try this on an optical image. First load the image called door.tif into Fraclab by following these steps: Press the Load button in the main window. A new window appears, showing the files of your current directory. Change directory to the DATA directory that comes with the Fraclab release. Choose the file called door.tif by clicking on it. Its name is then displayed at the top of the window, in the Name: box. Check that the Load as: box displays the item image, and press Load. Then Close the loading window. The door image should appear in your Variables list of the main window, under the name im2d_0 (or im2d_1, etc...). View this image: Open the View window by pressing the View button. In the View window, click on View in new. This will open a window displaying the door image. This is an image of a Japanese door (more precisely, a torii).

Click on the Segmentation pop-up menu and select Image multifractal segmentation. In the new window that appears, click on Refresh on the first line, in front of Analyzed (check before that im2d_0 was selected in the Variables list of the main window). im2d_0 should appear on the first line, meaning that it will be the analyzed image. Ignore the three lines below, and move to the Pointwise Hölder exponent zone. Note that the max capacity is selected: This corresponds to the fact that we will be measuring the content of a region by its maximum grey level, as explained above. You will now change the max size parameter from 5, its default, to 3. Do this by selecting 3 in the pop-up list that appears when you click on 5. Next, hit Compute Hoelder. After a few seconds, a new signal should appear in your Variables list, called hld2dCoef_im2d_00. This is the image of the Hölder exponents. You can view this image by clicking on View in new in the View menu. Notice that the image of the Hölder exponents gives a nice representation of the main edges of im2d_0. As explained above, pixels with low regularity, such as edge points, appear as bright points, while smooth regions have low grey levels.

Technically, however, hld2dCoef_im2d_00 does not represent an edge extraction of the original image, because edge images are supposed to be binary images: edge points are displayed in white, while all other points are in black. In this easy example, it seems that a simple threshold could turn hld2dCoef_im2d_00 into a binary image that would coincide more or less with the contours. We will follow another path here, by using the second part of multifractal analysis. We thus proceed to compute the multifractal spectrum of our image, i.e. the function that will give the dimension of the sets of pixels having a given exponent. There are several types of multifractal spectra, and we will use the Hausdorff one, which is the default in Fraclab. In the zone facing max boxes, replace the default 32 by 128. This will yield a more precise spectrum. Hit Compute spectrum. The new signal hSpectrum_im2d_0_fd2d_alpha0 is added to the Variables list. View this signal by clicking on View in new in the View menu. This is a 1D graph: abscissae represent the various Hölder exponents present in the image, while ordinates display the associated dimensions. Thus, for instance, the dimension 2 corresponding to the exponent 0 means that a subset of pixels of dimension 2 has exponent 0. Since no other exponent has associated dimension 2, this means that "most" points (more precisely Lebesgue-almost all points) in the image have exponent 0. Recalling that 0 is the exponent of points in smooth regions, we recover the visual fact that, in the door image, most points lie in smooth regions. Notice the shape of the spectrum, which is roughly a decreasing segment starting from the point (0, 2) and ending at (x, 0) (here, x = 0.9). This shape is characteristic of non-noisy optical images when one measures the exponents using the maxima of the grey levels. Here, we are interested in contours.
Let us see how we can detect them using the spectrum: Since contours must form a set of dimension 1 in the image (because contours are smooth curves), and because we expect contours to be made of points which have roughly the same regularity, we expect that edge points should be approximately characterized by those exponents t such that the dimension of E(t) is 1. To verify this assumption, we will perform the segmentation of the image by setting to white all pixels whose exponent t is such that E(t) has dimension close to 1, and all other pixels to black. To do this, go to the Segmentation part of the Multifractal Image Segmentation window, and set the min dim. to 0.9 and the max dim. to 1.1. Hit Compute seg.. The output image, called seg_im2d_00, appears in the Variables list. View this image by clicking on View in new in the View menu. As you can see, we have obtained, by the very simple procedure above, a good approximation to the edges of our original image. Notice in particular that some fine details have been detected, such as the small triangular holes on the right part of the door, the delicate contour of the bush, or the small sphere and the defect in the water on the lower left part of the image. Also, the method has detected some features inside the bush on the left part. These do correspond to some irregularities, which you can see by manipulating the grey levels of the original image a little.
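The dimension-based selection can be sketched end to end: bin the exponent image into level sets E(t), estimate the dimension of each set by box counting, and keep (in white) the pixels whose set has a dimension in the requested range. Everything in the Python sketch below — the names, the binning, the coarse box-counting dimension — is an illustrative assumption, not Fraclab's algorithm (which uses a Hausdorff spectrum), but it captures the same segmentation principle:

```python
import numpy as np

def box_dimension(mask, box_sizes=(1, 2, 4, 8)):
    """Coarse box-counting dimension of the set of True pixels:
    slope of log(#occupied boxes) vs log(1 / box size)."""
    h, w = mask.shape
    counts = []
    for s in box_sizes:
        occ = sum(mask[i:i + s, j:j + s].any()
                  for i in range(0, h, s) for j in range(0, w, s))
        counts.append(max(occ, 1))        # avoid log(0) for empty sets
    return np.polyfit(np.log([1.0 / s for s in box_sizes]),
                      np.log(counts), 1)[0]

def segment_by_dimension(alpha, dmin, dmax, n_bins=32):
    """Group pixels by Holder exponent into n_bins level sets E(t),
    estimate the dimension of each set by box counting, and keep the
    pixels whose set has dimension in [dmin, dmax]."""
    edges = np.linspace(alpha.min(), alpha.max() + 1e-12, n_bins + 1)
    out = np.zeros_like(alpha, dtype=bool)
    for k in range(n_bins):
        level = (alpha >= edges[k]) & (alpha < edges[k + 1])
        if level.any() and dmin <= box_dimension(level) <= dmax:
            out |= level
    return out
```

With min dim. 0.9 and max dim. 1.1 this keeps the roughly one-dimensional level sets, i.e. the contour candidates; with 1.9 and 2 it keeps the full two-dimensional smooth regions, as in the experiments below.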

An interesting additional feature of this approach is that one can extract relevant subsets of points other than the contours, again based on a dimension analysis. For instance, we expect that the irregular points that lie in strongly textured regions should form a set, roughly homogeneous with respect to the Hölder exponent, of dimension between 1 and 2: smooth contours are 1D curves, while smooth regions are 2D areas; textures form subsets which lie "in-between" those two extremes. Verify this by extracting now those points with associated dimension between, say, 1.3 and 1.7, i.e. set min dim. to 1.3 and max dim. to 1.7 (beware that, since Fraclab checks that min dim. is smaller than max dim., you need to enter these values in the right order, otherwise Fraclab will return to its default values 0 and 2). Hit Compute seg. again, and view the output. You should see in white mostly points on the water and in the bush, with additional pixels on the door and the foreground mountains, where one can distinguish some textures. The sky and the background mountains, which only display smoothly varying grey levels, are mostly black, except the edge of the mountain to the right of the image. These sets of white points could rightfully be called "textured points". Finally, if you set min dim. to 1.9 and max dim. to 2, you will verify that you do get full 2D regions mainly composed of points in smooth regions. These three segmentations show that a multifractal analysis of the image makes it possible to extract several kinds of points using the information contained in the spectrum. Of course, much more refined methods based on the technique explained here can be applied. See the references indicated above.

