Iterative Motion Estimation for Elastography

Estimation steps

 

On Wednesday, July 24, I will be presenting at the 2013 IEEE International Ultrasonics Symposium in Prague. The topic: “Iterative autocorrelation motion-estimation with application to elastography imaging” [PDF]

Here is the abstract of the paper:

Precise, robust and efficient motion estimation is the key to high-quality strain imaging. There are two main estimation techniques: (1) spatial-shift estimation using block matching, and (2) phase-shift estimation. The phase-shift estimators are more robust but are limited by phase aliasing. The scan depths of elastography imaging can be up to 300 wavelengths, and even moderate average strains of 0.5–1 % result in displacements of several wavelengths. Several efficient methods have been created over the years to overcome the aliasing limit, but they often employ searching and tracking, which are difficult to implement efficiently on graphics processing units.

This paper presents an iterative algorithm for phase estimation using autocorrelation. In the first stage the autocorrelation is estimated at lag 0 for all pixels in the image. Then a phase-unwrapping algorithm is applied, and the unwrapped phase is converted to displacement. The center frequency used in the conversion is calculated using a second-order polynomial which describes the depth-dependent shift in center frequency. This polynomial needs to be estimated only once for every setup, using a uniform speckle phantom. The displacement is quantized by the spatial sampling interval, and the autocorrelation function is re-estimated at the quantized spatial lags. The precision of the autocorrelation estimate varies depending on the magnitude of the phase, which leaves visible horizontal stripes at the transition from one lag to another. Finally, we apply a deglitching algorithm to compensate for the change in precision at the boundaries between two lags.
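
To make the two-stage idea concrete, here is a minimal NumPy sketch of how such an estimator could look. This is not the code from the paper: the function names, the axial averaging window and the sign conventions are my own assumptions, and the deglitching stage is omitted.

```python
import numpy as np

def iterative_autocorr_displacement(iq_pre, iq_post, fc_poly, fs,
                                    c=1540.0, win=16):
    """Two-stage autocorrelation displacement estimator (sketch).

    iq_pre, iq_post : complex baseband frames, shape (depth, lines)
    fc_poly         : polynomial coefficients (np.polyval order) for the
                      depth-dependent center frequency in Hz, calibrated
                      once per setup on a uniform speckle phantom
    fs              : axial sampling frequency in Hz
    """
    n_depth, n_lines = iq_pre.shape
    dz = c / (2.0 * fs)                     # axial sample spacing (pulse-echo)
    kernel = np.ones(win) / win

    def smooth(a):
        """Axial moving average to reduce estimator variance."""
        return np.apply_along_axis(
            lambda col: np.convolve(col, kernel, mode="same"), 0, a)

    # Stage 1: lag-0 cross-frame autocorrelation, averaged axially.
    r0 = smooth(iq_pre * np.conj(iq_post))

    # Unwrap the phase along depth and convert it to displacement using
    # the depth-dependent center frequency.
    phase = np.unwrap(np.angle(r0), axis=0)
    z = np.arange(n_depth) * dz
    fc = np.polyval(fc_poly, z)[:, None]    # Hz, one value per depth sample
    disp = phase * c / (4.0 * np.pi * fc)   # coarse displacement [m]

    # Stage 2: quantize the displacement to whole-sample lags and
    # re-estimate the autocorrelation at those lags.
    lags = np.rint(disp / dz).astype(int)
    idx_z = np.arange(n_depth)[:, None]
    idx_x = np.arange(n_lines)[None, :]
    shifted = np.clip(idx_z + lags, 0, n_depth - 1)
    r = smooth(iq_pre * np.conj(iq_post[shifted, idx_x]))

    # Residual sub-sample displacement from the phase at the shifted lag.
    residual = np.angle(r) * c / (4.0 * np.pi * fc)
    return lags * dz + residual             # deglitching stage omitted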

Moved H-PaGe to GitHub


I just moved H-PaGe to GitHub. It is one of my dear projects, although I have not maintained it since 2008.

So what is H-PaGe (pronounced /eɪtʃ peɪdʒ/)? H-PaGe stands for “(H)ome (Pa)ge (Ge)nerator”. I wrote the first version back in 2002, and rewrote it in 2004. I have to give credit to Jørgen Arendt Jensen for adopting and promoting it, and to Henrik Laursen who was always there when the server needed some tweaking.

I wrote H-PaGe because I needed an easy way to maintain a course web site, but at the same time I wanted the site to look like the official pages of the Technical University of Denmark (DTU). Back then, the web pages at DTU were static and created using Dreamweaver. The official web site had already started to deteriorate, as people were not consistent in their use of fonts or in maintaining the navigation system. I looked around, saw that people used PHP to create dynamic web sites, downloaded the PHP manual, and a couple of days later I had the site up and running.

The concept was simple: the navigation (menus, submenus, etc.) is created from the directory structure. Every directory has one file that contains only content. The extension determines the type of content: .html is HTML code, .txt is plain text, and .table is a table using (&) as a column separator. Some directories contained multiple files; these were the “news”, or messages. Today we would call them posts. If a directory contained an index.html file, it was treated as the root of a new web site. Directories starting with a dot were hidden and not displayed.

31610 Applied Signal Processing

This made my life quite easy. To add a new menu item, I only needed to create a new directory. The file main.html had only content: no header, no footer, no navigation, no nothing. There was no limit to the number of sub-menu levels.

On the left you can see a screenshot of a site created with H-PaGe. It is from the course 31610 Applied Signal Processing, which was originally taught by Jørgen A. Jensen, then by me, and now by Sadasivan Puthusserypady.

H-PaGe was, is and will be free for use and modification.

Back in the period 2003–2010, most of the pages of the institute were powered by H-PaGe, including some large sites like the one for the Medicine and Technology degree programme. The management of the Technical University of Denmark, however, decided that all pages must be moved to a centralised content-management system, and today there are very few sites still powered by H-PaGe, which is why I decided to put it on GitHub.

Some of the pages I could find that still use H-PaGe are:

H-PaGe is actually a quite capable program. The sites are customisable via CSS, and there are utility functions to create the headings with the right font, color and shadow. It supports configuration scripts, and there can be menus at the top, on the left side, and at the bottom of the page. Furthermore, every directory has its own setup, which allows custom menus to appear when the navigation reaches a certain level.

The program also makes it possible to combine custom navigation with navigation derived from the directory hierarchy.

There is a user guide on GitHub and a presentation on SlideShare that give an introduction to H-PaGe.

In due time all these pages will be imported into Sitecore, but I still believe that the concept behind H-PaGe – generating the site structure from the directory structure – has merit.

The future of conferences

Conference dinner, IEEE International Ultrasonics Symposium, 2008, Beijing

Given the present technological advances, it is quite remarkable that scientific conferences have kept their form – a great number of people sit in a dark room, watch slides, and listen to a presentation given by a speaker. True, the submission of papers is done electronically, and the blackboard and overhead projector have been replaced by computers and electronic projectors.

Keeping this format is increasingly difficult, as the number of participants grows every year. When I attend a conference, it feels like a race from session to session, and from presentation to presentation.

Recently, on-line teaching and video lectures have become widespread – think of Khan Academy, Udacity, and edX, to name a few of the initiatives. Lecture slides can be shared on-line via services like SlideShare, presentations can be posted as videos, and papers are already available on-line. The conferences cannot be made entirely virtual, though. I believe that the future of conferences lies in a mix of on-line virtual meetings and physical meetings. Here is why:

Some of the reasons people attend conferences are:

  • To learn something new (for example during the short courses)
  • To see in what direction research is heading
  • To present their own research (progress)
  • To network with peers
  • To attend exhibitions
  • To meet with clients and vendors

Some key aspects of conferences are that:

  • People allocate time for the conference
  • Papers are published in proceedings, and are subject to some selection/review, which assures a certain level of quality.

One can imagine the following scenario. People submit abstracts as usual, and these are reviewed by the technical committee as they are now. Then papers are submitted electronically and made available on-line prior to the conference. This can be followed by a series of many, but short, on-line sessions, where a large number of participants get the opportunity to present their work. The audience gets the chance to watch the presentations, as well as to read the papers. One can rewind a video and watch it again if there are unclear moments.

A smaller portion of the papers is selected for the physical meeting. With fewer papers, the sessions can be organized in a way that favors discussion of the topics. This will give the participants a better opportunity to network and to exchange ideas. Not all participants will have the means to attend the physical meeting, so such a format will also benefit people from countries and universities with limited resources – another advantage of the hybrid model.

Doughnut-shaped


Recently I attended a conference where I heard a number of students refer to some geometries as “doughnut-shaped”. This is a fine analogy, and it is easy to explain in popular terms – especially if you are giving a speech to the New York Police Department, for example.


On the other hand, if I had to explain it, say, to my grandfather, “doughnut-shaped” would mean nothing, because doughnuts in Bulgaria are not “doughnut-shaped”.

A typical shape of a doughnut can be seen in the image on the left, and as you can probably notice, the shape has nothing to do with the doughnuts made at Dunkin’ Donuts.

These folks should probably stick to the more mathematical noun “torus” or the adjective “toroidal”.
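
The torus, after all, has a compact mathematical description. A standard parametrization, with R the distance from the centre of the hole to the centre of the tube and r the radius of the tube, is:

```latex
\begin{aligned}
x(\theta,\varphi) &= (R + r\cos\theta)\cos\varphi,\\
y(\theta,\varphi) &= (R + r\cos\theta)\sin\varphi,\\
z(\theta,\varphi) &= r\sin\theta,
\qquad 0 \le \theta,\varphi < 2\pi,\; R > r > 0.
\end{aligned}
```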

Scoring scientific abstracts

History

I have done this for years: scoring abstracts for conferences. Just a few years ago, I found the task much easier – there were fewer abstracts, and there was a big difference in the quality of the work, so it was easy to reject 40 to 50 % of the submitted abstracts.

The spread of the Internet and the fall in computer prices have turned everything upside down. Back in the day, only a few research groups had access to raw, unprocessed radio-frequency signals; commercial scanners did not provide such data because of bandwidth problems. Today most scanners are equipped with a “research interface”. Back then, getting access to journals was expensive, and only rich universities had the privilege of abundant information. Today you can get virtually any paper in one form or another. So the landscape of science has changed dramatically.

Challenge

The change is for the good, but it does not help me score the abstracts. Most of the abstracts submitted today are well written, thanks to books like “Made to Stick”, and the work is of similar quality. Only a few of them are decidedly good or decidedly bad. So how should I grade these abstracts on an absolute scale from 1 to 4?

Solution

So I decided to grade them based on a little tournament (a code sketch follows the list):

  1. Read all abstracts.
  2. Compare the abstracts in pairs – all against all.
  3. Give them points as in the Champions League – 3 points for a clear win, 1 point for a draw.
  4. Create a ranked list.
  5. The top 10 % get a score of 4, the next 40 % get a score of 3, the following 40 % get a score of 2, and the bottom 10 % get a score of 1.
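
Here is a small Python sketch of the procedure. The `compare` callback is hypothetical – it stands in for me reading and judging a pair of abstracts:

```python
from itertools import combinations

def tournament_scores(abstracts, compare):
    """Rank abstracts by round-robin comparison, then map ranks to 1-4.

    compare(a, b) is a hypothetical judgement callback: it returns the
    winning abstract, or None for a draw.
    """
    points = {a: 0 for a in abstracts}
    for a, b in combinations(abstracts, 2):    # all against all
        winner = compare(a, b)
        if winner is None:                     # draw: 1 point each
            points[a] += 1
            points[b] += 1
        else:                                  # clear win: 3 points
            points[winner] += 3

    ranked = sorted(abstracts, key=points.get, reverse=True)
    scores = {}
    for i, a in enumerate(ranked):
        frac = i / len(ranked)                 # 0.0 = best-ranked
        if frac < 0.10:
            scores[a] = 4
        elif frac < 0.50:
            scores[a] = 3
        elif frac < 0.90:
            scores[a] = 2
        else:
            scores[a] = 1
    return scores
```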

BDE_QT Updated to 64-bit and Qt 4.8

Bulgarian-made Danish/English Dictionary

Many, many years ago, I was looking for a Danish/English dictionary for Linux. One day, I stumbled upon a web page with a list of words and their translations. Oh, joy! The page contained more than 40,000 words. I contacted the author; we met, we spoke, and he said that it would be OK if I used the word lists to write a dictionary program. In 1999, the obvious choice was the Qt library – one could write an application for Linux, Windows and Mac. The program did not have an installer, but was rather a ZIP file that one could extract somewhere on the disk and create a short link to. That is how BDE_QT appeared. The name stands for “Bulgarian-made Danish/English Dictionary using Qt”.

Many years later, I decided to take another look at the program. I added a CMake setup, which enables the cross-platform generation of makefiles and of project files for a number of development environments. The program also got a setup program, with automatic creation of desktop icons and links in the Start menu.
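
For reference, a minimal CMakeLists.txt for a Qt 4.8 application looks roughly like this. The source and header file names below are illustrative, not the actual BDE_QT file list:

```cmake
cmake_minimum_required(VERSION 2.8)
project(bde_qt)

# Locate Qt 4.8 through the classic FindQt4 module.
find_package(Qt4 4.8 REQUIRED QtCore QtGui)
include(${QT_USE_FILE})

# Hypothetical file names, for illustration only.
set(SOURCES main.cpp dictionary_window.cpp)
set(MOC_HEADERS dictionary_window.h)

# Run moc on headers that declare QObject subclasses.
qt4_wrap_cpp(MOC_SOURCES ${MOC_HEADERS})

add_executable(bde_qt ${SOURCES} ${MOC_SOURCES})
target_link_libraries(bde_qt ${QT_LIBRARIES})
```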

Download

Note that I am relatively new to programming on OS X. The distribution does not contain the Qt libraries. Qt comes pre-installed on the Mac (I think), but if the application fails to run, you will have to install a version of the Qt library. If it still does not work, then you will probably have to compile the program yourself. When I learn how to bundle the libraries automatically, I will update the distribution.

Otherwise, to install the program on a Mac, simply download the file from the link, open it, and drag the program to the Applications folder inside the open window. To uninstall the program, simply drag it to the Trash.

Here is a screenshot from the Mac installation:

Install BDE QT on Mac

Designed for innovation

Many people promote design thinking in product development and research. One characteristic working style of designers is to produce a number of alternative designs and then select the most successful one.

A similar trend can be observed in software development – the appearance of rapid application development (RAD) tools, and the introduction of agile project management methods. This is sufficient for products with a short life and/or development cycle.

There are technology products, however, which are based on a software or hardware platform that lasts for over 10 years. These products typically evolve through upgrades. How much innovation can be fitted into an upgrade depends on how suitable the overall software/hardware architecture is for rapid prototyping and experimentation. Allowing for this represents a significant upfront investment and is often in conflict with time schedules and release dates. “We do not sell prototypes” is the typical project and product manager’s mantra.

No one denies, however, that design for manufacturing is important for the success of a product: it keeps the cost down and the reliability high. I believe that the time has come to also consider design for innovation. This would ensure the rapid evolution of the product and of the technology platform it is built on.

Recent advances in blood flow vector velocity imaging

 Link to paper on Google Docs

Prof. Jørgen Arendt Jensen presented this invited talk at the 2011 IEEE Ultrasonics Symposium in Orlando, Florida. I am happy to be one of the people who have contributed to the work at the Center for Fast Ultrasound Imaging. The link to the paper is given above. Here is a quote of the abstract:

A number of methods for ultrasound vector velocity imaging are presented in the paper. The transverse oscillation (TO) method can estimate the velocity transverse to the ultrasound beam by introducing a lateral oscillation in the received ultrasound field. The approach has been thoroughly investigated using simulations, flow-rig measurements, and in-vivo validation against MR scans. The TO method obtains a relative accuracy of 10 % for a fully transverse flow in both simulations and flow-rig experiments. In-vivo studies performed on 11 healthy volunteers comparing the TO method with magnetic resonance phase-contrast angiography (MRA) revealed a correlation between the stroke volume estimated by TO and MRA of 0.91 (p < 0.01), with an equation for the line of regression given as: MRA = 1.1 TO - 0.4 ml. Several clinical examples of complex flow in e.g. bifurcations and around valves have been acquired using a commercial implementation of the method (BK Medical ProFocus Ultraview scanner). A range of other methods are also presented. This includes synthetic aperture (SA) imaging using either spherical or plane waves, with velocity estimation performed with directional beamforming or speckle tracking. The key advantage of these techniques is very fast imaging that can attain an order of magnitude higher precision than conventional methods. SA flow imaging was implemented on the experimental scanner RASMUS using an 8-emission spherical-wave sequence and reception on 64 channels of a BK Medical 8804 transducer. This resulted in a relative standard deviation of 1.2 % for a fully transverse flow. Plane-wave imaging was also implemented on the RASMUS scanner, and a 100 Hz frame rate was attained. Several vector velocity image sequences of complex flow were acquired, which demonstrates the benefits of fast vector flow imaging. A method for extending the 2D TO method to 3D vector velocity estimation is presented and the […]

Incremental Innovation


Many years ago, when I was a Ph.D. student, I looked scornfully at researchers who were presenting the same kind of work for the n-th time. I thought to myself, why can’t these people be more productive?

Now, years later, I am in the same situation. I just finished the sequel to the paper “Manipulation of grating lobes by changing element shape”. The new paper is titled “Transducers with non-rectangular elements” [“Google Docs Link“].

I realized a long time ago that innovation requires a lot of dedication and persistence, not only creativity, and that it is more often than not the result of teamwork.

I hope that the sequel to the paper is well received. The original paper presents the main design ideas, while this paper elaborates on the performance of the manufactured prototype.

Here is the abstract of the paper:

The number of elements in a transducer is directly proportional to the cost of the system. To reduce the number of elements, vendors produce transducers with a pitch larger than one wavelength. The contrast of the images created with these transducers is reduced because of the grating lobes. The radiation pattern of the transducer is a product of the radiation pattern of the array and of the individual elements. This work investigates the possibility of reducing the level of the grating lobes by manipulating the shape of the individual transducer elements. We consider two design approaches in simulation. We have also produced and tested a transducer with non-rectangular elements.

The two approaches are: (1) rotation of the transducer elements, and (2) interweaving the elements. In (1), the rotation of the elements results in rotation of the radiation pattern, thus pushing the grating lobes outside the imaging plane. The acoustic lens still performs elevation focusing in the imaging plane, thus suppressing the grating lobes. Approach (2) emulates transducer elements with a width larger than the pitch. The position of the first zero of the radiation pattern of the elements is then closer to the main lobe than the first grating lobe. Grating lobes are suppressed at steering angles below 10 degrees.

In this paper we present three designs: (1) interwoven, (2) diagonal, and (3) wavy. The wavy design has also been manufactured by cutting the metallization with a laser along the desired pattern.

All designs have 128 elements, a pitch of 300 µm, and a height of 4 mm. The center frequency is 8 MHz. Simulations of the point spread function are done in Field II. The interwoven design gives the highest grating-lobe suppression of 20 dB. The diagonal and wavy designs suppress the grating lobes by 8 and 5 dB, respectively.
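
The statement that the total radiation pattern is the product of the array factor and the element factor is easy to check numerically. Below is a small NumPy sketch using the paper’s array parameters (128 elements, 300 µm pitch, 8 MHz); the element width and the speed of sound are my own assumptions, not values from the paper:

```python
import numpy as np

# Array parameters from the paper; element width and sound speed assumed.
c, f0 = 1540.0, 8e6
lam = c / f0                          # ~192 um, so the pitch exceeds lambda
n_elem, pitch, width = 128, 300e-6, 280e-6

theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)
u = np.sin(theta)

# Array factor: sum of point sources on a regular grid.
n = np.arange(n_elem) - (n_elem - 1) / 2
af = np.abs(np.exp(1j * 2 * np.pi * np.outer(u, n) * pitch / lam).sum(axis=1))

# Element factor: rectangular element of the given width.
ef = np.abs(np.sinc(width * u / lam))  # np.sinc(x) = sin(pi*x)/(pi*x)

# Total pattern = array factor * element factor, normalized to 0 dB.
total = af * ef
total_db = 20 * np.log10(total / total.max() + 1e-12)

# Grating lobes appear near sin(theta) = lambda/pitch; a wider effective
# element (the interweaving idea) moves the element factor's first zero
# toward them, suppressing them.
gl = np.argmin(np.abs(u - lam / pitch))
print(f"pattern near the first grating lobe "
      f"(sin(theta)={u[gl]:.2f}): {total_db[gl]:.1f} dB")
```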