Discussion:
Matching images from different sources.
Roberto Waltman
2007-02-22 23:59:11 UTC
Looking for information, algorithms, etc. on how to match images of
the same object obtained from different sources.

(Also on what would be the proper terminology to describe this
problem. I'm sure I am doing a poor job here. ;) )

For example, I may take pictures of a cloud formation using three
cameras sensitive to the visible, infrared and ultraviolet spectra.
The cameras, although close to each other, may be far enough apart
to introduce parallax errors; they may have different resolutions;
image capture may not be simultaneous, so the cloud shapes may change
slightly from one image to the next; etc.

By 'matching' I mean scaling and rotating the images so that they can
be overlaid in such a way that all the data in any area of the screen
is coming from the same 'region' in the physical world.

The matching process should be based only on the images; I may not
have enough information about the cameras' physical locations and
orientations.

I understand that in the most general case the images could be so
different that this problem is unsolvable, but I still expect to be
able to find (partial) solutions when some minimal correlation level
exists.

Thanks,


Roberto Waltman

[ Please reply to the group,
return address is invalid ]
MCammarano
2007-02-23 02:57:40 UTC
Post by Roberto Waltman
Looking for information, algorithms, etc. on how to match images of
the same object obtained from different sources.
(Also on what would be the proper terminology to describe this
problem. I'm sure I am doing a poor job here. ;) )
This problem is often called multimodal registration. It crops up in
medical imaging (align an MRI with a CT scan, for example) and
aerial/satellite imaging (line up images acquired in different spectral
bands).
Post by Roberto Waltman
For example, I may take pictures of a cloud formation using three
cameras sensitive to the visible, infrared and ultraviolet spectra.
...
You might find a technique called "alignment by maximization of mutual
information" helpful.
ImageAnalyst
2007-02-23 12:27:34 UTC
Roberto:
I'm not sure why my lengthy reply from yesterday isn't there. I've
seen this once before from Google Groups in the past month - where it
says it posted successfully but then it never shows up. Anyway, it
was something about building up feature vectors. But I had another
thought. In some fields (medical, remote sensing, military) they have
a problem such as yours. The terms you want to search for are "image
fusion" or "data fusion"; they have to do with aligning images from
different modalities, e.g. how you can overlay corresponding physical
slices from a CT image and an MRI image. I've never really had to do
fusion myself, but I know it was (and maybe still is) a hot topic
in medical imaging in the '90s.
Try this:

http://www.google.com/search?hl=en&q=image+fusion

You just missed the image fusion conference but maybe you can get
proceedings, or go next year:
http://www.iqpc.com/cgi-bin/templates/genevent.html?topic=329&event=11435&

Hoping this posts (please Google!!!)
ImageAnalyst
Roberto Waltman
2007-02-23 15:49:03 UTC
Post by MCammarano
Post by Roberto Waltman
Looking for information, algorithms, etc. on how to match images of
the same object obtained from different sources.
This problem is often called multimodal registration. It crops up in
medical imaging (align an MRI with a CT scan, for example) and
aerial/satellite imaging (line up images acquired in different spectral
bands).
...
You might find a technique called "alignment by maximization of mutual
information" helpful.
...
Thanks for the pointers; a first Google search is bringing up more
relevant hits than I was able to find before.
Post by ImageAnalyst
... building up feature vectors.
... In some fields (medical, remote sensing, military) they have
a problem such as yours. The terms you want to search for are "image
fusion" or "data fusion"; they have to do with aligning images from
different modalities, e.g. how you can overlay corresponding physical
slices from a CT image and an MRI image.
Ditto.
Post by ImageAnalyst
...
You just missed the image fusion conference but maybe you can get
proceedings, or go next year:
http://www.iqpc.com/cgi-bin/templates/genevent.html?topic=329&event=11435&
That would be very nice, but at this time this is just a thought
experiment. Even if it wasn't, nobody is going to pay me to attend
such a conference. Well, maybe if I reincarnate somewhere in academia
... ;)
Post by ImageAnalyst
Hoping this posts (please Google!!!)
Going off topic: I use Google extensively for searching, never for
posting. The best environment I found after trying a few different
things is Forte's Agent as a Usenet reader (Windows, or Linux under
Wine) and http://news.individual.net/ as a Usenet provider. (Not free,
but only 10 euros per year.)

Thanks again,

Roberto Waltman

[ Please reply to the group,
return address is invalid ]
Terry B
2007-03-01 03:03:56 UTC
Post by Roberto Waltman
Looking for information, algorithms, etc. on how to match images of
the same object obtained from different sources.
...
Roberto
It depends what you're aiming to achieve.
If the aim is pretty pics then I don't know the answer.
If the aim is multifrequency analysis then there are various programs
to do this. I used "Karma" in the past but I am sure others also exist.
See http://www.atnf.csiro.au/computing/software/karma/

Terry B
Roberto Waltman
2007-03-14 03:35:05 UTC
Post by Terry B
If the aim is pretty pics then I don't know the answer.
In a sense, it is. I am interested in the non-visual spectra, but I
want to be able to easily correlate the data with a visual reference.
Post by Terry B
If the aim is multifrequency analysis then there are various programs
to do this. I used "Karma" in the past but I am sure others also exist.
See http://www.atnf.csiro.au/computing/software/karma/
Thanks for the info. From a quick glance it looks as if I could use a
lot from that library (mostly non-image related).

Roberto Waltman

[ Please reply to the group,
return address is invalid ]
Stupendous_Man
2007-03-01 17:28:27 UTC
Post by Roberto Waltman
Looking for information, algorithms, etc. on how to match images of
the same object obtained from different sources.
...
If you can reduce your images to a list of discrete
point sources (or tie points, call them what you will),
there are plenty of routines devised by astronomers
to match up the lists and compute the geometric
transformation between the two lists. For example,

http://spiff.rit.edu/match/
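For the second half of that (fitting the transformation once the two
lists are paired up), a minimal Python/scikit-image sketch, with
made-up tie point coordinates:

import numpy as np
from skimage import transform

# Made-up tie points: (x, y) of the same four features as seen in
# each image; code like 'match' above finds this pairing for you.
src = np.array([[10.0, 12.0], [40.0, 80.0], [95.0, 30.0], [60.0, 55.0]])
dst = np.array([[15.2, 14.1], [44.8, 83.0], [99.5, 31.9], [64.9, 57.3]])

# Least-squares fit of rotation + uniform scale + translation.
tform = transform.estimate_transform('similarity', src, dst)
print(tform.rotation, tform.scale, tform.translation)

# The moving image can then be resampled into the fixed frame:
# aligned = transform.warp(moving_img, tform.inverse)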
Roberto Waltman
2007-03-14 03:45:31 UTC
Post by Stupendous_Man
If you can reduce your images to a list of discrete
point sources (or tie points, call them what you will),
...
No point sources in my case; I am looking mainly for ill-defined areas
at different temperatures. Still, maybe I can generate "points of
interest" based on local peaks, etc., and apply some of these
techniques. Thanks!
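(Something along these lines, perhaps, with SciPy; the neighborhood
size and threshold are made up:)

import numpy as np
from scipy import ndimage

def local_peaks(img, size=15, min_val=0.5):
    # A pixel is a peak if it equals the maximum of its neighborhood
    # and clears an absolute threshold.
    peaks = (img == ndimage.maximum_filter(img, size=size)) & (img > min_val)
    return np.argwhere(peaks)   # (row, col) interest points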


Roberto Waltman

[ Please reply to the group,
return address is invalid ]
pixel.to.life
2007-03-18 00:28:56 UTC
Hi, Roberto,

If your problem is solely because of intensity inconsistencies in the
images, try using normalized mutual information based image fusion. It
is widely used to fuse medical images from different modality sources
(e.g. CT-MRI, CT-PET, etc.), so the final alignment is largely
independent of overlap and intensity differences. Here's the link to a
survey article:
http://www.cs.jhu.edu/~cis/cista/746/papers/mutual_info_survey.pdf
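The metric itself is only a few lines. A bare-bones toy sketch (my
own, not from the survey):

import numpy as np

def entropy(p):
    p = p[p > 0]                      # drop empty bins before the log
    return -np.sum(p * np.log(p))

def nmi(a, b, bins=32):
    # Studholme's overlap-invariant NMI: (H(A) + H(B)) / H(A,B).
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy)

# NMI rewards consistent structure even when intensities disagree:
a = np.random.rand(64, 64)
print(nmi(a, a), nmi(a, 1.0 - a), nmi(a, np.random.rand(64, 64)))

Note how the inverted image scores as well as the identical one, while
the unrelated image scores low; that is the intensity-difference
independence at work.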

Good luck!

P.
Roberto Waltman
2007-03-22 20:33:08 UTC
Post by pixel.to.life
If your problem is solely because of intensity inconsistencies
in the images,
Unfortunately not. The images are different in
other ways, but hopefully there is enough of a
common structure to allow correlating them
somehow.
Post by pixel.to.life
try using normalized mutual information based image fusion.
...
Thanks, will take a look. I am only getting
started in this area, so any additional
information is both interesting and potentially
useful.

Roberto Waltman

[ Please reply to the group,
return address is invalid ]
minipan
2007-04-16 15:56:55 UTC
I would go for SIFT, try:

http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
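For instance, with OpenCV (a sketch only: the file names are
placeholders, and be warned that SIFT descriptors don't always carry
over between spectral bands):

import numpy as np
import cv2

img1 = cv2.imread('visible.png', cv2.IMREAD_GRAYSCALE)   # placeholders
img2 = cv2.imread('infrared.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive matches.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Robust (RANSAC) fit of rotation + scale + translation from matches.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
M, inliers = cv2.estimateAffinePartial2D(src, dst)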
Pixel.to.life
2007-05-05 01:16:57 UTC
Post by Roberto Waltman
Unfortunately not. The images are different in
other ways, but hopefully there is enough of a
common structure to allow correlating them
somehow.
...
Hi,

I thought this post might be useful to you:

http://groups.google.com/group/medicalimagingscience/browse_thread/thread/04051c88659894ba/#

In addition, you can also search for material on 'Fourier-Mellin
transform based registration'.
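A compressed sketch of the Fourier-Mellin idea, assuming scikit-image:
translation drops out of the FFT magnitude, and rotation/scale become
translations in log-polar coordinates, which plain phase correlation
can then recover.

import numpy as np
from skimage.transform import warp_polar
from skimage.registration import phase_cross_correlation

def rotation_and_scale(fixed, moving):
    # FFT magnitudes are invariant to translation...
    radius = min(fixed.shape) // 2
    f = np.abs(np.fft.fftshift(np.fft.fft2(fixed)))
    m = np.abs(np.fft.fftshift(np.fft.fft2(moving)))
    # ...and in log-polar coordinates rotation/scale become shifts.
    fp = warp_polar(f, radius=radius, scaling='log')
    mp = warp_polar(m, radius=radius, scaling='log')
    shift, _, _ = phase_cross_correlation(fp, mp)
    angle = shift[0] * 360.0 / fp.shape[0]                   # rows <-> angle
    scale = np.exp(shift[1] * np.log(radius) / fp.shape[1])  # cols <-> log r
    return angle, scale

# e.g.: angle, scale = rotation_and_scale(img_a, img_b)  # images assumed loaded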

There is more than one way to try to estimate the 'perfect' alignment
between two images, so that they convey the most information when
overlaid. The choice depends on criteria such as

- source of images (direct [digital cameras, etc.], indirect [medical
scans])
- noise tolerance
- overlap between similar objects when images are overlaid without
registration
- degrees of freedom (objects imaged are affected by which of scaling,
rotation, translation, shear, or deformation)
- performance (speed and memory consumption)

Ultimately, it boils down to optimization of a multivariate function
that measures the similarity between two given images under a given
transform.
In medical image registration, people use normalized mutual
information as a function of the given degrees of freedom of the
alignment transform.
Some people use chamfer matching, where you could use the chamfer
score as the function to optimize given a transform.
Some people use Fourier-domain registration, e.g. 'Fourier-Mellin
transform based registration', but mostly only for rigid/affine
transform estimation.
There are others too: geometric-model-based registration, or
point-set-based registration, where secondary features are extracted
from the subject images and registered.

The choice is yours, based on what is more important to you from the
given criteria. But I think you will have to choose

(1) A metric (e.g. mutual information, cross correlation, or chamfer
match score)
(2) An optimizer (gradient descent, simplex, etc.)

at the very least to begin with.
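For flavor, here is roughly what the chamfer option could look like as
a toy sketch (real edge maps would come from an edge detector such as
Canny; the stand-in data and parameters are made up):

import numpy as np
from scipy import ndimage, optimize

def chamfer_score(dist_to_fixed_edges, moving_edges, params):
    # (1) the metric: warp the moving edge map, then average the
    # distance from each edge pixel to the nearest fixed edge.
    angle, dy, dx = params
    warped = ndimage.shift(ndimage.rotate(moving_edges.astype(float),
                                          angle, reshape=False),
                           (dy, dx)) > 0.5
    if not warped.any():
        return 1e9
    return dist_to_fixed_edges[warped].mean()

# Stand-in edge maps: the same segment, offset between the images.
fixed_edges = np.zeros((96, 96), bool); fixed_edges[30:60, 40] = True
moving_edges = np.zeros((96, 96), bool); moving_edges[33:63, 38] = True

# Distance of every pixel to the nearest fixed edge.
dist = ndimage.distance_transform_edt(~fixed_edges)

# (2) the optimizer: simplex (Nelder-Mead).
best = optimize.minimize(lambda p: chamfer_score(dist, moving_edges, p),
                         x0=[0.0, 0.0, 0.0], method='Nelder-Mead')
print(best.x)   # should land near (0, -3, 2)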

Good luck,

Pixel.To.Life

[ http://groups.google.com/group/medicalimagingscience ]

Rob
2007-05-04 11:36:39 UTC
Post by Roberto Waltman
Looking for information, algorithms, etc. on how to match images of
the same object obtained from different sources.
By 'matching' I mean scaling and rotating the images so that they can
be overlaid in such a way that all the data in any area of the screen
is coming from the same 'region' in the physical world.
...
Roberto,

Have you come across AstroWave in your searches?

"AstroWave is a program that calculates the linear transformation
details needed to register one image with another, ie. rotation and
scaling."

http://www.planetsi.plus.com/AstroWave/AstroWave.html

HTH,
--
Rob