Imaging
Radar, Ultrasound & Sonar
(You would almost certainly not use the Radar or Sonar spins under this heading)
Radar

The transducer is a radio aerial, often letterbox-shaped so as to give a wide diffraction pattern vertically and a narrow pattern horizontally (remember that pattern width varies inversely with slit width). Since the resulting image is a map, this gives good resolution in the plane of the map while ensuring that everything in a vertical direction is picked up.

Ultrasound

The transducer is a piezo-electric crystal, which vibrates when a pd is applied across it (and vice versa).

Sonar

The transducer is a piezo-electric crystal, which vibrates when a pd is applied across it (and vice versa). I don't think you sweep through different angles with sonar, but I haven't had time to check just at the moment.
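The point about pattern width can be put into numbers. This is a minimal sketch with made-up aerial dimensions (not from any real installation), using the single-slit result sin θ = λ/a for the half-width of the central beam: the wide dimension of the letterbox gives the narrow beam, and vice versa.

```python
import math

# Illustration of 'pattern width varies inversely with slit width' for a
# letterbox-shaped radar aerial. Dimensions and wavelength are invented
# for the sake of the example. Single-slit diffraction: the first minimum
# falls at sin(theta) = wavelength / aperture width.

wavelength = 0.03          # 3 cm radar waves, in metres
horizontal_width = 1.5     # the wide dimension of the letterbox, metres
vertical_height = 0.15     # the narrow dimension, metres

beam_half_angle_h = math.asin(wavelength / horizontal_width)  # narrow beam
beam_half_angle_v = math.asin(wavelength / vertical_height)   # wide beam

print(f"horizontal half-angle: {math.degrees(beam_half_angle_h):.1f} degrees")
print(f"vertical half-angle:   {math.degrees(beam_half_angle_v):.1f} degrees")
```

The ten-times narrower vertical dimension gives a beam roughly ten times wider vertically, which is exactly what the map-making application wants.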
CCD
Suitable for CCTV cameras, camcorders, infra-red cameras, etc.
Smoothing
Note that you can do smoothing in reverse to extract detail from an image that isn't at first sight in the original. Suppose you have a 3 x 3 grid, but the resolution of your telescope or whatever is such that you can only see four squares at once, and the number you get back is the sum of those four squares. To make life easier, we'll allow the grid to be surrounded by a notional ring of squares with 0 in them - off the picture, as it were:

a b c d e
f g h i j
k l m n o
p q r s t
u v w x y
So the telescope / radar / scanner looks first at abfg, then at bcgh and so on. There are 16 such combinations, and the unprocessed image information might be
i.e. a+b+f+g returns a value of 1, b+c+g+h returns a value of 4 and so on. Try to reconstruct the original 3 x 3 grid surrounded by the notional ring of 0s. Remember that a, b, c, d, e, f, j, k, o, p, t, u, v, w, x and y are all 0. Here is the answer. Much use is made of this technique in processing images from Hubble.
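The reconstruction can be sketched in a few lines of Python. The grid values here are invented for illustration (chosen so that the first two readings come out as 1 and 4, as in the example above); the method is just repeated subtraction, working along each row, because at every step only one square of the current 2 x 2 block is still unknown.

```python
def two_by_two_sums(grid5):
    """Forward process: each reading is the sum of one 2x2 block."""
    return [[grid5[i][j] + grid5[i][j + 1] + grid5[i + 1][j] + grid5[i + 1][j + 1]
             for j in range(4)] for i in range(4)]

def reconstruct(sums):
    """Work back to the full 5x5 grid, using the ring of notional zeros."""
    g = [[0] * 5 for _ in range(5)]
    for i in range(4):
        for j in range(4):
            # every square of this 2x2 block except the bottom-right one is
            # already known, so the bottom-right follows by subtraction
            g[i + 1][j + 1] = sums[i][j] - g[i][j] - g[i][j + 1] - g[i + 1][j]
    return g

# an invented 3x3 grid (squares g..s), padded with the ring of zeros
original = [[0, 0, 0, 0, 0],
            [0, 1, 3, 2, 0],
            [0, 0, 4, 1, 0],
            [0, 2, 0, 5, 0],
            [0, 0, 0, 0, 0]]

readings = two_by_two_sums(original)    # readings[0][0] is a+b+f+g, etc.
print(readings[0][0], readings[0][1])   # 1 4
print(reconstruct(readings) == original)  # True
```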
I imagine that you know as much as you need to about this (and other forms of manipulation) from the book.
Light

Light from a diffuse extended source is scattered in all directions from each point of the subject. We use a lens between the illuminated subject and the film/CCD unit to form a focussed image in which the divergent light from a particular point is all brought together again.
X-Rays

A standard X-Ray photograph is essentially a shadow. The electrons are focussed onto a small target in the X-Ray tube, so the source is point-like, giving out a conical beam, which passes through the subject on its way to the receiver, usually a photographic film. Each bit of the film is only in line to receive one beam of X-Rays: the rays either get there or they are absorbed on the way. There is no question of focussing the image. The receiver has to be bigger than the subject for this to work, so you wouldn't use a CCD. In this system there are lots of divergent beams passing through all the different bits of the target simultaneously.

There is a system in which you have a thick lead plate with holes in it, angled so that only those X-Rays travelling in a particular direction get through, and yet another system in which the X-Rays setting off have to get through two holes in successive lead plates, thereby ensuring that they are going in a particular direction. Such rays are said to be collimated.

I will try to find out more about how the very sharp images obtained in a CT scan are formed. I suspect that what happens is that you have a collimated beam shining onto a photomultiplier tube on the opposite side of the apparatus, and that you move the beam (and subject) so that it passes through all the bits of the subject one after the other. This produces a varying output from the photomultiplier tube, which you sample each time the beam is shining through a different bit of tissue. Software later sorts all this numerical data out into a series of images of 'slices' of the body. Here are the issues I would like to find out about:
I've found nothing very useful on the internet in a short surf. If anyone comes up with something good, perhaps they could alert us all?
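In the meantime, the shadow idea itself can be put into numbers. This is only a toy model of the scheme guessed at above, not a description of a real scanner: a collimated beam is stepped across a 2D 'slice' of tissue (the absorption values are invented), and each reading is the total absorption along that beam line. Rotating the beam through 90 degrees gives a second, independent set of readings of the same slice.

```python
# Toy model of a collimated beam swept across a 2D 'slice' of tissue.
# The numbers are invented absorption values, not real tissue data.

slice_ = [
    [0, 0, 1, 0, 0],
    [0, 2, 3, 2, 0],
    [1, 3, 5, 3, 1],
    [0, 2, 3, 2, 0],
    [0, 0, 1, 0, 0],
]

def scan_across(tissue):
    """One sweep: each reading sums the absorption down one column."""
    return [sum(row[j] for row in tissue) for j in range(len(tissue[0]))]

def scan_along(tissue):
    """The beam rotated 90 degrees: sum along each row instead."""
    return [sum(row) for row in tissue]

print(scan_across(slice_))  # [1, 7, 13, 7, 1]
print(scan_along(slice_))   # [1, 7, 13, 7, 1] - same, as this slice is symmetric
```

Software would then have to combine many such sweeps, taken at many angles, to recover the individual squares - much as in the smoothing-in-reverse exercise earlier.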
Radar, Ultrasound

TV, Computer monitors
'Resolution' can mean a number of things.
the number of pixels altogether (eg 20 Mpixel as the resolution of a digital camera)
the number of pixels in a specified length (eg 300 dpi as the resolution of a printer)
the number of pixels in two semi-specified lengths (eg a screen resolution of 1024 x 768 pixels)
the size of a pixel
the distance apart two items in an object need to be in order to be represented by different bits of information
the number of levels used in sampling an analogue waveform (eg 16-bit resolution, giving 65 536 levels, for CDs)
the angle subtended at the device by the smallest distance apart that two objects can be and yet still be 'resolved' into separate images - this can be diffraction-limited by the size of the aperture on the device or pixel-limited by the image-capturing system
You need to let the context guide you. You can use a formula
Resolution = original size / smallest recordable chunk (or the other way up)
in some circumstances, if you like.
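For the last meaning in the list, the two limits can be compared directly. All the numbers below are invented for illustration, and the factor 1.22 is the Rayleigh criterion for a circular aperture; whichever angle comes out larger is the one that actually limits the device.

```python
import math

# Comparing the diffraction limit (set by the aperture) with the pixel
# limit (set by the image-capturing system). Illustrative numbers only.

wavelength = 550e-9      # green light, metres
aperture = 0.05          # 50 mm lens aperture, metres
focal_length = 0.1       # metres
pixel_pitch = 5e-6       # 5 micrometre pixels

# Rayleigh criterion: smallest resolvable angle, in radians
diffraction_limit = 1.22 * wavelength / aperture

# roughly two pixel widths are needed to tell two points apart, so the
# pixel-limited angle is about two pixel pitches over the focal length
pixel_limit = 2 * pixel_pitch / focal_length

print(f"diffraction-limited angle: {diffraction_limit:.2e} rad")
print(f"pixel-limited angle:       {pixel_limit:.2e} rad")
```

With these particular numbers the pixel limit is the larger angle, so this (hypothetical) camera is pixel-limited rather than diffraction-limited.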