Men in the school quite often e-mail me with questions and I e-mail a reply. I have decided to keep the answers on this page if they are of general interest.
Can you explain transmission frequencies in signalling - "what the frequency represents", "advantage of high transmission frequency", "factors which affect the rate of information transfer" please?
Alasdair MacKinnon, 8 June 2006
A monochromatic wave (i.e. just one frequency) contains no information (other than the trivial fact that someone has switched on). So you always modulate it, for example by varying its amplitude (AM) or its frequency (FM).
Obviously (well, fairly), the modulation frequency, usually called the ‘signal frequency’, has to be lower than the frequency being modulated (usually called the ‘carrier frequency’ although you have called it the ‘transmission frequency’).
Let’s say the carrier frequency is 100 kHz and the signal frequency is 15 kHz (OK for medium-fi music). It turns out that the modulated carrier wave can be regarded as a sum of waves of frequencies 85 kHz, 100 kHz and 115 kHz. (This is because of Fourier’s Theorem. You can play with a spreadsheet exemplifying it by selecting the Animated Waves button on the 3Pz website.) So to make a transmission we need a bandwidth of 30 kHz, centred on 100 kHz. No-one else can be permitted to transmit between 85 kHz and 115 kHz. You will easily see that not many programmes can be transmitted at relatively low carrier frequencies. On the other hand, if you transmit at 10 MHz, then you only need exclusive access to frequencies between 9.985 MHz and 10.015 MHz: the same 30 kHz is now a much smaller slice of the spectrum, so someone else’s slot can start relatively ‘sooner’ along it.
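If you would like to check the sideband claim numerically rather than with the spreadsheet, here is a short Python sketch (not part of the original page): it amplitude-modulates a 100 kHz carrier with a 15 kHz tone and measures the spectrum at a few frequencies. The sample rate and modulation depth are arbitrary choices for the demo.

```python
import cmath
import math

fs = 1_000_000                          # sample rate in Hz (arbitrary, demo only)
N = 10_000                              # 10 ms of samples
carrier_f, signal_f = 100_000, 15_000   # the frequencies used in the text

# Amplitude-modulate the carrier with the signal (depth 0.5, arbitrary).
samples = [(1 + 0.5 * math.cos(2 * math.pi * signal_f * n / fs))
           * math.cos(2 * math.pi * carrier_f * n / fs)
           for n in range(N)]

def magnitude(f):
    # Single-frequency DFT: correlate the samples with e^(-2*pi*i*f*t).
    return abs(sum(s * cmath.exp(-2j * math.pi * f * n / fs)
                   for n, s in enumerate(samples)))

for f in (70_000, 85_000, 100_000, 115_000, 130_000):
    print(f, round(magnitude(f)))
```

The printout shows equal peaks at 85 kHz and 115 kHz, a bigger one at 100 kHz, and essentially nothing at 70 kHz or 130 kHz, matching the 30 kHz bandwidth claim.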
Let’s start again with 100 kHz. We can’t very well have a bit much less than two cycles long, and we’ll need another two cycles of ‘nothing’ time before starting the next bit, so as to make sure that the bits are distinct. So the maximum possible bit transfer rate is going to be 25 kbit s⁻¹. That’s obviously very slow (and it will use up 50 kHz of bandwidth). On the other hand, if you transmit at 12 GHz, then you could in principle have a bit transfer rate of about 3 Gbit s⁻¹, although, once again, this would use up a big slice of the available bandwidth, and other broadcasters would complain bitterly. Of course, if you are transmitting at, say, 10^14 Hz, which is more or less what optical links do, then you can have much higher transfer rates – certainly higher than your computer can cope with.
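That back-of-envelope rule (two cycles per bit plus two cycles of gap, i.e. four carrier cycles per bit) can be written as a one-line calculation. This is just the arithmetic from the paragraph, not a real modulation scheme:

```python
def max_bit_rate(carrier_hz, cycles_per_bit=4):
    """Crude upper bound: two cycles for the bit plus two cycles of gap."""
    return carrier_hz / cycles_per_bit

print(max_bit_rate(100e3))   # 25000.0 -> 25 kbit/s, as above
print(max_bit_rate(12e9))    # 3e9 -> 3 Gbit/s
```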
Suppose you are sending a digital voice signal down an optical link and using, say, 8-bit technology at a sampling rate of 8 kHz (so as to achieve about 3.7 kHz quality). Once every 125 µs you would need to send an 8-bit number. If your computer is working at 2 GHz, you could well achieve transmission rates of, say, 0.5 Gbit s⁻¹, which would mean that your 8-bit number would transmit in under 20 ns. So you will then twiddle your thumbs for 124980 ns until it’s time to send the next sample. So the phone system uses this dead time to transmit samples for some 6000 other customers down the same link and then unscramble it all at the other end. This is called time-division multiplexing. So another issue to consider in the rate of information transfer is “how much other traffic is there on the link?”
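The multiplexing sums can be checked in a few lines; the 0.5 Gbit/s link rate is the assumed figure from the paragraph. The raw slot count comes out nearer 7800 than 6000 – the lower figure above presumably leaves room for framing and signalling overhead.

```python
sampling_rate_hz = 8_000      # 8 kHz voice sampling, as above
bits_per_sample = 8
link_rate_bps = 0.5e9         # assumed link rate from the text

sample_period_us = 1e6 / sampling_rate_hz         # time between samples
burst_ns = bits_per_sample / link_rate_bps * 1e9  # time to send one sample
slots = int(link_rate_bps / (bits_per_sample * sampling_rate_hz))

print(sample_period_us)   # 125.0 microseconds between samples
print(slots)              # customers that fit on one link, ignoring overhead
```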
With respect to the bibliography of the materials project, what information must be given when referring to books? In what form should this be: do we need to quote publisher and date of publication, for example?
Can you give me an example?
Well, for example, I think it is the case that the Geiger-Müller tube uses the avalanche effect to amplify the consequences of a single ionisation¹. Now you may or may not want to take that on trust, but if you don't then the bibliography might help (it's in your light blue text book).
I am having a bit of trouble with X-rays. In past papers I have seen the questions:
Chosen Imaging System: (I'm choosing an X-Ray Machine)
a) Estimate the resolution of the image that you have chosen
b) Estimate i) The number of pixels in the image
ii) The number of bits per pixel
Then, using b), it asks you to calculate the time taken to transmit the image at 56 kbit/s (which I can do), but then it says:
d) In practice the time taken may be very different. Suggest why this might be so.
There was also one small sensors question which said:
a) state one benefit of having a sensor with a linear output. (I think this is to do with calibrating?)
Harry Harden, 10 June 2002

As I've pointed out on the website, there are several different meanings for 'resolution'. You need to specify which one you are going with. In this case, I'd suggest "size of smallest discernible object" or "separation between details on the object that can just be perceived as distinct". This can never be less than the wavelength being used (this is called 'diffraction-limited resolution'), but in your case is likely to be much greater, because you can't focus X-rays and have to control direction by using beams that have been collimated by holes in thick lead plates. So you are then limited by how closely you can punch holes in lead plates and leave enough lead in between not to let radiation get from one hole to the next. So I should think about 2 mm, wouldn't you? So you can multiply that up for, say, a torso.
Next, you are going to have to use a photomultiplier tube behind each hole to convert the X-Ray intensity into a current, which can flow through a resistor to produce a voltage that can be fed to an ADC. Given the coarseness of the pixelation, and the variation that will inevitably occur across a pixel, I should think that 10 bits, giving 1024 levels, would be ample.
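Putting those estimates together (2 mm resolution, 10 bits per pixel, and a made-up torso of roughly 400 mm × 600 mm) gives the kind of transmission-time answer the exam question is after:

```python
width_mm, height_mm, resolution_mm = 400, 600, 2   # torso size is a guess
bits_per_pixel = 10                                # from the estimate above
link_rate_bps = 56_000                             # 56 kbit/s, from the question

pixels = (width_mm // resolution_mm) * (height_mm // resolution_mm)
total_bits = pixels * bits_per_pixel
seconds = total_bits / link_rate_bps

print(pixels)              # 60000 pixels (200 x 300)
print(total_bits)          # 600000 bits
print(round(seconds, 1))   # about 10.7 s
```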
Of course, in much X-ray work, you simply use a photograph, in which case each grain in the emulsion forms a pixel something like a micron across (?). Each exposed grain either flips or doesn't during development, so there is only one bit per pixel, and you get grey levels by the local proportion of flipped grains.
I should think the transmission time variation has to do with error detection. If you detect errors, then you re-transmit that parcel of bits, which slows things down a little. So far as the sensor is concerned, the point about a linear response is that you can multiply it by a factor (using an op-amp) and add an offset to it (using another op-amp), and thereby turn the output into a direct reading of the stimulus.
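As a sketch of that last point, suppose a hypothetical temperature sensor with the linear output V = 0.25·T + 1.5 volts (both constants made up). Because the response is linear, recovering the stimulus is just one subtraction and one division – exactly the offset and gain that the two op-amp stages provide in hardware:

```python
# Made-up linear sensor: V = GAIN * T + OFFSET (volts, T in degrees C).
GAIN = 0.25     # volts per degree C (hypothetical)
OFFSET = 1.5    # volts at 0 degrees C (hypothetical)

def temperature_from_voltage(v):
    # Subtract the offset, divide by the gain: the electronic analogue
    # is an op-amp offset stage followed by an op-amp gain stage.
    return (v - OFFSET) / GAIN

print(temperature_from_voltage(1.5))   # 0.0
print(temperature_from_voltage(4.0))   # 10.0
```

A non-linear sensor would instead need a lookup table or curve fit – no pair of op-amps can straighten it out.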
This is a question about significant figures. Presumably all data should be to a uniform number of significant figures. This is complicated by the fact that in order to have, say, the size of the light source to three significant figures, you would have had to measure to 1/10 of a mm or, at the other end of the scale, if v were 115 cm, in order to fit the sig figs rule you would have to write 120 cm, which represents quite a noticeable error. There seem to me to be two options:
Ditch the sig figs rule, which seems a little unscientific.
Ashley Riches, 29 November 2002