Frequently Asked Questions

Men in the school quite often e-mail me with questions and I e-mail a reply. I have decided to keep the answers on this page if they are of general interest. 


Can you explain transmission frequencies in signalling - "what the frequency represents", "advantage of high transmission frequency", "factors which affect the rate of information transfer" please?

Alasdair MacKinnon, 8 June 2006

A monochromatic wave (i.e. just one frequency) contains no information (other than the trivial fact that someone has switched on). So you always modulate it in some way, for example by varying its amplitude (AM) or by switching it on and off to make digital pulses, as in the two cases below.

Obviously (well, fairly), the modulation frequency, usually called the ‘signal frequency’, has to be lower than the frequency being modulated (usually called the ‘carrier frequency’ although you have called it the ‘transmission frequency’).

 AM transmissions

Let’s say the carrier frequency is 100 kHz and the signal frequency is 15 kHz (OK for medium-fi music). It turns out that the modulated carrier wave can be regarded as a sum of waves of frequencies 85 kHz, 100 kHz and 115 kHz. (This is because of Fourier’s Theorem. You can play with a spreadsheet exemplifying it by selecting the Animated Waves button on the 3Pz website.) So to make a transmission we need a bandwidth of 30 kHz, centred on 100 kHz, and no other station’s carrier can be permitted between 70 kHz and 130 kHz, or its sidebands would overlap ours. You will easily see that not many programmes can be transmitted at relatively low carrier frequencies. On the other hand, if you transmit at 1 MHz, then you only need exclusive access to frequencies between 985 kHz and 1015 kHz, so the same 30 kHz slot is a much smaller fraction of the available spectrum and many more broadcasters can be fitted in.
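If you want to see those sidebands appear, here is a minimal sketch in Python (assuming numpy is available; the 100 kHz carrier and 15 kHz signal are the figures above, while the sampling rate, duration and 50% modulation depth are just illustrative choices). It builds the amplitude-modulated wave, takes its Fourier transform, and picks out the three strongest components, which come out at 85, 100 and 115 kHz.

# Sketch: spectrum of an AM wave - 100 kHz carrier, 15 kHz signal (assumed values).
import numpy as np

fc, fs = 100e3, 15e3              # carrier and signal frequencies (Hz)
rate = 1e6                        # simulation sampling rate (Hz), illustrative
t = np.arange(0, 0.01, 1 / rate)  # 10 ms of signal

carrier = np.sin(2 * np.pi * fc * t)
modulated = (1 + 0.5 * np.cos(2 * np.pi * fs * t)) * carrier  # 50% AM depth

spectrum = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(t), 1 / rate)

# The three strongest components: expect the carrier plus the two sidebands.
peaks = freqs[np.argsort(spectrum)[-3:]]
print(np.sort(peaks) / 1e3, "kHz")   # -> [ 85. 100. 115.] kHz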

 Digital transmissions

Let’s start again with 100 kHz. We can’t very well have a bit much less than two cycles long, and we’ll need another two cycles of ‘nothing’ time before starting the next bit, so as to make sure that the bits are distinct. So the maximum possible bit transfer rate is going to be 25 kbit s⁻¹. That’s obviously very slow (and it will use up 50 kHz of bandwidth). On the other hand, if you transmit at 12 GHz, then you could in principle have a bit transfer rate of about 3 Gbit s⁻¹, although, once again, this would use up a big slice of the available bandwidth, and other broadcasters would complain bitterly. Of course, if you are transmitting at, say, 10¹⁴ Hz, which is more or less what optical links do, then you can have much higher transfer rates – certainly higher than your computer can cope with.
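As a quick check on those numbers, here is a one-function sketch (the ‘two cycles per bit plus two cycles of gap’ assumption is exactly the one in the paragraph above; nothing else is special):

# Sketch: maximum bit rate if each bit needs 2 carrier cycles plus 2 cycles of gap.
def max_bit_rate(carrier_hz, cycles_per_bit=2, gap_cycles=2):
    return carrier_hz / (cycles_per_bit + gap_cycles)

for f in (100e3, 12e9, 1e14):
    print(f"{f:.0e} Hz carrier -> about {max_bit_rate(f):.1e} bit/s")
# 100 kHz -> 2.5e4 (25 kbit/s), 12 GHz -> 3.0e9 (3 Gbit/s), 1e14 Hz -> 2.5e13 bit/s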

 Multiplexing

Suppose you are sending a digital voice signal down an optical link and using, say, 8-bit technology at a sampling rate of 8 kHz (so as to achieve about 3.7 kHz quality). Once every 125 µs you would need to send an 8-bit number. If your computer is working at 2 GHz, you could well achieve transmission rates of, say, 0.5 Gbit s⁻¹, which would mean that your 8-bit number would transmit in under 20 ns. You will then twiddle your thumbs for about 124,980 ns until it’s time to send the next sample. So the phone system uses this dead time to transmit samples for some 6000 other customers down the same link and then unscrambles it all at the other end. This is called time-division multiplexing. So another issue to consider in the rate of information transfer is “how much other traffic is there on the link?”
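To see roughly where the ‘some 6000 other customers’ comes from, here is a sketch of the arithmetic (the 8 kHz sampling, 8 bits per sample and 0.5 Gbit s⁻¹ link rate are the figures above; a real system loses some slots to framing and signalling, which is why the raw answer comes out a little higher than 6000):

# Sketch: how many 8-bit voice samples fit into one 125 microsecond frame.
sample_rate = 8e3         # samples per second for one voice channel
bits_per_sample = 8
link_rate = 0.5e9         # bits per second down the shared link (figure assumed above)

frame_time = 1 / sample_rate                # 125 microseconds between samples
slot_time = bits_per_sample / link_rate     # 16 ns to send one 8-bit sample
channels = int(frame_time // slot_time)     # slots available in each frame

print(f"frame = {frame_time * 1e6:.0f} us, slot = {slot_time * 1e9:.0f} ns, "
      f"room for about {channels} channels")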

 


 

With respect to the bibliography of the materials project, what information must be included when referring to books? In what form should this be: do we need to quote the publisher and date of publication, for example?

Can you give me an example?

Thank you

Francis Hemingway

 

Well, for example, I think it is the case that the Geiger-Müller tube uses the avalanche effect to amplify the consequences of a single ionisation¹. Now you may or may not want to take that on trust, but if you don't then the bibliography might help (it's in your light blue text book).

 
-------------------------------------------------------------------------------------------------------------
Bibliography
 
1. Introduction to Advanced Physics by David Brodie (John Murray, 2000), p.70

 


 

I am having a bit of trouble with X-rays. In past papers I have seen the questions:

Chosen Imaging System: (I'm choosing an X-Ray Machine)

a) Estimate the resolution of the image that you have chosen
b) Estimate i) the number of pixels in the image
            ii) the number of bits per pixel

Then, using b), it asks you to calculate the time taken to transmit the image at 56 kbit/s (which I can do), but then it says:

d) In practice the time taken may be very different. Suggest why this might be so.

There was also one small sensors question which said:

a) state one benefit of having a sensor with a linear output. (I think this is to do with calibrating?)

Harry Harden, 10 June 2002

As I've pointed out on the website, there are several different meanings for 'resolution'. You need to specify which one you are going with. In this case, I'd suggest "size of smallest discernible object" or "separation between details on the object that can just be perceived as distinct". This can never be less than the wavelength being used (this is called 'diffraction-limited resolution'), but in your case it is likely to be much greater, because you can't focus X-rays and have to control direction by using beams that have been collimated by holes in a thick lead plate. So you are then limited by how closely you can punch holes in the plate while leaving enough lead in between to stop radiation getting from one hole to the next. So I should think about 2 mm, wouldn't you? You can then multiply that up for, say, a torso.

Next, you are going to have to use a photomultiplier tube behind each hole to convert the X-ray intensity into a current, which can flow through a resistor to produce a voltage that can be fed to an ADC. Given the coarseness of the pixelation, and the variation that will inevitably occur across a pixel, I should think that 10 bits, giving 1024 levels, would be ample.
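Putting rough numbers on all of that (the 2 mm resolution and 10 bits per pixel are the estimates above; the torso dimensions are my own illustrative assumption, and the 56 kbit/s line rate is the one given in the exam question):

# Sketch: pixel count, total bits and transmission time for the X-ray image.
resolution = 2e-3           # smallest discernible detail, m (estimate above)
width, height = 0.4, 0.5    # assumed field of view for a torso, m (illustrative)
bits_per_pixel = 10         # 1024 grey levels (estimate above)
line_rate = 56e3            # bit/s, the rate given in the question

pixels = (width / resolution) * (height / resolution)
total_bits = pixels * bits_per_pixel
print(f"{pixels:.0f} pixels, {total_bits:.0f} bits, "
      f"about {total_bits / line_rate:.0f} s to transmit")
# Roughly 50,000 pixels and 500,000 bits, i.e. around 9 seconds at 56 kbit/s.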

Of course, in much X-ray work, you simply use a photograph, in which case each grain in the emulsion forms a pixel something like a micron across (?). Each exposed grain either flips or doesn't during development, so there is only one bit per pixel, and you get grey levels from the local proportion of flipped grains.

I should think the variation in transmission time has to do with error detection: if you detect errors, then you re-transmit that parcel of bits, which slows things down a little.

So far as the sensor is concerned, the point about a linear response is that you can multiply it by a factor (using an op-amp) and add an offset to it (using an op-amp) and thereby turn it into a direct-reading instrument for the stimulus.
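To make the linearity point concrete: if the output is V = mS + c for a stimulus S, then one gain and one offset recover S anywhere on the scale. Here is a hedged sketch with made-up calibration numbers (two known stimulus/voltage pairs; everything numeric is invented):

# Sketch: calibrating a linear sensor with just a gain and an offset.
(s1, v1), (s2, v2) = (0.0, 0.50), (100.0, 2.50)   # (stimulus, volts) - invented

gain = (s2 - s1) / (v2 - v1)    # stimulus units per volt
offset = s1 - gain * v1         # stimulus reading corresponding to 0 V

def reading(voltage):
    # Valid everywhere only because the response is linear; a curved response
    # would need a point-by-point calibration table instead.
    return gain * voltage + offset

print(reading(1.5))   # halfway up the voltage range -> 50.0, halfway up the scale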


This is a question about significant figures. Presumably all data should be to a uniform number of significant figures. This is complicated by the fact that, in order to have, say, the size of the light source to three significant figures, you would have had to measure to 1/10 of a mm; or, at the other end of the scale, if v were 115 cm, in order to fit the sig figs rule you would have to write 120 cm, which represents quite a noticeable error. There seem to me to be two options.

Ashley Riches, 29 November 2002

Should all data be to a uniform number of significant figures? No, absolutely not. You take data as accurately as you can. If you have a single go at it (but see the third paragraph), the scale will set the accuracy. For example, 1.3 cm implies that you are measuring to the nearest 0.1 cm, so that reading is good to roughly 10%. If another measurement is about a metre then, plainly, if you were to measure it to the nearest centimetre you'd be within 1%. This is much more accurate than 10%, and it simply wouldn't be worth bothering with millimetres for that reading, even if they were present on the ruler. You'd quote that reading as, say, 1.27 m, if that was what it was. So your first datum is 2 SF and the second is 3 SF. If you find yourself needing to multiply them together, say to find an area (1.3 cm × 1.27 m = 1.651 × 10⁻² m²), you'd argue that since the 2 SF accuracy 'wins', the answer should be quoted as 1.7 × 10⁻² m².
 
That's the rule-of-thumb approach. A better approach is to say that the combined percentage uncertainty is 1% + 10% = 11%. 11% of 1.651 is 0.18161, so the area is likely to lie between 1.46939 and 1.83261 (all in units of 10⁻² m²). The first significant figure isn't in doubt, but the second clearly is, and by quite a big margin. There's absolutely no point in losing sleep over the value of the fourth SF and beyond, so what about saying that the value lies between 1.47 and 1.83, i.e. 1.65 ± 0.18? If we want an uncertainty of about 0.2, it's no good quoting the answer to the nearest 1: we can't say 2 ± 0.2; the central value and the uncertainty must stop at the same absolute place value. (DP are no good for expressing this, because 1234 and 1.234 × 10³ have different numbers of DP, but the same accuracy and the same SF. Likewise, SF are no good because 1.65 and 0.18 have the same absolute accuracy but different numbers of SF.) So the conventional answer would be 1.7 ± 0.2. On its own, 1.7 would imply ± 0.1, but it's a bit worse than that; 2 would imply ± 1, and it's much better than that; 1.65 on its own would imply ± 0.01, and it's so much worse than this that the 1.65 ± 0.18 version would be regarded by most people as ridiculous.
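Here is the same arithmetic as a short sketch (the readings and the 10% + 1% uncertainties are the ones above; the final line applies the 'stop at the same place value' convention just described):

# Sketch: combining percentage uncertainties for the area example above.
area = 1.3e-2 * 1.27                  # 1.651e-2 m^2
percent = 10 + 1                      # % uncertainties of the two readings, added
uncertainty = area * percent / 100    # about 0.18e-2 m^2

# Round both numbers to the same place value: (1.7 +/- 0.2) x 10^-2 m^2
print(f"({area * 100:.1f} +/- {uncertainty * 100:.1f}) x 10^-2 m^2")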
 
However, there are obviously difficulties if you think that your uncertainty margins are too big. One ought to consider the likelihood of the uncertainty actually occurring. There's a lot to be said for generating the absolute uncertainty by taking lots of repeats, calculating the mean and standard deviation, dividing the standard deviation by the square root of the number of readings contributing to it, and then proceeding as above.
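If you do take repeats, the recipe in that paragraph looks like this (the readings are invented, purely to show the steps):

# Sketch: mean and standard error from repeated readings (invented data).
import statistics as st

readings = [1.27, 1.29, 1.26, 1.28, 1.27, 1.30]   # metres, made up for illustration

mean = st.mean(readings)
sd = st.stdev(readings)             # sample standard deviation
sem = sd / len(readings) ** 0.5     # divide by sqrt(N) for the standard error

print(f"mean = {mean:.3f} m, standard error = {sem:.3f} m")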
 
You might like to try the effect of measuring the long length just to the nearest 10 cm (130 cm), as you suggested, or, alternatively, to the nearest mm (say, 127.3 cm). I think you'll see why working to the nearest cm is, in this case, a perfectly sensible choice.