One of the most interesting experiments for first-year radiography students is the effect of collimation on density and contrast. If you haven't had the privilege of performing it with film/screen imaging, the result won't be nearly as profound. The idea is to take two radiographs. The first (we did ours on an abdomen) was taken with the collimation open to the entire phantom, and the second with a 5" x 5" field size. We used the same regular 8x10 film and the same technical factors for each. For best results, we did both exposures without the added variable of a grid - so table-top. Here's what we got:
The film with the open collimation, on the left, is overall higher in radiographic density. This is because a larger volume of tissue (or phantom) being irradiated produces more scatter radiation. That scatter causes fog, or unwanted density, to a greater degree than on a well-collimated view, and it also reduces contrast.
We measured two adjacent shades of gray on each radiograph. Contrast can be defined as the difference between two shades of gray. The densitometer read 0.87 and 0.99 on the non-collimated exposure, a difference of 0.12, and 0.36 and 0.54 on the collimated film, a difference of 0.18. The smaller density difference on the open-field image means it has lower contrast.
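If it helps to see the arithmetic spelled out, here's a minimal sketch in Python using the densitometer readings above. There's nothing fancy going on - contrast, as defined here, is just subtraction:

```python
# Densitometer readings (optical density) from the two exposures above.
open_field = (0.87, 0.99)   # non-collimated exposure
collimated = (0.36, 0.54)   # 5" x 5" collimated exposure

def contrast(od_pair):
    """Contrast as the difference between two adjacent shades of gray."""
    low, high = sorted(od_pair)
    return high - low

print(f"Open field contrast: {contrast(open_field):.2f}")  # 0.12
print(f"Collimated contrast: {contrast(collimated):.2f}")  # 0.18
```

The collimated film comes out with the larger density difference (0.18 vs. 0.12), which is exactly the higher contrast you can see by eye.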
If you have a hard time seeing the difference between the two radiographs before taking optical density readings, you can take a pair of scissors, cut out the exposed field from the 5" x 5" exposure, and use it as a template to cut a piece of the same size and anatomy from the other image. Since I don't have to do that on my blog, I did it digitally... here's a closer look:
So the results show that an increase in collimation (more beam restriction and a smaller field size) reduces radiographic density and increases contrast. You can perform the same experiment with CR or DR, but the contrast effect will not be as pronounced because the software compensates on the image. You will, however, see the exposure indicators affected in the same way.
Looking for tips on success through Radiography school? Check out my book!