High-Resolution Radiographs of the Hand


In order to make the annotation process as comfortable as possible, a few extensions were implemented. First, for this application the overall number of ROIs, as well as the number of ROIs per class on each radiograph, is always the same.

Instead of starting from scratch on every image, a template was copied as an annotation candidate. Second, to cope with the size differences between radiographs, global scaling of all boxes as well as local scaling, i.e. scaling of individual boxes, was implemented. Third, since the radiographs in this dataset differ in quality and brightness, a contrast-enhanced view was implemented. The annotation was not done entirely by hand: instead, the proposed neural network was initially trained on the first annotated images and subsequently used to produce annotation candidates for the remaining images.
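The global and local box scaling described above can be sketched as follows. This is a minimal illustration; the function name and the (x, y, w, h) box format are assumptions, not taken from the modified labelImg tool:

```python
def scale_boxes(boxes, factor, center=None):
    """Scale (x, y, w, h) boxes by `factor`.

    With center=None, each box is scaled locally around its own center
    (its center point stays fixed). With an explicit `center`, all boxes
    are scaled globally around that common point, so their centers move
    away from or towards it as well.
    """
    scaled = []
    for x, y, w, h in boxes:
        cx, cy = x + w / 2, y + h / 2
        if center is not None:  # global scaling: box centers move too
            gx, gy = center
            cx = gx + (cx - gx) * factor
            cy = gy + (cy - gy) * factor
        nw, nh = w * factor, h * factor
        scaled.append((cx - nw / 2, cy - nh / 2, nw, nh))
    return scaled
```

Local scaling leaves each box centered where it was; global scaling preserves the layout of the whole template while adapting it to a larger or smaller hand.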

These candidates were then corrected manually using our modified labelImg tool. This approach reduced annotation times and ensured that the tools were fit for the task. All experiments were conducted using TensorFlow r1.

The model was pre-trained on the COCO dataset [11], which consists of a large number of annotated natural images. Of course, it would be desirable to use models pre-trained on the medical domain instead of natural images. However, at the moment no pre-trained models for the chosen architecture are available that were trained from scratch on medical datasets. For fine-tuning, the default configuration from the repository was used: the SGD optimizer with momentum set to 0.

Only moderate data augmentation was applied, in the form of random horizontal flips. Since the dataset at hand does not contain a large number of images, and to ensure that the model does not overfit, only a small number of steps were used for training. Since the pre-trained models were trained on RGB images and the dataset consists of monochrome radiographs, the single channel was duplicated to form a grayscale RGB image. This process was repeated ten times with different seeds for random shuffling, in order to obtain a robust estimate of the generalization capability of the process.
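The channel duplication step described above can be sketched as follows (a minimal version using plain nested lists; with NumPy one would stack the array along a new last axis):

```python
def gray_to_rgb(img):
    """Duplicate the single grayscale channel of a radiograph three
    times, so the input matches the 3-channel RGB format that the
    pre-trained backbone expects. `img` is a nested list: rows of
    scalar pixel values."""
    return [[(p, p, p) for p in row] for row in img]
```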

Additionally, to determine the effect of pre-trained networks on generalizability, we repeated the simulated experiment but initialized the weights randomly using the Xavier method [14] instead of using the pre-trained weights. Performance was measured by the average precision of the Intersection over Union (IoU), which is also known as the Jaccard index. As the annotations were created by non-experts, another experiment was conducted to compare the quality of expert and non-expert annotations. For this evaluation a different criterion was used, since the medical definition of the regions of interest differs from the application's use cases.
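For reference, the IoU (Jaccard index) of two axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates can be computed as:

```python
def iou(a, b):
    """Intersection over Union (Jaccard index) of two boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle (empty if boxes are disjoint)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```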

Therefore we adopt the evaluation criteria of [16, 17] and compare the L2 distance of the central points between the annotated and predicted ROIs. A prediction is considered to match the ground truth if

    ||g - p||_2 < τ,    (1)

where g is the ground-truth central point and p the predicted central point of the ROI. The threshold τ is computed based on the image height, which is the most influential axis of hand radiographs.
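A minimal sketch of this matching criterion follows; the threshold factor `alpha` is a stand-in, since the exact value derived from the image height is not reproduced here:

```python
import math

def centers_match(g, p, image_height, alpha=0.05):
    """Check whether a predicted ROI center p matches a ground-truth
    center g: their L2 distance must fall below a threshold derived
    from the image height. `alpha` is a placeholder factor, not the
    paper's actual value."""
    return math.dist(g, p) < alpha * image_height
```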

All experiments were conducted on an Nvidia DGX system. In Fig 4, the bounding box sizes of each class are visualized. It is clearly visible that the classes follow the same size relationships in both annotated sets, although the validation set is biased towards slightly smaller boxes. The results are shown in Table 1, stated as mean and standard deviation over ten different training set splits. The evaluation is performed on the held-out set of 89 images.

Replacing the pre-trained weights with random weights initialized by the Xavier method was extremely harmful: the performance of the network dropped to nearly zero for every region.

The results of models trained on non-expert annotations are listed in Table 2. In both cases the precision was higher than the recall. However, MCP was the class with the greatest loss. The iota coefficient between the two annotations was high. Training was performed on the full training set annotated by the non-expert and evaluated on the held-out set of 89 images. Results are stated as mean and standard deviation of 10 runs. Annotating ossification regions in X-ray images of the hand is a reasonable preprocessing step for hand bone age assessment.

An automated ossification area detector is therefore a first step towards replicating the workflow of a radiologist. Furthermore, the output of the fully automated bone age assessment pipeline is easy to interpret, and failure cases can be analyzed more easily.


In contrast to other approaches, this pipeline allows using high-resolution image patches of the localized regions of interest, instead of downsampling the whole image and thus discarding details of the bones. Though only a few annotated images are necessary for our approach, many annotation tools, while powerful, are quite general in nature and not adapted to the problem at hand. Finally, in our case this process can also be used by non-specialists to annotate the data. By comparing the annotations to those of a specialist, we show that there is no large difference in the location of the central point of the ossification region.

Therefore, the cost of annotations can be reduced even further, as no distinguished expert is necessary. Regarding the hand bone age problem, there exist two classical methods employed by radiologists to determine bone age. The Greulich-Pyle method takes the whole hand image into account and compares it to an atlas of radiographs. While this is the easier of the two methods, the inter-rater as well as intra-rater variability is quite large [21]. The second, and more often used, is the Tanner-Whitehouse method, in which 13 selected hand bones are examined for their ossification stage.

These are individually scored based on their textural appearance and then combined into a single score, using race as an auxiliary factor. Automation of these methods has been attempted many times over the years; a review can be found in [23]. One particular method, FingerNet, was proposed in [24]. There, a special deep network is constructed to detect the joints, trained on images segmented by an expert radiologist.

In [25], hand bone age is estimated from 3D MRI volumes. Using deep learning, [26] constructs a regression network, called BoNet, based on the OverFeat model. Neither approach detects joints, but post-hoc analyses show that the networks mainly take the ossification areas into account when determining the bone age.

In [28], the authors constructed a two-stage neural network for locating carpal bones in hand radiographs for the application of bone age assessment. First, a focusing network identifies the center points of the carpal bones. Afterwards, the identified regions of interest are processed by another network that classifies each region as one of seven carpal bones. Each classifying network was constructed differently for each carpal bone in order to ensure a sufficient receptive field. Recently, several end-to-end object detection algorithms have been developed. For bone age assessment, using an object detector network yields several advantages.

First, radiographs usually have a much higher resolution than current network architectures can process due to limited memory. By identifying the ossification ROIs, high-resolution patches can be extracted which retain all relevant bone details. Second, each individual region is scored, so the final age prediction is the result of ensembling over all ossification regions. Third, and most importantly, the outcome of such a two-stage system is more interpretable for the radiologist, which increases the clinical acceptance of the method.
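The patch-extraction step could look like this (a sketch; the function name and the (x, y, w, h) pixel-coordinate box format are assumptions):

```python
def extract_patches(image, boxes):
    """Crop full-resolution patches from the original radiograph for
    each detected ROI, instead of downsampling the whole image.
    `image` is a nested list of pixel rows; each box is (x, y, w, h)
    in pixel coordinates."""
    return [[row[x:x + w] for row in image[y:y + h]]
            for x, y, w, h in boxes]
```

Because the crops are taken from the original image, the fine trabecular detail of the bone survives even when the downstream network input is small.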

We have shown that deep networks can be trained even with few data and successfully applied to detect joints and ossification areas. The key ingredient was a freely available, pre-trained neural network object detector. That pre-trained models are a good starting point for training a generalizable model on small datasets has also been shown by other researchers [3].

To understand how the size of the annotated dataset relates to the performance of the detection network, we simulated the annotation process by subsampling the data. To ease the annotation process, we adapted an open-source annotation tool to our needs. In detail, all classes benefitted strongly from more data, except for the Wrist, where the positive trend was not as pronounced. Looking at the region sizes in Fig 4, the Wrist and Radius were the two largest classes in terms of spatial dimensions. This might have been caused by the Faster R-CNN configuration, which by default uses box proposals of several fixed sizes, the smallest being 64 px.

This relation was also reflected in the standard deviations, where larger regions showed less variation than smaller ones. One explanation could be that smaller regions of interest contain less information to discriminate on, so more training data is needed to successfully classify a box proposal as one of the classes of interest. The importance of using pre-trained weights was very evident when training the network with randomly initialized weights.

The performance on the validation sets was nearly zero. This behaviour was not unexpected, as the network contains millions of parameters which cannot be trained with just a few annotated images, and it underlines the importance of pre-trained networks. Regarding the agreement of expert vs. non-expert annotation, it was not surprising that the non-expert annotations matched the predictions better than the expert annotations, as the network was trained on the former.

Still, both showed rather high agreement, as can be seen from the iota coefficient: the central points of the expert-annotated ROIs were mostly matched by models trained on non-expert-annotated data. However, MCP was the class with the greatest loss, since the medical definition differs considerably between the middle finger and the thumb [32]. This discrepancy could be reduced by agreeing on the underlying definition in advance; the appropriate definition will be application dependent. There are some improvements our study could benefit from: while X-ray images tend to be visually rather consistent across sites, we only used the RSNA dataset.

External validation data would be necessary to judge the generalizability of the network. Furthermore, we only used annotations from one expert radiologist, and therefore could not consider inter-observer variability.

Another possible extension to this workflow would be a self-learning approach [33]. By applying very strict rules for the selection of annotation candidates, the amount of wrongly annotated, corrupting training material could be reduced to a minimum.
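Such strict candidate selection could be sketched as follows. The concrete rules (minimum score, expected per-class counts) and the data layout are illustrative assumptions, not taken from [33]:

```python
def select_candidates(predictions, min_score=0.95, expected_counts=None):
    """Keep only images whose predictions satisfy strict rules before
    they are fed back as training material in a self-learning loop.

    `predictions` maps image id -> list of (class_name, score).
    An image is accepted only if every detection scores at least
    `min_score` and, optionally, the per-class counts exactly equal
    `expected_counts`. The thresholds here are placeholders.
    """
    accepted = []
    for image_id, dets in predictions.items():
        if any(score < min_score for _, score in dets):
            continue  # one uncertain detection disqualifies the image
        if expected_counts is not None:
            counts = {}
            for cls, _ in dets:
                counts[cls] = counts.get(cls, 0) + 1
            if counts != expected_counts:
                continue  # wrong number of ROIs for some class
        accepted.append(image_id)
    return accepted
```

Since in this application every radiograph is known to contain a fixed number of ROIs per class, the count check is a cheap and effective filter against partially failed detections.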
