We appreciate the authors' interest in our recently published article.1 We would like to take this opportunity to articulate our thoughts and address the concerns arising from this matter.
Regarding the classification method in our study, firstly, it is important to underline that our study represents pioneering work in this field; as such, there were no pre-existing, validated classifications of measurement quality. Secondly, the objective of our study was not to create a method for classifying measurement quality. Instead, our primary focus was to compare the performance of ocular ultrasonic and optical biometry devices across measurements of varying quality. Thirdly, the IOLMaster 700 employs the standard deviation (SD) index to validate its biometric measurements, and we used this index exclusively to categorize the measurements according to their quality. It is crucial to emphasize that our study was a consecutive case series involving 239 candidates for cataract surgery. Our classification can therefore be seen as representative of the general population of cataract surgery candidates, including those with cataracts at various stages, from mild to mature, and with a wide spectrum of measurement quality, from low to high.
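To make this stratification concrete, the following is a purely illustrative Python sketch of grouping measurements by the device-reported SD index; the tertile cut-offs and simulated values are hypothetical and are not the thresholds used in our article.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Simulated SD values (mm); real values come from the IOLMaster 700 report.
sd_index = pd.Series(rng.gamma(shape=2.0, scale=0.01, size=239), name="sd")

# A lower SD implies a higher-quality measurement, so the lowest tertile is
# labelled "high" quality. These tertile cut-offs are hypothetical.
quality = pd.qcut(sd_index, q=3, labels=["high", "moderate", "low"])
print(quality.value_counts())
```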
Regarding the concern about the stages of cataract studied, we acknowledge that cataract density can affect measurement quality, and denser cataracts have been shown to influence biometry results negatively.2 However, the primary focus of our study was not to investigate the impact of cataract type or density on measurement quality. For instance, denser cataracts are associated with poorer signal strength and lower measurement quality,3 yet this effect applies equally to the optical and ultrasonic measurements in our study, since both devices examined the same eyes. Moreover, in the 'Limitations' section we openly acknowledged that our study did not categorize patients according to the type and degree of cataract. This transparency affirms our awareness of the potential confines of our findings and of the areas that future research in this field could explore further.
In response to the comment regarding the influence of patient characteristics, lens opacities, ocular diseases, or ocular biometry history on measurement reliability and agreement, we indeed took these factors into account. Firstly, we considered the confounding effects of age and gender: as described in the Methods section, both were entered into the regression model and controlled for as covariates. Additionally, it is important to clarify that patients with other ocular diseases or a history of ocular surgery were excluded from our study, which further ensures the specificity of our findings to the population of cataract surgery candidates.
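As a minimal sketch of this adjustment, the Python snippet below enters age and gender as covariates in a linear model of the inter-device difference; the variable names and simulated data are illustrative only and do not reproduce our actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 239  # same sample size as the study; the values themselves are simulated
df = pd.DataFrame({
    "age": rng.normal(68, 9, n),            # years
    "gender": rng.choice(["F", "M"], n),
    "al_diff": rng.normal(0.02, 0.05, n),   # mm, ultrasonic minus optical (illustrative)
})

# Age and gender enter the model as covariates; the intercept then reflects
# the adjusted mean inter-device difference.
model = smf.ols("al_diff ~ age + C(gender)", data=df).fit()
print(model.params)
```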
In response to the questions raised about the clinical implications of our study, we clearly reported that the very strong correlation in axial length and anterior chamber depth measurements indicates that the more cost-effective US-4000 Echoscan could serve as a feasible alternative to the more expensive IOLMaster 700, especially in settings with limited resources. Nevertheless, the discrepancies noted in lens thickness measurements between the two biometry devices could considerably influence the planning of cataract surgery. We therefore recommend that clinicians exercise caution when using these devices interchangeably, particularly for measurements of low to moderate quality.
In response to the comment on the lack of an interexaminer repeatability analysis, it is necessary to clarify that the term interexaminer analysis typically applies when multiple examiners assess the same subject with the same device, in order to determine the consistency of measurements across examiners. In our study, two different devices were used to measure the biometric parameters of the same patients, but a separate examiner operated each device. This design does not lend itself to an interexaminer analysis because each examiner used a different device, and any variability could therefore be due to the devices themselves rather than to differences between the examiners.
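The distinction can be sketched as follows: with one examiner per device, only inter-device agreement (for example, Bland-Altman limits of agreement) is estimable, whereas an interexaminer intraclass correlation would require repeated measurements by different examiners on the same device. The snippet below, on simulated data, is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 239
optical = rng.normal(23.5, 1.0, n)                # simulated axial length, optical device (mm)
ultrasonic = optical + rng.normal(0.02, 0.08, n)  # simulated readings from the second device (mm)

# Inter-device agreement: Bland-Altman bias and 95% limits of agreement.
diff = ultrasonic - optical
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"mean difference: {bias:.3f} mm")
print(f"95% limits of agreement: {bias - loa:.3f} to {bias + loa:.3f} mm")

# Examiner and device are confounded here (one operator per device), so an
# interexaminer ICC cannot be estimated from this design.
```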
In summary, we have clarified that the primary focus of our study was to compare two biometry devices across measurements of different quality, not to create a classification method for measurement quality. We acknowledged the potential impact of cataract type and density but noted that this was not the focus of our investigation. We affirmed that we accounted for the confounding effects of age and gender and excluded patients with other ocular diseases or a history of ocular surgery. Finally, we explained that an interexaminer repeatability analysis was not applicable to our study design, as a different examiner operated each device, and hence any variability could be due to the devices rather than to the examiners' evaluations.