How to use data to predict medical costs

Release date: 2016-03-09

The Intetix Foundation was founded by Chinese scholars working in data science, the nonprofit sector, and public-policy research to improve human society and the natural environment through data science. By connecting and mobilizing top data scientists and social scientists in China and the United States, along with volunteers around the world, we creatively pursue our mission: realizing the value of data for a better life. Authors: Dimitris Bertsimas, Michael A. Kane, J. Christian Kryder, Rudra Pandey, Santosh Vempala, Grant Wang.

1 Introduction

The rising cost of medical care is one of the most pressing issues worldwide, and accurately predicting those costs is a critical first step toward addressing it. Since the 1980s, researchers have built predictive models on claims data using heuristic rules and regression methods, but few of these methods have been rigorously validated, and the rules for applying them are unclear. We use modern data-mining methods, specifically classification trees and clustering algorithms, on the claims of more than 800,000 insured members tracked over three years, producing rigorously validated predictions of third-year medical costs from the first two years of medical and cost data. We quantify the accuracy of our predictions on data from more than 200,000 members that the methods never saw (an out-of-sample test set). The key conclusions are: (1) our data-mining methods yield accurate predictions of medical costs and are therefore a powerful tool for cost prediction; (2) models based on past cost data alone provide very useful predictions of future costs; and (3) medical information helps mainly in accurately predicting the costs of high-cost members.

The value of claims data in medical (health-insurance) research is often questioned (Jollis et al. 1993, Dans 1993) because these databases were built for financial rather than clinical purposes. However, claims data have proved useful in many settings and are increasingly used in medical research. Example studies include differences in medication adherence (Pladevall 2004), identification of hospitalization complications (Lawthers et al. 2000), duration of treatment (Mehta et al. 1999), and medical efficacy (Wennberg et al. 1987). Jones (2000) gives a good summary of statistical methods used with medical data, while other publications, including Zhou et al.
(1997) and Manning and Mullahy (2001), address health-care cost data. The predictive power of claims data became a research topic in the 1980s (Zhao et al. 2005), and many studies have explored the ability of administrative data to predict medical costs (Ash et al. 2000, Zhao et al. 2001, Farley et al. 2006, Zhao et al. 2005). Van de Ven and Ellis (2000) provide an insightful overview of risk-adjustment-based predictive models as of 2000. Cumming et al. (2002) compare several forecasting models developed in the insurance industry for risk assessment and population health-care cost forecasting; their comparison of diagnostic and prescription data further validates the predictive power of claims data. Early researchers focused on classical regression models to predict total medical costs (Zhao et al. 2005, Ash et al. 2000, Zhao et al. 2001, Powers et al. 2005), or on logistic regression models (LaVange et al. 1986, Roblin et al. 1999) to identify high-risk members. It is common practice to combine these regression models with heuristic classification rules. Comorbidity scores derived from administrative data were created for comparing population differences in medical research (Klabunde et al. 2002) and for designing reasonable reimbursement schemes (Van de Ven and Ellis 2000, Dunn et al. 2002), and they can also serve as the basis of cost-prediction models (Ash et al. 2000, Farley et al. 2006, Chang and Lai 2005). Predicting health-care costs from data other than claims data is also feasible; see, for example, Fleishman et al. (2006) and Pietz et al. (2004). In our opinion, the best way to characterize the predictive power of a method is to report several performance measures on out-of-sample experiments (i.e., on data the method has not seen).
As far as we can tell, most early regression studies did not report out-of-sample predictive performance, with a few exceptions (Powers et al. 2005, Dove et al. 2003). Traditionally (Cumming et al. 2002), R2 or modified R2 is used to evaluate prediction models, but these measures have serious shortcomings that, in our opinion, make them unsuitable here. R2 is a relative, not an absolute, measure: it quantifies the improvement of a prediction (measured by the sum of squared residuals) over a constant prediction (see Bertsimas and Freund 2005 for its uses). R2-based comparisons can be useful for comparing different regression models on the same data set, but a higher R2 does not establish that one method is more effective than another (such as the methods we use here). Depending on the purpose of the cost prediction (medical intervention, contract pricing, etc.), different error measures may be more appropriate than R2. We therefore define new error measures that better characterize prediction accuracy in a variety of settings. The purpose of this paper is to use modern data-mining methods, specifically classification trees and clustering algorithms, on the claims data of more than 800,000 insured members tracked over three years, and to provide rigorously validated predictions of third-year medical costs from the first two years of medical and cost data. We quantify accuracy by applying the models to a test sample of more than 200,000 members. The key conclusions are: (1) our data-mining methods yield accurate predictions of medical costs and are therefore a powerful tool for cost prediction; (2) models based on past cost data alone provide very useful predictions of future costs; and (3) medical information improves prediction accuracy when using the clustering algorithm, while with the classification tree, cost information alone achieves similar error measures. The rest of the paper is structured as follows. In §2 we describe the data and define the performance measures we need; in §3 we present the two main methods we use, classification trees and clustering algorithms; in §4 we report the performance of classification trees and clustering algorithms in predicting medical costs; and in §5 we briefly describe our conclusions and future research directions.

2. Data and performance measures

This study uses medical data generated when hospitals and other health-care providers submit claims for their services to third-party payers. The study period runs from 8/1/2004 to 7/31/2007: 8/1/2004–7/31/2006 is a 24-month observation period, and 8/1/2006–7/31/2007 is a 12-month outcome period. Our data set covers 838,242 commercially insured members from 2,866 employer groups across the country and includes medical and pharmacy claims as well as information on each individual's (and his or her family's) coverage period. The data also contain basic demographic information such as age and gender. All members were enrolled no later than 8/1/2005 and remained enrolled at least until 8/1/2006, and all employers had continuous coverage beginning no later than 8/1/2005 and ending no earlier than 8/1/2007. This ensures that every employee (and his or her family) has at least 12 months of data during the observation period, and that a change in an employer's insurance carrier does not cause large numbers of members to exit during the outcome period. Of the 838,242 members, 730,918 remained eligible through the outcome period; the difference, more than 108,000 members or about 13.8% of the population, withdrew during the outcome period. This is usually due to employee turnover and is expected to run at about 15% per year. A small fraction, about 3,000 members (estimated from the gender and age distribution of the population), lost coverage because of death. Our analysis shows that including members with only partial coverage during the outcome period improves our error measures, so for brevity, and to be conservative, we build our models and report results only for members with full coverage during the outcome period. We randomly split the data set into three equal-sized parts: a learning sample, a validation sample, and a test sample.
The learning sample is used to build our prediction models, while the validation sample is used to evaluate the performance of the various models. The test sample is held out while the models are built and calibrated, and is used only to report model performance at the end of the experiment. We believe this procedure is a sound test of our conclusions.

2.1 Summary of Claim Data

Claims include diagnostic, procedure, and medication information. Diagnoses are encoded using ICD-9-CM codes (International Classification of Diseases, Ninth Revision, Clinical Modification; Centers for Medicare and Medicaid Services, 2004), the standard coding for medical diagnoses and procedures. Procedures can be coded under several coding schemes: ICD-9, DRG, revenue codes, CPT-4, and HCPCS, totaling more than 22,000 codes. In addition, the data include pharmacy claims, i.e., information on prescription (and certain over-the-counter) drugs dispensed as part of the health plan where available, coded with 45,972 National Drug Codes (National Drug Code Directory, 2004).

Claims data rely on health-care professionals coding their diagnoses and procedures with ICD-9-CM codes. Although the coding of a medical claim begins with a clinician, it is most often completed and submitted by an independent, dedicated billing operator. Because coding practices vary and inevitable differences of interpretation arise, and to keep the data at a more manageable scale, we chose to use code groups instead of individual codes. We reduced more than 13,000 individual diagnoses to 218 diagnosis groups. Medical procedures and drug categories are grouped similarly: more than 22,000 individual procedures were reduced to 180 procedure groups, and more than 45,000 individual prescription drugs to 336 therapeutic groups. The analysis also includes more than 700 measures of medical quality and risk for given clinical situations (e.g., a patient with a pattern of ER visits but no office visits, a diabetic patient with a foot ulcer, etc.). We also compute the number of diagnoses, procedures, drugs, and risk factors for each person and add these counts as additional variables. In summary, the predictive medical variables include the diagnosis groups, procedure groups, drug groups, the risk factors we developed, and their counts, totaling nearly 1,500 possible medical variables. Readers can find more details in the online Appendices A and D. The electronic appendix to this article is available at http://or.journal.informs.org/.

2.2 Cost and demographic data

In addition to the medical variables, we use 22 cost variables, because we believe cost information provides a global picture of an individual's health; we also include age and gender. To capture the trajectory of medical costs (as a proxy for overall health), we use monthly cost data for the last 12 months of the observation period, total drug cost and total medical cost for the entire observation period, and the total cost over the last three months and the last six months of the observation period. In addition, to capture the cost pattern, we introduce a new indicator variable that detects whether a member's cost pattern exhibits a "spike": a sudden increase in the cost curve followed by a sudden drop. To illustrate the idea, consider Figure 1, which shows the monthly costs of two members over the last 12 months of the observation period. Although each member has claims costs of approximately $9,800, member A has relatively high, constant medical costs (a typical pattern for a chronic patient), while member B has a spike in the cost curve (a typical pattern for an acute patient). The key idea is that while consistently high medical costs tend to recur in the future, a spike pattern may carry a lower risk of high future costs: for example, pregnancy complications, accidents, or acute medical conditions such as pneumonia or appendicitis.

Note: Cubic spline curves are drawn for easier viewing of the data. Member A's cost curve is characteristic of chronic disease, and member B's of an acute condition. Member A's most expensive diagnostic claims were for lymphoma and respiratory failure; member B's largest claim was due to labor complications.
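A spike indicator of this kind can be sketched as follows. The thresholds (`rise_factor`, `fall_factor`) and the median baseline are illustrative assumptions, since the text does not give the paper's exact definition:

```python
import numpy as np

def has_spike(monthly_costs, rise_factor=3.0, fall_factor=3.0):
    """Return True if a 12-month cost series shows a 'spike' pattern:
    a sudden jump in monthly cost followed by a sudden drop.
    The thresholds are illustrative, not the paper's definition."""
    c = np.asarray(monthly_costs, dtype=float)
    baseline = np.median(c) + 1.0          # +1 avoids division by zero
    peak = int(np.argmax(c))
    rose = c[peak] > rise_factor * baseline
    # require costs to fall well below the peak afterwards
    fell = peak < len(c) - 1 and c[peak + 1:].max() < c[peak] / fall_factor
    return bool(rose and fell)

# Member A: chronically high, flat costs -- no spike.
member_a = [800] * 12
# Member B: acute episode in month 6 -- spike.
member_b = [100, 120, 90, 110, 100, 8500, 150, 100, 90, 110, 100, 130]
print(has_spike(member_a), has_spike(member_b))  # False True
```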

In addition, we use the following four variables: the maximum monthly cost, the number of months above the average cost, and indicators of positive and negative trends over the last few months of the observation period. Finally, we use gender and age as additional variables. Table 1 summarizes all the variables used in the study; Appendix A provides more details.
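A minimal sketch of computing these four trajectory variables from a 12-month cost series follows. The three-month window used for the trend and the feature names are assumptions, as the text does not specify them:

```python
import numpy as np

def cost_trajectory_features(monthly_costs):
    """Compute the four trajectory variables described above from a
    member's last-12-month cost series (names and the 3-month trend
    window are illustrative assumptions)."""
    c = np.asarray(monthly_costs, dtype=float)
    mean = c.mean()
    # slope of a least-squares line over the final three months
    tail = c[-3:]
    slope = np.polyfit(np.arange(len(tail)), tail, 1)[0]
    return {
        "max_monthly_cost": float(c.max()),
        "months_above_average": int((c > mean).sum()),
        "positive_trend": bool(slope > 0),
        "negative_trend": bool(slope < 0),
    }

feats = cost_trajectory_features(
    [100, 0, 50, 0, 200, 0, 0, 400, 300, 500, 700, 900])
print(feats)
```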

2.3. Cost buckets

In the outcome period of the study sample, members' costs range from $0 to $710,000. The cumulative cost of the population exhibits a well-known property: 80% of the population's total cost comes from the 20% most expensive members. Figure 2 shows the cost profile of our population; in our sample, about 8% of the population accounts for 70% of total medical costs.

Note: The x-axis is the cumulative percentage of the population and the y-axis is the cumulative percentage of overall medical costs. The graph shows that 8% of the population (the highest-cost members) accounts for 70% of overall medical costs.

Setting aside the extremely expensive members (who can be regarded as outliers) reduces the impact of data errors. We partition the range of member costs into five bands, or cost buckets, chosen so that the sum of member costs is roughly the same in each bucket (about $117 million per bucket). We chose 5 buckets because this ensures that the number of members in the most expensive bucket is large enough (the learning sample has 1,175 members in bucket 5). Table 2 shows the cost range, percentage of members, and number of members in each bucket for the learning sample.

Note: The cost-bucket ranges and the fraction of the learning sample in each bucket (computed over the last 12 months of the observation period). The sum of member costs in any one bucket is between $116 million and $119 million.
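The equal-total-cost bucketing can be sketched as follows. This is a simplified version of the partition described above, not the paper's exact procedure; the heavy-tailed synthetic costs stand in for real claims:

```python
import numpy as np

def equal_cost_buckets(costs, n_buckets=5):
    """Assign each member to one of n_buckets cost buckets chosen so the
    TOTAL dollars in every bucket are roughly equal (a sketch of the
    bucketing described above, not the paper's exact procedure)."""
    costs = np.asarray(costs, dtype=float)
    order = np.argsort(costs)                 # cheapest members first
    cum = np.cumsum(costs[order])             # running total of dollars
    total = cum[-1]
    # bucket k covers cumulative dollars in (k/n * total, (k+1)/n * total]
    bucket_of_sorted = np.minimum((cum / total * n_buckets).astype(int),
                                  n_buckets - 1)
    buckets = np.empty_like(bucket_of_sorted)
    buckets[order] = bucket_of_sorted
    return buckets + 1                        # buckets numbered 1..n

rng = np.random.default_rng(0)
costs = rng.pareto(1.2, size=10_000) * 500    # heavy-tailed, like claims
b = equal_cost_buckets(costs)
for k in range(1, 6):                         # each share is roughly 0.2
    print(k, round(costs[b == k].sum() / costs.sum(), 2))
```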

Predicted buckets are valuable to health-care managers. Buckets 1 through 5 can be interpreted as low, emerging, moderate, high, and very high risk of medical complications. Members predicted to be in buckets 2 and 3 are candidates for wellness programs, members predicted to be in bucket 4 are candidates for disease-management programs, and members predicted to be in the most expensive bucket are candidates for case management, the most intensive type of patient-care program.

2.4 Performance measures

We measure model performance with three main measures: the hit rate, the penalty, and the absolute prediction error (APE). To allow comparison with published studies, we also report R2 and truncated R2, and we introduce a new, related measure |R|. We give additional insights on R2 in §2.4.2 and define the new measures in §2.4.1.

2.4.1 Definitions of the measures

Hit rate: We define the hit rate as the fraction of members for whom we predict the correct cost bucket.

Penalty: The penalty is motivated by the opportunity for medical intervention and is therefore asymmetric. Underestimating a high cost carries a greater penalty, consistent with the greater medical and financial risk of overlooking these individuals. Incorrectly flagging an individual with low actual costs as high risk is penalized more lightly, because the resulting harm and cost are small. Accordingly, underestimating by one bucket incurs twice the penalty of overestimating by one bucket, reflecting the physician's lost opportunity to intervene. Table 3 shows the penalty table for the five cost buckets. We define the penalty measure as the average prediction penalty over the members of a given sample.

Note: A perfect prediction has a penalty of zero.

Absolute prediction error: The absolute prediction error is grounded in actual health-care dollars. We define it as the average absolute difference between the predicted (annual) amount and the realized (annual) amount. For example, if we predict that a member's health-care cost in the outcome period is $500 but the member's actual total medical cost is $2,000, the member's absolute prediction error is |$500 − $2,000| = $1,500. We define the APE of a sample as the average of this error over its members. APE has been used alongside traditional R2 in recent studies (Cumming et al. 2002, Powers et al. 2005, Dunn et al. 2002). One advantage of APE is that it does not square prediction errors, which makes it less sensitive to outliers (members with extreme medical costs). This matters for medical cost data, because there are always some members with unpredictably high costs.
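The three measures can be sketched as follows. The 5x5 penalty table here is a stand-in built from the 2-to-1 underestimate/overestimate rule described above, not the paper's actual Table 3:

```python
import numpy as np

# Illustrative 5x5 penalty table: rows = actual bucket, cols = predicted.
# Overestimating by k buckets costs k points; underestimating by k costs
# 2k, reflecting the asymmetry described above (a stand-in for Table 3).
PENALTY = np.array([[2 * (i - j) if i > j else (j - i)
                     for j in range(5)] for i in range(5)])

def hit_rate(actual_bucket, pred_bucket):
    a, p = np.asarray(actual_bucket), np.asarray(pred_bucket)
    return float((a == p).mean())

def avg_penalty(actual_bucket, pred_bucket):
    a, p = np.asarray(actual_bucket), np.asarray(pred_bucket)
    return float(PENALTY[a - 1, p - 1].mean())

def ape(actual_cost, pred_cost):
    """Absolute prediction error: mean |predicted - actual| dollars."""
    return float(np.abs(np.asarray(pred_cost, float) -
                        np.asarray(actual_cost, float)).mean())

print(ape([2000], [500]))        # the $1,500 example from the text
print(hit_rate([1, 2, 5], [1, 2, 4]))
print(avg_penalty([5], [4]))     # one-bucket underestimate -> penalty 2.0
```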

2.4.2 R2 measures

R2 is defined as

R2 = 1 − Σi (ti − fi)² / Σi (ti − a)²,

where

fi: predicted cost of the i-th member;

ti: actual cost of the i-th member;

a: average medical cost in the outcome period.

If we look at how members in each observation-period cost bucket contribute to the normalizing sum (the denominator), we see that the contributions vary greatly, as shown in Table 4. The second column gives the fraction of the study sample in each bucket, and the third column gives each bucket's contribution to the normalizing sum. We note that 27.9% of the sum is contributed by the 0.5% of members in bucket 5 during the observation period. R2 is therefore disproportionately influenced by the most expensive members.

Note: The contributions to the normalizing sums of the R2 and |R| measures as a function of the cost bucket for the last 12 months of the observation period. (The numbers are based on the test sample.) R2 squares each prediction error, which makes it very sensitive to the prediction errors of high-cost members, so a model that is good for most people may have a low R2 because of a few extreme and unpredictable outliers (for example, a member with a sudden, serious condition). In the literature, researchers have addressed this by truncating medical costs. We denote by R2_100 the R2 obtained when claims costs are truncated at $100,000; the fourth column of Table 4 shows the contributions to the normalizing sum in this case. With truncation, the contribution of bucket 5 to the normalizing sum drops to 16%, close to that of buckets 2 through 4. A natural measure of medical cost prediction is the absolute value of the prediction error, so we define a new measure, |R|, that shares some of the properties of R2.

|R| is defined as

|R| = 1 − Σi |ti − fi| / Σi |ti − m|,

where m is the sample median of the actual costs.

When we predict the median for every member of the sample, |R| = 0; if ti = fi for all i, then |R| = 1. Just as R2 measures the reduction in the sum of squared residuals, |R| measures the reduction in the sum of the absolute values of the residuals. In the last two columns of Table 4, we summarize the contributions to the normalizing sum of |R|. We note that across the observation-period buckets the contributions decrease strictly and are less affected by truncation (compare |R|_100). Our conclusion is that |R| is less sensitive to outliers than R2 and may therefore be better suited to cost prediction in health care.

3. Method

3.1. Benchmark method

To enable meaningful comparisons, we define a baseline method against which to compare the predictive models: we use the cost bucket implied by the most recent 12 months of medical costs as the prediction for the outcome period. Because current medical costs are a strong predictor of a person's future health, this baseline is more convincing than random assignment. Table 5 shows the distribution of the sample across the observation-period and outcome-period cost buckets. Nearly 70% of members fall in the same bucket in both periods. Members whose observation-period costs fall in buckets 1–4 are most likely to fall in bucket 1 during the outcome period, while those in bucket 5 during the observation period are most likely to remain in bucket 5 (the highest cost). One may infer that people with moderate medical costs tend to get better, while those who have incurred very high medical costs tend to continue incurring high costs.

Table 6 summarizes all the error measures for the baseline prediction. The baseline model has a hit rate of 80%, an average penalty of 0.431, and an absolute prediction error of $2,677. Digging further, the hit rate for bucket 1 is 90.1%, with a penalty of 0.287 and an absolute error of $1,279. Indeed, most members fall in bucket 1: their current medical costs are low, and their predicted medical costs are also low. The model's errors grow as the cost bucket increases.
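The baseline is simple enough to state in a few lines: each member's predicted outcome-period bucket is just their observation-period bucket. The bucket labels below are made up for illustration:

```python
import numpy as np

def baseline_predict(year2_bucket):
    """Benchmark: predict that each member stays in the cost bucket
    implied by their most recent 12 months of costs."""
    return np.asarray(year2_bucket).copy()

# Toy evaluation on fabricated bucket labels (illustrative only):
year2 = np.array([1, 1, 1, 2, 3, 4, 5, 5, 1, 2])  # observation period
year3 = np.array([1, 1, 2, 2, 3, 4, 5, 4, 1, 1])  # outcome period
pred = baseline_predict(year2)
print(f"hit rate: {(pred == year3).mean():.0%}")   # hit rate: 70%
```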

3.2 Data Mining Method: Classification Tree

Classification trees have been applied in fields as varied as finance, speech recognition, and medicine; in medicine, they have been applied to osteoarthritis of the hip, Churg-Strauss syndrome, and head and neck cancer. A classification tree partitions all members into smaller, more homogeneous groups and can be displayed in the form of a tree. This graphical form makes a classification tree easy to present and easy to understand. Suppose the people in the data set can have only three diseases: coronary artery disease (CAD), diabetes, and acute pharyngitis. Table 7 shows the resulting classification tree, which can be used to predict a new member's medical costs. We first check whether the member has CAD; if not, the member is assigned to cost bucket 1. If so, we further test whether the member has diabetes: if yes, the member is in cost bucket 5 (very high cost); if not, in cost bucket 3.
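The three-disease toy tree can be reproduced with a standard tree learner. The data below are fabricated to encode the stated rules, and scikit-learn is used purely for illustration; the text does not specify the paper's own tree software:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [has_CAD, has_diabetes, has_acute_pharyngitis]
X = np.array([
    [0, 0, 0], [0, 1, 0], [0, 0, 1],   # no CAD            -> bucket 1
    [1, 0, 0], [1, 0, 1],              # CAD, no diabetes  -> bucket 3
    [1, 1, 0], [1, 1, 1],              # CAD and diabetes  -> bucket 5
])
y = np.array([1, 1, 1, 3, 3, 5, 5])

tree = DecisionTreeClassifier().fit(X, y)

# New member with CAD but no diabetes -> predicted bucket 3.
print(tree.predict([[1, 0, 0]])[0])
```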

Running the classification tree algorithm on the full data set yields a more complex tree than Table 7. Tables 8 and 9 describe the characteristics of members predicted to be in cost buckets 4 and 5; they illustrate how the classification tree uses cost, medical, and age information to identify at-risk populations.

Table 8. Members predicted to be in bucket 5.

Members whose total costs in the past 12 months were between $12,300 and $16,000, who took no more than 14 different therapeutic drugs during this period, who had no heart block following injection of amiodarone hydrochloride, who had more than 15 diagnoses, and who had at least one of the following: (a) an ICU stay for congestive heart failure, (b) kidney failure, (c) treatment at multiple hospitals during the observation period, or (d) coronary artery disease together with depression.
Members who spent more than $24,500 during the observation period and were diagnosed with a secondary malignancy.
Members in bucket 2 who spent $2,700–$6,100 and either (a) took drugs for coronary artery disease and hypertension or (b) had peripheral vascular disease but took no medication for it.
Members in bucket 2 who took 15–34 kinds of therapeutic drugs, spent $1,200–$4,000, and were hospitalized for hepatitis C.
Members in buckets 2 and 3 who spent less than $2,400, took fewer than 13 therapeutic drugs, and received Zyban treatment after an attack.

Table 9. Members predicted to be in bucket 4.

Members in buckets 2 to 5 who took more than 34 classes of therapeutic drugs during the observation period.
Members in bucket 1 with hospitalization costs of approximately $1,300 in the past three months.
Women in bucket 1 who spent between $1,300 and $1,500 in the last 6 months of the observation period, had no renal failure, and took no prenatal vitamins during pregnancy.
Members in bucket 1 who spent more than $1,700 over the last 6 months of the observation period, had non-acute costs, and had high blood pressure but no lab tests during the observation period.
Members who spent more than $24,500 during the observation period but less than $3,200 in the clinic, took fewer than 14 drugs, had no diagnosis of secondary malignancy, but had more than 9 office visits during the past three months.

3.3 Data Mining Method: Clustering

Clustering methods group similar samples into the same class and dissimilar samples into different classes. Our predictions mainly use an eigenvector-based clustering (EigenCluster) algorithm which, when applied to the data set, automatically detects patterns in the data and groups members with similar attributes. We first cluster using only the members' monthly cost data, giving later months greater weight than earlier ones; this groups members with similar spending profiles into the same class. We then cluster the medical data within each class of similar spending profiles, producing classes whose members share both spending and medical characteristics. Finally, we make a prediction for each class based on the learning sample. Analyzing one such class, we found that members with low spending in the first six months of the observation period went on to spend heavily in the final months. The crux of the problem is that we cannot distinguish the members of such a class well from cost data alone; our algorithm applies the medical data to subdivide this cost-based class into two classes.
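A two-stage sketch of this cluster-then-refine idea follows, substituting k-means for the eigenvector-based algorithm and using synthetic data; the recency weights and cluster counts are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 300
monthly = rng.gamma(2.0, 100.0, size=(n, 12))      # 12 months of costs
medical = rng.integers(0, 2, size=(n, 20))         # 20 diagnosis flags

# Stage 1: recency-weighted cost clustering (later months weigh more).
weights = np.linspace(0.5, 1.5, 12)
cost_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    monthly * weights)

# Stage 2: within each cost cluster, split members on medical profile.
final_labels = np.empty(n, dtype=int)
next_id = 0
for c in np.unique(cost_labels):
    idx = np.where(cost_labels == c)[0]
    sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        medical[idx])
    final_labels[idx] = next_id + sub
    next_id += 2

print(len(np.unique(final_labels)))   # 4 cost clusters x 2 medical splits
```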

Table 10 shows the biggest differences in medical information between the two classes.

Members of the first class have evidence of cancer in pathology, cytopathology, diet, and other indicators; such serious health problems predict higher medical expenditures in the future. Members of the second class have received physical therapy and orthopedic surgery for musculoskeletal conditions; we expect these members to return to better health and to incur lower medical costs in the future.

4 Results

4.1 Performance of the data mining methods

We run the classification tree algorithm on the learning sample and validate it using the validation sample. We built three separate classification trees to examine model performance; once we find a tree whose validation error is acceptable, we apply it to the test set. For the clustering method, the members of each class are similar in both cost and medical characteristics; for each class, the prediction is based on the learning and validation samples and then applied to the test set. We first examine overall performance and then performance by bucket.

Table 11 shows the performance on the test set. More than 84% of the samples are assigned to the correct bucket, with an average penalty of 0.385 and an absolute prediction error of $2,243. The classification tree performs considerably better than the baseline method: the penalty is reduced by 10.5% and the hit rate increases by 5%. The clustering method also improves considerably on the baseline; its results are comparable to the classification tree, and it performs well on the average prediction error. Next, we examine the algorithms' accuracy by observation-period bucket. For both algorithms, the improvement is most pronounced in the top buckets. The classification tree's hit rate nearly doubles, its penalty falls by 23%, and its absolute prediction error in the most expensive bucket falls by more than 50%. The clustering method's hit rate also doubles, its penalty falls by 35%, and its absolute prediction error in the most expensive bucket falls by 58%. In summary, the classification tree performs better in the low-cost buckets and on the penalty measure, while the clustering method performs better in the high-cost buckets.

4.2 Predictions based on cost information alone

We next build predictions based on cost information alone and compare them with predictions based on both cost and medical information. Performance is comparable, and even better in the low-cost buckets: the classification tree performs better in the low-cost buckets, while the clustering method with medical information performs better in the two highest-cost buckets. In general, adding medical information does not improve the performance of the classification tree. When an important goal of cost prediction is medical intervention with patients, a model with interpretable medical information is preferable; in other cases, a model based on the 22 cost variables, which performs comparably to one based on the roughly 1,500 medical variables, may be optimal.

4.3 Comparison with other studies

Comparisons with studies that do not use the same data set cannot be precise, because the average prediction error depends strongly on the data set used. Therefore, purely for illustration, we compare with two other studies. Cumming et al. (2002) report an average absolute prediction error of 93% (relative to mean cost), while Powers et al. (2005) report 98%. The average error of our clustering method on the test set is 78.8%, and that of the classification tree is 89.4%, both lower than the other two studies. Our algorithms reduce the relative error in all cost buckets, and the reduction grows with the bucket number: 5% to 49% for the R2 and R2_100 measures, and 10% to 32% for the |R| and |R|_100 measures. This shows that our predictive models improve accuracy across all cost ranges, especially for high-cost members.

4.4 Summary of results

In summary, the two data-mining methods outperform the baseline on all performance measures, most markedly in the higher cost buckets, and their performance is comparable to each other. The clustering method has better predictive power for high-cost members, while by hit rate and penalty the classification tree has better predictive power for low-cost members. We believe the clustering method's advantage on high-cost members comes from the way it combines cost and medical information: it first uses the cost information and then the medical information, which can further resolve which cost bucket a member belongs to. Recall our clustering example: members of similar classes have similar cost trajectories in the last months of the observation period, and using the medical information, the clustering algorithm can divide such patients into two broad classes, high-risk cancer patients in bucket 4 and musculoskeletal patients in bucket 1. When the medical information is sparse, using cost information alone for low-bucket members leads to the same errors.

5. Conclusion and future research

Our algorithms, based on modern data-mining methods, provide powerful, quantified predictions of medical costs. We argue that R2, traditionally used to evaluate prediction accuracy, has serious limitations, and that the measures we designed may be better suited to the task. Although clinical information is relatively abundant in our data set, we find that cost information alone yields accurate predictions, with medical information improving accuracy mainly for high-cost patients. Clearly, where medical data are sparse, cost data are an effective substitute for medical information. The algorithms can be used to predict the costs of individuals and groups and as a basis for initiating medical-management contact with patients. Follow-up research could build on these algorithms for financial reimbursement or insurance pricing, but that requires a more comprehensive treatment of health economics and system design.

Text / Knowing Participants: Planning - Xu Ruiyi, Fan Wei; Compilation - Xu Min, Li Canjia; Editor - Teng Yi, Wang Qun; Promotion - Shen Honghao, Cheng Jiechao, Zhou Yuqi, Liang Yazhen, Li Huafang

Source: Knowing
