TMM4175 Polymer Composites


Material variability and characteristic strength

Characteristic strength

In most design codes, material or component strength values are based on the concept of a characteristic value. The characteristic value is defined as the value below which not more than a specified percentage of the test results may be expected to fall, based on the assumed distribution function. The normal distribution is most commonly employed for characteristic strength and will be used here. Note, however, that the basic principles apply to other distribution functions as well.
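To make the definition concrete, the following sketch (using synthetic, hypothetical data, not the data set below) compares the empirical 5th percentile of a sample to the value predicted by a normal distribution fitted to the same sample:

```python
import numpy as np
from scipy.stats import norm

# Synthetic strength data for illustration only:
rng = np.random.default_rng(0)
sample = rng.normal(500.0, 50.0, size=10_000)

# Empirical 5th percentile, read directly from the data:
p_emp = np.percentile(sample, 5)

# 5th percentile from a normal distribution fitted to the sample:
p_fit = norm.ppf(0.05, np.mean(sample), np.std(sample, ddof=1))

print(p_emp, p_fit)   # both close to 500 - 1.645*50 = 417.75
```

With a large sample the two estimates agree closely; for small samples they can differ considerably, which is the central issue discussed later on this page.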

Example: Assume that the following data sample is the experimental results of a strength parameter:

In [1]:
v = [543.8, 507.2, 521.9, 485.8, 481.7, 528.7, 506.6, 499.1, 493.2, 392.4, 540.9, 492.1, 511.1, 457.1, 535.1, 506.0, 
     573.2, 454.5, 534.6, 519.9, 570.4, 520.2, 482.7, 580.9, 573.8, 478.0, 572.6, 552.0, 534.6, 563.4, 455.2, 480.3, 
     543.8, 572.3, 496.0, 495.3, 502.5, 537.7, 426.4, 456.4, 516.7, 526.5, 532.0, 536.0, 377.1, 508.4, 581.1, 540.5, 
     532.5, 430.9, 483.3, 385.1, 491.7, 511.8, 465.0, 495.6, 553.1, 562.9, 545.0, 485.9, 544.5, 416.7, 538.5, 413.3, 
     540.3, 541.1, 577.5, 588.1, 560.3, 505.4, 537.0, 546.7, 556.9, 513.5, 565.6, 518.7, 462.1, 548.8, 502.7, 530.8, 
     500.3, 517.9, 508.1, 560.4, 569.4, 484.3, 489.6, 540.9, 540.1, 578.3, 475.7, 416.9, 519.8, 477.4, 483.0, 603.8, 
     521.9, 496.1, 526.5, 516.9, 551.1, 494.2, 439.9, 584.0, 458.7, 463.7, 464.8, 520.5, 489.1, 513.0, 520.1, 509.5, 
     540.9, 579.8, 478.5, 604.8, 503.9, 565.2, 462.6, 485.0, 589.4, 507.3, 560.6, 453.5, 456.5, 513.5, 416.4, 543.9, 
     446. , 509.5, 544.4, 509.5, 499.8, 506.4, 585.7, 579.9, 544.0, 531.9, 529.8, 629.4, 574.8, 570.2, 512.2, 574.7,
     541.6, 483.9, 520.1, 480.7, 605. , 571.5, 587.9, 613.6, 510.2, 615.6, 517.6, 631.1, 518.4, 630.6, 492.7, 523.3, 
     583.6, 474.7, 419.5, 570.8, 582.6, 463.6, 506.3, 496.7, 520.3, 559.8, 519.4, 551.6, 515.7, 598.9, 423.1, 543.2, 
     555.7, 586.9, 539.9, 481.7, 479.5, 451.2, 554.3, 491.9, 522.3, 542.5, 442.5, 537.5, 604.0, 447.0, 498.4, 627.4, 
     595.8, 526.2, 543.3, 573.5, 551.5, 558.3, 450.3, 567.7]

Some basic statistical parameters:

In [2]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.stats import norm

n    =  len(v)
vmin =  np.amin(v)
vmax =  np.amax(v)
vmean=  np.mean(v)
vstd =  np.std(v,ddof=1)
vcov =  100*vstd/vmean

print('Sample size:',n)
print('Min:        ',vmin)
print('Max:        ',vmax)
print('Mean:       ',vmean)
print('Stdev:      ',vstd)
print('COV%:       ',vcov)
Sample size: 200
Min:         377.1
Max:         631.1
Mean:        520.989
Stdev:       49.771198902588274
COV%:        9.553214924420338

Thus, the mean value is about 521, the standard deviation is close to 50, and the coefficient of variation is about 9.6%.

We may plot the normal distribution of these data based on the mean and the standard deviation along with a histogram using 10 bins of the data:

In [3]:
x=np.linspace(vmin-3*vstd,vmax+3*vstd,1000)
y=norm.pdf(x,vmean,vstd)
fig,ax = plt.subplots(figsize=(10,4))
ax.plot(x,y,color='black')
counts, bins, patches = ax.hist(v, bins=10, density=True, facecolor='orange', alpha=0.6)
ax.set_xlabel('Strength')
ax.set_ylabel('Frequency')
ax.set_title('Normal distribution of strength')
ax.set_xlim(0,)
ax.set_ylim(0,)
ax.grid(True)
plt.show()

Many standards take, for historical reasons, the 5th percentile as the characteristic strength. That is: no more than 5% of the values shall be expected to fall below this value. For a normal distribution the value is found by

\begin{equation} \text{strength} = \text{mean} - z \cdot \text{std} \tag{1} \end{equation}

where $z = 1.645$

Other standards may use for example the 2.5th percentile where $z = 1.960$.
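These z-values are simply (negated) quantiles of the standard normal distribution, so they can be reproduced with scipy rather than looked up in a table:

```python
from scipy.stats import norm

# z for the 5th and 2.5th percentiles of the standard normal distribution:
z_5th   = -norm.ppf(0.05)    # approximately 1.645
z_2_5th = -norm.ppf(0.025)   # approximately 1.960

print(z_5th, z_2_5th)
```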

Both examples are computed and illustrated below:

In [4]:
p050 = vmean-1.645*vstd
p025 = vmean-1.960*vstd

fig,ax = plt.subplots(figsize=(10,4))
ax.plot(x,y,color='black')
ax.plot((p050,p050),(0,max(y)),'--', color = 'blue',label='5th percentile')
ax.plot((p025,p025),(0,max(y)),'--', color = 'red',label='2.5th percentile')
ax.set_xlabel('Strength')
ax.set_ylabel('Frequency')
ax.set_title('Normal distribution of strength')
ax.set_xlim(0,)
ax.set_ylim(0,)
ax.grid(True)
ax.legend(loc='best')
plt.show()

print('5th percentile:  ',p050)
print('2.5th percentile:',p025)
5th percentile:   439.1153778052423
2.5th percentile: 423.437450150927

So far, the assumption of an infinite sample size (an infinite number of tests) has been made. That is obviously not a viable requirement in practice, and the statistical approach must therefore account for limited sample sizes through a required confidence level.

The following table is an example for the 2.5th percentile with 95% confidence:

| Sample size $n$ | $z$ |
|-----------------|-----|
| 3               | 9.0 |
| 4               | 6.0 |
| 5               | 4.9 |
| 6               | 4.3 |
| 10              | 3.4 |
| 15              | 3.0 |
| 20              | 2.8 |
| 25              | 2.7 |
| $\infty$        | 2.0 |

Table-1: Value of z for 2.5th percentile with 95% confidence
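Factors of this type can be computed as one-sided normal tolerance factors using the noncentral t-distribution. This is a standard statistical construction, although a given design code may round or adjust the numbers, so the result only approximately reproduces Table-1:

```python
import numpy as np
from scipy.stats import norm, nct

def tolerance_z(n, percentile=0.025, confidence=0.95):
    # One-sided tolerance factor z such that mean - z*std covers the
    # given percentile with the given confidence for a sample of size n.
    zp = -norm.ppf(percentile)           # 1.960 for the 2.5th percentile
    delta = zp * np.sqrt(n)              # noncentrality parameter
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

for n in (3, 4, 5, 6, 10, 15, 20, 25):
    print(n, round(tolerance_z(n), 1))
```

As $n$ grows, the factor approaches the infinite-sample value $z_p = 1.960$ from above, consistent with the last row of Table-1.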

Let’s choose a 'random' sequence of 5 tests extracted from the data set:

In [5]:
v1 = v[15:20]
print(v1)
[506.0, 573.2, 454.5, 534.6, 519.9]
In [6]:
v1min  =  np.min(v1)
v1max  =  np.max(v1)
v1mean =  np.mean(v1)
v1std  =  np.std(v1,ddof=1)
v1cov  =  100*v1std/v1mean
v1char =  v1mean-4.9*v1std

x1=np.linspace(v1min-3*v1std,v1max+3*v1std,1000)
y1=norm.pdf(x1,v1mean,v1std)
fig,ax = plt.subplots(figsize=(10,4))
ax.plot(x1,y1,color='black')
ax.plot((v1char,v1char),(0,max(y1)),'--', color = 'red',label='Characteristic strength')
ax.set_xlabel('Strength')
ax.set_ylabel('Frequency')
ax.set_title('Normal distribution of strength')
ax.set_xlim(0,)
ax.set_ylim(0,)
ax.grid(True)
ax.legend(loc='best')
plt.show()

print('Mean:                 ',v1mean)
print('Stdev:                ',v1std)
print('COV%:                 ',v1cov)
print('Characteristic value: ',v1char)
Mean:                  517.6400000000001
Stdev:                 43.30650066675904
COV%:                  8.366142621659653
Characteristic value:  305.4381467328808

Hence, the estimated characteristic strength from the limited data set is much lower than the estimate from the full data set (305 versus 423), taking a sample size of 200 to be reasonably close to infinite in this context.

The following code picks 195 different (overlapping) sub-samples, each with a sample size of 5. The characteristic value is computed for each and shown in the graph:

In [7]:
cvs = []
for i in range (0,195):
    vt = v[i:i+5]
    cv = np.mean(vt) - 4.9*np.std(vt,ddof=1)
    cvs.append(cv)
    
fig,ax = plt.subplots(figsize=(10,4))
ax.plot(cvs,color='black')
ax.set_xlabel('Sample number')
ax.set_ylabel('Characteristic value')
ax.grid(True)
plt.show()

The result shows estimates of the characteristic strength ranging from approximately 130 to a maximum of about 460, depending on the chosen sub-sample. Notice, however, that the maximum value of 460 is relatively close to the characteristic value from the whole data set (423). After all, the confidence is not 100%, and the z-value should probably be slightly more than 2 even for the large sample set. It is therefore perfectly reasonable that some sub-samples of size 5 give slightly higher characteristic values than the one estimated from the whole set.

Example of bad luck when picking a sequence of five specimens from the 200-specimen sample:

In [8]:
v2 = v[44:49]

v2min  =  np.min(v2)
v2max  =  np.max(v2)
v2mean =  np.mean(v2)
v2std  =  np.std(v2,ddof=1)
v2cov  =  100*v2std/v2mean
v2char =  v2mean-4.9*v2std

x2=np.linspace(v2min-3*v2std,v2max+3*v2std,1000)
y2=norm.pdf(x2,v2mean,v2std)
fig,ax = plt.subplots(figsize=(10,4))
ax.plot(x2,y2,color='black')
ax.plot((v2char,v2char),(0,max(y2)),'--', color = 'red',label='Characteristic strength')
ax.set_xlabel('Strength')
ax.set_ylabel('Frequency')
ax.set_title('Normal distribution of strength')
ax.set_xlim(0,)
ax.set_ylim(0,)
ax.grid(True)
ax.legend(loc='best')
plt.show()

print('Mean:                 ',v2mean)
print('Stdev:                ',v2std)
print('COV%:                 ',v2cov)
print('Characteristic value: ',v2char)
Mean:                  507.91999999999996
Stdev:                 77.67729397964375
COV%:                  15.293214281706518
Characteristic value:  127.30125949974558

Large scatter

Materials with extensive heterogeneity on the macro scale, and intrinsically brittle materials such as many ceramics, may show a large, or even very large, coefficient of variation (COV). Even the average values of a number of measurement series may scatter widely compared to the overall average value. In those cases, expression (1) can easily lead to a nonsensical characteristic or minimum strength, as demonstrated in the following example:

In [9]:
v3 = [200.0, 250.0, 300.0, 350.0, 400.0]

v3mean=  np.mean(v3)
v3std =  np.std(v3,ddof=1)
v3cov =  100*v3std/v3mean
print('Mean:                 ',v3mean)
print('Stdev:                ',v3std)
print('COV%:                 ',v3cov)
print('Characteristic value: ',v3mean-4.9*v3std)     # z = 4.9 when n = 5 
Mean:                  300.0
Stdev:                 79.05694150420949
COV%:                  26.352313834736496
Characteristic value:  -87.37901337062652

Note: For most structural engineering materials, including engineering metals and composites, the coefficient of variation (COV) for samples of well-prepared specimens is less than 15%, and the principles behind equation (1) are applicable.

Strength measurement series where large scatter is an intrinsic feature of a material or a structure may instead be described by a two-parameter Weibull distribution, for which the minimum value approaches zero. The strength is then defined, or chosen, based on an accepted probability of failure.
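As a sketch, this approach could look as follows for the v3 data above, fitting a two-parameter Weibull distribution with scipy.stats.weibull_min (location fixed at zero) and reading off the strength at an accepted probability of failure. With only five points this is purely illustrative:

```python
from scipy.stats import weibull_min

v3 = [200.0, 250.0, 300.0, 350.0, 400.0]

# Fit a 2-parameter Weibull by fixing the location parameter at zero:
shape, loc, scale = weibull_min.fit(v3, floc=0)

# Strength at an accepted probability of failure, e.g. 5%:
s_5 = weibull_min.ppf(0.05, shape, loc=0, scale=scale)

print('Shape: ', shape)
print('Scale: ', scale)
print('Strength at 5% failure probability:', s_5)
```

Unlike equation (1), which returned a negative characteristic value for this data set, the Weibull quantile is always non-negative.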

Disclaimer: This site is about polymer composites, designed for educational purposes. Consumption and use of any sort & kind is solely at your own risk.
Fair use: I spent some time making all the pages, and even the figures and illustrations are my own creations. Obviously, you may steal whatever you find useful here, but please show decency and give some acknowledgment if or when copying. Thanks! Contact me: nils.p.vedvik@ntnu.no www.ntnu.edu/employees/nils.p.vedvik

Copyright 2021, all rights reserved, I guess.