Plastic Mold Design: English Literature and Translation

1 Introduction
Process parameter settings for plastic injection molding critically influence the quality of the molded products. An unsuitable process parameter setting inevitably causes a multitude of production problems: long lead times, many rejects, and substandard moldings. The negative impact on efficiency raises costs and reduces competitiveness. This research develops a process parameter optimization system to help manufacturers make rapid, efficient, preproduction setups for MISO plastic injection molding. The focus of this study was molded housing components, with attention to a particularly telling quality characteristic: weight. The optimization system proposed herein includes two stages. In the first stage, mold flow analysis was used to obtain preliminary process parameter settings. In the second stage, the Taguchi method with ANOVA was applied to determine optimal initial process parameter settings, and a BPNN was applied to build the prediction model. The BPNN was then individually combined with the DFP method and with a GA to search for the final optimal process parameter settings. Three confirmation experiments were performed to verify the effectiveness of the final optimal process parameter settings. The final optimal process parameter settings are not limited to discrete values, as in the Taguchi method, and can determine production settings that not only approach the target value of the selected quality characteristic more closely but also exhibit less variation.
2 Optimization methodologies
The optimization methodologies, including BPNNs, GAs, and the DFP method, are briefly introduced as follows.
2.1 Back-propagation neural networks
Many researchers have mentioned that BPNNs have the advantages of fast response and high learning accuracy [19-23]. A BPNN consists of an input layer, one or more hidden layers, and an output layer. The parameters of a BPNN include the number of hidden layers, the number of hidden neurons, the learning rate, the momentum, etc. All of these parameters have a significant impact on the performance of the neural network. In this research, the steepest descent method was used to find the weight and bias changes and to minimize the cost function, and the activation function is a hyperbolic tangent function. In network learning, input data and output results are used to adjust the weight and bias values of the network. The more detailed the input training classification is, and the greater the amount of learning information provided, the better the output will conform to the expected result. Since the learning and verification data for the BPNN are limited by the function values, the data must be normalized by the following equation:
PN = (P - Pmin)(Dmax - Dmin)/(Pmax - Pmin) + Dmin
where PN is the normalized data; P is the original data; Pmax is the maximum value of the original data; Pmin is the minimum value of the original data; Dmax is the expected maximum value of the normalized data; and Dmin is the expected minimum value of the normalized data. When the neural network is applied to the system, its input and output values fall in the range [Dmin, Dmax].
According to previous studies [24, 25], there are a few conditions for terminating network learning: (1) when the root mean square error (RMSE) between the expected value and the network output value is reduced to a preset value; (2) when the preset number of learning cycles has been reached; and (3) when cross-validation takes place between the training samples and the test data. In this research, the first approach was adopted: the network training time was gradually increased to slowly decrease the RMSE until it was stable and acceptable. The RMSE is defined as follows:
RMSE = sqrt( (1/N) * sum over i of (di - yi)^2 )
where N, di, and yi are the number of training samples, the actual value for training sample i, and the predicted value of the neural network for training sample i, respectively.
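To make the two formulas above concrete, the following is a minimal Python/NumPy sketch (not code from the paper). The function names normalize and rmse, the default target range of [-1, 1], and the sample weight values are illustrative assumptions only.

import numpy as np

def normalize(P, d_min=-1.0, d_max=1.0):
    """Min-max scale the data P into the range [d_min, d_max]."""
    P = np.asarray(P, dtype=float)
    p_min, p_max = P.min(), P.max()
    return (P - p_min) * (d_max - d_min) / (p_max - p_min) + d_min

def rmse(d, y):
    """Root mean square error between actual values d and network predictions y."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.mean((d - y) ** 2))

# Example: scale hypothetical part-weight measurements into [-1, 1] for BPNN
# training, then score a set of predictions against the actual values.
weights = np.array([23.1, 23.4, 22.9, 23.6, 23.2])   # illustrative data
print(normalize(weights))
print(rmse([23.1, 23.4, 22.9], [23.0, 23.5, 23.0]))

A target range such as [-1, 1] is a common choice when the hidden activation is a hyperbolic tangent, since that function saturates outside roughly that interval.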
2.2 Genetic algorithms
GAs are a method of searching for optimized factors analogous to Darwin's survival of the fittest and are based on a biological evolution process. The evolution process is random yet guided by a selection mechanism based on the fitness of individual structures. There is a population of a given number of individuals, each of which represents a particular set of defined variables. Fitness is determined by the measurable degree of approach to the ideal. The "fittest" individuals are permitted to "reproduce" through a recombination of their variables, in the hope that their "offspring" will prove to be even better adapted. In addition to the strict probabilities dictated by recombination, a small mutation rate is also factored in. Less-fit individuals are discarded in the subsequent iteration, and each generation progresses toward an optimal solution.
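As a rough illustration of the selection, recombination (crossover), and mutation loop described above, here is a minimal real-coded GA sketch in Python. It is not the authors' implementation: the parameter bounds, the target weight, and the predict_weight() placeholder (standing in for the trained BPNN surrogate) are hypothetical.

import random

BOUNDS = [(200.0, 240.0), (60.0, 120.0), (10.0, 30.0)]  # illustrative bounds, e.g. melt temp, pack pressure, cool time
TARGET_WEIGHT = 23.0                                     # illustrative target quality characteristic

def predict_weight(x):
    # Placeholder surrogate; in the proposed system this role is played by the BPNN.
    return 20.0 + 0.01 * x[0] + 0.005 * x[1] + 0.02 * x[2]

def fitness(x):
    # Closer to the target weight means fitter (larger fitness).
    return -abs(predict_weight(x) - TARGET_WEIGHT)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Recombine variables by picking each gene from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.1):
    # With a small probability, resample a gene uniformly within its bounds.
    return [random.uniform(lo, hi) if random.random() < rate else v
            for v, (lo, hi) in zip(x, BOUNDS)]

def genetic_search(pop_size=30, generations=100):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)      # fittest individuals first
        survivors = population[: pop_size // 2]         # discard the less-fit half
        offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = genetic_search()
print(best, predict_weight(best))

Defining fitness as the negative deviation from the target weight makes "maximizing fitness" equivalent to driving the predicted quality characteristic toward its target, which mirrors how the search over the BPNN prediction model is described in this study.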