RISS Academic Research Information Service

      • KCI-indexed

        Loss of coolant accident analysis under restriction of reverse flow

        Majdi I. Radaideh, Tomasz Kozlowski, Yousef M. Farawila. Korean Nuclear Society, 2019. Nuclear Engineering and Technology Vol.51 No.6

        This paper analyzes a new method for reducing boiling water reactor fuel temperature during a Loss of Coolant Accident (LOCA). The method uses a device called the Reverse Flow Restriction Device (RFRD) at the inlet of fuel bundles in the core to prevent coolant loss from the bundle inlet due to the reverse flow after a large break in the recirculation loop. The device allows for flow in the forward direction, which occurs during normal operation, while after the break, the RFRD changes its status to prevent reverse flow. In this paper, a detailed simulation of LOCA has been carried out using the U.S. NRC's TRACE code to investigate the effect of the RFRD on the flow rate as well as the peak clad temperature of BWR fuel bundles during three different LOCA scenarios: small break LOCA (25% LOCA), large break LOCA (100% LOCA), and double-ended guillotine break (200% LOCA). The results demonstrated that the device could substantially block flow reversal in fuel bundles during LOCA, allowing coolant to remain in the core during the coolant blowdown phase. The device can retain additional cooling water after activation of the emergency systems, which maintains the peak clad temperature at lower levels. Moreover, the RFRD achieved the reflood phase (when the saturation temperature of the clad is restored) earlier than without the RFRD.
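The RFRD's behavior as described in the abstract is essentially that of a check valve: pass forward flow, block reverse flow. A minimal illustrative sketch (not the TRACE component model; the sign convention and flow values are assumptions):

```python
def rfrd_flow(inlet_flow_kg_s: float) -> float:
    """Idealized RFRD: forward flow (positive) passes unchanged,
    attempted reverse flow (negative) is blocked entirely."""
    return inlet_flow_kg_s if inlet_flow_kg_s > 0.0 else 0.0

# During normal operation the bundle sees its forward flow;
# after the break, reverse flow out of the bundle inlet is held at zero.
normal = rfrd_flow(12.5)    # forward flow passes: 12.5 kg/s
blowdown = rfrd_flow(-8.0)  # reverse flow blocked: 0.0 kg/s
```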

      • SCIE / SCOPUS / KCI-indexed

        Analyzing nuclear reactor simulation data and uncertainty with the group method of data handling

        Majdi I. Radaideh, Tomasz Kozlowski. Korean Nuclear Society, 2020. Nuclear Engineering and Technology Vol.52 No.2

        Group method of data handling (GMDH) is considered one of the earliest deep learning methods. Deep learning has gained additional interest in today's applications due to its capability to handle complex and high-dimensional problems. In this study, multi-layer GMDH networks are used to perform uncertainty quantification (UQ) and sensitivity analysis (SA) of nuclear reactor simulations. GMDH is utilized as a surrogate/metamodel to replace high-fidelity computer models with cheap-to-evaluate surrogate models, which facilitate UQ and SA tasks (e.g. variance decomposition, uncertainty propagation). GMDH performance is validated through two UQ applications in reactor simulations: (1) a low-dimensional input space (two-phase flow in a reactor channel), and (2) a high-dimensional space (8-group homogenized cross-sections). In both applications, GMDH networks show very good performance, with small mean absolute and squared errors as well as high accuracy in capturing the target variance. GMDH is utilized afterward to perform UQ tasks such as variance decomposition through Sobol indices and GMDH-based uncertainty propagation with a large number of samples. GMDH performance is also compared to that of other surrogates, including Gaussian processes and polynomial chaos expansions. The comparison shows that GMDH has competitive performance with the other methods for the low-dimensional problem and reliable performance for the high-dimensional problem.
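The surrogate-based UQ workflow described above (fit a cheap model, then do variance decomposition on it) can be sketched with a plain least-squares fit standing in for the GMDH network, and a Saltelli-style pick-freeze estimator for first-order Sobol indices. The test function, sample sizes, and linear surrogate are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Expensive-model stand-in: y = x0 + 2*x1, so with independent uniform
# inputs the variance splits 1:4 (true Sobol indices 0.2 and 0.8).
def model(x):
    return x[:, 0] + 2.0 * x[:, 1]

# Fit a cheap linear surrogate on a small training set
# (stands in for the multi-layer GMDH network in the paper).
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = model(X_train)
coef, *_ = np.linalg.lstsq(
    np.column_stack([X_train, np.ones(len(X_train))]), y_train, rcond=None
)
surrogate = lambda x: x @ coef[:2] + coef[2]

# Pick-freeze estimate of first-order Sobol indices on the surrogate:
# replace one column of sample matrix A with the matching column of B.
N = 20_000
A = rng.uniform(-1, 1, size=(N, 2))
B = rng.uniform(-1, 1, size=(N, 2))
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]))
S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (surrogate(ABi) - fA)) / var)
```

Because evaluations run on the surrogate rather than the original model, the tens of thousands of samples the estimator needs become affordable, which is the point of the surrogate/metamodel step in the abstract.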

      • SCIE / SCOPUS / KCI-indexed

        On using computational versus data-driven methods for uncertainty propagation of isotopic uncertainties

        Majdi I. Radaideh, Dean Price, Tomasz Kozlowski. Korean Nuclear Society, 2020. Nuclear Engineering and Technology Vol.52 No.6

        This work presents two different methods for quantifying and propagating the uncertainty associated with fuel composition at end of life for cask criticality calculations. The first approach, the computational approach, uses parametric uncertainties, including those associated with nuclear data, fuel geometry, material composition, and plant operation, to perform forward depletion on Monte Carlo-sampled inputs. These uncertainties are based on experimental data and prior experience in criticality safety. The second approach, the data-driven approach, relies on using radiochemical assay data to derive code bias information. The code bias data are used to perturb the isotopic inventory in the data-driven approach. For both approaches, the uncertainty in k_eff for the cask is propagated by performing forward criticality calculations on sampled inputs using the distributions obtained from each approach. It is found that the data-driven approach yielded an uncertainty higher than that of the computational approach by about 500 pcm. An exploration is also done to see whether considering correlation between isotopes at end of life affects k_eff uncertainty, and the results demonstrate an effect of about 100 pcm.
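The forward-propagation step both approaches share boils down to: sample the uncertain inputs, evaluate the model on each sample, and take the spread of the outputs. A toy numpy sketch (the nominal k_eff, sensitivities, and 2% input uncertainty below are invented for illustration, not taken from the paper, and a linear response stands in for the criticality code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a cask criticality code: k_eff responds linearly to
# relative perturbations of three isotope concentrations.
nominal_keff = 0.92
sensitivity = np.array([0.15, -0.05, 0.02])  # dk per unit relative change

# Forward propagation: sample relative isotopic perturbations
# (1-sigma = 2%), evaluate the model, take the spread of k_eff in pcm.
samples = rng.normal(0.0, 0.02, size=(50_000, 3))
keff = nominal_keff + samples @ sensitivity
sigma_pcm = keff.std() * 1e5
```

In the paper the two approaches differ only in where the input distributions come from (parametric uncertainties versus assay-derived code bias); the propagation machinery is the same.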

      • KCI-indexed

        PESA: Prioritized experience replay for parallel hybrid evolutionary and swarm algorithms - Application to nuclear fuel

        Majdi I. Radaideh, Koroush Shirvan. Korean Nuclear Society, 2022. Nuclear Engineering and Technology Vol.54 No.10

        We propose a new approach called PESA (Prioritized replay Evolutionary and Swarm Algorithms), combining the prioritized replay of reinforcement learning with hybrid evolutionary algorithms. PESA hybridizes different evolutionary and swarm algorithms, such as particle swarm optimization, evolution strategies, simulated annealing, and differential evolution, with a modular approach to accommodate other algorithms. PESA hybridizes three algorithms by storing their solutions in a shared replay memory and then applying prioritized replay to frequently redistribute data between the constituent algorithms based on their fitness and priority values, which significantly enhances sample diversity and algorithm exploration. Additionally, greedy replay is used implicitly to improve PESA's exploitation close to the end of evolution. PESA's balance of exploration and exploitation during the search, together with parallel computing, results in consistently excellent performance over the wide range of experiments and problems presented in this work. PESA also shows very good scalability with the number of processors in solving an expensive problem: optimizing nuclear fuel in nuclear power plants. PESA's competitive performance and modularity over all experiments allow it to join the family of evolutionary algorithms as a new hybrid algorithm, unleashing the power of parallel computing for expensive optimization.
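The shared replay memory at the heart of PESA can be sketched as a fitness-ranked buffer that every constituent optimizer pushes to and samples from. The capacity, ranking rule, and rank-based sampling weights here are illustrative assumptions, not the paper's exact prioritization scheme:

```python
import random

class PrioritizedReplay:
    """Shared replay memory sketch: candidate solutions from several
    optimizers are stored with their fitness, and sampling favors
    high-fitness entries via rank-based weights."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.memory = []  # list of (fitness, solution), best first

    def push(self, solution, fitness):
        self.memory.append((fitness, solution))
        self.memory.sort(key=lambda t: t[0], reverse=True)
        del self.memory[self.capacity:]  # keep only the top entries

    def sample(self, k, rng):
        # rank-proportional priority: the best entry is most likely
        n = len(self.memory)
        weights = [n - i for i in range(n)]
        picks = rng.choices(self.memory, weights=weights, k=k)
        return [solution for _, solution in picks]
```

Each optimizer (e.g. the PSO, ES, and SA loops) would push its candidates after every generation and reseed part of its next population from `sample()`, which is how the replay memory redistributes good solutions between the algorithms.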

      • SCIE / SCOPUS / KCI-indexed

        Application of deep neural networks for high-dimensional large BWR core neutronics

        Rabie Abu Saleem, Majdi I. Radaideh, Tomasz Kozlowski. Korean Nuclear Society, 2020. Nuclear Engineering and Technology Vol.52 No.12

        Compositions of large nuclear cores (e.g. boiling water reactors) are highly heterogeneous in terms of fuel composition, control rod insertions, and flow regimes. For this reason, they usually lack a high order of symmetry (e.g. 1/4, 1/8), making it difficult to estimate their neutronic parameters for large spaces of possible loading patterns. A detailed hyperparameter optimization technique (a combination of manual and Gaussian process search) is used to train and optimize deep neural networks for the prediction of three neutronic parameters for the Ringhals-1 BWR unit: power peaking factors (PPF), control rod bank level, and cycle length. Simulation data are generated based on half-symmetry using the PARCS core simulator by shuffling a total of 196 assemblies. The results demonstrate promising performance by the deep networks, as acceptable mean absolute error values are found for the global maximum PPF (~0.2) and for the radially and axially averaged PPF (~0.05). The mean difference between targets and predictions for the control rod level is about 5% insertion depth. Lastly, cycle length labels are predicted with 82% accuracy. The results also demonstrate that 10,000 samples are adequate to capture about 80% of the high-dimensional space, with minor improvements found for larger numbers of samples. The promising findings of this work prove the ability of deep neural networks to resolve high-dimensionality issues of large cores in the nuclear field.
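In its simplest form, the hyperparameter optimization step above (the paper combines manual and Gaussian process search) reduces to scoring candidate configurations and keeping the best. A grid-search sketch in which a synthetic score stands in for actually training a network on PARCS data; all parameter names and values are illustrative assumptions:

```python
import itertools

# Synthetic stand-in for "train a deep network and return its validation
# MAE" -- in reality this is the expensive step the search wraps around.
# This toy score rewards depth/width and a learning rate near 1e-3.
def validation_mae(hidden_layers, width, lr):
    return abs(lr - 1e-3) * 100 + 1.0 / (hidden_layers * width)

grid = {
    "hidden_layers": [2, 4, 6],
    "width": [64, 128, 256],
    "lr": [1e-2, 1e-3, 1e-4],
}
candidates = [dict(zip(grid, values))
              for values in itertools.product(*grid.values())]
best = min(candidates, key=lambda cfg: validation_mae(**cfg))
```

A Gaussian-process search replaces the exhaustive grid with a model of the score surface, so far fewer (expensive) trainings are needed, but the select-by-validation-score structure is the same.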
