Enuresis as a Presenting Symptom of Graves' Disease: A Case Report
Hwang, Inseong; Park, Eujin; Lee, Hye Jin. Korean Society of Pediatric Nephrology, 2021. Childhood Kidney Diseases Vol.25 No.1
Enuresis is intermittent urinary incontinence during sleep at night in children aged 5 years or older. The main pathophysiology of enuresis involves nocturnal polyuria, abnormal sleep arousal, and low functional bladder capacity. In rare cases, enuresis is an early symptom of an endocrine disorder such as diabetes or a thyroid disorder. Herein, we report the case of a 12-year-old girl with enuresis as a rare initial presentation of Graves' disease. She had complained of nocturnal enuresis for a month before visiting our clinic, along with urinary frequency, headache, and weight loss. On physical examination, she had tachycardia, intention tremor, and a diffuse goiter on her anterior neck with a bruit on auscultation. Her thyroid function test results revealed hyperthyroidism, and Graves' disease was diagnosed because the thyroid-stimulating hormone receptor autoantibody test was positive. After treatment of Graves' disease with methimazole, the enuresis resolved within 2 weeks as she became clinically and biochemically euthyroid. In children with secondary enuresis, Graves' disease should be considered in the differential diagnosis, and signs of hyperthyroidism should be checked for carefully.
Layer-wise Memory Bottleneck Analysis of the Vision Transformer Inference Process Using the McSimA+ Simulator
Hwang, Inseong; Jang, Jihoon; Shin, Jin; Kim, Hyun. The Institute of Electronics and Information Engineers (IEIE), 2023. IEIE Conference Proceedings Vol.2023 No.6
As deep learning models continue to grow in scale, the number of parameters has increased sharply, causing a significant memory bottleneck in conventional von Neumann architecture-based systems. To address this issue, new memory technologies such as Processing-In-Memory (PIM) are being developed, and their importance is steadily growing. However, since PIM adds extra logic to the existing memory structure, an in-depth analysis of which workloads suit PIM is required in advance to prevent unnecessary overhead in the design process. In this paper, to verify the suitability of the recently popular Vision Transformer (ViT) model for PIM, we build a deep learning model analysis environment using the McSimA+ simulator and analyze the memory bottleneck of the ViT inference workload layer by layer. The analysis shows that the ViT, which is composed of embedding, multi-head self-attention, and multi-layer perceptron layers, is a highly memory-intensive workload: its Last-to-First Miss Ratio (LFMR) and Last-Level Cache Misses Per Kilo Instruction (LLC MPKI) are 88.64 and 45.31, respectively, on average. As a result, unlike computationally intensive convolutional neural networks (CNNs), the ViT is an appropriate workload for achieving significant system acceleration and power savings through PIM systems.
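For readers unfamiliar with the two metrics the abstract cites, the following is a minimal sketch, not taken from the paper: the function names and the example counter values are hypothetical, and it only illustrates how LFMR and LLC MPKI are conventionally derived from cache statistics such as those a simulator like McSimA+ reports.

# A minimal sketch (hypothetical names and counts): how the two
# memory-intensity metrics cited above are typically computed
# from simulator cache counters.

def lfmr(llc_misses: int, l1_misses: int) -> float:
    """Last-to-First Miss Ratio (%): share of first-level cache
    misses that also miss in the last-level cache and reach memory."""
    return 100.0 * llc_misses / l1_misses

def llc_mpki(llc_misses: int, instructions: int) -> float:
    """Last-Level Cache Misses Per Kilo Instruction."""
    return 1000.0 * llc_misses / instructions

# Illustrative, made-up counter values that reproduce the averages above:
print(lfmr(llc_misses=8_864, l1_misses=10_000))                  # 88.64
print(llc_mpki(llc_misses=4_531_000, instructions=100_000_000))  # 45.31

A high LFMR means most cache misses cannot be absorbed by the cache hierarchy and must be served by main memory, which is exactly the access pattern PIM architectures are designed to accelerate.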