RISS Academic Research Information Service

      • Exploring the Structure-property Relationships of Linear and Crosslinked Poly(ethylene Oxide) Polymer Membranes for Gas Separations

        Kline, Gregory K ProQuest Dissertations & Theses, University of Notre Dame 2018 Overseas doctoral dissertation (DDOD)

        The development of polymeric materials suitable for gas separation membrane applications is discussed in this dissertation. Compared to conventional gas separation systems, such as absorption, gas separation membrane systems are inherently smaller, easier to operate, and potentially more economically viable. Membranes with high permeability (for high gas throughput) and adequate selectivity (the ability to separate a given gas from a mixture) are desired. However, due to the inherent properties of polymeric materials, membranes with high permeabilities generally operate with low selectivities and vice versa. To combat this trade-off and produce materials with both high permeability and sufficiently high selectivity, the chemical and physical properties of polymeric materials must be strategically designed.

        The majority of this work explores strategies for incorporating rubbery poly(ethylene oxide) (PEO) into gas separation membranes. PEO is a promising material for CO2-related separations due to its high solubility selectivity for CO2 and its high diffusivity, which together give PEO-based materials excellent CO2-separation performance. However, pure PEO is mechanically weak and suffers from high crystallinity, which prevent its use in gas separation membranes. Therefore, this work explores strategies to incorporate PEO into copolymers, crosslinked networks, and semi-interpenetrating networks (s-IPNs). These systems have demonstrated improved mechanical properties, mitigated PEO crystallinity, and highly promising CO2-related gas separation performance.
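        The permeability/selectivity trade-off described in this abstract can be made concrete with a small sketch. The permeability values below are hypothetical, chosen only to illustrate the trend, and are not taken from the dissertation; ideal selectivity is simply the ratio of pure-gas permeabilities:

        ```python
        def ideal_selectivity(p_fast, p_slow):
            """Ideal selectivity of a membrane: ratio of pure-gas permeabilities."""
            return p_fast / p_slow

        # Hypothetical pure-gas CO2/N2 permeabilities in Barrer (illustrative only)
        glassy = {"CO2": 10.0, "N2": 0.2}         # low permeability, high selectivity
        rubbery_peo = {"CO2": 500.0, "N2": 25.0}  # high permeability, lower selectivity

        alpha_glassy = ideal_selectivity(glassy["CO2"], glassy["N2"])        # 50.0
        alpha_peo = ideal_selectivity(rubbery_peo["CO2"], rubbery_peo["N2"])  # 20.0
        ```

        The more permeable (rubbery) material shows the lower selectivity, which is the trade-off the dissertation's copolymer and crosslinking strategies aim to overcome.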

      • Systematically Missing Subject-Level Data in Longitudinal Research Synthesis

        Kline, David The Ohio State University 2015 Overseas doctoral dissertation (DDOD)

        When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies are longitudinal, missing data can occur at either the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this dissertation, we focus on missing subject-level variables, for which few methods have been developed or compared. We compare two multiple imputation approaches that have been proposed for missing subject-level data in single longitudinal studies: a joint modeling approach and a sequential conditional modeling approach. Based on analytical and empirical results for the case when all variables are normally distributed, we find the joint modeling approach preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Based on this preference, we develop a new joint model for multiple imputation of missing subject-level variables that models subject- and observation-level variables with distributions in the exponential family. Our model is built within the generalized linear models framework and uses normally distributed latent variables to account for dependence at both the subject and observation levels. When compared via simulation, the performance of our model is similar to or better than existing approaches for imputing missing subject-level variables with normal, Bernoulli, Poisson, and multinomial distributions. We illustrate our method by applying it to combine two longitudinal studies on the psychological and social effects of pediatric traumatic brain injury that have systematically missing subject-level data.
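        As a rough illustration of the joint-modeling idea compared in this abstract, the sketch below imputes a normally distributed subject-level covariate from its conditional distribution given a subject-level summary of the outcome, under an assumed bivariate normal joint model. All data and parameters are simulated for illustration; this is not the dissertation's model.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated data: subject-level covariate x, subject mean outcome ybar
        n = 500
        x = rng.normal(0.0, 1.0, n)
        ybar = 0.8 * x + rng.normal(0.0, 0.6, n)
        miss = rng.random(n) < 0.3          # ~30% of subjects missing x
        x_obs = np.where(miss, np.nan, x)

        # Fit the joint (bivariate normal) model on complete cases
        cc = ~miss
        mx, my = x_obs[cc].mean(), ybar[cc].mean()
        sx, sy = x_obs[cc].std(), ybar[cc].std()
        rho = np.corrcoef(x_obs[cc], ybar[cc])[0, 1]

        # Draw M imputations of x | ybar from the conditional normal
        M = 5
        cond_mean = mx + rho * (sx / sy) * (ybar[miss] - my)
        cond_sd = sx * np.sqrt(1.0 - rho**2)
        imputations = [rng.normal(cond_mean, cond_sd) for _ in range(M)]
        ```

        Each of the M completed data sets would then be analyzed separately and the results pooled, which is the multiple-imputation workflow the dissertation builds on.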

      • A fundamental study of the charge transport and morphology of regioregular poly(3-hexylthiophene)

        Kline, R. Joseph Stanford University 2005 Overseas doctoral dissertation (DDOD)

        Conjugated polymers include some of the most promising candidates for the active layer of low-cost thin-film transistors (TFTs) and bulk heterojunction photovoltaic (PV) cells. The charge carrier mobility of these conjugated polymers is the key materials property limiting the performance of both of these devices. This thesis outlines a fundamental investigation of the charge transport and morphology of the first high-mobility conjugated polymer, regioregular poly(3-hexylthiophene) (P3HT). The charge carrier mobility in TFTs was found to increase by four orders of magnitude as the molecular weight (MW) of P3HT is increased from 3,000 g/mol to 36,000 g/mol. P3HT films with different MWs provided an ideal system for correlating morphological changes in conjugated polymers to resulting changes in charge transport. Atomic force microscopy, X-ray diffraction, and grazing incidence X-ray scattering (GIXS) were used to measure changes in the crystallinity and crystal orientation associated with varying the spin-casting solvent, annealing conditions, substrate surface treatment, and drop-casting at a constant MW. The GIXS results showed that at a constant MW, in both low- and high-MW films, the mobility correlated to the strength of in-plane pi-stacking. When comparing different MWs, however, this correlation broke down. Rocking curves on samples with a chemically modified surface showed highly oriented crystals that were nucleated from the substrate and correlate with variations in charge transport. Switching to low-MW P3HT improves the overall crystallinity, the intensity of in-plane pi-stacking, and the concentration of highly oriented crystals, but the mobility is more than a factor of 100 lower than in high-MW P3HT. These counterintuitive results clearly show that the charge carrier mobility of conjugated polymers is coupled to several different aspects of the morphology. In the case of the low-MW films, the strong driving force for ordering creates grain boundaries that isolate the ordered regions from their neighbors, whereas in high-MW films the long chains connect the small ordered regions and provide a clear pathway for charges to move through the film. These results were used to develop a model relating charge transport and structure that can be used as a guide for the development of new, improved chemical structures.

      • Confirmatory item factor analysis investigating adolescent gender differences in applied quantitative knowledge

        Kline, Tracy Lynn University of Virginia 2006 Overseas doctoral dissertation (DDOD)

        The intense study of mathematical gender differences is not recent. Previous research has investigated why gender differences exist, citing social and environmental factors such as anxiety, teacher expectation, and class selection. While external (i.e., environmental) influences on gender performance are important, the current research takes an in-depth look at mathematical items, building an internally focused research framework. Item complexity and gender differences at the item level are examined to investigate plausible mediators of performance differences. It is hypothesized that mathematical word problems will prove to be more complex than previously thought, that this item-level complexity differentially influences male and female performance, and that relationships to non-quantitative cognitive factors can mediate item-level gender differences. Baron and Kenny's causal steps mediation methodology was employed. Theoretically, cognitive abilities (reasoning and processing speed) are associated with quantitative ability and show gender differences; once those abilities are accounted for in the analytic model, item-level gender differences are expected to disappear. A portion of the Woodcock-Johnson III standardization sample was selected, containing N = 3,874 school-aged children ranging from 9 to 19 years of age (M = 13.06). The Applied Problems test of the WJ-III Achievement scale was central to the item-level mathematical mediation analyses. Confirmatory factor analyses tested within-item complexity and possible mediation effects. Results suggest that mathematical word problems are complex and that individual items contain aspects of reasoning, speed, and spatial ability, supporting previous research by Carroll (1996), who postulated that mathematical ability is highly related to other cognitive factors (reasoning, spatial ability, and processing speed). Showing that mathematical items can exhibit strong relationships to indicators of cognitive ability further reinforces this link and could be indicative of differential processing strategies. However, the relationship between gender and these item-level cognitive ability associations was not strong enough to validate a mediation model. Future research should pursue within-item complexity and mathematical item-level gender performance differences in other achievement tests to assess previously undetected bias and strengthen mathematical ability measurement tools.
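        Baron and Kenny's causal steps approach used in this dissertation can be sketched with three regressions; the simulated data and effect sizes below are hypothetical, not the WJ-III data:

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        x = rng.normal(size=n)                       # predictor (e.g., a group indicator)
        m = 0.6 * x + rng.normal(size=n)             # mediator (e.g., processing speed)
        y = 0.5 * m + 0.1 * x + rng.normal(size=n)   # outcome (e.g., item performance)

        def slope(design, resp):
            """OLS coefficients via least squares (a column of ones adds the intercept)."""
            X = np.column_stack([np.ones(len(resp))] + design)
            beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
            return beta[1:]

        c_total = slope([x], y)[0]           # Step 1: X -> Y (total effect)
        a_path = slope([x], m)[0]            # Step 2: X -> M
        c_prime, b_path = slope([x, m], y)   # Step 3: X + M -> Y (direct effect + M)
        ```

        Mediation is indicated when the direct effect c_prime is substantially attenuated relative to the total effect c_total once the mediator enters the model.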

      • Controller performance improvements for reentry through Earth's atmosphere for low L/D spacecraft

        Kline, Eric Michael Texas A&M University 2001 Overseas doctoral dissertation (DDOD)

        Development of the space-based industry created on the International Space Station is limited by reentry through Earth's atmosphere. The Lyapunov method is applied to improve reentry controller performance. A predictor-corrector controller (PCCPA) derived from the Apollo program reentry guidance is the baseline against which candidate controllers are compared. Controllers designed to guide the reentry vehicle to within 1 nm of the final target while satisfying a 51.55 BTU/ft²/sec heat rate constraint and a 4 g load constraint are evaluated in a six-degree-of-freedom simulation environment. Three Lyapunov controllers developed in the initial design phase are subjected to variations in vehicle L/D. Controller gains are iteratively determined to satisfy reentry requirements for nominal reentry; however, different controller gain sets have different levels of performance robustness. Only one controller, the Lyapunov Controller with Gain Set 3, satisfies reentry requirements over a larger range of L/D variations than the PCCPA. In the final design phase, three hybrid reentry controllers are developed by combining Lyapunov-based guidance routines with PCCPA transition logic and are evaluated for variations in vehicle L/D, weight, and initial flight path angle (γ₀). Hybrid Predictor Corrector/Lyapunov Controller #1 (HPCLC1) demonstrates the greatest performance robustness and satisfies all reentry requirements over a larger range of L/D, weight, and γ₀ variations than the PCCPA. Hybrid controller #3 meets all reentry requirements for a larger range of L/D variations than the PCCPA, while hybrid controller #2 fails to outperform the PCCPA. All seven reentry controllers are linearized at four operating points, and linear robust control analysis techniques are employed to quantify controller performance. Two robustness parameters, η, a guaranteed domain of stability, and J_w, a measure of system performance with respect to the worst possible direction of the unit initial condition vector, are evaluated for each controller at all operating points. η has greater flexibility than J_w because it can be evaluated for unstable linear controllers that are stabilizable, and for systems that experience control position and rate saturation. η also forecasts controller robustness and accurately indicates that the HPCLC1 is the most robust controller considered in this study.
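        The Lyapunov method behind these controllers rests on finding a positive definite P solving A^T P + P A = -Q for the linearized dynamics; P > 0 then certifies stability, which underlies a guaranteed stability domain like the η parameter above. A minimal sketch for a toy stable second-order system, not the dissertation's reentry model:

        ```python
        import numpy as np

        def solve_lyapunov(A, Q):
            """Solve A^T P + P A = -Q by vectorizing the linear operator:
            vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P)."""
            n = A.shape[0]
            I = np.eye(n)
            L = np.kron(I, A.T) + np.kron(A.T, I)
            return np.linalg.solve(L, -Q.flatten()).reshape(n, n)

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # toy stable dynamics (poles at -1, -2)
        P = solve_lyapunov(A, np.eye(2))
        # P symmetric positive definite => V(x) = x^T P x is a Lyapunov function
        is_stable = bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
        ```

        For this A the solution is P = [[1.25, 0.25], [0.25, 0.25]], which is positive definite, certifying stability of the linearized system.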

      • The role of integrin alpha9beta1 during angiogenesis induced by VEGF-A

        Kline, Ahnika University of California, San Francisco 2011 Overseas doctoral dissertation (DDOD)

        Integrins are heterodimeric membrane proteins that mediate cell adhesion, migration, and proliferation in a number of physiologic processes, including angiogenesis. The integrin alpha9beta1 binds a number of ligands, including the angiogenic growth factor VEGF-A, and mediates cell adhesion and proliferation on VEGF-A. Blockade of alpha9beta1 inhibits angiogenesis in the chick and quail chorioallantoic membranes, but the specific cell type(s) in which alpha9beta1 is expressed and the mechanisms by which it mediates angiogenesis have not been described. The complete knockout of alpha9 results in lethal chylothoraces between postnatal days 6 and 10, but has no reported deficits of vasculogenesis or developmental angiogenesis. To study the role of alpha9beta1 during pathologic angiogenesis, we generated a mouse in which alpha9 can be temporally deleted in all tissues following administration of doxycycline. Administration of doxycycline after the development of the lymphatics allowed us to define a role for alpha9beta1 during pathologic angiogenesis in adult mice. Deletion of alpha9beta1 inhibited angiogenesis in vivo induced by VEGF-A, but not by bFGF. We determined that alpha9beta1 is expressed on cells which also express the pericyte markers NG2 and PDGFRbeta, and not on endothelial cells expressing PECAM. Pericyte-specific deletion of the integrin using an alpha-smooth muscle actin promoter was sufficient to inhibit angiogenesis in response to VEGF-A due to a failure of pericyte recruitment before endothelial cell recruitment. Taken together, these results indicate that pathologic angiogenesis in response to VEGF-A requires pericyte recruitment before endothelial cell recruitment, and that this pericyte recruitment requires expression of integrin alpha9beta1 on pericytes.

      • Characterizing the Microvascular Branching Geometry of the Dual Blood Supply to the Liver with Micro-CT

        Kline, Timothy Lee University of Minnesota 2013 Overseas doctoral dissertation (DDOD)

        Microvascular branching geometries determine the efficacy of the transport of nutrients and metabolic products to and from tissues in large-bodied organisms. The general 'plan' is that an artery supplies oxygen, nutrients, and hormones to the tissue and a vein removes metabolic products from that tissue. Blood flow to the organ is controlled by the metabolic demand of the organ through a feedback mechanism acting on the arterial lumen diameter. The liver differs from other organs by having two vascular systems delivering its blood: the hepatic artery and the portal vein. The hepatic artery supplies the oxygen needed by liver cells, and the portal vein delivers the molecules absorbed by the gut, which need to be processed by the liver tissue for use by other organs in the body. However, how the hepatic artery and portal vein interact is not fully understood in terms of how their relative flows are adjusted, passively and/or actively, to meet the needs of the liver tissue. This dissertation explores the hypothesis that the hepatic artery's blood mixes with the portal vein's proximal to the hepatic sinusoids (where their mixing is traditionally thought to occur). This is performed utilizing micro-CT to image rat liver lobes injected with a contrast polymer. During the process of exploring this hypothesis, a number of image analysis tools needed to be developed. For one, understanding the level of accuracy with which geometrical measurements can be made by micro-CT is very important, because vascular resistance to flow is proportional to the interbranch segment length and inversely proportional to the fourth power of the lumen diameter. Moreover, a single vessel tree contained in a micro-CT image has hundreds, if not thousands, of individual interbranch segments, and knowledge of the interconnectivity relationship between the segments is important for modeling such properties as pressure distributions and relative blood flow rates.
        For these reasons, automated methods were developed to measure the length and diameter of interbranch segments and to extract the hierarchical structure of vascular trees. These methods were then compared to a gold-standard measurement (obtained by measuring the lengths and diameters of interbranch segments of a microvascular cast by 'hand' under a microscope) to understand the level of accuracy obtainable by micro-CT. Having successfully developed accurate automated measurement algorithms (thereby replacing the time-consuming gold-standard measurement method), the algorithms were then used to compare and validate other algorithmic approaches, particularly those that quickly extract geometrical information about a vascular bed composed of many vessel trees within a micro-CT image. Because the hepatic artery and portal vein are in close proximity to one another as they distribute throughout the liver, a special segmentation method was needed to separate these concomitant vessel systems, which may have 'false' connections resulting from blurring of the micro-CT image. Finally, an anatomic study of the vasculature of the liver was performed, which offered insight into the interaction between the hepatic artery and portal vein. In specimens where only the portal vein was injected with contrast, only the portal vein was opacified, whereas in hepatic artery injections, both the hepatic artery and portal vein were opacified. Also, when different contrast agents were injected into the hepatic artery and the portal vein, the hepatic artery's contrast agent was observed to be mixed in with the different contrast injected into the portal vein. In addition, in high-resolution scans (5 μm cubic voxels), anatomic evidence for hepatic arteriolo-portal venular shunts occurring between hepatic artery and portal vein branches was found.
        Simulations were performed to rule out the possibility that the observed shunts were artifacts caused by image blurring. Thus, mixing of the hepatic artery and portal vein can occur proximal to the sinusoidal level, and hepatic arteriolo-portal venular shunts may function as a one-way valve-like mechanism, allowing flow only from the hepatic artery to the portal vein (and not the other way around).
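        The resistance relation stated in this abstract (proportional to segment length, inversely proportional to the fourth power of lumen diameter) is the Hagen-Poiseuille law. A minimal sketch in arbitrary units, showing why small diameter-measurement errors matter so much:

        ```python
        import math

        def segment_resistance(length, diameter, viscosity=1.0):
            """Hagen-Poiseuille resistance of a vessel segment: R = 128*mu*L / (pi*d^4)."""
            return 128.0 * viscosity * length / (math.pi * diameter**4)

        # Halving the lumen diameter raises resistance 16-fold; doubling length doubles it
        r_base = segment_resistance(length=1.0, diameter=0.10)
        r_narrow = segment_resistance(length=1.0, diameter=0.05)
        ratio = r_narrow / r_base  # 16.0
        ```

        The d^4 dependence is why the dissertation validates micro-CT diameter measurements against hand measurements of a cast: a 10% diameter error alone produces roughly a 40% error in computed resistance.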

      • Convex Optimization as a Foundation for the Representation and Analysis of Protein NMR Spectroscopy Data

        Kline, Jeffrey Thomas The University of Wisconsin - Madison 2010 Overseas doctoral dissertation (DDOD)

        A fundamental technique of protein NMR spectroscopy is time-domain windowing of the free induction decay (FID). Typical analysis of the FID involves the identification of "peaks" in the spectrum of the FID. We address two problems associated with this task: the identification of peaks in a crowded region of the spectrum, and noise. Our starting point is a methodology based on convex optimization for window construction that is well-defined for the n-dimensional FID. Using a parameterized family of solutions to a particular convex problem, we introduce a redundant and invertible linear representation of the FID that is suitable for removing noise from the noisy FID. The invertibility of this representation is remarkable, since the original convex problem and resulting solution are built without consideration of a redundant representation, let alone an invertible one. We also invoke convex optimization for the construction of orthonormal windows, each window required to have minimal coherence with the primary feature, or atom, of the FID. Due to the mutual orthogonality of the windows, and since each of the windows is coherent with the atom, the "true peaks" of the FID spectrum can be discerned from the "false peaks" not due to atoms. Using these windows, we are able to produce a representation of the FID that allows one to decluster a group of overlapping peaks in a crowded and noisy spectrum beyond what is possible using linear methods. Using the analytic solution to one of the convex problems we consider, we also demonstrate a quantitative and theoretically justified lower bound for t_max, the maximum sample time of the FID. In higher-dimensional experiments, locating the region with utility in the time domain is critical to shortening the total time needed to conduct an experiment. To our knowledge, this is the first theoretical, rather than empirical, estimate of t_max and the region with utility.
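        The time-domain windowing this abstract builds on can be illustrated with a conventional exponential (line-broadening) window applied to a synthetic one-dimensional FID. The signal parameters below are invented for illustration, and the window is the standard exponential apodization rather than the dissertation's convex-optimization construction:

        ```python
        import numpy as np

        # Synthetic 1-D FID: one resonance at f0 Hz decaying with time constant T2
        n, dt = 1024, 1e-3                  # 1024 points, 1 ms dwell time
        t = np.arange(n) * dt
        f0, t2 = 100.0, 0.05
        fid = np.exp(2j * np.pi * f0 * t) * np.exp(-t / t2)
        fid += 0.05 * (np.random.default_rng(2).standard_normal(n) * (1 + 1j))

        # Exponential window: trades spectral resolution for noise suppression
        lb = 5.0                            # line broadening in Hz
        window = np.exp(-np.pi * lb * t)
        spectrum = np.fft.fft(fid * window)
        freqs = np.fft.fftfreq(n, dt)

        peak_freq = freqs[np.argmax(np.abs(spectrum))]  # should land near f0
        ```

        Windowing suppresses the late, noise-dominated part of the FID before the Fourier transform; the convex-optimization windows described above generalize this idea to the n-dimensional case with designed coherence properties.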
