Time-Resolved Cryo-EM Studies on Translation and Cryo-EM Studies on Membrane Proteins
Fu, Ziao Columbia University ProQuest Dissertations & Theses 2019 Overseas doctorate (DDOD)
The single-particle reconstruction technique is one of the major approaches to studying ribosome structure and membrane proteins. In this thesis, I report the use of the time-resolved cryo-EM technique to study the structure of short-lived ribosome complexes and the conventional cryo-EM technique to study the structure of ribosome complexes and membrane proteins. The thesis consists of three parts. The first part is the development of the time-resolved cryo-EM technique. I document the protocol for capturing short-lived states of molecules with the time-resolved cryo-EM technique using a microfluidic chip. Working closely with Dr. Lin's lab at the Columbia University Engineering Department, I designed and tested a well-controlled and effective microspraying-plunging method to prepare cryo-grids. I demonstrated the performance of this device by a 3-Å reconstruction from about 4000 particles collected on grids sprayed with apoferritin suspension. The second part is the application of the time-resolved cryo-EM technique to studying short-lived ribosome complexes in bacterial translation processes on the time scale of 10-1000 ms. I document three applications to bacterial translation processes. The initiation project was a collaboration with Dr. Gonzalez's lab at the Department of Chemistry, Columbia University. The termination and recycling projects were collaborations with Dr. Ehrenberg's lab at the Department of Cell and Molecular Biology, Uppsala University. I captured and solved short-lived ribosome intermediate complexes in these processes. The results demonstrate the power of time-resolved cryo-EM to determine how a time-ordered series of conformational changes contributes to the mechanism and regulation of one of the most fundamental processes in biology. The last part is the application of the conventional cryo-EM technique to study ribosome complexes and membrane proteins. This part includes five collaboration projects. The human GABA(B) receptor project is a collaboration with Dr. Fan at the Department of Pharmacology, Columbia University. The cyclic nucleotide-gated (CNG) channel project is a collaboration with Dr. Yang at the Department of Biological Sciences, Columbia University. The cryo-EM study of the Ybit-70S ribosome complex and the cystic fibrosis transmembrane conductance regulator (CFTR) project are collaborations with Dr. Hunt at the Department of Biological Sciences, Columbia University. The cryo-EM study of the native lipid bilayer in a membrane protein transporter is a collaboration with Dr. Hendrickson at the Department of Biochemistry and Molecular Biophysics, Columbia University, and Dr. Guo at the Department of Medicinal Chemistry, Virginia Commonwealth University.
Structure-Conductivity Relationships in Group 14 Based Single-Molecule Wires
Su, Timothy A Columbia University ProQuest Dissertations & Theses 2016 Overseas doctorate (DDOD)
Single-molecule electronics is an emerging subfield of nanoelectronics where the ultimate goal is to use individual molecules as the active components in electronic circuitry. Over the past century, chemists have developed a rich understanding of how a molecule's structure determines its electronic properties; transposing the paradigms of chemistry into the design and understanding of single-molecule electronic devices can thus provide a tremendous impetus for growth in the field. This dissertation describes how we can harness the principles of organosilicon and organogermanium chemistry to control charge transport and function in single-molecule devices. We use a scanning tunneling microscope-based break-junction (STM-BJ) technique to probe structure-conductivity relationships in silicon- and germanium-based wires. Our studies ultimately demonstrate that charge transport in these systems is dictated by the conformation, conjugation, and bond polarity of the sigma-backbone. Furthermore, we exploit principles from reaction chemistry such as strain-induced Lewis acidity and σ-bond stereoelectronics to create new types of digital conductance switches. These studies highlight the vast opportunities that exist at the intersection between chemical principles and single-molecule electronics. (Abstract shortened by ProQuest.).
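In STM-BJ work of the kind described above, single-junction conductances are usually read off histograms compiled from thousands of breaking traces, referenced to the conductance quantum G0 = 2e²/h. The sketch below is a minimal, illustrative version of that bookkeeping; the synthetic traces, bin range, and plateau value are assumptions for illustration, not data or analysis details from this dissertation.

```python
import numpy as np

G0 = 7.748091729e-5  # conductance quantum 2e^2/h, in siemens

def conductance_histogram(traces_siemens, bins=200, lo=1e-6, hi=10.0):
    """Build a 1-D log-binned conductance histogram in units of G0.

    traces_siemens: list of 1-D arrays, one measured conductance trace
    per junction elongation (synthetic stand-ins are generated below).
    """
    g_in_g0 = np.concatenate(traces_siemens) / G0
    g_in_g0 = g_in_g0[(g_in_g0 > lo) & (g_in_g0 < hi)]
    edges = np.logspace(np.log10(lo), np.log10(hi), bins + 1)
    counts, _ = np.histogram(g_in_g0, bins=edges)
    return counts, edges

# Synthetic stand-in for measured traces: a featureless tunneling background
# plus a molecular plateau near 1e-4 G0 (value chosen only for illustration).
rng = np.random.default_rng(0)
traces = []
for _ in range(1000):
    background = 10.0 ** (-rng.uniform(0, 6, size=500)) * G0
    plateau = 1e-4 * G0 * np.exp(rng.normal(0, 0.3, size=100))
    traces.append(np.concatenate([background, plateau]))

counts, edges = conductance_histogram(traces)
peak_bin = np.argmax(counts)
print(f"most populated bin near {np.sqrt(edges[peak_bin] * edges[peak_bin + 1]):.2e} G0")
```

A peak in such a histogram marks the most probable single-molecule conductance; in the real measurements, shifts of that peak with molecular structure are what reveal the conformation, conjugation, and bond-polarity effects summarized in the abstract.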
Reeping, Paul Michael Columbia University ProQuest Dissertations & Theses 2022 Overseas doctorate (DDOD)
Gun-free zones have the potential to increase or decrease the risk of gun crime and active shootings that occur within their borders. People who assume that gun-free zones increase gun-related outcomes believe that because law-abiding citizens cannot carry a firearm, and thus cannot engage in defensive gun use if a threat presents itself, gun-free zones become soft targets for crime. Those who assume gun-free zones decrease gun-related outcomes believe the absence of firearms eliminates the risk of an escalation of violence to gunfire. Up until this point, there has been no quantitative research on the effectiveness of gun-free zones, despite the topic being highly controversial. This dissertation was therefore the first to: create and describe a dataset of active shootings in the United States, and assess the extent to which defensive gun use occurs during these events (Aim 1); conduct a cross-sectional ecological analysis in St. Louis, Missouri (2019), both city and county, comparing the proportion of crimes committed with a firearm in gun-free school zones with that in the gun-allowing zones immediately surrounding them, to quantify the effectiveness of gun-free school zones (Aim 2); and conduct a spatial ecological case-control study in the United States, where cases are the locations or establishments of active shootings between 2014 and 2020, to quantify the impact of gun-free zones on active shootings and assess whether active shooters target gun-free zones (Aim 3). The results of Aim 1 suggested that defensive gun use during active shootings was rare, usually did not stop the attack, and did not decrease the number of casualties compared to active shootings without defensive gun use. Aim 1 also thoroughly described the novel active shooting dataset. In Aim 2, I found that gun-free school zones had fewer crimes committed with a firearm than corresponding gun-allowing zones in St. Louis, MO in 2019: there were 13.4% fewer crimes involving a firearm in gun-free school zones, with a confidence interval ranging from 23.6% fewer to 1.8% fewer (p-value: 0.025). Aim 3 determined that the conditional odds of an active shooting in an establishment that was gun-free were 0.375 times the odds of an active shooting in a gun-allowing establishment, with a confidence interval ranging from 0.193 to 0.728 (p-value < 0.01), suggesting that gun-free zones did not attract active shooters and may even be preventative. In conclusion, gun-free zones did not appear to increase gun-related outcomes and may even be protective against active shootings. Efforts across the United States to repeal laws related to gun-free zones, based on the belief that gun-free zones are targeted for violence, are therefore not backed by data. However, these are the first quantitative studies ever conducted on the effectiveness of gun-free zones, so more research is needed to build on the results of this dissertation.
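For readers unfamiliar with the odds-ratio estimates quoted above, the sketch below computes a crude (unmatched) odds ratio and its 95% Wald confidence interval from a hypothetical 2×2 table. The counts are invented purely for illustration, and the dissertation's Aim 3 analysis was a matched, conditional case-control design that this simple calculation does not reproduce.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% Wald confidence interval from a 2x2 table.

    a: exposed cases (e.g., active shootings in gun-free establishments)
    b: exposed controls; c: unexposed cases; d: unexposed controls.
    """
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lo, hi

# Hypothetical counts chosen only to illustrate an odds ratio below 1.
print(odds_ratio_ci(a=25, b=75, c=60, d=60))
```

An odds ratio below 1 with a confidence interval excluding 1, as reported in Aim 3, is what supports the conclusion that gun-free establishments were not preferentially targeted.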
Takahashi, Maressa Columbia University ProQuest Dissertations & Theses 2018 Overseas doctorate (DDOD)
The search for food and adequate nutrition determines much of an animal's behavior, as it must ingest the macronutrients, micronutrients, and water needed for growth, reproduction and body maintenance. These macro- and micronutrients are found in varying proportions and concentrations in different foods. A generalist consumer, such as many primates, faces the challenge of choosing the right combination of foods that confers adequate and balanced nutrition. Diet selection is further complicated and constrained by antifeedants, as well as digestive morphology and physiological limitations. Nutritional ecology is the study of the connected relationships between an organism, its nutrient needs (determined by physiological state), its diet selection, and the foraging behavior it uses within a specific food environment. Additionally, these relationships are complex and changeable since the nutrient needs of a consumer change over time and food resources (including their nutritional composition) vary spatiotemporally. Published data on primate nutritional ecology are limited, with most investigations of nutritional needs stemming from captive populations and few field studies. To contribute to the body of knowledge of nutritional ecology in natural populations, I examined the nutritional ecology of wild adult female blue monkeys, Cercopithecus mitis. I used the geometric framework (GF) to quantify nutritional patterns, as it allows simultaneous examination of multiple nutrients that may be driving foraging behavior and patterns of food intake. Blue monkeys are known to be generalist feeders, with flexible feeding behavior. The population I studied inhabits the Kakamega Forest, western Kenya. This forest has a history of variable human modification on a small scale, and offered a unique opportunity to examine environmental factors (e.g., degree of human modification of forest type, food availability), social factors (dominance rank), and physiological factors (reproductive demand) that may alter blue monkey nutritional strategies. From January to September 2015, a team of field assistants and I collected behavioral data from 3 study groups, intensively sampling 24 adult females that varied in dominance rank and reproductive condition. I used all-day focal follows to quantify feeding behavior, which allowed me to assess diet selection and nutrient intake on a daily basis. I also monitored subjects' daily movement. To assess food availability, I quantified vegetative differences among major habitat types within each group's home range and monitored biweekly changes in plant production of fruits and young leaves, which were major constituents of the plant-based diet. I collected > 300 food samples, as well as fecal samples, and analyzed them for macro-nutritional content using wet chemistry and near-infrared spectroscopy techniques. I combined data to examine patterns in diet and nutritional strategy on different scales: patterns across subjects, between groups and within the population as a whole, patterns in the diet on the food composition level versus nutrient intake level, and patterns in nutrient intake on a daily basis versus a long-term basis (i.e., over the course of the study period). Additionally, I evaluated factors that might affect variation in nutritional strategies, including a female's reproductive condition, dominance rank, habitat use, and degree of frugivory or folivory in daily intake, as well as food availability in the environment.
Kakamega blue monkeys ate a broad diet of over 445 food items (species-specific plant parts and insect morphotypes). Fruit was the preferred food, and particular species-specific fruits constituted the majority of important food items (i.e., those contributing > 1% of total caloric intake by group); many fruits were highly selected (i.e., eaten more than expected based on availability). Many species-specific young leaves also were important food items, though they were eaten in proportion to their availability, or even less often. Regardless of whether group diet was characterized by time spent feeding or by calories, fruit remained the largest constituent and young leaves the second largest. A subject's daily path length was negatively related to the proportion of fruit in the diet (by kcal) because females focused feeding in particular trees when important fruits ripened and thus traveled less. Daily path length was not related to group size, probably because females spread out when foraging to avoid within-group scramble competition over food. Group differences in the food composition of diets likely reflected habitat differences in food distribution. Comparison of the population's diet to data from previous studies showed that as study groups moved into new areas and habitats, they capitalized on new food resources, reinforcing the idea that blue monkeys are flexible feeders. During this study, subjects adjusted their diet in response to food availability in the environment, consuming more fruit (by percentage of diet and absolute kcal) when fruit was more available. (Abstract shortened by ProQuest.).
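The geometric framework mentioned above treats each day of feeding as a point in a nutrient space, commonly protein energy versus non-protein (carbohydrate plus lipid) energy. The sketch below shows that bookkeeping for a single hypothetical focal-follow day; the food items, masses, compositions, and Atwater-style conversion factors are illustrative assumptions, not values from this study.

```python
# Geometric-framework-style bookkeeping: one focal-follow day reduced to a
# point in (protein energy, non-protein energy) space. All numbers hypothetical.
KCAL_PER_GRAM = {"protein": 4.0, "carbohydrate": 4.0, "lipid": 9.0}

day_intake = [  # grams ingested and macronutrient fractions (dry-matter basis)
    {"item": "ripe fruit",   "grams": 180, "protein": 0.06, "carbohydrate": 0.55, "lipid": 0.03},
    {"item": "young leaves", "grams": 90,  "protein": 0.22, "carbohydrate": 0.35, "lipid": 0.02},
    {"item": "insects",      "grams": 10,  "protein": 0.60, "carbohydrate": 0.10, "lipid": 0.15},
]

protein_kcal = sum(f["grams"] * f["protein"] * KCAL_PER_GRAM["protein"] for f in day_intake)
nonprotein_kcal = sum(
    f["grams"] * (f["carbohydrate"] * KCAL_PER_GRAM["carbohydrate"]
                  + f["lipid"] * KCAL_PER_GRAM["lipid"])
    for f in day_intake
)
print(f"daily nutrient-space point: ({protein_kcal:.0f} kcal protein, "
      f"{nonprotein_kcal:.0f} kcal non-protein)")
```

Plotting many such daily points per female is what allows the framework to reveal whether intake is regulated toward a particular nutrient balance or toward a total-energy target.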
Gibson, James Charles Columbia University ProQuest Dissertations & Theses 2022 Overseas doctorate (DDOD)
Seafloor sedimentary depositional and erosional processes create a record of near- and far-field climatic and tectonic signals adjacent to continental margins and within oceanic basins worldwide. In this dissertation I study both modern and paleo-seafloor surface processes at three separate and distinct study sites: Cascadia offshore Oregon, U.S.A.; the Eastern North American Margin from south Georgia in the south to Massachusetts in the north; and the Deep Galicia Margin offshore Spain. I have the advantage of using modern geophysical methods and high-power computing resources; however, the study of seafloor processes at Columbia University's Lamont-Doherty Earth Observatory (LDEO) stretches back over ~80 yrs. Specifically, I use data collected during a variety of geophysical research cruises spanning the past ~50 yrs., the majority of which can be directly attributed to seagoing programs managed by LDEO. The modern seafloor is the integrated result of all previous near- and far-field processes. As such, I look below the seafloor using multi-channel seismic reflection data, which is the result of innumerable soundings stacked together to create an image of the sub-seafloor (paleo) horizons. I map, analyze and interpret the sub-seafloor sedimentary horizons using a variety of both novel and established methods. In turn, I use multi-beam sonar data, which is also the result of innumerable soundings, to map, analyze, and interpret the modern seafloor topography (bathymetry). Additionally, I look to the results from academic ocean drilling programs, which can provide information on both the composition and physical properties of sediments. The sediment composition alone can provide important information about both near- and far-field processes; however, when supplemented with physical properties (e.g., density/porosity), the results become invaluable.

In my second chapter, I use a compilation of multi-beam sonar bathymetry data to identify and evaluate 86 seafloor morphological features interpreted to represent large-scale erosional scours not previously recognized on the Astoria Fan offshore Oregon, U.S.A. The Astoria Fan is primarily composed of sediments transported from the margin to the deep ocean during Late Pleistocene interglacial periods. A significant portion of the sediments have been found to be associated with Late Pleistocene outburst flood events attributed to glacial lakes Bonneville and Missoula. The erosional scours provide a record of the flow path of the scouring event(s), which, if well understood, can provide important information for the study of past earthquakes, as the sedimentary record remains intact outside of the erosional force created by the massive flood events. I design and implement a Monte Carlo inversion to calculate the event(s) flow path at each individual scour location, which results in a comprehensive map of Late Pleistocene erosion on the Astoria Fan. The results indicate that at least 4 outburst flood events are recorded by the scour marks.

In my third chapter, I build a stratigraphic framework of the Eastern North American margin using a compilation of multi-channel seismic data. Horizon Au is a primary horizon within the stratigraphic framework and is thought to represent a significant margin-wide bottom-water erosional event associated with subsidence of the Greenland-Scotland Ridge and opening of Fram Strait in the late Eocene/early Oligocene.
A recent study found that the bottom water was enriched in fossil carbon, leading us to hypothesize that the bottom-water erosion recorded by horizon Au may have been facilitated by chemical weathering of the carbonate sediments. I use sediment isopachs to build a margin-wide model of the late Eocene/early Oligocene continental margin in order to estimate the volume of sediments eroded/dissolved during the event marked by horizon Au. The results indicate that ~170,000 km³ of sediments were removed, with a carbonate fraction of 42,500 km³, resulting in 1.15e18 mol CaCO3 going into solution. An influx of this magnitude likely played a role in significant climatic changes identified at the Eocene-Oligocene transition (EOT).

In my fourth chapter, I use a combination of 3D multi-channel seismic and multi-beam sonar bathymetry data collected during the Galicia 3D Seismic Experiment in 2013. The Galicia Bank is the largest of many crustal blocks and is located 120 km west of the coast on the Iberian Margin. The crustal blocks have been attributed to the opening of the North Atlantic Ocean in the Late Triassic/Middle Jurassic. The Galicia Bank is the source for the majority of sediments delivered to the Deep Galicia Margin, the focus of this study. I map the seafloor and 5 paleo-seafloor surfaces in order to study the controls on sediment delivery provided by the crustal blocks. The results show that the crustal blocks began as a barrier to, and remain a primary control on, sediment delivery pathways to the Deep Galicia basin. Additionally, the paleo-seafloor surfaces record morphological structures that can inform us about both near- and far-field past climatic and tectonic events, e.g., the Alpine Orogeny and Pleistocene interglacial periods.
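The mole figure quoted above follows from the eroded carbonate volume by straightforward unit conversion. A back-of-the-envelope check, assuming the 42,500 km³ carbonate fraction dissolves as solid calcite (density ≈ 2.71 g/cm³, molar mass ≈ 100 g/mol, porosity ignored; these constants are my assumptions, not stated in the abstract), reproduces the quoted value:

```python
# Back-of-the-envelope check of the dissolved-carbonate estimate.
carbonate_volume_km3 = 42_500        # carbonate fraction from the isopach-based model
cm3_per_km3 = 1e15                   # (1e5 cm)^3
calcite_density_g_cm3 = 2.71         # assumed solid calcite, porosity ignored
caco3_molar_mass_g_mol = 100.09

mass_g = carbonate_volume_km3 * cm3_per_km3 * calcite_density_g_cm3
moles = mass_g / caco3_molar_mass_g_mol
print(f"{moles:.2e} mol CaCO3")      # ~1.15e18 mol, consistent with the text
```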
Spectro-Temporal and Linguistic Processing of Speech in Artificial and Biological Neural Networks
Keshishian, Menoua Columbia University ProQuest Dissertations & Theses 2024 Overseas doctorate (DDOD)
Humans possess the fascinating ability to communicate the most complex of ideas through spoken language, without requiring any external tools. This process has two sides: a speaker producing speech and a listener comprehending it. While the two actions are intertwined in many ways, they entail differential activation of neural circuits in the brains of the speaker and the listener. Both processes are the active subject of artificial intelligence research, under the names of speech synthesis and automatic speech recognition, respectively. While the capabilities of these artificial models are approaching human levels, there are still many unanswered questions about how our brains do this task effortlessly. But the advances in these artificial models give us the opportunity to study human speech recognition through a computational lens that we did not have before. This dissertation explores the intricate processes of speech perception and comprehension by drawing parallels between artificial and biological neural networks, through the use of computational frameworks that attempt to model either the brain circuits involved in speech recognition or the process of speech recognition itself.

There are two general types of analyses in this dissertation. The first type involves studying neural responses recorded directly through invasive electrophysiology from human participants listening to speech excerpts. The second type involves analyzing artificial neural networks trained to perform the same task of speech recognition, as a potential model for our brains. The first study introduces a novel framework leveraging deep neural networks (DNNs) for interpretable modeling of nonlinear sensory receptive fields, offering an enhanced understanding of auditory neural responses in humans. This approach not only predicts auditory neural responses with increased accuracy but also deciphers distinct nonlinear encoding properties, revealing new insights into the computational principles underlying sensory processing in the auditory cortex. The second study delves into the dynamics of temporal processing of speech in automatic speech recognition networks, elucidating how these systems learn to integrate information across various timescales, mirroring certain aspects of biological temporal processing. The third study presents a rigorous examination of the neural encoding of linguistic information of speech in the auditory cortex during speech comprehension. By analyzing neural responses to natural speech, we identify explicit, distributed neural encoding across multiple levels of linguistic processing, from phonetic features to semantic meaning. This multilevel linguistic analysis contributes to our understanding of the hierarchical and distributed nature of speech processing in the human brain. The final chapter of this dissertation compares linguistic encoding between an automatic speech recognition system and the human brain, elucidating their computational and representational similarities and differences.
This comparison underscores the nuanced understanding of how linguistic information is processed and encoded across different systems, offering insights into both biological perception and artificial intelligence mechanisms in speech processing.

Through this comprehensive examination, the dissertation advances our understanding of the computational and representational foundations of speech perception, demonstrating the potential of interdisciplinary approaches that bridge neuroscience and artificial intelligence to uncover the underlying mechanisms of speech processing in both artificial and biological systems.
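As context for the receptive-field modeling in the first study above, the conventional baseline that nonlinear DNN encoding models are typically compared against is a linear spectro-temporal receptive field (STRF) fit by regularized regression. The sketch below fits such a baseline with ridge regression on synthetic data; the spectrogram dimensions, lag window, and regularization strength are assumptions for illustration and do not describe the dissertation's actual models.

```python
import numpy as np

def lagged_design(spectrogram, n_lags):
    """Stack time-lagged copies of a (time x frequency) spectrogram so each
    row holds the recent stimulus history used to predict one response sample."""
    T, F = spectrogram.shape
    X = np.zeros((T, n_lags * F))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = spectrogram[: T - lag]
    return X

def fit_strf(spectrogram, response, n_lags=20, ridge=10.0):
    """Linear STRF via ridge regression (the classic baseline encoding model)."""
    X = lagged_design(spectrogram, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ response)
    return w.reshape(n_lags, -1)   # (lag x frequency) filter

# Synthetic demo: a response driven by one frequency band at an 8-frame lag.
rng = np.random.default_rng(1)
spec = rng.standard_normal((5000, 32))           # 5000 frames x 32 bands
true_lag, true_band = 8, 10                      # demo assumptions
resp = np.roll(spec[:, true_band], true_lag) + 0.5 * rng.standard_normal(5000)
strf = fit_strf(spec, resp)
print("peak weight at (lag, band):", np.unravel_index(np.abs(strf).argmax(), strf.shape))
```

A nonlinear DNN encoding model plays the same role as `fit_strf` here, mapping the recent stimulus history to the response, but can capture the nonlinear effects that a single linear filter cannot.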
Deep Learning for Action Understanding in Video
Shou, Zheng Columbia University ProQuest Dissertations & Theses 2019 Overseas doctorate (DDOD)
Action understanding is key to automatically analyzing video content and thus is important for many real-world applications such as autonomous driving cars, robot-assisted care, etc. Therefore, in the computer vision field, action understanding has been one of the fundamental research topics. Most conventional methods for action understanding are based on hand-crafted features. Like the recent advances seen in image classification, object detection, image captioning, etc., deep learning has become a popular approach for action understanding in video. However, there remain several important research challenges in developing deep learning based methods for understanding actions. This thesis focuses on the development of effective deep learning methods for solving three major challenges.

Action detection at fine granularities in time: Previous work in deep learning based action understanding mainly focuses on exploring various backbone networks that are designed for the video-level action classification task. These did not explore the fine-grained temporal characteristics and thus failed to produce temporally precise estimation of action boundaries. In order to understand actions more comprehensively, it is important to detect actions at finer granularities in time. In Part I, we study both segment-level action detection and frame-level action detection. Segment-level action detection is usually formulated as the temporal action localization task, which requires not only recognizing action categories for the whole video but also localizing the start time and end time of each action instance. To this end, we propose an effective multi-stage framework called Segment-CNN consisting of three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments that may contain actions; (2) a classification network learns a one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes the learned classification network to localize each action instance. In another approach, frame-level action detection is effectively formulated as the per-frame action labeling task. We combine two reverse operations (i.e., convolution and deconvolution) into a joint Convolutional-De-Convolutional (CDC) filter, which simultaneously conducts downsampling in space and upsampling in time to jointly model both high-level semantics and temporal dynamics. We design a novel CDC network to predict actions at the frame level, and the frame-level predictions can be further used to detect precise segment boundaries for the temporal action localization task. Our method not only improves the state-of-the-art mean Average Precision (mAP) result on THUMOS'14 from 41.3% to 44.4% for the per-frame labeling task, but also improves mAP for the temporal action localization task from 19.0% to 23.3% on THUMOS'14 and from 16.4% to 23.8% on ActivityNet v1.3.

Action detection in constrained scenarios: The usual training process of deep learning models relies on supervision and data, which are not always available in reality. In Part II, we consider the scenarios of incomplete supervision and incomplete data. For incomplete supervision, we focus on the weakly-supervised temporal action localization task and propose AutoLoc, which is the first framework that can directly predict the temporal boundary of each action instance with only the video-level annotations available during training.
To enable the training of such a boundary prediction model, we design a novel Outer-Inner-Contrastive (OIC) loss to help discover the segment-level supervision, and we prove that the OIC loss is differentiable with respect to the underlying boundary prediction model. Our method significantly improves mAP on THUMOS'14 from 13.7% to 21.2% and mAP on ActivityNet from 7.4% to 27.3%. For the scenario of incomplete data, we formulate a novel task called Online Detection of Action Start (ODAS) in streaming videos to enable detecting the action start time on the fly when a live video action is just starting. ODAS is important in many applications such as early alert generation to allow timely security or emergency response. Specifically, we propose three novel methods to address the challenges in training ODAS models: (1) hard negative sample generation based on a Generative Adversarial Network (GAN) to distinguish ambiguous background, (2) explicitly modeling the temporal consistency between data around the action start and data succeeding the action start, and (3) an adaptive sampling strategy to handle the scarcity of training data.

Action understanding in the compressed domain: The mainstream action understanding methods, including the aforementioned techniques developed by us, require first decoding the compressed video into RGB image frames. (Abstract shortened by ProQuest.).
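Returning to the Outer-Inner-Contrastive idea from Part II above: a candidate temporal segment is scored by how much the class activation inside it exceeds the activation in a surrounding outer region, which is what lets segment boundaries be learned from video-level labels alone. The sketch below computes such an outer-inner contrast for one candidate segment on a 1-D activation sequence; the inflation ratio, sign convention, and synthetic activations are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def outer_inner_contrast(activations, start, end, inflation=0.25):
    """Outer-minus-inner mean activation for a candidate segment [start, end).

    activations: 1-D per-frame class activation sequence.
    inflation: fraction of the segment length added on each side to form the
    surrounding outer region (an assumed value for this sketch).
    """
    length = end - start
    pad = max(1, int(round(inflation * length)))
    outer_lo = max(0, start - pad)
    outer_hi = min(len(activations), end + pad)
    inner_mean = activations[start:end].mean()
    outer_frames = np.concatenate([activations[outer_lo:start], activations[end:outer_hi]])
    outer_mean = outer_frames.mean() if outer_frames.size else 0.0
    # A well-placed boundary has high inner and low outer activation, so a
    # lower (more negative) outer-minus-inner value indicates a better segment.
    return outer_mean - inner_mean

# Synthetic activation sequence with one true action spanning frames 40-60.
act = np.zeros(100)
act[40:60] = 0.9
print(outer_inner_contrast(act, 40, 60))   # tight boundary: strongly negative
print(outer_inner_contrast(act, 30, 70))   # loose boundary: less negative
```

Minimizing a contrast of this kind over predicted boundaries pushes them toward segments that stand out sharply from their temporal surroundings, without ever needing frame-level labels.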
Essays on Demand Estimation, Financial Economics and Machine Learning
He, Pu Columbia University ProQuest Dissertations & Theses 2019 Overseas doctorate (DDOD)
In this era of big data, we often rely on techniques ranging from simple linear regression and structural estimation to state-of-the-art machine learning algorithms to make operational and financial decisions based on data. This calls for a deep understanding of practical and theoretical aspects of methods and models from statistics, econometrics, and computer science, combined with relevant domain knowledge. In this thesis, we study several practical, data-related problems in the particular domains of the sharing economy and financial economics/financial engineering, using appropriate approaches from an arsenal of data-analysis tools. On the methodological front, we propose a new estimator for the classic demand estimation problem in economics, which is important for pricing and revenue management.

In the first part of this thesis, we study customer preference for the bike share system in London, in order to provide policy recommendations on bike share system design and expansion. We estimate a structural demand model on the station network to learn the preference parameters, and use the estimated model to provide insights on the design and expansion of the system. We highlight the importance of network effects in understanding customer demand and evaluating expansion strategies of transportation networks. In the particular example of the London bike share system, we find that allocating resources to some areas of the station network can be 10 times more beneficial than others in terms of system usage, and that the currently implemented station density rule is far from optimal. We develop a new method to deal with the endogeneity problem of the choice set in estimating demand for network products. Our method can be applied to other settings in which the available set of products or services depends on demand.

In the second part of this thesis, we study demand estimation methodology when data have a long-tail pattern, that is, when a significant portion of products have zero or very few sales. Long-tail distributions in sales or market share data have long been an issue in empirical studies in areas such as economics, operations, and marketing, and they are increasingly common nowadays with more detailed levels of data available and many more products being offered in places like online retailers and platforms. The classic demand estimation framework cannot deal with zero sales, which yields inconsistent estimates. More importantly, biased demand estimates, if used as an input to subsequent tasks such as pricing, lead to managerial decisions that are far from optimal. We introduce two new two-stage estimators to solve the problem: our solutions apply machine learning algorithms to estimate market shares in the first stage, and in the second stage, we utilize the first-stage results to correct for the selection bias in demand estimates. Using simulations, we find that our approach works better than traditional methods.

In the third part of this thesis, we study how to extract a signal from option pricing models to form a profitable stock trading strategy. Recent work has documented roughness in the time series of stock market volatility and investigated its implications for option pricing. We study a strategy for trading stocks based on measures of their implied and realized roughness. A strategy that goes long the roughest-volatility stocks and short the smoothest-volatility stocks earns statistically significant excess annual returns of 6% or more, depending on the time period and strategy details.
Standard factors do not explain the profitability of the strategy. We compare alternative measures of roughness in volatility and find that the profitability of the strategy is greater when we sort stocks based on implied rather than realized roughness. We interpret the profitability of the strategy as compensation for near-term idiosyncratic event risk.

Lastly, we apply a heterogeneous treatment effect (HTE) estimator from statistics and machine learning to financial asset pricing. Recent progress in the interdisciplinary area of causal inference and machine learning has proposed various promising estimators for HTE. We take the R-learner algorithm by Nie & Wager (2019) and adapt it to empirical asset pricing. We study characteristics associated with standard factors (size, value, and momentum) through the lens of HTE. Our goal is to identify sub-universes of stocks, "characteristic responders", in which size, value, or momentum trading strategies perform best, compared with the performance had they been applied to the entire universe. On the other hand, we identify subsets of "characteristic traps" in which the strategies perform the worst. In our test period, the differences in average monthly returns between long-short strategies restricted to "characteristic responders" and "characteristic traps" range from 0.77% to 1.54%, depending on the treatment characteristic. The differences are statistically significant and cannot be explained by standard factors: a long-short of long-short strategy generates alpha of significant magnitude, from 0.98% to 1.80% monthly, with respect to the standard Fama-French plus momentum factors. Simple interaction terms between standard factors and ex-post important features do not explain the alphas either. We also characterize and interpret the characteristic traps and responders identified by our algorithm. Our study can be viewed as a systematic, data-driven way to investigate interaction effects between features and treatment characteristics, and to identify characteristic traps and responders.
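To make the zero-share problem from the second part concrete, consider a standard aggregate logit demand setup, used here only as an illustrative stand-in for the "classic framework" the abstract refers to: mean utilities are recovered by inverting observed market shares against the outside-good share, and that inversion is undefined as soon as any product records zero sales. The share numbers below are invented for illustration.

```python
import numpy as np

def logit_inversion(shares, outside_share):
    """Classic logit demand inversion: mean utility delta_j = ln(s_j) - ln(s_0).

    With any zero observed share, ln(s_j) is -inf and the inversion fails,
    which is the long-tail problem the thesis addresses by first predicting
    shares with machine learning and then correcting the resulting bias.
    """
    with np.errstate(divide="ignore"):
        return np.log(shares) - np.log(outside_share)

shares = np.array([0.05, 0.01, 0.0, 0.002])   # hypothetical shares; one product sold nothing
outside = 1.0 - shares.sum()
print(logit_inversion(shares, outside))        # third entry is -inf: inversion breaks down
```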
Datta, Bikramaditya Columbia University ProQuest Dissertations & Theses 2018 Overseas doctorate (DDOD)
This dissertation analyzes problems related to barriers to innovation. In the first chapter, "Delegation and Learning", I study an agency problem which is common in many contexts involving the financing of innovation. Consider the example of an entrepreneur, who has an idea but not the money to implement it, and an investor, who has the money but not the idea. In such a case, what should a financial contract between the investor and the entrepreneur look like? How much money should the investor provide the entrepreneur? How should the surplus be divided between them in case the idea turns out to be profitable? There are certain common elements in situations such as these. First, there is an element of learning: initially it is unknown whether the idea is profitable, so the idea has to be tried out in the market, and both the investor and the entrepreneur learn about its profitability from observing market outcomes. Second, there is an element of delegation: decision rights regarding where and when the idea should be tried out typically rest with the entrepreneur, and he knows his idea better than the investor. Finally, the preferences of the investor and the entrepreneur might not be aligned. For instance, the investor may receive private benefits, monetary or reputational, from launching products even when these are not profitable. In such a case, what should a contract that incentivizes the entrepreneur to act in the investor's interest look like? To study these issues, I develop a model in which a principal contracts with an agent whose ability is uncertain. Ability is learnt from the agent's performance in projects that the principal finances over time. Success, however, also depends on the quality of the project at hand, and quality is privately observed by the agent, who is biased towards implementation. I characterize the optimal sequence of rewards in a relationship that tolerates an endogenously determined finite number of failures and incentivizes the agent to implement only good projects by specifying rewards for success as a function of past failures. The fact that success becomes less likely over time suggests that rewards for success should increase with past failures. However, this also means that the agent can earn a rent from belief manipulation by deviating and implementing a bad project which is sure to fail. I show that this belief-manipulation rent decreases with past failures and implies that optimal rewards are front-loaded. The optimal contract resembles the arrangements used in venture capital, where entrepreneurs must give up an equity share in exchange for further funding following failure.

In the second chapter, "Informal Risk Sharing and Index Insurance: Theory with Experimental Evidence", written with Francis Annan, we study when informal risk sharing acts as a barrier to, or a support for, the take-up of an innovative index-based weather insurance. We evaluate this substitutability or complementarity interaction by considering the case of an individual who endogenously chooses to join a group and makes decisions about index insurance. The presence of an individual in a risk sharing arrangement reduces his risk aversion, termed "Effective Risk Aversion", a sufficient statistic for index decision making. Our analysis establishes that such a reduction in risk aversion can lead to either reduced or increased take-up of index insurance.
These results provide alternative explanations for two empirical puzzles: unexpectedly low take-up of index insurance and demand being particularly low among the most risk averse. Experimental evidence, based on data from a panel of field trials in India, lends support to several testable hypotheses that emerge from our baseline analysis.

In the third chapter, "Investment Timing, Moral Hazard and Overconfidence", I study how overconfidence and financial frictions impact entrepreneurs by shaping their incentives to learn. I consider a real option model in which an entrepreneur learns about the quality of the project he has prior to implementation. Success depends on the quality of the project as well as the unknown ability of the entrepreneur. The possibility of the entrepreneur diverting investor funds to his private uses creates a moral hazard problem, which leads to delayed investment and over-experimentation. An entrepreneur who is overconfident regarding his ability under-experiments and over-invests compared to an entrepreneur who has accurate beliefs regarding his ability. Such overconfidence on behalf of the entrepreneur creates inefficiencies when projects are self-financed, but reduces inefficiencies due to moral hazard in the case of funding by investors.
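The learning dynamic driving the first chapter's result, that success becomes less likely as failures accumulate, is ordinary Bayesian updating on the agent's ability. A minimal numerical sketch, with the prior and the success probabilities chosen only for illustration (they are not parameters from the model):

```python
def posterior_after_failures(prior_high, p_success_high, p_success_low, n_failures):
    """P(high ability | n consecutive failures) by Bayes' rule.

    All parameter values in the demo below are hypothetical; they only
    illustrate why the expected success probability falls with each failure.
    """
    like_high = (1 - p_success_high) ** n_failures
    like_low = (1 - p_success_low) ** n_failures
    num = prior_high * like_high
    return num / (num + (1 - prior_high) * like_low)

for n in range(4):
    post = posterior_after_failures(prior_high=0.5, p_success_high=0.6,
                                    p_success_low=0.2, n_failures=n)
    expected_success = post * 0.6 + (1 - post) * 0.2
    print(f"after {n} failures: P(high ability)={post:.2f}, P(next success)={expected_success:.2f}")
```

Because each failure lowers the probability that the next project succeeds, a reward schedule must compensate later successes more generously, which is the force the optimal contract balances against the belief-manipulation rent described above.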
Zhang, Yun Columbia University ProQuest Dissertations & Theses 2021 Overseas doctorate (DDOD)
An abundance of evidence from a wide range of astrophysical and cosmological observations suggests the existence of nonluminous cold dark matter, which makes up about 83% of the matter and 27% of the mass-energy of the Universe. Weakly Interacting Massive Particles (WIMPs) have been one of the most promising dark matter candidates. Various detection techniques have been used to directly search for the interaction in terrestrial detectors, where WIMP particles are expected to scatter off target nuclei. Over the last fifteen years, dual-phase time projection chambers (TPCs) with liquid xenon (LXe) as target and detection medium have led the WIMP dark matter search. The XENON dark matter search project is a phased program focused on the direct detection of WIMPs through a series of experiments employing dual-phase xenon TPCs with increasing target mass, operated at the Gran Sasso underground laboratory (LNGS) in Italy. The XENON1T experiment is the most recent generation, completed at the end of 2018. The XENON1T dark matter search results from the one ton-year exposure have set the most stringent limit on the WIMP-nucleon spin-independent elastic scattering cross-section over a wide range of masses, with a minimum upper limit of 4.1 × 10⁻⁴⁷ cm² at 30 GeV·c⁻² and a 90% confidence level.

XENON1T is the first WIMP dark matter experiment to deploy a dual-phase xenon TPC at the multi-ton scale, with 3.2 t of LXe used. The large xenon mass posed new challenges in reliable and stable xenon cooling, in achieving and maintaining ultra-high purity, as well as in efficient and safe xenon storage, transfer and recovery. The Cryogenic Infrastructure was designed and constructed to solve these challenges. It consists of four highly interconnected systems: the Cryogenic System, the Purification System, the Cryostat and Cryogenic Pipe, and the ReStoX System. The XENON1T Cryogenic Infrastructure has performed successfully and will continue to serve the next-generation experiment, called XENONnT, with a new Cryostat containing a total of 8.4 tons of xenon.

I first give an instrument overview of the systems in XENON1T. I then review the cooling methods in LXe detectors which led to the design of the cooling system implemented in the XENON1T experiment, and suggest a design of the cooling system for future LXe dark matter experiments at the 50-ton scale. I describe and discuss in detail the design and the performance of the XENON1T Cryogenic Infrastructure. Finally, I describe the detector stability and the corresponding data selection in all three XENON1T science runs, and describe the dark matter search results from the one ton-year exposure.