RISS Academic Research Information Service

      • Essays on macroeconomics

        Gunes, Ali. University of Rochester, 2013. Overseas Doctorate (DDOD)

        In the first chapter, I reconcile the savings rate profile with the earnings profile by education. The permanent income hypothesis suggests that an individual with a steeper income profile should save a lower fraction of his income. However, life-cycle profiles of the savings rate in U.S. data are at odds with this prediction. College graduates, who on average experience a much steeper income profile than non-college graduates, exhibit a higher savings rate profile. To reconcile this, I incorporate imperfect information about deterministic (ability, skill) and stochastic (shock) components of earnings profiles into a life-cycle model of consumption with heterogeneous agents and incomplete markets. I find that the imperfect information model is able to achieve a higher savings rate profile for college graduates. Higher skill heterogeneity for college graduates, which calls for a precautionary savings motive under imperfect information, accounts for this at the early ages. As labor market outcomes are realized, this uncertainty is gradually resolved. However, agents with low ability, who fail to accumulate assets in the early period, are unable to dissave in the later period. This accounts for the higher savings rate for college graduates at the later ages. I also quantitatively assess an alternative hypothesis, different time preference rates, for the savings rate gap by education. I conclude that accounting for the savings rate profile that we see in the data requires an unrealistically large gap between the time preference rates of the two groups. In the second chapter, I try to uncover the health production function. I construct the life-cycle profiles of health status, health expenditure, and time allocated to exercise. I find that there are important and interesting differences in the profile of inputs and output of health: (i) more educated individuals are persistently healthier, (ii) health expenditures across education groups are similar, and (iii) more educated individuals allocate persistently more time to exercise. I compute a life-cycle model with an augmented health production function. I find that the higher relative efficiency of exercise to expenditure for more educated individuals might account for inputs and output of health at the early ages.
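
        The savings-rate comparison in this abstract rests on standard life-cycle consumption-savings logic. The sketch below is a minimal illustration of the permanent-income point only, not the dissertation's heterogeneous-agent model: with a constant consumption path and no borrowing limits, a steeper income profile mechanically implies lower (even negative) early-life savings rates. All parameter values are assumptions for illustration.

```python
import numpy as np

# Minimal permanent-income illustration (not the dissertation's model):
# with beta*(1+r) = 1 and no borrowing limits, the consumer sets a flat
# consumption path equal to the annuity value of lifetime income, so a
# steeper income profile implies lower (even negative) early savings rates.

T = 40                      # working years (assumed)
r = 0.03                    # interest rate (assumed)
ages = np.arange(T)

def savings_rate_path(income):
    """Flat-consumption (PIH) savings rates for a deterministic income path."""
    discount = (1 + r) ** -ages
    lifetime_wealth = np.sum(discount * income)
    annuity_factor = np.sum(discount)          # beta*(1+r)=1 -> constant consumption
    c = lifetime_wealth / annuity_factor
    return (income - c) / income

flat_income  = np.full(T, 1.0)                 # flatter, "non-college" style profile
steep_income = 0.6 * 1.05 ** ages              # steeper, "college" style profile (assumed growth)

print("early savings rates, flat profile :", savings_rate_path(flat_income)[:5].round(2))
print("early savings rates, steep profile:", savings_rate_path(steep_income)[:5].round(2))
```

        The dissertation's point is that the data show the opposite ordering of savings-rate profiles, which the imperfect-information model is meant to reconcile.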

      • New Imaging Approaches for Process Tomography Based on Capacitive Sensors

        Gunes, Cagdas. The Ohio State University (ProQuest Dissertations & Theses), 2018. Overseas Doctorate (DDOD)

        Process tomography is the investigation and imaging of a physical process in a region of interest (RoI), such as fluid flow, on time and spatial scales around those of the process dynamics. The data gathered from the RoI may be utilized for diverse purposes such as industrial monitoring and control, design and optimization of industrial hardware, combustion flame imaging, and flow imaging, to name just a few. Due to the nature of these applications, the associated sensors often need to be operated in harsh environments under very high pressure and/or temperature conditions. This reduces the currently available sensing modalities to a handful of candidates. Among these modalities, electrical capacitance tomography (ECT) holds great potential due to its relatively fast, non-invasive, non-intrusive imaging characteristics in addition to lightweight and inexpensive hardware. These attractive characteristics also carry over to electrical capacitance volume tomography (ECVT), which finds applications in the petroleum, chemical, and biochemical industries. Despite all these benefits, ECT and ECVT systems also have a few challenges that demand research effort. First, typical operational frequencies are below 10 MHz, which makes these "soft-field" modalities yield relatively low resolution compared with "hard-field" imaging counterparts such as X-ray. Second, current hardware designs imply a high degree of correlation between mutual capacitance measurements and therefore a highly ill-conditioned inverse (imaging) problem. In addition, with the increasing demand for volume tomography, more challenging applications are being sought by industry, such as the exploration of larger RoIs with better resolution. These scenarios imply increased computational costs for the volumetric imaging problem and make real-time ECVT imaging applications more difficult. In this dissertation, we introduce displacement-current phase tomography (DCPT) for process tomography. The operating principle of DCPT is based on imaging the imaginary part of the permittivity inside the RoI, which is complementary to the real-part permittivity imaging obtained by ECT. While using the same ECT hardware, DCPT provides better resolution for certain classes of applications involving lossy media. The method is also extended to 3D volume tomography and to velocimetry applications, where the objective is to image the flow velocity in the RoI, based on ECVT hardware. Finally, a faster reconstruction approach for ECT/ECVT systems based on sparse representation of images in the Fourier domain is proposed and studied to facilitate real-time imaging for applications involving volumetric RoIs.
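
        The ill-conditioned linear inverse problem mentioned above is usually handled with regularization. The sketch below is a generic illustration, not the dissertation's DCPT algorithm: given a linearized sensitivity matrix mapping a permittivity image to mutual-capacitance measurements, it reconstructs the image with Tikhonov regularization. The sensitivity matrix, grid size, and noise level are synthetic assumptions.

```python
import numpy as np

# Generic linearized ECT-style reconstruction (illustrative only, not DCPT):
# measurements m = S @ x + noise, where S is a sensitivity matrix mapping
# pixel permittivity perturbations x to mutual-capacitance readings m.
rng = np.random.default_rng(0)

n_meas, n_pix = 66, 400            # e.g. 12-electrode pair count vs. pixel grid (assumed)
S = rng.normal(size=(n_meas, n_pix))
S /= np.linalg.norm(S, axis=1, keepdims=True)   # row-normalize the sensitivity matrix

x_true = np.zeros(n_pix)
x_true[150:170] = 1.0              # a synthetic inclusion
m = S @ x_true + 0.01 * rng.normal(size=n_meas)

# Tikhonov-regularized solution: x = (S^T S + lambda I)^-1 S^T m.
lam = 1e-2                         # regularization weight (assumed)
x_hat = np.linalg.solve(S.T @ S + lam * np.eye(n_pix), S.T @ m)

print("reconstruction correlation with truth:",
      np.corrcoef(x_hat, x_true)[0, 1].round(3))
```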

      • Essays on Operations Management

        Gunes, Canan. Carnegie Mellon University, 2010. Overseas Doctorate (DDOD)

        This thesis focuses on the design and analysis of discrete-event stochastic simulations involving correlated inputs, input modeling for stochastic simulations, and the application of OM/OR techniques to the operations of food banks. Chapter 1: "Accounting for Parameter Uncertainty in Large-Scale Stochastic Simulations with Correlated Inputs". This chapter considers large-scale stochastic simulations with correlated inputs having Normal-To-Anything (NORTA) distributions with arbitrary continuous marginal distributions. Examples of correlated inputs include processing times of workpieces across several workcenters in manufacturing facilities, and product demands and exchange rates in global supply chains. Our goal is to obtain mean performance measures and confidence intervals for simulations with such correlated inputs by accounting for the uncertainty around the NORTA distribution parameters estimated from finite historical input data. This type of uncertainty is known as parameter uncertainty in the discrete-event stochastic simulation literature. We demonstrate how to capture parameter uncertainty with a Bayesian model that uses Sklar's marginal-copula representation and Cooke's copula-vine specification for sampling the parameters of the NORTA distribution. The development of such a Bayesian model, well suited for handling many correlated inputs, is the primary contribution of this chapter. Chapter 2: "Comparison of Least-Squares and Bayesian Inferences for Johnson's SB and SL Distributions". The Johnson translation system is a flexible system of distributions with the ability to match any finite first four moments of a random variable. This chapter considers the problem of fitting lognormal and bounded distributions of the Johnson translation system to finite sets of stationary, independent and identically distributed input data. The focus on the Johnson translation system is due to the flexibility it provides in comparison to the standard input models built into commercial input-modeling software. The main goal of this chapter is to investigate the relative performance of the least-squares estimation method and the Bayesian method in fitting the parameters of the distributions from Johnson's lognormal and bounded families, and to provide guidelines to the simulation practitioner on when to use each fitting method. Chapter 3: "Food Banks Can Enhance Their Operations with OR/OM Tools: A Pilot Study with Greater Pittsburgh Community Food Bank". Food assistance programs have been challenged to serve an increasing number of low-income families in the recent economic downturn. Soaring demand is combined with diminishing supply (donations), attributed to both the recession and donors' improved inventory management. As a remedy for the food demand-supply mismatch, many food banks, whose primary goal is to reach as many needy people as possible, are trying to purchase more food by reducing their operational costs and by improving fundraising. In this study, we work together with our local food bank, the Greater Pittsburgh Community Food Bank (GPCFB), to achieve two goals. First, using the limited available data, we illustrate the extent to which GPCFB is being affected by the recent economic downturn and identify how they can collect better data for future use; we believe this will help GPCFB in their fundraising efforts. Second, we focus on the 1-PDVRP that arises in the context of GPCFB's food rescue program. In addition to the practical value of this work for GPCFB, this study contributes to the theory by being the first academic work to provide a rigorous treatment of the 1-PDVRP. Overall, this study not only seeks to help GPCFB, but is also intended as a starting place for other food banks around the U.S. as they struggle with similar issues. (Abstract shortened by UMI.)
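
        The NORTA construction described in Chapter 1 generates correlated random vectors with arbitrary continuous marginals by pushing a correlated multivariate normal through the normal CDF and then through each marginal's inverse CDF. The sketch below illustrates that mechanism with assumed marginals and an assumed base correlation matrix; it is not the chapter's Bayesian sampling scheme.

```python
import numpy as np
from scipy import stats

# NORTA sketch: correlated standard normals -> uniforms (via the normal CDF)
# -> target marginals (via inverse CDFs). The base correlation matrix is an
# assumption; in practice it must be chosen so that the *output* correlations
# match the desired ones.
rng = np.random.default_rng(1)

base_corr = np.array([[1.0, 0.6],
                      [0.6, 1.0]])
L = np.linalg.cholesky(base_corr)

z = rng.normal(size=(10_000, 2)) @ L.T          # correlated N(0,1) pairs
u = stats.norm.cdf(z)                           # correlated uniforms

x1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=1.5)      # e.g. a processing time (assumed marginal)
x2 = stats.lognorm.ppf(u[:, 1], s=0.5, scale=100.0)  # e.g. a product demand (assumed marginal)

print("sample correlation of NORTA outputs:", np.corrcoef(x1, x2)[0, 1].round(3))
```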

      • Understanding Carry Trade Risks using Bayesian Methods: A Comparison with Other Portfolio Risks from Currency, Commodity and Stock Markets

        Gunes, Damla. Columbia University, 2012. Overseas Doctorate (DDOD)

        The purpose of this dissertation is to understand the risks embedded in Carry Trades. For this, we use a broad range of stochastic volatility (SV) models, estimate them using Bayesian techniques via Markov chain Monte Carlo methods, and analyze various risk measures using these estimation results. Many researchers have tried to explain the risk factors driving Carry returns with standard risk models (factor models, Sharpe ratios, etc.). However, the high negative conditional skewness of Carry Trades hints at the existence of jumps and shows that they have non-normal returns, suggesting that looking only at the first two moments, such as Sharpe ratios, or using standard risk models is not enough to understand their risks. Therefore, we investigate Carry risks by delving into their SV and jump components and separating out their effects for a more thorough analysis. We also compare these results with other market portfolios (S&P 500, Fama HML, Momentum, Gold, AUD/USD, Euro/USD, USD/JPY, DXY, Long Rate Carry and Delta Short Rate Carry) to be able to judge the riskiness of Carry relative to other investment alternatives. We then introduce a new model diagnostic method, which overcomes the flaws of the previous methods used in the literature. This is important since model selection is a central question in the SV literature, and although various methods have been suggested, they do not provide a reliable measure of fit. Using this new diagnostic method, we select the best-fitting SV model for each portfolio and use its estimation results to carry out the risk analysis. We find that the extremes of volatility, the direct negative impact of volatilities on returns, the percentage of overall risk due to jumps considering both returns and vols, and the negative skewness are all more pronounced for Carry Trades than for the other portfolios. This shows that Carry risks are more complicated than those of other portfolios. Hence, we are able to remove a layer from the Carry risks by analyzing their jump and SV components in more depth. We also present the rolling correlations of these portfolio returns, vols, and jumps to understand whether they co-move and how these co-movements change over time. We find that despite being dollar-neutral, Carry is still prone to dollar risk. DXY-S&P appear to be negatively correlated after 2003, when the dollar becomes a safe-haven investment. S&P-AUD are very positively correlated since both are risky assets, except during currency-specific events such as central bank interventions. MOM becomes negatively correlated with Carry during crisis and recovery periods since MOM yields positive returns in crises and its returns plunge in recoveries. Carry-Gold are mostly positively correlated, which might be used to form more enhanced trading and hedging strategies. Carry-S&P are mostly very positively correlated, and their jump probability correlations peak during big financial events. Delta Carry, on the other hand, distinguishes itself from the other portfolios as a possible hedging instrument: it is not prominently correlated with any of them. These correlations motivate us to search for common factors driving the 11 portfolios under consideration. We find through Principal Component Analysis that there are four main components explaining their returns and two main components explaining their vols. Moreover, the first component in volatility is the common factor driving all risky asset vols, explaining 75% of the total variance. To model the dynamic relationship between these portfolios, we estimate a multivariate normal Markov switching (MS) model. We then develop a dynamic trading strategy in which we use the MS model estimation results as input to a mean-variance optimization to find the optimal portfolio weights to invest in at each period. This trading strategy is able to dynamically diversify between the portfolios and, with a Sharpe ratio of 1.25, it performs much better than the input and benchmark portfolios. Finally, the MS results indicate that Delta Carry has the lowest variance and a positive expected return in both states of the MS model. This supports our finding from the risk analysis that Delta Carry performs well during volatile periods, and that vol elevations have a direct positive impact on its returns.
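
        The dynamic strategy described at the end of the abstract feeds regime-dependent means and covariances from a Markov-switching model into a mean-variance optimizer each period. The sketch below shows only that last step under assumed inputs: given predicted regime probabilities and per-regime moments, it blends them into one-step-ahead moments and computes unconstrained mean-variance weights. It is not the dissertation's estimation procedure, and all numbers are placeholders.

```python
import numpy as np

# Mean-variance step of a regime-switching strategy (illustrative assumptions).
# Blend per-regime moments by the predicted regime probabilities, then apply
# the unconstrained mean-variance rule w ~ Sigma^{-1} mu, rescaled to sum to one.

p = np.array([0.7, 0.3])                         # predicted regime probabilities (assumed)
mu = np.array([[0.06, 0.04, 0.02],               # regime 1 expected returns (assumed)
               [-0.02, 0.01, 0.03]])             # regime 2 expected returns (assumed)
cov = np.array([np.diag([0.02, 0.03, 0.01]),     # regime 1 covariance (assumed, diagonal)
                np.diag([0.08, 0.06, 0.02])])    # regime 2 covariance (assumed, diagonal)

mu_mix = p @ mu                                  # expected returns under the regime mixture
# Mixture covariance = probability-weighted covariances + between-regime mean dispersion.
cov_mix = sum(p[k] * (cov[k] + np.outer(mu[k] - mu_mix, mu[k] - mu_mix))
              for k in range(len(p)))

raw = np.linalg.solve(cov_mix, mu_mix)           # Sigma^{-1} mu
weights = raw / raw.sum()                        # normalize to a fully invested portfolio
print("portfolio weights:", weights.round(3))
```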

      • Effect of gamma irradiation and modified atmospheres on physiology and quality of minimally processed apples

        Gunes, Gurbuz. Cornell University, 2001. Overseas Doctorate (DDOD)

        Demand for minimally processed fresh produce has increased dramatically over the last two decades, but production of minimally processed apples has been limited due to a lack of methods to enhance quality and safety. The use of irradiation and high-CO₂ atmospheres has therefore been studied. Apple slices from four cultivars and with different maturities were irradiated at doses ranging from 0 to 11 kGy using a ⁶⁰Co source. Effects of calcium and modified atmospheres (MA) containing high CO₂ levels (up to 30 kPa) on the response of slices to irradiation were also investigated. Physiology and quality of the slices were assessed by measuring respiration, ethylene production, texture, pectin content, and color. Doses above 1.2 kGy increased respiration of ‘Idared’, ‘Law Rome’ and ‘Empire’ slices curvilinearly, with maximum respiration occurring at 3–6 kGy. However, the response of ‘Delicious’ slices was linear between 0 and 11 kGy. The stimulatory effect decreased with post-irradiation storage. The respiration rate of pre-climacteric ‘Delicious’ apple slices was stimulated more than that of post-climacteric slices, but the reverse occurred for ‘Empire’. The respiratory quotient increased with dose. Irradiation reduced ethylene production of apple slices. Fruit firmness decreased as irradiation dose increased beyond 0.34 kGy. The high dose rate initially prevented softening, but not by day 3. The O₂ level in the irradiation atmosphere did not affect firmness. The softening was associated with increased water-soluble pectin and decreased oxalate-soluble pectin. Calcium prevented irradiation-induced softening of 3–4 mm thick slices, but was not effective with wedges due to its limited penetration. High pCO₂ and low pO₂ resulted in reduced respiration and ethylene production. Inhibition of respiration was explained best by an enzyme kinetics model that combined competitive and uncompetitive types of inhibition. Browning of slices during storage was only slightly reduced by CO₂. Firmness decreased during storage, but was not affected by atmosphere. CO₂ levels above 7.5 kPa reduced the accumulation of fermentation products. These results indicate that the use of irradiation is limited for apple slices because of increased softening. Storage atmospheres with high CO₂ (15–30 kPa) and low O₂ (0.5 kPa or lower) have beneficial effects on the physiology and quality of minimally processed apples.
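
        The enzyme-kinetics model mentioned above, combining competitive and uncompetitive inhibition (i.e., mixed inhibition), is conventionally written as below. The symbols follow the textbook form, not necessarily the exact parameterization used in the dissertation.

```latex
% Mixed (competitive + uncompetitive) inhibition of respiration rate v by an
% inhibitor concentration [I] (here CO2); V_max, K_m, K_i, K_i' are fitted constants.
v = \frac{V_{\max}\,[S]}{K_m\left(1 + \dfrac{[I]}{K_i}\right) + [S]\left(1 + \dfrac{[I]}{K_i'}\right)}
```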

      • Automatic Calibration of Freeway Models with Model-Based Sensor Fault Detection

        Dervisoglu, Gunes. University of California, Berkeley, 2012. Overseas Doctorate (DDOD)

        This dissertation presents system identification, fault detection, and fault handling methodologies for automatically building calibrated models of freeway traffic flow. Using these methodologies, data-driven algorithms were developed as part of a larger suite of software tools designed to provide traffic engineers with a simulation platform where various traffic planning strategies can be analyzed. The algorithms presented work with loop detector data gathered from California freeways. The system identification deploys a constrained linear regression analysis that estimates the so-called fundamental diagram relationship between flow and density at the location of a given sensor. A triangular fundamental diagram is assumed, which establishes a bi-modally linear relationship between flow and density, the two modes being free flow and congestion. An approximate quantile regression method is used for the estimation of the congested regime due to this mode's high susceptibility to various external factors. The fault detection algorithm has been developed to facilitate the automatic model building procedure. The macroscopic cell transmission model, which is the model assumed in this study, requires consistent observations along the modeled freeway section for an accurate calibration to be possible. When detectors are down or missing, the model has to be modified to a less accurate representation to conform with a configuration where a sensor is assigned to each cell of the model. In addition, on most California freeways the ramp flows in and out of the mainline are not observed. Since the estimation of these unknown inputs to the system also hinges on healthy mainline data, the identification of faulty mainline sensors becomes crucial to the automatic model building process. The model-based fault detection algorithm presented herein analyzes the parity between the simulated and measured state, along with estimated unknown input profiles. It then makes use of a look-up table logic and a threshold scheme to flag erroneous detectors along the freeway mainline. Finally, the fault handling algorithm that accompanies the fault detection aims to revert the model to its original configuration after the aforementioned modifications are made due to missing or bad sensors. Using a relaxed model-constrained linear optimization, this algorithm fills in the gaps in the observations along the freeway that result from poor detection. This method provides a reconstruction of the unobserved state that conforms with the rest of the measurements, but it does not produce a state estimate in a control-theoretic sense.
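
        The calibration step described above fits a triangular fundamental diagram to loop-detector flow-density pairs: a free-flow branch through the origin and a separately fitted congested branch. The sketch below is a simplified stand-in on synthetic data with assumed cutoffs; it uses plain least squares throughout rather than the constrained and approximate-quantile regressions of the dissertation.

```python
import numpy as np

# Simplified triangular fundamental-diagram fit (illustrative, synthetic data).
rng = np.random.default_rng(2)

density = rng.uniform(5, 120, size=500)               # veh/km (assumed units)
v_f, rho_c, w, rho_jam = 100.0, 30.0, 20.0, 180.0      # synthetic "true" parameters
flow = np.where(density <= rho_c, v_f * density, w * (rho_jam - density))
flow = np.maximum(flow + rng.normal(0, 100, size=flow.shape), 0)

# Free-flow branch: flow ~ v_f * density, least squares through the origin
# on low-density samples (cutoff assumed).
low = density <= 25
vf_hat = np.sum(flow[low] * density[low]) / np.sum(density[low] ** 2)

# Congested branch: plain least squares on high-density samples; the
# dissertation instead uses an approximate quantile regression here.
high = density >= 60
slope_hat, _ = np.polyfit(density[high], flow[high], 1)

print(f"free-flow speed estimate : {vf_hat:.1f} (true {v_f:.1f})")
print(f"congested branch slope   : {slope_hat:.1f} (true {-w:.1f})")
```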

      • EPR spectroscopic and computational studies of the paramagnetic intermediates in the reaction of ethanolamine ammonia lyase with ethanolamine

        Bender, Gunes. The University of Wisconsin - Madison, 2008. Overseas Doctorate (DDOD)

        Ethanolamine Ammonia-Lyase (EAL) is an adenosylcobalamin (AdoCbl) dependent enzyme that catalyzes the elimination of ammonia from ethanolamine and other short-chain aminoalcohols to produce their oxo products. After homolysis of the carbon-cobalt bond of AdoCbl, the highly reactive 5'-deoxyadenosyl radical abstracts a hydrogen atom from carbon-1 (C1) of ethanolamine, forming the substrate radical. In this dissertation, the substrate radical was identified as the observed radical intermediate during catalysis by acquiring electron paramagnetic resonance (EPR) spectra of ¹³C-labeled intermediates. The EPR spectra of reactions performed in D₂O were analyzed by spectral calculations. The following isotopically labeled ethanolamine samples were used as substrates in these reactions: [1,1,2,2-²H₄]-ethanolamine, [1,1,2,2-²H₄, ¹⁵N]-ethanolamine, [1,1-²H₂]-ethanolamine, [2,2-²H₂]-ethanolamine, [1-¹³C, 1,1,2,2-²H₄]-ethanolamine, and [1-¹³C, 1,1,2,2-²H₄, ¹⁵N]-ethanolamine. The spectral analysis provided the exchange interaction parameter J and the axial dipole-dipole interaction parameter D. The J and D values are −53 and −43 Gauss, respectively, and the D value corresponds to a distance of 8.7 Å between the substrate radical and cob(II)alamin. The EPR analysis also showed that the principal values of the ¹³C hyperfine splitting tensor are [7, 13, 110] Gauss. The first Euler angle of the ¹³C hyperfine splitting tensor indicates that the p orbital on C1 of the substrate radical makes an angle of ∼98° with the dz² orbital of Co²⁺. The experimental hyperfine splittings of the β-hydrogens were used to determine the dihedral angle between the amino group and the singly occupied p orbital on C1 by comparing them to theoretical hyperfine splittings from QCISD(T)-level electronic structure calculations. It was determined that the amino group eclipses the p orbital on C1 in the substrate radical. Larger differences were observed between the theoretical and experimental hyperfine splittings for C1 and the α-hydrogen, which are possibly due to the effects of hydrogen bonding with active-site residues. Theoretical energy calculations showed that the dihedral angle of the substrate radical has considerable influence on the energy barrier for elimination of the amino group.
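
        Converting the axial dipolar parameter D into an inter-spin distance, as in the 8.7 Å value quoted above, conventionally starts from the point-dipole expression below. This is given only as the standard reference form; the dissertation's analysis may include spin-projection, g-anisotropy, or delocalization corrections beyond it.

```latex
% Point-dipole scaling of the electron-electron dipolar coupling between the
% substrate radical and Co(II), with r the inter-spin distance (standard form).
D \;\propto\; \frac{\mu_0}{4\pi}\,\frac{g_1 g_2 \mu_B^2}{h\,r^3}
```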

      • Investment analysis and future potential of controlled-environment agriculture hydroponic production systems for Boston lettuce

        Ilaslan, Gunes. Cornell University, 2000. Overseas Doctorate (DDOD)

        Cornell University's Controlled Environment Agriculture (CEA) program has been involved in greenhouse hydroponic vegetable production research since 1991. The unique aspect of the CEA system developed by Cornell University is accurate greenhouse climate control and the integration of supplemental lighting to provide consistent, year-round rapid plant growth, which results in higher yield than any other existing system. Boston lettuce, Lactuca sativa cv. ‘Vivaldi’, grown under CEA conditions achieves the same quality and quantity of product every day of the year. The CEA system creates many advantages, providing efficient use of nutrients, water, and labor while assuring better control of plant development, which results in earlier production, higher yields, and a qualitatively better product. The product, hydroponic lettuce, will be marketed as great tasting, fresh, locally grown, safe (produced following the Hazard Analysis and Critical Control Point, HACCP, principles), and pesticide-free. These features, combined with year-round constant quantity and quality of production, indicate potential for success. In the introductory stage of this CEA technology, it is crucial to determine the economic feasibility of the system, as it requires a large capital investment and sophisticated technical knowledge to operate. This study assessed the production costs and economic viability of the production system in nine different US locations under different climatic conditions. This research is expected to help prospective owners or operators make a more informed investment decision. A commercial-sized CEA demonstration greenhouse in Ithaca, NY is used as a reference for input data. A net present value (NPV) analysis is performed to determine the profitability of the system. The NPV analysis also provided the minimum price at which the product should be sold in order to receive enough revenue to cover all of the associated production costs and provide the rate of return required for the capital invested. The highest grower price required for economic feasibility of the CEA system was in Ithaca, NY, whereas the lowest price needed was in Miami, FL. The effects of various inputs, including product price, electricity, and heating costs, on the economic feasibility of CEA hydroponic lettuce operations were also evaluated. The results of the one-way sensitivity analysis showed that product price, production level, and initial investment were the most important variables for profitability in all potential locations. A quantitative risk analysis using Monte Carlo simulation was also incorporated into the CEA hydroponic system investment analysis. The risk level of the investment was highest in Los Angeles, CA and lowest in Miami, FL. The results revealed that, under uncertainty, the grower prices needed to achieve a minimum positive mean net present value have to be a couple of cents higher than the grower price required for economic feasibility. The implications of the economic pressure created by other hydroponic-lettuce-producing locations, if a price premium cannot be received by the local producer, were also studied. The results indicated that all selected locations except for the Chicago, Ithaca, and Los Angeles markets provided the cheapest hydroponic lettuce to their own local market.
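
        The investment analysis combines a deterministic NPV calculation with a Monte Carlo layer over uncertain inputs such as product price. The sketch below shows only that structure with entirely assumed cash-flow figures and an assumed price distribution; it is not the study's cost model for any specific location.

```python
import numpy as np

# NPV with Monte Carlo over an uncertain grower price (all figures assumed,
# purely to illustrate the structure of the analysis, not the study's data).
rng = np.random.default_rng(3)

initial_investment = 1_200_000      # assumed capital cost ($)
annual_heads = 900_000              # assumed annual production (heads of lettuce)
annual_cost = 550_000               # assumed annual operating cost ($)
discount_rate = 0.10                # assumed required rate of return
years = 15

def npv(price_per_head):
    """Net present value for a constant annual cash flow at the given price."""
    cash_flow = annual_heads * price_per_head - annual_cost
    t = np.arange(1, years + 1)
    return -initial_investment + np.sum(cash_flow / (1 + discount_rate) ** t)

prices = rng.normal(loc=0.80, scale=0.10, size=10_000)   # assumed price distribution ($/head)
npvs = np.array([npv(p) for p in prices])

print(f"mean NPV: ${npvs.mean():,.0f}")
print(f"probability NPV < 0: {(npvs < 0).mean():.1%}")
```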

      • Observational Properties of Gigaelectronvolt-Teraelectronvolt Blazars and the Study of the Teraelectronvolt Blazar RBS 0413 with VERITAS

        Senturk, Gunes Demet. Columbia University, 2013. Overseas Doctorate (DDOD)

        Blazars are active galactic nuclei with a relativistic jet directed towards the observer's line of sight. Characterization of the non-thermal continuum emission originating from the blazar jet is currently an essential question in high-energy astrophysics. A blazar spectral energy distribution (SED) has a typical double-peaked shape in the flux vs. energy representation. The low-energy component of the SED is well studied and thought to be due to synchrotron emission from relativistic electrons. The high-energy component, on the other hand, is still not completely understood, and the emission in this part of the blazar spectrum can extend to energies as high as teraelectronvolts in some objects. This portion of the electromagnetic spectrum is referred to as the very-high-energy (VHE or TeV, E > 0.1 TeV) regime. At the time of this writing, more than fifty blazars have been detected to emit TeV gamma rays, representing the high-energy extreme of these objects and constituting a population of their own. Most of these TeV blazars have also been detected in the high-energy (HE or GeV, 0.1 GeV < E < 0.1 TeV) gamma-ray range. In this work, we report on our discovery of TeV emission from the blazar RBS 0413 and perform a detailed data analysis on this source, including contemporaneous multi-wavelength observations to characterize the broad-band SED and test various emission models for the high-energy component. Further, we extend our focus on the high-energy component to all archival TeV-detected blazars and study their spectral properties in the framework of GeV and TeV gamma-ray observations. To do this, we assemble for the first time the GeV and TeV spectra of a complete sample of TeV-detected blazars available in the archive to date. In the Appendix we present an analysis method for improved observations of large-zenith-angle targets with VERITAS.
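
        Gamma-ray spectra of blazars in the GeV and TeV bands are commonly summarized by a power law in photon flux versus energy. The sketch below fits that form to synthetic flux points by a straight-line fit in log-log space, as a generic illustration only; it is not the VERITAS or Fermi analysis chain used in the dissertation, and all energies, fluxes, and parameters are assumed.

```python
import numpy as np

# Generic power-law spectral fit, dN/dE = N0 * (E / E0)**(-Gamma), performed
# as a straight-line fit in log-log space on synthetic flux points.
rng = np.random.default_rng(4)

E0 = 1.0                                     # reference energy, TeV (assumed)
energies = np.logspace(-0.7, 0.7, 8)         # ~0.2 to 5 TeV bins (assumed)
true_N0, true_gamma = 2e-12, 3.2             # assumed spectral parameters
flux = true_N0 * (energies / E0) ** (-true_gamma)
flux *= rng.lognormal(mean=0.0, sigma=0.1, size=flux.shape)   # ~10% scatter

slope, intercept = np.polyfit(np.log10(energies / E0), np.log10(flux), 1)
print(f"fitted photon index Gamma = {-slope:.2f} (true {true_gamma})")
print(f"fitted normalization N0   = {10**intercept:.2e} (true {true_N0:.1e})")
```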

      • Using graphs and random walks for discovering latent semantic relationships in text

        Erkan, Gunes. University of Michigan, 2007. Overseas Doctorate (DDOD)

        We propose a graph-based representation of text collections where the nodes are textual units such as sentences or documents, and the edges represent the pairwise similarity function between these units. We show how random walks on such a graph can give us better approximations for the latent similarities between two natural language strings. We also derive algorithms based on random walk models to rank the nodes in a text similarity graph to address the text summarization problem in information retrieval. The similarity functions used in the graphs are intentionally chosen to be very simple and language-independent to make our methods as generic as possible, and to show that significant improvements can be achieved even by starting with such simple similarity functions. We put special emphasis on language modeling-based similarity functions since we use them for the first time on problems such as document clustering and classification, and get improved results compared to classical similarity functions such as cosine. Our graph-based methods are applicable to a diverse set of problems including generic and focused summarization, document clustering, and text classification. The text summarization system we have developed has ranked as one of the top systems in the Document Understanding Conferences over the past few years. In document clustering and classification, using language modeling functions performs consistently better than using the classical cosine measure, reaching improvements as high as 25% in accuracy. Random walks on the similarity graph achieve additional significant improvements on top of this. We also revisit nearest neighbor text classification methods and derive semi-supervised versions by using random walks that rival state-of-the-art classification algorithms such as Support Vector Machines.
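
        The ranking idea in this abstract, scoring nodes of a similarity graph by the stationary distribution of a random walk, can be illustrated with a short power iteration over a row-normalized similarity matrix. The sketch below uses a toy similarity matrix and a damping factor as assumptions; it is a generic random-walk ranker in the spirit of the abstract, not the dissertation's exact summarizer.

```python
import numpy as np

# Random-walk ranking over a toy sentence-similarity graph: build a
# row-stochastic transition matrix from pairwise similarities, mix in a
# uniform jump (damping), and power-iterate to the stationary distribution.
similarity = np.array([                 # assumed pairwise similarities
    [1.0, 0.8, 0.1, 0.0],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.7],
    [0.0, 0.1, 0.7, 1.0],
])
damping = 0.85                          # assumed damping factor
n = similarity.shape[0]

transition = similarity / similarity.sum(axis=1, keepdims=True)
walk_matrix = damping * transition + (1 - damping) / n   # still row-stochastic

scores = np.full(n, 1.0 / n)
for _ in range(100):                    # power iteration to the stationary vector
    scores = scores @ walk_matrix

print("sentence scores:", scores.round(3))
print("ranking (best first):", np.argsort(-scores))
```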
