Models of labor demand usually use cost or production functions to derive profit-maximizing firm performance. These models often rely on the assumption of symmetrical behavior, i.e., the response to a positive or negative wage shock of the same relative size is identical to the shock, and the estimated labor demand elasticities are the same for increasing and decreasing employment. However, behavioral economics models like loss aversion and endowment effects question the assumption of symmetry in labor demand. In addition, the influence of a labor shortage should be reflected in the investigations. Estimations of Fractional Panel Probit models for three different skill levels are applied to evaluate these findings with a large panel of German establishments. The results indicate asymmetrical structures for long-run own-wage elasticities and for some cross-wage elasticities, putting some doubt on the assumption of strict rationality in labor demand and indicating the influence of labor shortages.
The Role of Fintech in Promoting Financial Inclusion in Developing Countries: THE CASE OF MEXICO
(2019)
Financial inclusion has been on the rise globally since 2011; nevertheless, nearly 1.7 billion adults worldwide still do not have a bank account at a formal institution (Demirguc-Kunt et al., 2018). Supply, demand and societal factors may create barriers to financial inclusion and accelerate voluntary and involuntary exclusion in developing countries (World Bank, 2008; Beck and De La Torre, 2006; Beck et al., 2008). With mobile phone technology and advances in innovation, the number of participants in the financial services market has increased with the entrance of new challengers, i.e. Fintech startups. The literature reveals that Fintech has great potential to broaden access to financial services by lowering costs, reducing information asymmetries, enabling more transparency, and increasing competitiveness. This thesis analyzes the role of Fintech in promoting financial inclusion in the case of Mexico, approaching the lack of financial inclusion from the perspective of the household. To this end, one objective of the thesis is to investigate the barriers that keep households from accessing and using financial services.
In Mexico, more than half of the adult population does not have an account at a formal institution, including mobile money accounts (Demirguc-Kunt et al., 2018). The G20's financial inclusion indicators reveal that account ownership as well as saving and borrowing at financial institutions have decreased since 2014. The main barriers to financial inclusion in Mexico are difficulties in using financial services, financial illiteracy and insufficient financial infrastructure. Ideally, Fintech has great potential to reduce these barriers and to promote access to and use of financial services for people who are excluded by traditional financial institutions due to prohibitive prices, lack of documentation, etc. However, Fintech alone is not sufficient to deliver its full potential benefits. Even though the Fintech ecosystem is growing rapidly in Mexico and is seen as a potential solution to the country's lack of financial inclusion, this study reveals that it has not yet achieved that potential.
This Master’s thesis examines the impact of heterogeneity on the assessment of systemic risk in the context of the German banking sector. Specifically, it asks whether currently employed, official systemic risk indicators are able to account for the German banking sector’s heterogeneity and to signal systemic risk reliably regardless of different bank types’ individual characteristics. For the assessment, a two-step procedure is employed. First, currently employed, official risk indicators are applied to bank-type-specific data for six different bank types from 1990 until 2018 and benchmarked against crises that occurred during the assessment period. Second, the implications of sectoral characteristics for systemic risk are assessed. The findings suggest that indicators are indeed able to account for the German banking sector’s heterogeneity, issuing different signals for various bank types. Moreover, the indicators allow for the identification of individual bank types’ behavior and their role in the accumulation of systemic risk. Yet, they are only partially able to signal crises correctly and behave more like thermometers than barometers of risk. Lastly, structural features of the German banking sector amplify the risk of individual institutions and thus their contribution to systemic risk at large.
This thesis investigates the impact of large-scale asset purchases (LSAPs), an unconventional monetary policy (UMP) used by the Fed in response to the 2008 global financial crisis and recession, on gender and racial wealth inequality in the US. After demonstrating that monetary policies have gendered and racial impacts and that none of the existing studies have yet considered UMPs, the thesis explains theoretically what the transmission channels from LSAPs to the wealth distribution are. Empirical studies show that LSAPs created a wealth effect by increasing the prices of assets owned by households, primarily stock prices and, to a lesser extent, house prices. The current literature on the impact of LSAPs in the US remains divided over whether they increased net wealth inequality or not. However, there is ample evidence from the gender and racial economic literature that the wealth distribution in the US is significantly unequal, and there is a gap in the literature on the impact of LSAPs on the highly gendered and racialized US wealth distribution. The thesis begins to fill some of these gaps by investigating what happened to the financial and non-financial wealth of households, disaggregated by gender and race, in the period of the LSAPs, and whether the LSAPs contributed to or reinforced these wealth inequalities. Due to limitations in data collection, the thesis cannot conclude that there was a net negative gender wealth inequality effect. Nevertheless, there is strong empirical evidence that the LSAPs did increase racial wealth inequality, as white households disproportionately own stocks and have higher rates of homeownership.
This thesis aims at analysing the potential benefits and obstacles of an Employer-of-Last-Resort (ELR) policy in the case of Germany. Three main conclusions can be drawn from the analysis. First, as a bottom-up approach, an ELR policy can tackle the issue of unemployment on the macroeconomic, socioeconomic and individual level in a unique way and promotes social inclusion of the unemployed. An ELR addresses non-pecuniary costs of unemployment and has the potential to tackle further socioeconomic problems. Second, this work points out that an ELR's impact on inflation depends on excess production capacities of economic sectors as well as on wage bargaining structures. In this regard, trade unions, government and employer representatives must cooperate comprehensively in order to adapt to a potential rise in workers' class consciousness. Third, the institutional setup of the European Union does not allow for the application of functional finance and the fiscal spending necessary for financing an ELR. Hence, Germany would need to abolish or redefine the debt brake, e.g. in accordance with the so-called Golden Rule, in order to implement a comprehensive ELR policy.
This paper introduces a new perspective on the economic policies associated with financial instability, both theoretically and through data analysis, discussing the roots of instability and the policy recommendations in Minsky's literature and examining the role of National Development Banks (NDBs) in the economy. The purpose of the work is to establish NDBs as Thwarting Institutions in the Minskyian sense. These special financial institutions affect the economy in two ways: one related to their financial nature, through their capacity to provide finance to key sectors of the economy, and the other related to the real effects that the financing of development projects implies. Owing to these influences, National Development Banks can be considered strong pillars of the economy, providing additional policy mechanisms for stabilisation and recovery purposes, which is the definition of a Thwarting Institution. The case study of the Brazilian National Bank for Economic and Social Development (BNDES) provides a detailed data analysis of the general performance of the bank between 2000 and 2018, its importance for the economy and especially its countercyclical role after the 2008 financial crisis. The last section analyses financial stability in Brazil, presenting important variables from both the financial and the real side of the economy, in accordance with Minskyian theory.
European Neighbourhood Policy as a power instrument of the European Union: the case of Azerbaijan
(2019)
The Economic Partnership Agreements (EPAs) between the ACP (African, Caribbean, and Pacific) countries and the EU have been under negotiation since 2002. The outcomes of the negotiations have varied: some Regional Economic Communities (RECs) quickly concluded a comprehensive agreement, whereas for other RECs negotiations have dragged on beyond the expected deadline.
This paper addresses the research question of what causes the observed variation in the outcomes of the EPA negotiations. The Best Alternative to a Negotiated Agreement (BATNA), an independent variable derived from the Negotiation Analysis framework, is chosen as the explanatory concept, since the traditional Eurocentric theories of the EU as a power have proven limited in explaining the varied conclusions of the respective EPAs.
The finding is that the outcomes of the negotiated EPAs were principally a function of the availability, or otherwise, of a BATNA as perceived by the parties involved.
Today’s labour market is exposed to rapidly changing influences, in which digitalization and globalization play a major role. Since generations Y and Z grew up with these changes, they are used to living in fast-moving times and are mostly able to adapt and learn quickly. Owing to demographic change, both generations are becoming more and more important for organizations as employers. To ensure the future success of a business, employer branding and recruitment need to consider the upcoming generations' needs in order to gain valuable talent. This paper examines generations Y and Z and the job and workplace conditions they prefer. The study identified trends in both generations as well as some discrepancies between them. The results give insights into the attraction, engagement, and retention of employees from generations Y and Z.
Mobile retention has recently become a focal issue for app-based companies due to its importance and contribution to achieving revenue goals. A crucial aspect of mobile retention is to monitor and control the churn rate of app users. This thesis therefore investigates how churn analysis can enable the improvement of mobile retention campaigns and gives recommendations for better integration of churn analysis in the planning of retention activities. Six expert interviews conducted at app-based companies in multiple industries provided valuable insights, especially on the use of churn analysis in practical business situations. The findings were interesting: at most companies, churn analysis is not a new concept, yet it has not been implemented properly due to a lack of resources and customer data. However, the experts believe that the planning of a mobile retention strategy should follow the suggestions from churn analysis, since certain positive effects have been recorded. Based on these findings, a framework is presented to help improve the performance of mobile retention with churn analysis.
The role of branding in a marketing strategy against counterfeiting for high-fashion companies
(2019)
In order to contribute to our planet’s sustainability, a sustainable diet is indispensable. Yet it turns out to be a challenge for consumers to identify a food product's degree of sustainability. Quality labels can help in this regard. However, such labels also contribute, inter alia, to consumer confusion, which occurs when the cognitive processing of product information is disrupted and which can lead to postponing or abandoning the purchase decision. The present work determines the most relevant aspects of sustainability in the food context on the German market from a consumer's perspective. Furthermore, it identifies the requirements a quality label needs to meet in order to reduce consumer confusion.
This thesis analyzes whether financial news articles can predict the stock prices of companies and the S&P 500 Index. The companies selected for the analysis are Facebook Inc., Apple Inc., Microsoft Corp., Google (Alphabet Inc.), and Amazon.com Inc. The thesis evaluates the predictive power of financial news by comparing the sentiment of financial news articles published on a business day to the corresponding closing values of the stock prices and the S&P 500 Index. For the analysis, financial news from various sources covering the period between January 2018 and May 2018 is used. The sentiment of the news articles is determined using a lexicon-based approach. Regression models are then used to predict the stock market. The models used for forecasting the stock prices and the S&P 500 Index are Auto-Regressive Integrated Moving Average (ARIMA), Support Vector Regression (SVR), and Linear Regression (LR). Overall, the results indicate that ARIMA performs best at predicting the stock prices and the S&P 500 Index.
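To make the described pipeline concrete, here is a minimal sketch combining a lexicon-based sentiment score with an ARIMA model; the word lists, file name, column names, and model order are illustrative assumptions rather than the thesis's actual setup.

```python
# Sketch: lexicon-based sentiment as an exogenous regressor in an ARIMA model.
# The lexicon, column names, and ARIMA order are illustrative assumptions.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

POSITIVE = {"gain", "growth", "beat", "strong"}   # hypothetical word-list lexicon
NEGATIVE = {"loss", "drop", "miss", "weak"}

def lexicon_sentiment(text: str) -> float:
    """Score = (positive hits - negative hits) / total tokens."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return score / max(len(tokens), 1)

# df has one row per business day: 'news' (concatenated articles) and 'close'.
df = pd.read_csv("daily_news_and_prices.csv", parse_dates=["date"], index_col="date")
df["sentiment"] = df["news"].apply(lexicon_sentiment)

# ARIMA on closing prices with sentiment as an exogenous variable (ARIMAX).
model = ARIMA(df["close"], exog=df[["sentiment"]], order=(1, 1, 1)).fit()
print(model.forecast(steps=1, exog=[[df["sentiment"].iloc[-1]]]))
```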
The main focus of this research paper is the role of Business Intelligence (BI) in decision-making processes in organizations. The central goal of this thesis is to explore how decision-makers use and deploy BI output to structure a collective vision and reach organizational choices. Business intelligence can be described not as a process, a product or even a framework, but as a new strategy in organizational architecture based on velocity in data analysis, aimed at making correct strategic choices with maximum performance in a minimum amount of time. A conceptual model of the impact of business intelligence on strategic choices was constructed and evaluated through interviews with decision-makers. In order to evaluate the model, a few questions were raised and later answered during the study. The research concentrated on how the information supplied by the business intelligence system is used by the staff in the organization. The project focused on conducting in-depth interviews in a diverse organization with managers from six different backgrounds. Each of the interviewees makes decisions based on the output of BI.
In recent years, music genre classification has been studied widely within the music information retrieval community in order to detect music genres (e.g., pop, rap) automatically. The existing methods reported in the literature usually extract features from the melodic content or the lyrics of a song and address this classification as a multi-class problem. This thesis presents a comprehensive investigation of predicting the music genre solely from an examination of the lyrical content of songs. The lyrics were thoroughly analyzed to obtain features as inputs to various machine learning algorithms, and the features were represented using tf-idf values. To run the algorithms, a dataset with a total of 12,000 songs across 12 genres was created by crawling music websites.
Furthermore, this study treats genre classification as a multi-label task, in which a song can belong to more than one genre, as is encountered in practice. Therefore, multi-label approaches were examined in depth and applied along with popular classifiers. The experiments in this thesis show that the binary relevance method combined with a logistic regression model outperforms all other tested approaches for lyric-based multi-label genre classification.
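A minimal sketch of the winning setup, binary relevance with logistic regression on tf-idf features, might look as follows; the toy lyrics and genre labels are placeholders.

```python
# Sketch: binary relevance for multi-label genre classification on tf-idf features.
# OneVsRestClassifier trains one binary logistic regression per genre label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

lyrics = ["sun and summer love tonight", "street dreams hustle flow rhythm"]  # toy songs
genres = [["pop"], ["rap", "rnb"]]            # one or more genres per song

X = TfidfVectorizer().fit_transform(lyrics)
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(genres)                 # binary indicator matrix, one column per genre

# Binary relevance: an independent binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X[:1])))
```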
Search Engine Optimisation, also known as SEO, is one of the online marketing channels that, when set up suitably, can continue to pay dividends over time without further investment. Recently, the SEO teams of some companies have been investigating historical data to predict future revenue trends associated with the number of clicks, number of impressions, and number of searches, which is intended to help with companies' quality planning and campaign investment. One of the challenges SEO analysts face when attempting to forecast revenue is that there is currently no way to standardize or forecast customer behaviour, which means the trends can differ every day, month, and year. In this research, SEO traffic data from an online travel agency is collected for the purposes of data exploration, analysis and forecasting, which are expected to bring business value and beneficial insights. Moreover, different time-series forecasting models are selected and tested to find the best-fitting model for SEO data: an Autoregressive Integrated Moving Average (ARIMA) model is applied first, followed by Long Short-Term Memory (LSTM) recurrent neural networks (RNNs). The results show that ARIMA, although a classical statistical model, is powerful enough for such small-size data, even though the data is non-stationary and contains substantial white noise. Meanwhile, LSTM is a deep learning tool that can deal with different types of data, but it needs to be applied to larger datasets to prove its competence.
Entrepreneurship and innovation are regarded as decisive elements for economic success and growth. Since no entrepreneur can be successful alone, it is worth considering the entrepreneurial environment. In this so-called Entrepreneurial Ecosystem, all stakeholders interact with the founders, who are the focus of attention. Since the concept in this form is relatively new, there are many different models and opinions. One goal of this work is therefore to create a uniform understanding of the Entrepreneurial Ecosystem and then apply it to the leading startup hubs in Germany and Israel. These two economies have very different backgrounds: Germany is considered a strong industrial nation and Israel a "Startup Nation" (Senor and Singer 2009). Both nations have particular strengths and weaknesses that have grown out of their histories. Germany has special advantages in being the more populous country and in its central position within Europe, while Israel is a small country, surrounded by enemies, that does not possess any natural resources. How one of the most innovative economies in the world could develop there despite this fact, or perhaps precisely because of it, and how it could even overtake Germany in some highly technological fields, is explained in the course of this thesis. The main research goal, however, was to find out which synergistic potentials result from this exciting combination for the Entrepreneurial Ecosystems of both countries. To create synergies, the Entrepreneurial Ecosystems of Germany and Israel should focus on their strengths and try to reduce their weaknesses through collaboration with the other country. Nine qualitative expert interviews were conducted to underline the theoretical foundation and enable a stronger argumentation in the results. The research found that the exchange of strengths should take place at three relevant levels: private business engagement, education and IP/technology transfer, and location marketing to attract foreign founders.
The purpose of this thesis is to enhance knowledge in the field of foreign assignments and the related cultural aspects. In more detail, this research examines the question of what makes international assignments special, with a strong focus on cultural challenges and the subsequent adjustment process linked to coping strategies and methods. The author chose a qualitative research method to obtain in-depth and rich data from the participants of the study and opted for semi-structured interviews.
The 25 interviews were mostly conducted face-to-face or by telephone, as the expatriates were located in different countries around the globe. Most of the interviews were done on-site in Kuala Lumpur, Malaysia, and in Berlin, Germany, during spring 2019. Based on the 25 semi-structured interviews, the findings uncovered a diverse range of challenges and coping strategies in work- and non-work-related contexts. Different perceptions of time, interpersonal interactions and efficiency were the outstanding challenges, while the chosen coping strategy was related to the type of challenge someone faced. Active and passive coping was discovered, as well as avoidance strategies.
Artificial Intelligence and Data Science are transforming the businesses of today, and they are contributing significantly to Human Resource Management as well. Data Science has paved the way for workforce analytics, or people analytics, and Artificial Intelligence supports the HR department in various human resource management processes. Through this study, the author explores the impact of Artificial Intelligence and Data Science on Human Resource Management by analysing the underlying opportunities, challenges, and threats. The study also recommends the skills, competencies, and capabilities that the HR department will require in the future to implement AI and Data Science systems. The review of the literature and expert interviews form the basis for the research findings and recommendations. Furthermore, the study covers insights about the application of AI and Data Science tools and techniques in processes such as recruitment and selection, onboarding, performance management, employee engagement, training and development, and strategic decision making. Finally, the thesis concludes by addressing the practical implications, limitations, and the scope for future research.
The FinTech industry is very dynamic, and the number of innovative business models created by these new entrants has increased over the past years. FinTechs display a competitive advantage in the fields of technology, agility and customer-centricity that traditional banks cannot match. However, regulatory requirements often pose a challenge for FinTechs seeking to grow and expand their business models. Nevertheless, in recent years the phenomenon of BaFin-licensed FinTechs such as N26 or Solaris Bank has become apparent in Germany.
Since this topic is very recent, no studies on licensed FinTechs can be found yet. Hence, this research paper examines how the business models of these BaFin-licensed FinTechs are constructed and what impact financial licenses from BaFin can have on these business models and their positioning in the financial services industry. The focus lies on FinTechs with a BaFin license that are active in the B2B sector only. Based on an interview series conducted with relevant experts from different licensed FinTechs, the findings show that the financial license allows these firms to become independent entities, expand their product offering and strengthen their market position in the financial services industry.
After the fall of Lehman Brothers, systemic risk, which can affect the whole financial system, has gained more attention from researchers. Recently, graph theory has been applied to measure this risk. The DebtRank algorithm is one of the network-based models; it captures the ongoing propagation of shocks even when no default occurs. Given the importance of systemic risk, this paper draws a broad picture of a potential interbank network in the ASEAN region for the recent period 2013-2017 using DebtRank. This is done by assigning stress of different levels to the external assets of banks under two scenarios: simultaneously and individually. Networks are constructed based on probabilities, and the desired density of the network is found to be 10% of all possible links. The first scenario shows that the studied banking system in ASEAN is stable, as systemic risk follows a falling pattern. However, the rising figure in 2017 implies that the system was less stable in this year than in previous years.
Moreover, the decline in losses caused by contagion is due to decreasing interconnectedness and rising capital during this period. Furthermore, the total loss calculated over a range of external shocks forms a concave curve. In the second scenario, the results show that the most harmful banks are, at the same time, the most fragile ones. Besides, there is evidence that both the individual impact on the system's systemic risk and the individual vulnerability depend strongly on bank size and connectivity.
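As a rough illustration of the propagation mechanism, here is a minimal sketch of the DebtRank iteration in the spirit of Battiston et al. (2012) on a toy exposure matrix; the thesis's network construction and stress scenarios are more elaborate.

```python
# Sketch of the DebtRank distress propagation (Battiston et al., 2012), toy values.
import numpy as np

A = np.array([[0., 10., 5.],
              [8., 0., 4.],
              [2., 6., 0.]])      # A[i, j]: exposure of bank j to bank i
E = np.array([20., 15., 10.])     # bank equities
v = E / E.sum()                   # relative economic value of each bank

W = np.minimum(1.0, A / E)        # W[i, j]: impact of i's distress on j
h = np.array([0.3, 0.0, 0.0])     # initial distress: a 30% shock to bank 0
h0 = h.copy()
state = np.where(h > 0, 1, 0)     # 0 undistressed, 1 propagating, 2 inactive

while (state == 1).any():
    prop = state == 1
    spread = (W[prop, :] * h[prop, None]).sum(axis=0)
    h_new = np.minimum(1.0, h + spread)
    state = np.where(prop, 2, state)                        # propagators become inactive
    state = np.where((h_new > h) & (state == 0), 1, state)  # newly hit banks propagate next
    h = h_new

debt_rank = (h * v).sum() - (h0 * v).sum()  # induced distress net of the initial shock
print(round(debt_rank, 4))
```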
Although numerous reports have been published on the imminent effects of FinTechs on banking, there has been no in-depth analysis at the individual enterprise level. This thesis focuses not only on the industry-level changes caused by the rapid digitization and adoption of FinTech services in the payment and credit market, but also aims to identify the key factors behind the mass appeal of moving away from traditional banking services to the dynamic and ever-evolving FinTech services.
This study helps shed light on the thinking behind these new FinTech services in order to better understand how market opportunities are identified and how customized products or services are created that help move the customer base from traditional to more revolutionized "banking" services.
Most studies on factor models have been based on the U.S. stock market. Their results show that conventional factor models explain the vast part of its return variation and that each of these models strengthens the descriptive power of the traditional CAPM. However, when factor models are applied to the stock markets of emerging countries, studies have yielded rather mixed results. Even though most of them find that conventional factor models explain a major part of these markets' returns, they argue that other country-specific risk factors should also be considered in order to derive a model that describes the returns in the most comprehensive way. For this reason, this paper investigates the significance of both common and specific risk factors in order to find the most important risk factors that impact the performance of the stock markets in the BRICS countries. The effect of systematic co-moments is also tested.
Empirical evidence shows that the market premium is the most significant risk factor for all of the BRICS countries. Additionally, the value, size and momentum factors prove to be insignificant, whereas the investment and profitability factors are among the main determinants of stock returns. Specific factors also appear to be more important than the common ones. Moreover, the popular extensions of the CAPM and higher-order moments do not substantially strengthen its descriptive power. Additionally, conventional factor models explain a lower part of the return variation of the BRICS countries compared to the U.S. stock market, indicating that they do not possess the same explanatory power in developed and emerging markets.
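For reference, the kind of time-series factor regression used in such studies can be sketched as below; the data file and factor names are placeholders rather than the paper's actual dataset.

```python
# Sketch: time-series regression of excess stock returns on candidate risk factors.
# Column names are illustrative; factors follow the usual long-short construction.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("brics_returns_and_factors.csv", parse_dates=["date"], index_col="date")
excess = df["portfolio_return"] - df["risk_free"]      # left-hand side: excess returns

factors = df[["mkt_rf", "smb", "hml", "rmw", "cma"]]   # market, size, value, profitability, investment
model = sm.OLS(excess, sm.add_constant(factors)).fit()

# A significant alpha (the constant) indicates returns the factors fail to explain;
# R-squared measures how much return variation the model captures.
print(model.summary())
```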
This paper adds to the existing literature on the performance analysis of Directional funds. The findings suggest that conventional market risks, strategy risks and macroeconomic factors are all needed to explain the returns of Directional funds. By augmenting existing models and creating four new strategy-based factor models, this study was able to explain the returns of four Directional strategies. Each of the four analysed Directional strategies applies a distinctive investment approach on diverse asset markets. Therefore, although there are some similarities between some of the analysed strategies, this paper concludes that each Directional strategy is subject to different risk factors.
The objective of this paper was to determine the effect of macroeconomic variables on the profitability of banks in Germany using quarterly data from 1996 to 2018. The data was collected from FRED, the OECD and the European Central Bank Statistical Data Warehouse. The study used multiple regression to examine the effect of macroeconomic variables (GDP, interest rate spread, share prices, unemployment, exchange rate, inflation, credit loans and wages) on profitability, measured by return on assets (ROA). The analysis was conducted in EViews 10. The empirical findings suggest that there is a significant relationship between the interest rate spread, unemployment, share prices and return on assets. However, there is no significant relationship between GDP, the exchange rate, inflation, credit loans, wages and return on assets. Therefore, banks and the government are recommended to implement better policies and monitor macroeconomic variables in order to improve the financial performance of banks in Germany.
The aim of this study is to analyze whether Turkish firms apply a market timing strategy in their financing decisions and to analyze the persistence of the market timing decision in their capital structure. The sample contains the 85 initial public offerings on the Istanbul Stock Exchange (BIST) from 2010 to 2015. Regression analysis is used to test the relationship between market timing and equity issues, as well as the short-run and long-run effects of market timing. The year before the IPO and the three subsequent years after the IPO are considered for analyzing the impact of market timing. The results show that there is a positive relationship between market timing and equity issues. Firms that decide to go public in "hot" market periods issue more equity and reduce their leverage ratio sharply right after the IPO, and this relationship affects capital structure only in the short run. The short-term impact of market timing starts to vanish after the second year of being public.
Despite its simplicity, the yield curve is one of the best predictors of future economic activity. Empirical studies suggest that the yield curve is capable of forecasting recessions in major economies. In this paper, the relationship between the yield curve and stock bear markets is studied, with a focus on predicting bear markets in the U.S. and Germany. This paper also seeks to answer the question of whether a market-timing strategy based on yield-curve information can outperform the market.
The results of this study suggest that for the U.S. the spread between 10-year and 1-year interest rates outperforms other spreads in predicting bear markets. Furthermore, the yield spread can be used to profitably time the market and outperform a buy-and-hold strategy.
For the German yield curve, the study finds a statistically significant relationship between the yield curve and bear markets. However, depending on the observation period, the forecasting ability differs tremendously. Over the entire period, the yield curve was neither able to predict local bear markets reliably, nor was it possible to use the information contained in the yield curve to outperform the stock market.
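A common way to operationalize such predictions, sketched below under assumed column names and a 12-month forecast horizon, is a probit of a bear-market indicator on the lagged term spread.

```python
# Sketch: probit regression of a future bear-market indicator on the term spread.
# Column names and the 12-month lead are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("monthly_yields_and_market.csv", parse_dates=["date"], index_col="date")
df["spread"] = df["yield_10y"] - df["yield_1y"]   # 10y-1y term spread
df["bear_ahead"] = df["bear_market"].shift(-12)   # 1 if a bear market 12 months ahead

data = df.dropna()
probit = sm.Probit(data["bear_ahead"], sm.add_constant(data[["spread"]])).fit()
# A negative spread coefficient means flatter or inverted curves raise the
# estimated probability of a bear market.
print(probit.summary())
```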
This paper focuses on the European stock market and the forces that determine stock price movements on it. As a basis for the analysis, the well-known factor-model methodology is applied to investigate and explain the variance of returns on the European stock market. The analysis emphasizes the descriptive power of fundamental risk factors along with the momentum factor. As a result, five factors show the ability to explain returns in Europe. In particular, QMJ (Quality minus Junk), SMB (Small minus Big), PE (Price-to-Earnings), ILLIQ (Illiquidity) and DE (Debt-to-Equity) show the greatest explanatory power among the 27 tested risk factors. Furthermore, a factor model constructed from the five aforementioned risk factors achieved on average the greatest explanatory power when tested against six other well-known factor models.
The Efficient Market Hypothesis would lead one to believe that stock markets are perfectly efficient and that abnormal or deviant average returns are not possible. However, the existence of calendar anomalies empirically shows how specific periods during a week, month or year can influence average returns in the stock markets. Research from scholarly journals and books, industry-related news sources, and industry experts shows the occurrence of various calendar anomalies in global markets that lead to unbalanced average returns and challenge the fundamentals of the Efficient Market Hypothesis.
In countries such as the UK and Australia, public-private partnerships (PPPs) have become the preferred tool for the public sector to procure infrastructure. With this development it has become necessary to assess PPPs regarding their success, and research to this point seems to have been insufficient. This thesis gives an in-depth insight into the field of success measurement for PPPs. First, it lays out the theory behind PPPs. Then, in the next step, a framework for success measurement in PPPs is derived from this theory. It serves as the basis for the Success Measurement System (SMS) that is developed. In this regard, the thesis adds value to existing research by using a fully encompassing approach. The SMS allows for a sound measurement of success for PPPs, which includes a measurement of success for different stakeholders, phases and categories.
In the main part of this thesis, the system is applied to eight transport PPPs through case studies. The insights from these case studies are used to answer the following questions: First, are PPPs in general successful and should governments continue their implementation? Second, in which areas do PPPs fail? This is especially relevant for improving the outcomes of PPPs in the future and for enhancing PPP policies. Third, how can PPPs be compared with each other? In this regard, the results of the case studies will serve as a benchmark for future PPPs.
With regard to the first and second questions, the research has shown that PPPs are in general successful but often fail to meet cost and time targets. In addition, problems were sometimes encountered during the procurement phase, contract management and risk allocation. These are the areas on which the public and the private sector should focus to improve the outcomes of future PPPs. The results of the SMS assessment led to an average success rate which can be used as a benchmark to compare the outcomes of other PPPs.
Process models are often used as a Knowledge Management method because they are able to store, visualize, and distribute knowledge within an organization. Knowledge-intensive processes can have a flexible, unstructured form, which is often hard to represent within a process model. The available approaches to process-oriented Knowledge Management that are considered for Business Process Management are conducted manually and can therefore become time-consuming and labor-intensive. In addition, in times of Big Data, it is even more challenging to consider all possible cases. Given the possibility of automatically generating process models through Process Mining, there is huge potential for creating these processes from real information in event data. This study identifies the possibilities of generating knowledge-intensive process models through Process Mining. At the same time, it addresses the lack of representation of discovered flexible process models. Although a few articles already address the challenge of generating knowledge-intensive processes through Process Mining, the current main focus lies on operational support in information systems. The research question of this study is whether current Process Mining approaches are able to generate process models that are as informative as the models created by process-oriented Knowledge Management methods in their current state. To answer the research question, a comprehensive literature review as well as expert interviews have been conducted. Both approaches are part of the design science research methodology, which has been followed throughout the study. The theoretical results have been compared to the practical insights gained from the interviews. As a result, an informal, textual best-practice specification for mining knowledge-intensive process models within a Process Mining project has been developed. It should be considered when implementing an Enterprise Knowledge Medium into the existing IT infrastructure, so that process knowledge can be effectively saved and leveraged through Process Mining.
This thesis contributes to the growing interest in mobile and gait-based authentication. A real-world authentication system needs to ensure performance stability even if the phone holder's walking situation changes. In this paper we analyze different circumstances of an environmental nature, such as phone placement (pocket position and orientation), clothing (trouser type, trouser width, the trouser pocket's distance to the user's hipbone, shoes and bag), surroundings (location, surface) and walking style (walking speed, direction, group walk). All these labels were recorded under the theme "A Walk Through Berlin": 24 participants were equipped with a Samsung Galaxy S7 smartphone and recorded their walking behavior at up to three locations in Berlin, Germany, in at least two day-independent workshop sessions. Besides data collection, this research addresses limitations with regard to the one-class classification problem. An application should process data directly on the device itself and should not share these sensitive data streams with external parties or servers. We perform feature preprocessing and extraction in sliding windows and use a one-class Support Vector Machine for user classification. After selecting the best features, we obtain an Equal Error Rate of 21% for the model's total performance, where the training and validation sets are enrolled on different days.
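A minimal sketch of this kind of pipeline, assuming raw accelerometer-magnitude recordings sampled at a fixed rate; the window length, features, and nu parameter are illustrative rather than the thesis's tuned values:

```python
# Sketch: sliding-window features + one-class SVM for gait-based authentication.
# Trained only on the genuine user's walking data (one-class problem).
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(signal: np.ndarray, size: int = 128, step: int = 64) -> np.ndarray:
    """Extract simple statistics per sliding window of accelerometer magnitude."""
    feats = []
    for start in range(0, len(signal) - size + 1, step):
        w = signal[start:start + size]
        feats.append([w.mean(), w.std(), w.min(), w.max(), np.abs(np.diff(w)).mean()])
    return np.array(feats)

enroll = np.load("user_day1_accel.npy")     # genuine user's enrollment recording
probe = np.load("unknown_day2_accel.npy")   # later recording to authenticate

model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(window_features(enroll))
scores = model.decision_function(window_features(probe))
print("accept" if scores.mean() > 0 else "reject")  # positive scores = inlier windows
```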
As an innovative tool, gamification has gained considerable importance in recent years for navigating and finding creative solutions to the problems and challenges faced by existing companies.
In this study, a game toolkit designed with gamification elements, which can potentially serve as a guide to creative ideation, is introduced and applied in a data-driven industry. The aim is to analyze the effects of the specifically designed game toolkit and the applied game elements on creative ideation and on the attributes of users in general. The toolkit is explored using insights obtained from the analysis and observation of the behavior of 16 players during the actual experiments. Furthermore, interviews and online surveys are used to conduct the quantitative and qualitative analyses in order to enhance the game within the scope of Action Design Research.
The toolkit proposed by the researcher is intended to contribute to the existing literature. It can provide guidance and valuable suggestions for possible future studies on the creation and execution of a game toolkit in a field focused on data-driven innovations. Moreover, the results specifically indicate that the game toolkit directly drives creative idea generation while promoting engagement, motivation, and enjoyment as well as altering mindsets and thinking patterns.
Companies, public services and other institutions are increasingly turning to web-based applications, but attacks are increasing in both number and variety.
Previous approaches to preventing attacks with web application firewalls rely primarily on pattern-based detection. This document evaluates whether, and which, machine learning methods can be used to reliably detect web-based attacks. Classifiers such as Support Vector Machines, Neural Networks, Naïve Bayes, Decision Trees and Logistic Regression are used.
Furthermore, possible use cases and visualizations of the decisions are suggested.
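To illustrate how such classifiers can be applied to raw HTTP request strings, here is a minimal sketch using character n-gram features and one of the listed classifiers; the toy requests and labels are assumptions, not the document's actual data.

```python
# Sketch: character n-gram features + a classifier to flag malicious HTTP requests.
# Labeled request strings (e.g., paths with SQL injection or XSS payloads) are assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

requests = ["/index.php?id=42", "/search?q=' OR 1=1 --", "/home", "/item?x=<script>"]
labels = [0, 1, 0, 1]                      # 1 = attack, 0 = benign (toy labels)

# Character n-grams capture payload patterns regardless of token boundaries.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(requests)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```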
This thesis investigates the relationship between the opinions politicians express in speeches and politicians' paid side activities, a potential channel of influence for lobbyists. With a focus on developing a methodology to automatically extract opinions and opinion changes for a specific item on the political agenda, the thesis reveals insights hidden in political speeches. Advances in text mining present new opportunities to design advanced algorithms to achieve the stated objective. Along the journey of developing the methodology, the richness of the algorithm components used, and of those possible, is discussed.
Poker is an interesting environment for artificial intelligence research. It is a game of imperfect information with multiple competing agents facing each other in a world of limited information, risks and deception. The observed average playrate is a commonly used indicator for the hole-card selection patterns of agents in Texas Hold'em poker and serves as a fundamental feature in any modeling approach. In this paper we discuss why the average playrate is a biased metric and derive a new, unbiased metric, the "maximum willingness differential", from empirical data. Finally, the metric is formalized mathematically in such a way that it can be derived from the observed playrate. Further implementation of this new metric could enhance agent-modeling techniques and ahead/behind research.
In recent years, terrorism has taken on a whole new dimension and become a global issue because of widespread attacks and a comparatively high number of fatalities. Understanding the attack characteristics of the most active groups and the subsequent statistical analysis is therefore an important aspect of counterterrorism support in the present situation. In this thesis, we use a variety of data mining techniques and descriptive analysis to determine, examine and characterize the threat level posed by the ten most active and violent terrorist groups, and then use machine learning algorithms to provide intelligence for counterterrorism support. We use historical data on terrorist attacks that took place around the world between 1970 and 2016 from the open-source Global Terrorism Database, and the primary objective is to translate terror-incident-related information into actionable intelligence. In other words, we chase the trajectory of terrorism in the present context with statistical methods and derive insights that can be useful.
A major part of this thesis is based on supervised and unsupervised machine learning techniques. We use the Apriori algorithm to discover patterns in various groups. One of the interesting patterns we find is that ISIL is more likely to attack other terrorists (non-state militia) with bombings/explosions resulting in between 6 and 10 fatalities, whereas Boko Haram is more likely to target civilians with explosives, without suicide attacks, resulting in more than 50 fatalities. Within the supervised machine learning context, we extend previous research in time-series forecasting and make use of TBATS, ETS, Auto ARIMA and Neural Network models. We predict the future number of attacks in Afghanistan and the Sahel region, and the number of fatalities in Iraq, at a monthly frequency. From the time-series forecasting, we show two things: the model that works best on one time series may not be the best on another, and the use of ensembles significantly improves forecasting accuracy over the base models. Similarly, in the classification modeling part, previous research has not made use of recently developed algorithms. We extend previous research on the binary classification problem and make use of the cutting-edge LightGBM algorithm to predict the probability of a suicide attack. Our model achieves an AUC of 96% and correctly classifies "Yes" instances of suicide attacks with 86.5% accuracy.
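The final classification step could look roughly like the following sketch, assuming a preprocessed GTD-style feature table; the file name, features, and parameters are placeholders rather than the thesis's actual configuration.

```python
# Sketch: LightGBM classifier predicting the probability of a suicide attack.
# Assumes a preprocessed GTD-style feature table with a binary 'suicide' column.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("gtd_features.csv")
X = df.drop(columns=["suicide"])
y = df["suicide"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
model = LGBMClassifier(n_estimators=500, learning_rate=0.05).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]   # predicted probability of a suicide attack
print("AUC:", round(roc_auc_score(y_te, proba), 3))
```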
In this thesis, a conceptual framework for content marketing is developed to broaden the understanding of content marketing and its role in enhancing customer value and, as a result, improving the return on investment (ROI) in marketing. The author explores definitional aspects of content marketing and identifies its potential for process automation. The author emphasizes the need for a cross-functional, process-oriented approach that positions content marketing at a strategic level.
Today, many organizations maintain a variety of systems and databases in a complex architecture that does not seem to fulfill the needs of a cross-functional, process-oriented approach. To address this challenge, the paper aims to identify the key generic processes relevant to content marketing automation. The literature was examined to identify an appropriate categorization of the required processes. Three cross-functional processes were identified and developed into a conceptual framework for strategy development and implementation: Lead Management, Asset Management and Campaign Management. A new conceptual framework is developed based on these processes, and the role and function of each module is explored. Furthermore, the interacting software systems and their functions within the process architecture are identified.
The literature review found that few reference process models exist, and those that do are not based on a process-oriented, cross-functional conceptualization of content marketing. This gap in the literature suggests a need for a new, systematic, process-based content marketing strategy framework. Synthesizing the diverse concepts in the literature on content marketing and marketing automation into a single, process-based framework should provide practical insights to help companies achieve greater ROI in their content marketing automation strategy development and implementation.
The reference processes developed in this paper provide a starting point for the conceptualization, but the author recommends further exploration and modification of the reference model regarding its application.
The paper tries to provide a first data-informed taxonomy of Design Thinking. In addition, it presents a set of prototype tools that could be used during data-oriented Design Thinking Workshops, or Data Thinking Workshops (DTWs). New products and services increasingly follow a data-driven strategy, which creates the need for designers to create products and services with data in mind from the beginning of the innovation cycle. A commonly used technique is Design Thinking, which so far has a very narrow view of data. Hence, the first part of the paper analyzes the core principles of design processes and derives a data-informed Design Thinking taxonomy. Additionally, the paper provides a set of tools that incorporate this taxonomy. Using an action design research approach, the DTW format is tested and the results are analyzed using a triangulation approach.
The suggested data-informed Design Thinking taxonomy, the proposed DTW format and the provided tools are of potential benefit for designers focusing on developing more digital, data-driven products and services. They help designers sharpen their perspective on data challenges and arrive at a more holistic view of data in their products and services. While the tools suggested in this paper work in general, several newly raised questions and considerations for further research can be derived from the findings. It is hoped this paper will help designers to further develop the taxonomy as well as come up with new tools that can place data at the core of the design process.
Process mining is on the rise. It consists of methods, techniques, and tools to discover, monitor and improve processes by extracting knowledge from the event logs of information systems. In combination with the rapidly growing availability of event data (Big Data), challenges arise in the field of data quality. As data quality is the foremost success factor for process mining, there is an urgent need to become familiar with the topic and leverage possibilities to improve the data quality of event logs.
This thesis contributes to the improvement of event log data quality. Based on a literature review, it outlines criteria and issues both for data quality in general and for process mining in particular. The aim is to offer an overview of existing challenges and how to approach them with a preliminary framework consisting of standardized BPMN processes. In addition, design science with experimental validation was applied to master the data quality challenges of data noise and partially incomplete traces with common tools like ProM and R. The approach was evaluated using real-life and artificial data.
The results reveal a large variety of data quality challenges for process mining and a need to raise awareness of this topic. Due to the early stage of process mining research, the number of methods, techniques, and tools to improve data quality is limited. Nevertheless, possibilities exist to improve the data quality of event logs. In particular, a repairing technique for data noise was positively evaluated.
The author recommends further specification of the framework and the development of software features for data quality issue detection, data quality assessment, and event log reparation. It is advisable to monitor new publications, not least because most of the leveraged R libraries were launched recently. Finally, further research in the field of data quality awareness should be conducted to point out the importance of event log data quality for the overall success of process mining.
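As a small illustration of one such cleaning step, the sketch below drops partially incomplete traces from an event log using pandas; the column names and the expected end activity are assumptions.

```python
# Sketch: drop incomplete traces (cases that never reach the end activity) from an event log.
# Assumes an event log with 'case_id', 'activity' and 'timestamp' columns.
import pandas as pd

END_ACTIVITY = "Order Completed"   # hypothetical expected final activity

log = pd.read_csv("event_log.csv", parse_dates=["timestamp"])
log = log.sort_values(["case_id", "timestamp"])

# Keep only cases whose trace contains the expected end activity.
complete = log.groupby("case_id")["activity"].apply(lambda a: END_ACTIVITY in set(a))
clean = log[log["case_id"].isin(complete[complete].index)]

print(f"kept {clean['case_id'].nunique()} of {log['case_id'].nunique()} cases")
```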
This research thesis contributes to the growing interest in wireless sensor networks (WSN) in smart factory environments. About 80% of the German machinery and plant construction industry is intensively attending to the topic in order to increase factory uptime, provide individual customer solutions and improve customer service (Berger, 2017). The implementation of wireless sensor networks raises new challenges, due to the high distribution of sensor units and the special requirement of low energy consumption while still providing sufficient network transmission capacity. Great research efforts are being made to investigate the behaviour of wireless sensor networks for specific applications and to provide more comprehensive interfaces and protocols. This research study examines a specific use case of acceleration sensor nodes, which are used in cable carriers to predict likely maintenance times. Inaccurate and faulty measurements, transmission errors and different signal transformations make accurate predictions impossible. In that regard, various signal processing steps are conducted and evaluated in this thesis to compensate for the resulting errors. Furthermore, a novel approach of estimating the remaining error through machine learning, in the form of a linear regression, is pursued. The results show that the margin of error can be sufficiently estimated using an appropriate reference measurement as the target value for model training. Results and findings as well as recommendations for further investigation are provided.
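A minimal sketch of that final estimation step, under the assumption that per-window signal statistics serve as inputs and the deviation from a reference measurement as the regression target:

```python
# Sketch: estimating the remaining measurement error of acceleration sensor nodes
# with a linear regression, using the deviation from a reference sensor as target.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

features = np.load("window_features.npy")   # e.g., per-window mean, std, spectral stats
error = np.load("reference_deviation.npy")  # per-window deviation from reference measurement

X_tr, X_te, y_tr, y_te = train_test_split(features, error, test_size=0.2, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print("MAE of error estimate:", mean_absolute_error(y_te, reg.predict(X_te)))
```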
An increasing number of research papers published every year makes it more and more difficult for researchers to find papers that address the same or a similar topic as their own current work. Electronic online databases provide a good starting point for getting an overview of the papers published on a certain topic. The results presented by the online libraries, however, vary greatly in terms of relevance to the intended search query. In addition, most online libraries return thousands of search results, a huge share of which is not relevant to the user at all. There are recent approaches to increasing the quality of the results when searching for relevant research papers online. While most current search engines focus mainly on the titles of the papers as well as a few tags, these approaches try to use other information found in the papers as their input.
This master’s thesis focuses on undirected graphs and presents a first software prototype that translates images of graphs into a metalanguage. Since many scientific publications contain graphs, these could be a possible future input for search algorithms. For this purpose, an image recognition model consisting of a convolutional neural network and two long short-term memory networks was developed in collaboration with Badr Bih. The model itself is based on a similar software prototype called pix2code, developed by Tony Beltramelli in 2017. For computational reasons, we had to limit our graphs to a maximum of four nodes and trained our model on them. The results of the model are promising, but it will have to be trained again on graphs with more vertices to make a definitive evaluation of the approach possible.
In the world of internet marketing, search engine optimization is a popular term. It is basically about getting higher rankings in the search engine and thereby getting more views for the advertiser's site. However, those views will not mean a lot if they do not lead to sales or conversions. Search advertisers join an online auction in order to get a slot on the search engine results pages. In the Pay-per-Click model, this means advertisers have to pay the search engine for the number of clicks on the advertisement they posted. Therefore, predicting conversion likelihood is highly crucial for advertisers for the sake of revenue. Besides, being able to understand and track conversion rates not only allows advertisers to measure the performance of their web pages but also to identify areas for improvement. In this study, machine learning is used to predict the conversion rate from search term queries in Google Shopping ads. The purpose is to analyse whether a pattern in search term queries can be observed that predicts the conversion rate. Search term queries with a high probability of a better conversion rate can be advertised more, while the bids on low-conversion segments can be lowered to reduce costs. For this purpose, features that represent the text in a machine-readable way, such as term frequency and paragraph vectors, are extracted and tested in machine learning and deep learning models. The results show that different patterns of search term queries do not lead to a predictable conversion rate in this specific use case. Thus, the search term itself is no indicator of a good or bad conversion rate.
Sequential Statistical Testing Procedures as an Early Stopping for Binomial Bandit Experiments
(2019)
This study tries to provide an early stopping procedure on binomial bandits which is a type of multi-arm bandit experiment. In addition to that, it presents sequential statistical testing procedures which can be used as early stopping criteria for A/B experiment. The paper searches for the applicability of these procedures for binomial bandit case because multi-arm bandit experiment and A/B experiment are similar in the sense that rewards can be simulated as independent identical Bernoulli distribution.
It is often claimed that multi-armed bandit which use Thompson sampling requires dramatically less sample size than A/B testing while still controlling the type 1 and type 2 error rates on alpha and beta due to the concept of always switching to the better arm. However, it is also claimed that Bayesian procedures are not immune to peeking (early stopping) because of the structure of sequential testing.
Sequential statistical testing procedures reduce the required number of observations and allow an experiment to stop early once the collected data suffice to draw a conclusion. In this work, Wald’s SPRT and Max SPRT-I-AA (a modified version of Max SPRT) are studied for A/B testing, and their applicability to the binomial bandit case is investigated.
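As a point of reference for the first of these two procedures, the sketch below implements Wald’s classical SPRT for Bernoulli observations; the hypothesized rates and error targets are illustrative, and the boundaries follow Wald’s standard approximations.

```python
# Wald's SPRT for Bernoulli data: H0: p = p0 vs H1: p = p1.
import math, random

p0, p1 = 0.05, 0.07          # null and alternative conversion rates
alpha, beta = 0.05, 0.20     # target type I / type II error rates
upper = math.log((1 - beta) / alpha)   # accept H1 when crossed
lower = math.log(beta / (1 - alpha))   # accept H0 when crossed

random.seed(1)
llr, n = 0.0, 0
while lower < llr < upper:
    x = random.random() < 0.07       # simulate one observation (true p = p1)
    llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    n += 1

print("decision:", "H1" if llr >= upper else "H0", "after", n, "observations")
```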
According to the simulation results, the Max SPRT-I-AA sequential testing procedure performs well for the A/B test scenarios: the type I and type II error rates remain at acceptable levels, and the experiment time is reduced by more than 50%. For the binomial bandit case, however, the results are less satisfactory and raise further questions about the applicability of Max SPRT-I-AA to multi-armed bandits, owing to the performance of the upper boundary calculation.
Hypothesis Extraction from Academic Papers Using Neural Networks for Ontology Theory Learning
(2019)
In this study, we investigated a new use case for deep learning: extracting causes and effects from the hypotheses of scientific papers. The research presents a variety of RNN models for labelling the sequences, including RNN models with a CRF layer; we used Bi-LSTM, LSTM, SimpleRNN and GRU models. The experiments were conducted with GloVe vector representations and character-level vector representations of words. Along with the RNN models, we evaluated various hyperparameters and model setups to achieve the highest performance scores. In the end, we obtained promising results and share our thoughts on the prospects of follow-up studies.
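For illustration, a minimal Keras sketch of a Bi-LSTM sequence labeller of the kind used here; the vocabulary size, tag set and dimensions are assumptions, and the CRF output layer evaluated in the thesis is omitted for brevity.

```python
# Bi-LSTM token tagger for cause/effect labelling (assumed dimensions).
from tensorflow.keras import layers, models

# e.g. tags: O, B-Cause, I-Cause, B-Effect, I-Effect
VOCAB, TAGS, MAXLEN, EMB = 10_000, 5, 60, 100

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, EMB, mask_zero=True),  # GloVe weights could be loaded here
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(TAGS, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```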
The Impact of Efficient Transportation Infrastructure on China Pakistan Economic Corridor (CPEC)
(2019)
This bachelor thesis investigates the influence of national culture on the feasibility of self-management practices such as Holacracy. On the basis of qualitative interviews with six Holacracy-experienced individuals from Germany, the USA and the Netherlands, indicators of relevant cultural influences are examined based on the individuals’ perceptions of Holacracy. A conceptualized relation between Geert Hofstede’s research on cultural dimensions and employee-related success factors for the implementation of Holacracy provides the framework for the analysis. The findings suggest that a correlation between national culture and the feasibility of self-management practices exists, which is in line with the occurrence of holacratic companies across different countries. The analysis confirms the expectation that employee-related indicators support the implementation of Holacracy in the Dutch and American cultures. In contrast, fewer supporting factors were observed for the German culture, which would explain the limited number of holacratic organizations in Germany.
In light of the increasing importance of digital and technological innovations, corporations drive innovation initiatives in collaboration with startups through corporate accelerators. How the corporate accelerator supports both parties throughout the innovation process with regard to communication and the exchange of information has not been fully studied and offers an opportunity for further analysis. In this thesis, the supportive work of the corporate accelerator has been studied empirically through three interviews with two experts from corporate accelerators and one startup founder, providing in-depth insights and perspectives on the subject.
This thesis examines the effectiveness of the latest Transfer Learning techniques for Natural Language Processing applied to the classification of research methods used in scientific journals in the domain of Information Systems. The task of automated knowledge extraction from academic articles has seen ongoing progress in recent years; however, the combination of transfer and Deep Learning to assign research methods to scientific papers has not yet been addressed in the literature. The main contribution of this thesis is therefore an artifact that applies cutting-edge Transfer Learning techniques to a Deep Learning model, conducting several experiments and comparing their effectiveness. The prototype considers various ways of fine-tuning that are crucial to retain the knowledge transferred from pretrained models and to avoid catastrophic forgetting. Additionally, this work discusses the literature with regard to the task-specific Theory Ontology Learning and the method-specific state of the art in Transfer Learning for Natural Language Processing. As a result, the artifact surpassed the performance of previously developed models for research method extraction presented in the literature, without applying any custom feature engineering and using only around a thousand labeled observations.
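The fine-tuning pattern at the heart of this approach can be sketched as follows: keep the pretrained weights frozen while the task-specific layers are trained, then unfreeze them with a low learning rate so that the transferred knowledge is not overwritten (catastrophic forgetting). The dimensions and the stand-in “pretrained” matrix below are placeholders, not the thesis’s actual setup.

```python
# Two-phase fine-tuning: freeze pretrained weights, then unfreeze carefully.
import numpy as np
from tensorflow.keras import layers, models, optimizers, initializers

VOCAB, EMB, CLASSES = 5_000, 100, 10
pretrained = np.random.rand(VOCAB, EMB)   # stand-in for e.g. GloVe weights

emb = layers.Embedding(VOCAB, EMB,
                       embeddings_initializer=initializers.Constant(pretrained),
                       trainable=False)    # phase 1: pretrained layer frozen
model = models.Sequential([
    emb,
    layers.LSTM(128),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... train the task-specific layers on labelled data ...

emb.trainable = True                       # phase 2: gradual unfreezing
model.compile(optimizer=optimizers.Adam(1e-5),  # low lr protects transferred knowledge
              loss="sparse_categorical_crossentropy")
```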
This research paper explores the application of the Universal Language Model Fine-tuning (ULMFiT) technique, a novel deep transfer learning approach in the Natural Language Processing (NLP) field, to the task of detecting financial statement fraud. Additionally, the artifact investigates a simpler model, a one-dimensional Convolutional Neural Network (CNN). Both methods were assessed with respect to training time and predefined evaluation metrics. Overall, ULMFiT turned out to be considerably more computationally expensive to train and achieved an accuracy of 77% with an F-measure of 13% when used with a decision boundary of 0.5, and an accuracy of 59% with an F-measure of 42% when calculated with a threshold of 0.2. In contrast, the CNN model trained significantly faster and obtained an accuracy of 82% with an F-measure of 42% for a threshold of 0.5, and an accuracy of 83% with an F-measure of 58% for a threshold of 0.2. Thus, ULMFiT was outperformed by the one-dimensional CNN on all examined metrics. The results are reported for two different decision-boundary values because of the precision-recall trade-off, which depends on the use case. In addition, this thesis investigated the impact of data preprocessing. The findings show that removing all numbers and special symbols, combined with text truncation limiting the sequence length, had a positive effect on both models mentioned above.
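The threshold effect reported above can be reproduced in miniature: the same classifier scores yield different accuracy/F-measure combinations depending on the decision boundary. The scores and labels below are made up for illustration.

```python
# Precision-recall trade-off: one set of scores, two decision boundaries.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
scores = [0.1, 0.3, 0.2, 0.1, 0.4, 0.6, 0.3, 0.7, 0.8, 0.25]

for threshold in (0.5, 0.2):
    y_pred = [int(s >= threshold) for s in scores]
    print(threshold,
          "acc=%.2f" % accuracy_score(y_true, y_pred),
          "f1=%.2f" % f1_score(y_true, y_pred))
```

Lowering the boundary catches more positives (higher recall and, here, a higher F-measure) at the cost of more false alarms, which mirrors the pattern reported for both models above.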
In this study, we investigate the use of generative adversarial networks (GANs) for a sequence labeling task. We applied a sequence generative adversarial network (SeqGAN) to extract cause-effect and moderator-mediator relations from hypotheses in scientific papers. This research focuses on the structure of SeqGAN and the problems that come with it. There are two main problems with GANs. First, a vanilla GAN (Goodfellow et al., 2014) is designed for generating real-valued, continuous data, whereas we want to label discrete words with tokens. Second, a GAN can only provide a loss for a complete sequence (Yu, Zhang, Wang, & Yu, 2017, p. 1). To address these problems, two things are changed. First, the sequence generation process is framed as a decision-making process, and the sequences are evaluated by the discriminator model. To solve the problem with discrete data, we follow Yu et al. and train the generative model as a policy via policy gradient. To approximate the value of a partial sequence, a Monte Carlo search is employed in the generative model. We show the performance of different parameter settings and which tricks improve the results. The Python and R scripts are appended in this document and can also be found on GitHub (https://github.com/clamkewitz/GANCause).
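A toy sketch of this policy-gradient view, with a fixed reward function standing in for the trained discriminator: the generator is a per-position categorical policy, partial sequences are valued by Monte Carlo rollouts, and REINFORCE updates the logits. Everything here is a simplified assumption, not the thesis’s actual code.

```python
# SeqGAN-style training loop in miniature: MC rollouts + REINFORCE.
import numpy as np

rng = np.random.default_rng(0)
T, V, LR, ROLLOUTS = 4, 3, 0.5, 16
target = np.array([0, 2, 1, 2])            # sequence the "discriminator" prefers
theta = np.zeros((T, V))                   # per-position token logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reward(seq):                           # stand-in for the discriminator score
    return float(np.mean(seq == target))

def rollout(prefix):                       # complete a partial sequence with the policy
    seq = list(prefix)
    for t in range(len(seq), T):
        seq.append(rng.choice(V, p=softmax(theta[t])))
    return np.array(seq)

for _ in range(300):
    seq = rollout([])                      # sample a full sequence
    grad = np.zeros_like(theta)
    for t in range(T):
        # Monte Carlo estimate of the action value of token seq[t]
        q = np.mean([reward(rollout(seq[:t + 1])) for _ in range(ROLLOUTS)])
        onehot = np.eye(V)[seq[t]]
        grad[t] += q * (onehot - softmax(theta[t]))   # REINFORCE update direction
    theta += LR * grad

print(theta.argmax(axis=1))                # should move toward the target tokens
```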
A Deep Reinforcement Learning Agent Using Multiple Assets Financial Signals for Portfolio Management
(2019)
In this study, we investigated possible applications of reinforcement learning in the area of portfolio management. A specific reinforcement learning architecture, Deep Deterministic Policy Gradient (DDPG), was chosen to study the feasibility of training neural networks to trade in an environment that simulates real-world trading. The results show that the DDPG agent is able to learn price patterns and perform profitable trades. In the single-stock backtest, DDPG generates an annual return of 7%, while in the multiple-stock backtest the DDPG agent generates an annual return of 12%.
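The core signal such an agent optimises can be sketched briefly: the action is a portfolio weight vector and the reward is the resulting period (log) return net of transaction costs. The price relatives and fee below are illustrative assumptions.

```python
# One environment step of a portfolio-management reward (toy numbers).
import numpy as np

def step_reward(weights, price_relatives, fee=0.0025):
    """Portfolio log-return for one period, minus a flat transaction cost."""
    weights = np.asarray(weights) / np.sum(weights)   # normalise the allocation
    growth = float(weights @ price_relatives)         # portfolio value ratio
    return np.log(growth) - fee

# three assets: prices moved +2%, -1%, +0.5% in this period
print(step_reward([0.5, 0.2, 0.3], np.array([1.02, 0.99, 1.005])))
```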
Deep Learning, a topic of broad and current interest, has undergone rapid development in the last decade, with algorithm performance already above human level in some domains. The area of Natural Language Processing, however, still poses great challenges for researchers. This study explores the novel ULMFiT method for text classification by applying it to a dataset of scientific articles with advanced rhetorical categories. This classification task is challenging even for humans, and it requires substantial analysis of the texts when conducted by machine learning algorithms. The objective of this attempt is to achieve text classification with minimal preparation. The ULMFiT method is the first successful effort to apply transfer learning to NLP tasks; its performance is evaluated here on a task that requires a level of understanding beyond the semantic meaning of the text.
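A hedged sketch of the two ULMFiT stages, assuming fastai v1’s text API: fine-tune a pretrained AWD-LSTM language model on the target corpus, then reuse its encoder in a classifier with gradual unfreezing and discriminative learning rates. The file name and hyperparameters are placeholders, not the study’s actual configuration.

```python
# ULMFiT in two stages with fastai v1 (assumed API version and file names).
from fastai.text import (TextLMDataBunch, TextClasDataBunch, AWD_LSTM,
                         language_model_learner, text_classifier_learner)

# stage 1: language-model fine-tuning on the article texts
data_lm = TextLMDataBunch.from_csv('.', 'articles.csv')
lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm.fit_one_cycle(1, 1e-2)
lm.save_encoder('ft_enc')

# stage 2: classifier on the rhetorical categories, reusing the encoder
data_clas = TextClasDataBunch.from_csv('.', 'articles.csv',
                                       vocab=data_lm.train_ds.vocab)
clf = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf.load_encoder('ft_enc')
clf.fit_one_cycle(1, 1e-2)
clf.freeze_to(-2)                                      # gradual unfreezing
clf.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))   # discriminative lrs
```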
Blockchain technology is often associated with Bitcoin, but it enables further possibilities, such as smart contracts, which offer the opportunity to conclude contracts between two or more parties without a central instance such as a bank or notary. However, the technology is relatively new and largely unexplored, and it remains to be seen whether it can establish itself in the business world. This thesis analyses the feasibility of this approach, considering social, legal and technical aspects and their challenges. The aim is to discuss the use of smart contracts, to examine the feasibility of their adoption, and to give recommendations for further measures. To this end, a range of literature was reviewed to understand the different aspects and to make an overall assessment. In general, time is needed to understand the technology and the new opportunities it creates, and thereby to build trust. Finally, changes are needed in all three aspects to unlock the full potential of the technology: data protection should become less stringent so as not to hinder further progress, research on the technology must continue and standards must be set, and society should be educated to overcome the fear of innovation.
The goal of this research is to investigate the use of deep convolutional neural networks for racing bib number (RBN) recognition in sports images. Several deep neural network architectures are studied, and three final architectures are trained on three different sets of data: 1) the Street View House Numbers (SVHN) dataset, 2) a private dataset from Flashframe.io covering different running events, and 3) a combination of datasets 1 and 2. This thesis investigates the recognition performance on racing bib numbers of a neural network trained solely on street house number images, on a mixture of SVHN and RBN images, and on RBN images only. The motivation is to see how well this problem can be solved by transfer learning, as labelled images of racing bib numbers are scarce.
The models are tested on the RBNR dataset (Ben-Ami et al., 2012) and a subset of the private dataset from Flashframe.io. The study shows that the best recognition results were obtained by a model trained on the hybrid dataset of all SVHN images plus an additional 50,000 images. This model outperformed the models trained solely on the SVHN dataset or solely on the private racing bib number dataset.
The best model achieved a recall of 0.92, a precision of 0.93 and an F-measure of 0.93 on the RBNR dataset (using the same formulas as previously reported for the RBNR dataset), and 0.97 for all three metrics on the private dataset. The reported recognition results on the RBNR dataset are much higher than those of previously used methods, showing that neural networks can be used effectively for racing bib number recognition.
The Unhealthy Trade: The Expansion of the Food Industry in Latin America, case study of Mexico
(2019)
The text analyses the global and local inequalities that emerge from the global food industry. More specifically, it analyses how changes in food consumption have affected the health of the most vulnerable groups of people. First, I analyse how the global food industry has expanded, generating a health problem of obesity and non-communicable diseases (NCDs), especially in the Global South. The result is that global inequalities between the Global North and South are widening: this is reflected in higher rates of obesity and NCDs in peripheral countries, while Global North companies reap more profits. Second, I analyse how this expansion affects local inequalities in Mexico, where socioeconomic dynamics shape the way ultra-processed food and drinks are consumed, generating impacts differentiated by gender, class and age; women, poor people and children are the most affected. In the end, the expansion of the global food industry has contributed to reinforcing existing inequalities at both the global and local levels.
The objective of this master’s thesis is to investigate the middle class in China in a comprehensive way and to examine its main defining elements. In addition, the thesis attempts to identify the overlap between the middle class and economic growth by discussing middle class development issues. A historical analysis of middle class development is also included to understand what factors have driven its growth to date.
Overall, backed by previous research and scholarly papers, the thesis finds that economic growth, achievements in higher education, and fair redistribution policies leading to reduced income inequality are the major elements contributing to the growth of the middle class, even more significant than the social policies that explain class mobility. Based on my analysis of China, per capita income growth was the major element, expanding the Chinese middle class by almost 75% through the creation of a more open business market, with the remainder attributable to changes in inequality in China. The growth occurred mainly via the creation of formal jobs, which offered fairer pay and raised the qualifications of the labor force. There was also a radical decrease in the inequality of labor earnings, the major factor in reducing income inequality among households. This can be explained by the fact that minimum wages kept rising throughout the 1990s, which in turn modified the patterns of middle class consumption.