Monday, December 30, 2019

An Overview of China's One-Child Policy

China's one-child policy was established by Chinese leader Deng Xiaoping in 1979 to restrict communist China's population growth; it limited couples to having only one child. Although designated a temporary measure, it remained in effect for more than 35 years. Fines, pressure to abort a pregnancy, and even forced sterilization of women accompanied second or subsequent pregnancies. The policy was not an all-encompassing rule, because it applied only to ethnic Han Chinese living in urban areas; citizens living in rural areas and minorities living in China were not subject to the law.

Unintended Effects of the One-Child Law

There have long been reports that officials forced women pregnant without permission to have abortions and levied steep fines on families violating the law. In 2007, riots broke out as a result in the southwestern Guangxi Autonomous Region of China, and some people may have been killed, including population control officials.

The Chinese have long had a preference for male heirs, so the one-child rule caused many problems for female infants: abortion, out-of-country adoption, neglect, abandonment, and even infanticide were known to occur to females. Statistically, such draconian family planning has resulted in an estimated disparate ratio of 115 males for every 100 females among babies born, whereas 105 males are naturally born for every 100 females. This skewed ratio in China creates the problem of a generation of young men without enough women to marry and start families of their own, which some speculate may cause future unrest in the country. These forever bachelors will also have no family to care for them in their old age, which could put a strain on future government social services. The one-child rule has been estimated to have reduced population growth in the country of nearly 1.4 billion (estimated, 2017) by as much as 300 million people over its first 20 years.
Whether the male-to-female ratio eases with the discontinuation of the one-child policy will become clear over time.

Chinese Now Allowed to Have Two Children

Though the one-child policy may have had the goal of preventing the country's population from spiraling out of control, after several decades there were concerns over its cumulative demographic effect, namely a shrinking labor pool and a smaller young population to take care of the number of elderly people in ensuing decades. So in 2013, the country eased the policy to allow some families to have two children, and in late 2015, Chinese officials announced the scrapping of the policy altogether, allowing all couples to have two children.

Future of China's Population

China's total fertility rate (the number of births per woman) is 1.6, higher than slowly declining Germany at 1.45 but lower than the U.S. at 1.87 (2.1 births per woman is the replacement level of fertility, representing a stable population, exclusive of migration). The two-child rule hasn't stabilized the population decline completely, but the law is young yet.

Sunday, December 22, 2019

Essay on Traditional vs Distance Education

Education is an essential element in societies throughout the world. For many years education has been provided in classrooms on campuses worldwide, but there has been a change made to the conventional method of classroom learning. With the advancements in technology, education has been restructured so that it may be accessible to everyone through taking courses online. Distance learning takes place when the teacher and student are separated from one another by their physical location, and technology is used to communicate instructions to the student and to communicate feedback to the instructor. The virtual classroom is one of the various forms of technology used as an alternative to the traditional classroom setting.

According to Fox (1998), what is in dispute is not whether distance education is ideal, but whether it is good enough to merit a university degree, and whether it is better than receiving no education at all. He alludes to an argument that students learn far too little when the teacher's personal presence is not available, because the student has more to learn from the teacher than from the texts. Thus, in order for the student to be taught well, does the teacher have to be personally present?

Many advocates of distance education are ardent about their venue and very critical of traditional education. These online education devotees view traditional classes as being unchangeable, inflexible, teacher-centered, and static (Fitzpatrick 2001). However, proponents argue that many simply would not be able to get a degree without distance education: the full-time police officer, the mother of four, or the individual living in a rural area 100-200 miles away from any educational institution. Many individuals desperately need distance education courses because they have jobs, families, and civic responsibilities. They are thirsting.
But some want us to say, "Sorry, you don't want to drink the water there, but we can't bottle our fresh spring water, so you'll have to come here or drink nothing" (Fox, 1998, p. 5). Proponents contend that distance education is as good as traditional education.

Friday, December 13, 2019

A Responsible Government Must Act to Protect Its Citizens

'Freedom of expression constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual's self-fulfilment' (Robertson G., as cited in Petley 2009). The growing concern caused by possible abuse of censorship in modern societies has raised numerous debates regarding an appropriate balance between censorship and freedom of expression. This essay will argue that, notwithstanding the fact that liberty of speech should hold a central place in today's world in order to be consistent with democratic values and the public interest, this right can never be absolute, due to possible repercussions that could eventually result in social chaos. Therefore, government interference is in some cases necessary not only to preserve the balance against certain rights, but also to comply with general duties involving responsibility for national security, protection of citizens and prevention of public disorder or crime within the country. The scope of this essay will focus on defining censorship, analysing its function and most common forms, as well as examining possible justifications and consequences of imposing restrictions on the public's freedom of expression.

Censorship gives governing bodies the right not only to control exchanged information, opinions and ideas but also to examine different forms of communication, including but not limited to the press, TV, radio broadcasts and the Internet, usually in pursuance of suppressing objectionable or offensive material. This right has inevitably been a hallmark of authoritarian regimes throughout history, where the absence of democratic values makes it easier for a government to impose repressive conditions on citizens (Petley, 2009).
The most effective form of controlling undesirable content was preventing it from ever being produced at all, which would probably be met nowadays with wide objection across democratic countries (Petley, 2009). Censorship, though, is still present and exercised across different societies, where expressions can often be circumscribed because they are deemed obscene, unpatriotic or immoral. From a critical point of view, imposing too many restrictions on the public's right to free speech will create an intimidating environment which, under such pressure, eventually inclines citizens to restrain themselves and discourages freedom of expression (Petley, 2009). While this kind of suppression would seem to conflict with the values of a free country, it must be noted that certain issues are worthy of censorship and that action needs to be taken in order to provide appropriate protection to society as a whole. However, from a historical point of view as well as from today's events surrounding censorship, it can be seen that authorities very often have a tendency to abuse restraining power without appropriate justification for their actions.

The most common areas affected by excessive censorship are the press, media, art and literature. These are responsible for shaping people's views, providing information and influencing public opinion. Considering that neither democracy nor freedom can be preserved without keeping the public properly informed, the press can nonetheless be harmful not only to national security but also to young, impressionable citizens (Petley, 2009). Therefore, governments should act in accordance with their duties to protect citizens and maintain stability in the country. Unfortunately, this is not always the case, as their power is often abused by exercising censorship in extreme forms.
For example, intimidation of journalists is a common and increasing problem nowadays in developing countries, and one of the concerns conflicting with an appropriate execution of the law (Petley, 2009). Although some argue that the press is essential during times of fear and crisis in order to keep the public informed about the current situation, authorities often use different forms of intimidation to prevent journalists from investigating events, such as in war zones, by excluding, harassing or even attacking them (Petley 2009). Destruction of art and literature, as well as prohibition of publishing and accessing certain types of works, are yet other examples of inappropriate and overused censorship infringing the public's right to know. This misuse of the right to censor results in impairing the flow of information, repressing important data, and restraining disagreement (Graber, 2003).

However, in the light of today's claims for freedom of expression, the question arises: when could censorship actually be considered appropriate and justifiable? The most problematic part of censorship is probably determining what deserves to be censored in the first place. As much as the use of censorship can be abused, it is also possible for speech to cross the line and express offensive or harmful intent, which shows that there are situations in which society could actually benefit from certain restrictions. In order to justify imposed restrictions, however, there must be reasonable grounds for them (Petley, 2009). The problem must clearly be seen as a threat or danger to the individual, certain groups or society as a whole. Some opinions or ideas can be identified as threatening, immoral or offensive and, as a result, seriously affect a wider group of people, which makes it difficult for governments to avoid censorship in such an instance.
Hate speech is one example where freedom of expression needs to be appropriately regulated, not only to protect minorities from serious racial hatred and prejudice but also to avoid acts of violence as a possible response to such behaviour (Petley, 2009). Another problem arises in times of war, where a free press is considered essential in keeping the public informed about the current situation, although the government must then act to protect sensitive information about military missions or anti-terrorist operations from the enemy. In this case, 'the fewer people with access to state secrets the better' (McMullen, 1972). Some standards regarding censorship and freedom of speech are therefore needed to protect society. Thus Article 10 of the Human Rights Act states that while freedom of expression is a foundation of a democratic country, the exercise of these freedoms must be subject to certain restrictions regulated by law in order to ensure public safety, national security and protection of the individual rights of people within the society.

Censorship inevitably meets with abundant objection in democratic societies; however, this essay has shown that there are some exceptions where imposing restrictions is essential to protect citizens. It is for the government to comply with the duties it has towards the public and to ensure that appropriate and justifiable actions are undertaken. Neither the right to censor nor free speech should be abused within a democratic society. Therefore, the appropriate approach to balancing censorship with freedom of expression should be one where free speech is used in a civilised and logical way and censorship is imposed only where absolutely necessary.

Thursday, December 5, 2019

Agency Theory

Introduction â€Å"It is the things towards which we have the stronger natural inclination that seem to us more opposed to the mean† Aristotle (2004, p. 47) The documentary ? Inside Job? portrays a riveting account of a financial industry festered with greed and conflicts of interest. As bankers gambled creatively with the life savings of laymen investors, ratings agencies and regulators closed their eyes to the full picture, whilst scholars supported the development of over the counter derivatives designed to safeguard the ever-increasing rate of subprime mortgages. Beginning in mid-2007 the largest American financial crisis since the Great Depression began to unfold (Jickling 2010) with thousands of homeowners defaulting on their mortgages (Pinyo 2008). The consequences were to be felt around the world and the Global Financial Crisis (GFC), as it came to be known, soon had national governments scrambling to ? bail out? private institutions in effort to keep the financial industry afloat and mitigate the fallout from digressing into pandemonium (Shah 2010, Sidelsky 2009). Inevitably, the pressing questions of governments, media and the public alike were how could it have gone this wrong and who was to be blamed? Shots were fired left, right and center, targeted at a range of factors from regulation and credit agencies to financial innovation and central banks. Particularly, the intertwined aspects of executive remuneration and the auspices of corporate governance (CG) were targeted as having failed to safeguard the company and incentivized risk-taking. The attacks were not only directed at ? institutional constructs? , a recurrent character was also the greedy banker and his apparent disregard for ethics and morality in pursuit of his own gain. As we enter the ? post-crisis? era, governments and regulators seek to redevelop regulations and standards to prevent the recurrence of a GFC. Generally however, their focus only addresses what is visible (Dobbin et al. 
2010). The purpose of this thesis is to delve deeper and review the underlying theoretical construct of the best-practice CG mechanisms utilized today, agency theory (AT), a construct that has also been criticized as 'green-lighting' a higher propensity towards risk, along with unethical and immoral behavior (Ghoshal 2005). This thesis therefore poses the questions: Did the agency theory prescriptions of corporate governance and directors' financial literacy impact the risk profile of Scandinavian banks during the Global Financial Crisis? And are there differences in the moral and ethical perceptions of business majors in comparison to other majors?

Thomas Rudiger Smith, M.Sc. FSM Master Thesis: Agency Theory & Its Consequences

Based on hypotheses derived from AT and through the utilization of data on Scandinavian banks' boards of directors and incentive plans, the thesis addresses the first part of the research question by investigating whether AT prescriptions contributed to the risk-taking behavior that propelled the GFC. Subsequently, the second part of the research question is analyzed on the basis of hypotheses grounded in the popular criticisms of AT as begetting immoral or unethical managers, and seeks to answer this question through a survey of ethical perceptions. Ultimately, the result of the research question is discussed with a view to management education and moral philosophy. Prior to investigating these issues, it is important to understand the motivation driving the aims of this thesis.

2 Motivation

The GFC has not only been a contentious topic for regulators, bankers and the media; business schools have also debated the causes and consequences in an effort to find ways to better prepare their students for future challenges.[1] This debate, in combination with previous research on agency theory in banking (Smith et al. 2009), sparked the author's initial interest through the simple question "What role have agency theory prescriptions played in the crisis?
". What started as a simple question has evolved into this thesis, wherein the consequences and side-effects of the AT perspective are reviewed, due to its prominent role in business education (Dobbin et al. 2010) and its potential relationship to the GFC. What further augmented the interest was the perceived simultaneous incapability of agency theory as a descriptive theory of CG (Dalton et al. 1998) in combination with its strong normative capability and potential side-effects. Essentially, the question that remained after the review of scholarly writings on agency theory was whether the side-effects of encouraging risk-taking and the presumed postulation of creating immoral managers were in fact real, and if so, what this would mean for management education. Out of this emerged the research questions under investigation here, for which the obvious choice for data collection was the banking industry, as both greed and excessive risk-taking have been argued as causes of the crisis (Shah 2009). The specificity of the area of interest, however, means that, as opposed to much of the current business research on the GFC, this thesis has never intended to provide input on how financial regulation should be formulated. Rather, the goal has been to highlight the potential consequences for management education, given the lack of research herein, even though many future bankers will be the product of business schools. Additionally, the specificity of the research questions means that the structure must be qualified properly before commencing, as the thesis handles two simultaneously independent and intertwined questions. The subsequent section will thus introduce the thesis structure.

[1] Discussions on the impact of the financial crisis on management education were observed at a CEMS Executive Board meeting in Singapore in May 2010. CEMS is an alliance of 26 leading worldwide business schools.
3 Structure

As a result of the research questions and the data collection, the structure of the thesis will make a topical split when deemed necessary to avoid confusion between the treated data and hypotheses. The thesis will therefore be set out accordingly: first by outlining the context of the GFC, after which assumptions and limitations will be presented in order to demarcate the research area. Subsequently, the theoretical background will be introduced, first highlighting the core theoretical foundation of agency theory and subsequently moving into the two different consequences under investigation – risk-taking and ethics. Hereafter the hypotheses for each consequence will be introduced, followed by a joint methodology section. Thereafter the thesis is divided, first focusing solely on risk-taking and governance mechanisms, their analysis and a partial conclusion, followed by the analysis of the second strand, the ethical hypotheses. Finally, once all hypotheses have been investigated, these two strands will be integrated in the discussion and the findings will be summed up in the conclusion. Throughout the thesis, a graphical representation of the structure (Figure 1) will indicate shifts from one section to another.

[Figure 1: Structure]

Having outlined the motivation and structure, the following section seeks to qualify the predominant focus on governance and greed with respect to the GFC and their connection to economic theory.

4 Greed, Governance & the Financial Crisis

4.1 Greed

The populist cause of the GFC is greed (Pinyo 2008, Guina 2008), wherein investment bankers gambled with customer funds (Shah 2010). Credit was cheap and needed to be lent out, and with no more prime borrowers, bankers went to sub-prime borrowers to cash in more money (Jarvis 2009).
The gamble was almost a safe bet provided housing prices kept rising, but when the housing bubble began to constrict and interest rates rose, sub-prime borrowers began to default (Jickling 2010, Time 2011). Though acknowledged as a contributing factor (Anderson 2008), the events preceding the GFC are too multifarious to be attributed to greed alone.

4.2 Governance

4.2.1 Distorted Bonus Bonanza

A bonus culture that effectively espoused excessive risk-taking did not help. The potential for upside gains was significant and the downside costs negligible, or so it seemed (Sidelsky 2009). As noted by Krugman (2008) in the New York Times, "The pay system … lavishly rewards the appearance of profit, even if that appearance later turns out to have been an illusion". Variable pay packages that tied managerial wealth to the wealth of shareholders were commonplace. Rajan noted back in 2005 that these created distorted incentives and promoted risk-taking, even proclaiming that "They may create a greater probability of a catastrophic meltdown" (p. 318). Lord Turner, head of the FSA, would later support Rajan in claiming that the bonus culture indeed had an effect on the financial crisis (BBC 2010). Their arguments were also supported academically by Bechmann and Raaballe on a sample of Danish banks (2010). Rajan (2005) and Blundell-Wignall et al. (2008) argued that the inherent problem of incentive schemes was that they were not risk-adjusted, effectively accentuating risk-taking behavior. The hefty bonuses accumulated by bank managers were also targeted for criticism in the post-GFC finger-pointing game, as politicians either questioned or sought regulatory action on bonus levels (Arentoft 2010, Condon 2010). However, Sidelsky (2009) contended that bankers, though also self-interested, acted largely in accordance with the adage of the system: profit maximization.

4.2.2 Corporate Governance Failure

Closely related to the issue of bonus schemes is the perspective that contemporary CG has failed in safeguarding the firm (Jickling 2010, Blundell-Wignall et al. 2008). Foong (2009) also pointed to weak CG mechanisms to explain the effectual failure of the market. The OECD (2010) provided a similar critique, describing a system that failed to provide and cultivate sound business practices. Professor Hasung Jang posited that, like the 1997 Asian financial crisis, shortcomings in CG were a root cause of the GFC (Jang in Sharma 2008). Others point specifically to the general ineffectiveness of boards in stemming incessant risk-taking behavior (Dobbin et al. 2010, Abdullah 2006). The governance best practices that may have failed, the distorted bonus culture and the greedy manager share common ground through the perspective of agency theory, a facet that remains unaddressed by regulators.

4.3 The Connection to Economic Theory

A less espoused argument for the cause of the GFC attacks the underlying economic theory that underpins the development of established governance mechanisms and may have adversely impacted the moral compass of business managers. Dobbin et al. (2010) noted that the political responses to the GFC have focused on the regulatory environment, ignoring the contributions of economic paradigms, particularly agency theory, in promulgating the wealth-maximization environment that abetted the crisis. Daianu (in ALDE 2008) argued that the theoretical underpinnings of policies were problematic in general, and that the principal-agent problem in particular fuelled the crisis. Policies based upon economic theories that expect humans to be rational and discount complex realities to achieve perfect models have essentially failed (The Times 2010). Priester (in ALDE 2008) criticized the proclivity of business models towards short-term wealth maximization as "fundamentally flawed" on the grounds of being both "economically obsolete"
and "morally indefensible" (p. 38) by transferring all power to the shareholder. From an ethics perspective, he further argues that the permeation of economic theory has dehumanized business and heralded innovation only for the purpose of private gains, when in fact "innovation [is] for-or-about [serving] the substantive interest of the Human Person" (p. 38). In essence, the crisis may not only be a consequence of poorly constructed institutions of control, but rather of poorly constructed financial theories supporting and dictating the development of these institutions (Kou 2009). Therefore this thesis investigates whether the agency-theoretical prescriptions added to risk with regard to the GFC and whether they create immoral managers. Before delving into the theoretical background, hypotheses, methodology and data testing, it is relevant to define the appropriate assumptions as well as to demarcate the research area through some limitations.

5 Assumptions

Throughout the thesis a number of assumptions are made, none of which are believed to distort the overall picture, though they may in fact have an influence on the generalizability of the thesis (Bryman et al. 2003). For both areas it is assumed that the constructs measure the intended effects. Through the qualification of measures by previous studies investigating similar variables, the assumption is assessed to be fair. It is additionally assumed for both data sets that agency theory is part of education, and that financial literacy therefore also means a familiarity with and understanding of agency theory. This assumption, although grand in its scope, is not unrealistic, as noted by Zajac et al. (2004) and Dobbin et al. (2010). A more questionable assumption is made with regard to the impact of education. Although some, like Albert et al.
(2010), highlight that education has lasting effects, it is impossible given the research design to discern between self-selection and the actual impact of education. The relationship between formation and actions must therefore be treated with regard to this assumption.

6 Limitations

As with any other, this thesis is limited by timeline, scope and scale, which confines the ability to investigate all possible variables and contributing factors. Unlike AT, alternative models of CG, such as stewardship and stakeholder theory (Lan et al. 2010), have yet to gain a solid foothold in the practical literature and enactment of CG (Daily et al. 2003) [2]. As such, reflecting the real-life context, the thesis does not directly investigate these alternatives, though they are referred to as points of discussion. Amongst the many potential consequences of agency theory, this thesis will focus on two due to their perceived relevance to the GFC. As noted, whilst it is acknowledged that there were many contributing factors to the GFC, the intention of this thesis is to empirically analyze the consequences of agency theory. As such, the GFC serves as the context for analysis rather than the object of investigation. The banks are not disregarded, however, given that their societal role makes the application of AT prescriptions within the industry all the more intriguing. Nevertheless, it is acknowledged that the findings of this thesis related to CG will be derived from a distinct and heavily regulated industry, which may limit their utility (Battilossi 2009).

[2] An overview and short critique of these models and the director primacy model is available in Appendix 17.1.
Upon investigating the second research objective, it is accepted that temporal limitations made the assessment of moral philosophy development challenging, and the cogency of results may be restrained by the difficulty in establishing the degree to which individual moral development is influenced by business education and not also by self-selection (Pfeffer 2005). Overall, however, these primary assumptions and limitations, by virtue of their academic support and conscious inclusion, are not believed to fundamentally compromise eventual findings. Having established these caveats, the thesis will return to outlining the connections between the presented causes of the GFC and economic theory. But before qualifying the consequences of AT on risk and morality, it is imperative to first delineate the concept itself.

7 Theoretical Background

7.1 Agency Theory

The 1976 article 'Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure' by Jensen and Meckling helped establish AT as the dominant theoretical framework of the CG literature, and positioned shareholders as the main stakeholder (Lan et al. 2010, Daily et al. 2003). The adoption of the agency logic increased during the 1980s as companies started replacing the hitherto corporate logic of managerial capitalism with the perception of managers as agents of the shareholders (Zajac et al. 2004). The subsequent stream of literature would break with the tradition of largely treating the firm as a black box and the assumption that the firm always sought to maximize value (Jensen 1994). AT addressed what had become a growing concern, that management engaged in empire building and possessed a general disregard for shareholder interest, what Michael Jensen called "the systematic fleecing of shareholders and bondholders" (1989, p.
64), through providing prescriptions as to how the principal should control the agent to curb managerial opportunism and self-interest (Perrow 1986, Daily et al. 2003). As the market reacted positively to this change in logic, with time the agency approach became institutionalized in the practice of CG, within business education, research and the media (Zajac et al. 2004, Shapiro 2005, Lan et al. 2010). Out of the agency logic grew two closely related streams of research: the mathematically complex principal-agent literature and the more practice-oriented positive agency theory (Shapiro 2005). Common to both is shareholder primacy, wherein the principal is positioned both as the residual claimant and the main stakeholder. Although the influence of principal-agent theory cannot be denied (Asher et al. 2005), the practical and empirical nature and implications of positive agency theory on CG situate this stream as the main concern of this thesis.

7.1.1 Foundations

Like any theory, AT is based on a number of assumptions about man, which have a significant impact on the formation of the theory (Davis et al. 1997). The most common belief is that AT is based on the economic model of man (e.g. Brennan 1994, Perrow 1986, Shapiro 2005). Jensen and Meckling denounce this interpretation, however, by arguing that the theory is grounded in what they call REMM, the Resourceful, Evaluative, Maximizing Model (Jensen et al. 1994). They argue that the REMM most closely replicates human action, and that the economic model of man is a simplified version that does not reflect the spectrum of human behavior. However, the extent to which these two models are actually different is questioned by Brunner (1996) and Tourish et al. (2010), who treat them as equals (see also Table 1 for a comparison and overview of assumptions). Their arguments are based on the fact that the REMM, although accepting that wealth may not be the only goal, will willingly substitute goods for monetary rewards (Baker et al.
1988). In addition, despite the fact that the REMM can act with altruism, it can only do so simultaneously with individual self-maximization [3]. As such, pure altruistic behavior without ulterior motives cannot take place. Thereby the REMM is largely similar to the economic model of man, which assumes that humans are rational, selfishly motivated, and will behave opportunistically, even ruthlessly, whenever advantageous (Ghoshal 2005, Daily et al. 2003). Herein, actions are undertaken according to self-interest (Fama 1980), and opportunistic behavior is fostered when monitoring contracts and relationships becomes difficult and costly due to bounded rationality and information asymmetry (Perrow 1986, Donaldson 1990). Opportunism is therefore central to this view of man, where an actor's promise to do a certain action is worthless if the circumstances of the promised action change before the action is carried out (Heath 2009). As such, changes in behavior are also driven by changes in incentives (Prendergast 1999), and behavior is directed by maximizing self-interest under game-theory-like conditions (Perrow 1986).

[3] Self-interested altruism, although creating a possibility of other-regarding behavior, does so only given a positive benefit to the individual. Thereby self-interested altruistic behavior can potentially be reduced to an intrinsic motivation (Brunner et al. 1996).
Human Assumptions:

  REMM                                                               | Economic Man
  Bounded rational                                                   | Rational
  Maximizer based on thorough evaluation                             | Maximizer
  Self-interested                                                    | Self-interested
  Actions driven by incentives                                       | Motivated by incentives
  Opportunistic if beneficial                                        | Opportunistic with guile
  Will substitute goods if beneficial (not driven exclusively by extrinsic rewards) | Focus on extrinsic rewards
  Altruistic if beneficial                                           | Not other-regarding
  Resourceful (innovative when facing constraints and opportunities) | (Resourceful) [4]

Table 1: Comparison of the REMM and the Economic Model of Man

Regardless of whether Jensen and Meckling's (1994) postulation that the REMM guides AT holds, Table 1 shows that the REMM in fact has few differences from the economic model of man (Brunner et al. 1996). Bearing in mind the lack of self-interested altruism and the slightly stronger focus on extrinsic motivators in the economic model of man, arguments against this representation of human behavior must then also be applicable to the REMM model (see section 7.3.1). With the understanding that man is self-interested, ever opportunistic and driven by incentives, AT addresses the effect of having this man as a manager in the modern corporation by providing prescriptions for taming him. But what is the modern corporation in the eyes of AT, and what are these effects and prescriptions?

[4] The Economic Man is, like the REMM, perceived to be resourceful, yet the literature is generally less focused on this aspect of his/her behavior as opposed to the other notions (Brunner et al. 1996).

7.1.2 The Modern Corporation, Effects & Prescriptions in Agency Theory
7.1.2.1 The Modern Corporation & the Separation of Ownership and Control

The model of the modern corporation used in AT is driven by developments in the mid-20th century, when the corporation grew in size, in complexity and in the need for external capital. This, combined with a growing stock market, a limit on managerial wealth and a need for efficient risk allocation (Fama 1980, Fama et al. 1983, Demsetz et al. 1997), meant an increase in the diffused ownership of companies amongst shareholders. As shareholders have a willingness to bear risk but do not necessarily possess the interest or time to actively manage the company (Brealey et al. 2008), a contractual relationship is created wherein an agent (manager) will manage the risk and control the company on behalf of the principal (shareholder), who is the residual claimant, risk bearer and owner of the company (Jensen et al. 1985, Fama et al. 1983). As such, the modern corporation is reduced to a 'nexus of contracts' between principals and agents, and the separation of ownership and control is created (Jensen et al. 1976).

7.1.2.2 The Effect of Conflict of Interest and Moral Hazard

Given the separation of ownership and control, and the diverging risk profiles of the participating parties (Eisenhardt 1989, Jensen 1989), it cannot be expected that risk-averse managers (agents) will act in the interest of risk-neutral shareholders (principals), as it may not be in the manager's self-interest to pursue shareholder wealth maximization (Bonazzi et al. 2007, Lan et al. 2010, Demsetz et al. 1985). Jensen et al. (1985) argue that the three prominent problems with management that cause the conflict of interest are: 1) the choice of effort, 2) differential risk exposure, and 3) differential time horizon. The agency problem in separating ownership and control is therefore the assumed diverging goals of the 'cooperating parties', the residual claimant and the manager (Donaldson 1990, Hendrikse 2003).
This inevitably increases the incentives for moral hazard and opportunistic behavior, as self-interest guides action (Demsetz et al. 1985). Moral hazard is central to AT, and is also referred to as hidden action or opportunistic behavior (Hendrikse 2003). However, hidden action refers specifically to the information asymmetry in the contractual relationship (Arrow 1968, Eisenhardt 1989), whereas opportunistic behavior is an inclination in the human (Jensen 1994) [5]. Moral hazard, on the other hand, is the combination of these two terms together with the above-described conflict of interest (Hendrikse 2003) and refers to the actual actions taken by the agent once the contract has been entered. The imperfect contract (Prendergast 1999) in the agency relationship makes the observation of true effort very difficult and as such causes the hidden-action problem of asymmetric information (Arrow 1968). This inherently leads to an encouragement of moral hazard (Perrow 1986), where the principal will not know whether the agent has acted in accordance with the principal's interest (Shapiro 2005, Hendrikse 2003). It is therefore to be expected that the self-interested agent will shirk on the contract and carry out actions that are not in the interest of the principal (Hendrikse 2003, Eisenhardt 1989). Although moral hazard is presumably present in all types of relationships, Boyd et al. (1998) researched the possibilities for moral hazard in banking and found two possible areas. One is the relationship between the banks and their borrowers; the other is the moral hazard created by the cushion of deposit insurance (John et al. 2000, Demsetz et al. 1997), as deposit insurance reduces the interest in monitoring whilst simultaneously increasing the incentives for risk taking (Macey et al. 2003).
Moral hazard is the exact problem that AT is designed to address through various mechanisms, most notably incentives and monitoring (Eisenhardt 1989).

[5] Adverse selection follows the same patterns as moral hazard, but deals with the selection of contracts and staff, and is more focused on pre-contractual areas of opportunistic behavior. Although a central part of agency theory, this topic has less relevance for this thesis and has therefore been described in appendix 17.2.

7.1.2.3 The Creation of Agency Costs

The problem of moral hazard leads to costs for the firm associated with administering the contract, hereunder contracting, transaction, moral hazard and information costs, namely agency costs (Gomez-Mejia et al. 2005, Jensen et al. 1985). The level of these costs will depend on the ability of the principal to find an appropriate solution for reducing information asymmetries through measuring managerial performance, determining effective incentives, as well as implementing rules and regulations to limit unwanted behavior or moral hazard (Brickley et al. 1994, Gomez-Mejia et al. 2005). Whilst achieving zero agency costs is practically impossible, as the marginal costs of doing so will eventually be higher than the accompanying benefits of perfect alignment (Jensen et al. 1976), monitoring and incentives intend to minimize them (Eisenhardt 1989, Jensen et al. 1985, Shapiro 2005) [6].

7.1.2.4 Monitoring and Incentives as Prescriptions of Agency Theory

The proposed mechanisms for curbing moral hazard are generally monitoring and incentive contracts (Jensen 1993, Daily et al. 2003), where the board of directors (BOD) comprises the main monitoring mechanism.
According to AT, the board should act on behalf of the shareholders and hold foremost responsibility for the functioning of the firm, with the goal of reducing information asymmetries through ratifying and monitoring important decisions (Fama et al. 1983, Heath 2009, Shapiro 2005, Fama 1980). The BOD is therefore also responsible for controlling resource allocation and the accompanying risks (Tufano 1998). The monitoring system provides an ex post control system (Jensen et al. 1976, Fama et al. 1983), where the extent of the monitoring in place will depend on the proclivities of management for opportunistic behavior and the costs and benefits related to its implementation (Jensen et al. 1976). The more effective the board is in obtaining information about agent behavior, the more likely the manager will be to act in the interest of the shareholder, and therefore fewer resources need be spent on aligning interests through incentives (Hermalin et al. 1988, Eisenhardt 1989). Besides the BOD, incentives can similarly be employed to limit moral hazard on the part of the manager. The conflict of interest addressed earlier is in part caused by differing risk preferences, where managers are risk-averse and shareholders risk-neutral. This often leads to contrasting predilections, where the manager will make less risky investments than preferred by the shareholders (Shapiro 2005, Eisenhardt 1989). This conflict can be mitigated by introducing a compensation scheme, in the form of a risk premium (Prendergast 1999), where rewards are based on outcome, commonly the stock price (Hendrikse 2003). By tying part of managerial wealth to shareholder wealth, the incentive system can be utilized to create alignment between management and shareholders (Lan et al. 2010, Aulakh et al. 2000, Stroh et al. 1996).
[6] Empirically speaking, accurately measuring agency costs is near impossible, but the conceptual presence of these costs is what leads to the prescribed measures (Daily et al. 2003).

In this way, the wage becomes a bribe and a condition from the principal to the agent in order to induce certain behavior aligned with the principal's interest (Prendergast 1999). However, a noted problem with performance-based pay is the presence of 'dysfunctional behavioral responses, where agents emphasize only those aspects of performance that are rewarded' (Prendergast 1999, p. 8). As such, just as the principal may learn which incentives work best, the agent learns which aspects of performance the principal is interested in and primarily seeks to optimize these exact aspects (Shapiro 2005, Brickley et al. 1994). The consequence becomes a system where everything is driven towards meeting measurable targets and not necessarily towards creating real value and growth (Porter 1992). The modern corporation in the eyes of AT, the effects and the prescriptions can be summed up as follows:

- The Modern Corporation = the separation of ownership & control and a nexus of contracts, where shareholders are the owners.
- The Effect of the Separation of Ownership and Control = conflict of interest, moral hazard & agency costs.
- The Prescriptions of Control = monitoring & incentives.

Upon understanding AT, its assumptions and its focus on shareholder primacy, it is relevant to also critically question these. In particular, how do the AT prescriptions impact risk-taking in banking?

7.2 The Consequence of Risk Taking

Aligning managerial interests with those of shareholders may seemingly make sense. However, the usage of outcome-based incentive packages and a shareholder-aligned board as prescribed by AT may lead to increased risk levels (John et al. 2000).
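To make this alignment mechanism and its side effect concrete, the following toy sketch (the pay formula and all numbers are hypothetical, not taken from the thesis) models an option-like bonus in which the manager shares in stock gains but not in losses. Two projects with identical expected shareholder value then yield different expected pay, with the riskier one paying more:

```python
# Hypothetical outcome-based incentive contract: base salary plus a share
# of stock gains, floored at zero (an option-like payoff, since the manager
# does not share in losses).
def manager_pay(base: float, alpha: float, stock_gain: float) -> float:
    return base + alpha * max(stock_gain, 0.0)

# A safe project gains 10 for certain; a risky one gains 100 or loses 80
# with equal probability. Both have an expected shareholder gain of 10.
expected_gain_safe = 10.0
expected_gain_risky = 0.5 * 100 + 0.5 * (-80)  # also 10.0

# Expected pay under the incentive contract (base 100, alpha 0.1):
pay_safe = manager_pay(100, 0.1, 10)                                   # 101.0
pay_risky = 0.5 * manager_pay(100, 0.1, 100) \
          + 0.5 * manager_pay(100, 0.1, -80)                           # 105.0
```

Because losses are not shared, the convex payoff makes the riskier project more attractive to the agent even though shareholders are indifferent between the two, which is one simple reading of how outcome-based incentives can raise risk levels.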
In order to comprehend why, one has to understand the consequence of the diverging risk interests between shareholders and debtholders. Here option theory can provide relevant reasoning.

7.2.1 Equity as a Call Option

According to option theory, equity can be viewed as a call option on the firm's assets (Brealey et al.

Thursday, November 28, 2019

The acquisition of the PeopleSoft company by Oracle

From an individual point of view, the widely publicized dispute between PeopleSoft and Oracle, companies in the business of developing and installing software for business entities, which took centre stage in 2003, still triggers varied reactions from major players in the enterprise resource planning industry. A highly emotive debate has been evoked in academic and technical circles to try to put the tale of Oracle's move to acquire PeopleSoft into perspective. Oracle, for its part, had considered acquiring PeopleSoft a year before it came up with the widely disputed tactic of taking over the company, a move largely viewed by critics as malicious and of bad intent. The board at PeopleSoft took a rigid stand against Oracle's intention to acquire the company: Oracle had come up with an insulting and rather unusual bid of $16 per share, which represented a mere six percent premium. This proposition was quite unacceptable, since the norm in serious bidding activities held the threshold at twenty percent or more. The company's chief executive, Craig Conway, supposedly sensed bad faith on Oracle's part, which also played a major role in the company's unanimous decision to reject the deal, since it viewed the move as a ploy to prevent it from taking over another major player, J.D. Edwards. The move would also have destabilized PeopleSoft's position on the stock market. The bid brought forward by Oracle also proved unique in that it would deter customers from continuing to seek services from PeopleSoft, out of fear of what a takeover by another company would imply.
Under these circumstances, if Oracle had been willing to pay a higher price for the competitor's shares to induce its shareholders into selling, then the board would have been rendered helpless and unable to stop Oracle from taking over the company's ownership. A litany of scandals also worked against Oracle's bid to acquire its competitor, with critics terming the move as actuated by malice and as utterly insensitive, allegations which necessitated the management's introduction of stringent counter-measures. This state of affairs held no grounds to victimize Oracle, since not even a scintilla of evidence could be tabled to attest to these allegations. In spite of all the odds that surrounded Oracle's pursuit of PeopleSoft, certain measures had been put in place by the board of directors to ensure that, in the event of an imminent takeover, reasonable criteria would be observed so that everybody's best interests were taken into account. Among the conditions to be considered was the introduction of a customer assurance plan, which would ensure the protection of customer interests so as to build customer confidence. The board also put a lot of emphasis on the acquisition of J.D. Edwards so as to secure the company's stability. The rejection of the $16-per-share bid on the grounds of being too low also came to be a determining condition for consideration by the board before making the all-important decision of selling the company's shares. At the inception of PeopleSoft, foresight is quite evident, since measures were put in place to ensure that, in the event of a hostile, non-friendly acquisition of the company, formidable opposition could be rolled out in response.
Popularly known as the poison pill, this defence basically stipulated conditions whose breach would deter one from assuming ownership of the company. These conditions included a trigger threshold at share purchases of twenty percent, whose effect would strengthen every time an acquirer increased its holding above that minimum. The objective of this move was to maintain an acquirer's stake at less than twenty percent. Despite the well-placed objective of seeing to it that a rogue takeover would not occur, the poison pill was not a complete barricade that would keep wealthy skimmers at bay, since they could still take their time and wage a proxy battle, which would eventually see them install their own board members, who would subsequently discard the poison pill. These concerns hence formed the basis for the protracted court battles between the two companies, which resulted in Oracle's unprecedented increase of its bidding price by five times. This move eventually brokered the deal, which saw Oracle part with 10.3 billion dollars and put a stop to the unending court battles. In conclusion, it is imperative to appreciate that despite Oracle's intentions, which fueled the urge to acquire PeopleSoft and which were rather harsh and unethical, what is quite evident is that a more respectful and liberal approach towards acquiring the company would have saved both companies the time, money and agony of going through the tedious court and settlement procedures.

Monday, November 25, 2019

Malcolm X and Martin Luther King essays

During the twentieth century, Black people faced a lot of discrimination from whites and found it very difficult to achieve civil rights. Black people were at one point denied the right to vote. In order for Black people to achieve civil rights, they needed leaders to follow. Many Black leaders did rise to fight for civil rights; some had one way of thinking, others another. Two of the most powerful and influential leaders of the twentieth century had to be Malcolm X and Martin Luther King. These two leaders had different approaches and different views towards white people, but fought for the same thing. Malcolm X was born Malcolm Little in 1925 in Omaha. Malcolm was six years old when his father was murdered by the Black Legion, a group of white racists connected to the KKK. He changed his name to Malcolm X while in prison, where he was serving ten years for robbery. Also while in prison he became a follower of Elijah Muhammad, the leader of a group called the Nation of Islam. During the 1950s, Malcolm became the spokesman for the Nation and a powerful speaker in the movement. As King captured the spirit of the Southern Black, Malcolm became the messiah of the ghettos of Harlem, Chicago, Detroit, and Los Angeles. Originally a small group, the Nation grew rapidly under Malcolm's leadership. He not only spoke the words of the Koran and of his spiritual adviser, Elijah Muhammad, but also lived them to their fullest. As the crowds grew to hear him speak, so did the disapproval of his rising popularity. Malcolm taught a message of self-help and personal responsibility, the message of the Nation of Islam. Like the Nation, he also spoke of a separate nation for Blacks only, which was also the view of Marcus Garvey, a leader whom Malcolm followed, and he preached that Black is beautiful.
The beginning of Malcolm's problems with the Nation of Islam concerned whether or not to participate in the civil rights march on...

Thursday, November 21, 2019

Human Resource Management Essay Example | Topics and Well Written Essays - 2500 words - 2

Human Resource Management - Essay Example

... strategic way. It is focused on the management of the workforce in an organization and the provision of direction to it. The aim of HRM is to deal with and solve all problems within the organization that are related to the workforce. These include hiring and recruitment, performance management, appraisals, compensation and benefits, organizational development, communication, training, safety and well-being, employee motivation, administration and conflict resolution. HRM also deals with all issues pertaining to corporate social responsibility. In addition, HRM serves as the only association that a company usually has with the trade union. More than anything else, Human Resource Management is a comprehensive as well as strategic approach to managing not only the employees but the entire workplace culture (Budhwar, 2000). Effective HRM is needed in order to ensure that employees contribute positively and effectively to the goals and objectives of the company. Thus HRM is extremely important if the organization wants to ensure that employees do not go astray; it provides a policing arm to the organization. SIGNIFICANCE OF HRM It is a very important part of the organization, and its significance can be judged from the fact that most organizations now have a separate Human Resource Management department, given that the organization is big enough to afford it. From being a low-scale, low-scope department, Human Resource Management has now become a strategic business partner of the organization, since its function is to provide constant support to the vision and mission of the organization. This is also because HRM aims to implement the business strategies and ensure that they work. HRM is now understood as the management of people in the organization, not merely of employees. It is responsible for ensuring that the organization complies with labor as well as employment laws.
According to Cheddie (2001), the aim is to gain competitive advantage by using a wide range of structural, personnel and cultural techniques. THEORIES AND PRACTICES As the discipline of HRM continues to grow and gain momentum across the globe, more theories and studies are being devoted to it. Most HRM theories and practices are directly drawn from the field of behavioral sciences as well as from theories related to strategic management (Som, 2008). For HRM to work effectively, there are certain practices that the organization must adopt. Among the first theories on the HRM concept was one proposed by the Michigan school. According to this theory, the HR system must be managed in a way that is in line with the organizational goals and strategies. This concept became very popular as the 'matching model'. It was further proposed that there is a human resource cycle which comprises four functions: selection, performance appraisal, rewards and compensation, and training and development. Delegation to Line Managers Budhwar and Khatri (2001) argue that in

Wednesday, November 20, 2019

No Child Left Behind standardized testing Research Paper

No Child Left Behind standardized testing - Research Paper Example

Every school-child has to undergo high-stakes standardized testing in order to move between different levels of education and to be compared to others from different regions. In this chapter, we are going to look at the differences between high-stakes tests and regular tests and the effects they impose on both teachers and students. The author Smith, M.L., in 'The Effects of External Testing on Teachers', conducted educational research on the implications for teachers of conducting standardized tests in schools. The main aim of the study was to find out whether there exists some difference in teachers' psychological and emotional responses when regular classroom exams are conducted versus standardized tests (Smith, 1991). After the research, he found that there were significant changes in both teachers' anxiety and psychological states, due to the effects these tests imposed on them. In the journal 'Psychology in the Schools', the authors talk of the anxious responses that students undergo due to high-stakes testing, and the amount of preparation students require in order to face these tests (Natasha, 2013). In this journal, the authors say that students are more used to normal tests than to standardized ones, hence the change in their responses towards these different tests. In the book 'Academy of Management Learning & Education', the authors talk of the different preparation students can be given when facing standardized tests (Dean & Joly, 2012). In the book, the authors say that at times students become disengaged, lose their identity and have lowered morale towards learning. They address the way of handling the different situations created by standardized tests and different methods of managing learning and education.
The informal measures of test anxiety

Monday, November 18, 2019

Extreme leader Essay Example | Topics and Well Written Essays - 250 words - 1

Extreme leader - Essay Example Their main traits are their tenacity, positive attitude and humility. They welcome other people's opinions and views. They are capable of a turnaround strategy through sheer force of will, flexibility and the desire to find a solution. McDonald's, Apple, Citibank, Amazon etc. have extreme leaders at the helm. These companies not only hold leadership positions in their industries, but their innovative ideas and subsequent high growth have made an indelible mark on the corporate world. Leaders like Ray Kroc of McDonald's, Steve Jobs of Apple, Charles Prince of Citibank and Jeffrey Bezos of Amazon have all been extraordinary in their vision, which they had the guts to transform into success. They were all dynamic leaders who accepted challenges and saw opportunities in adversity. They relentlessly pursued their goals and brought their companies to the pinnacle of success despite adverse circumstances. Moreover, they were leaders who shared their vision with their workers and appreciated their input. Indeed, these traits are rare and therefore make them the most sought-after leaders for companies that want to make a distinct place in the highly volatile market.

Friday, November 15, 2019

Computer Network Security within Organisations

Computer Network Security within Organisations Networking and Management Introduction A computer network is a connection of two or more computers in order to share resources and data. These shared resources can include devices like printers and other resources like electronic mail, internet access, and file sharing. A computer network can also be seen as a collection of personal computers and other related devices which are connected together, either with cables or wirelessly, so that they can share information and communicate with one another. Computer networks vary in size. Some networks are needed for areas within a single office, while others are vast or even span the globe. Network management has grown into a career that requires specialized training and comes with management of important responsibilities, thus creating future opportunities for employment. The resulting expected increase in opportunities should be a determining and persuasive factor for graduates to consider going into network management. Computer networking is a discipline of engineering that involves communication between various computer devices and systems. In computer networking, protocols, routers, routing, and networking across the public internet have specifications that are defined in RFC documents. Computer networking can be seen as a sub-category of computer science, telecommunications, IT and/or computer engineering. Computer networks also depend largely upon the practical and theoretical applications of these engineering and scientific disciplines. In the vastly technological environment of today, most organisations have some kind of network that is used every day. It is essential that the day-to-day operations of such a company or organisation are carried out on a network that runs smoothly. Most companies employ a network administrator or manager to oversee this very important aspect of the company's business. 
This is a significant position, as it comes with great responsibilities, because an organisation will experience significant operational losses if problems arise within its network. Computer networking also involves setting up any set of computers or computer devices and enabling them to exchange information and data. Some examples of computer networks include: local area networks (LANs), which are small networks constrained to a relatively small geographic area; wide area networks (WANs), which are usually bigger than local area networks and cover a large geographic area; and wireless LANs and WANs (WLAN/WWAN), the wireless equivalents of the local area network and wide area network. Networks involve interconnection to allow communication over a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet). (http://en.wikipedia.org/wiki/Computer_networking) TASK 1 TCP connection congestion control Every application, whether small or large, should perform adaptive congestion control, because applications that perform congestion control use a network more efficiently and generally perform better. Congestion control algorithms prevent the network from entering congestive collapse. Congestive collapse is a situation in which, although the network links are being heavily utilized, very little useful work is being done. 
The network will soon begin to require applications to perform congestion control, and those applications which do not will be harshly penalized by the network, probably in the form of having their packets preferentially dropped during times of congestion. (http://www.psc.edu/networking/projects/tcpfriendly/) Principles of Congestion Control Informally, congestion means that too many sources are sending too much data, too fast for the network to handle. TCP congestion control is not the same as flow control; there are several differences between the two. Other aspects of congestion control include global versus point-to-point control, and orthogonal issues. Congestion manifests itself through lost packets (buffer overflow at routers) and long delays (queuing in router buffers). In end-to-end congestion control there is no explicit feedback from network routers; congestion is inferred from the loss observed by end systems. In network-assisted congestion control, routers provide feedback to end systems, for example a choke packet or an explicit rate at which the sender may send. Below are some other characteristics and principles of congestion control: When CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially. When CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly. When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold. When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS. Avoidance of Congestion It is necessary for the TCP sender to use the congestion avoidance and slow start algorithms to control the amount of outstanding data injected into the network. In order to implement these algorithms, two variables are added to the TCP per-connection state. 
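The window rules listed above can be sketched as a set of event handlers. This is a minimal illustration, not a real TCP stack: the class and method names are invented for this sketch, the window is counted in whole segments of 1 MSS, and the doubling is applied per call (a real sender's slow start doubles the window roughly once per round-trip time).

```python
MSS = 1  # window counted in segments of one maximum segment size

class CongestionControl:
    """Illustrative sketch of the CongWin/Threshold rules in the text."""

    def __init__(self):
        self.cong_win = 1 * MSS   # CongWin starts at 1 MSS
        self.threshold = 64       # initial slow-start threshold (arbitrary)

    def on_ack(self):
        if self.cong_win < self.threshold:
            self.cong_win *= 2            # slow start: exponential growth
        else:
            self.cong_win += 1 * MSS      # congestion avoidance: linear growth

    def on_triple_dup_ack(self):
        # Threshold set to CongWin/2, CongWin set to Threshold
        self.threshold = max(self.cong_win // 2, 1)
        self.cong_win = self.threshold

    def on_timeout(self):
        # Threshold set to CongWin/2, CongWin back to 1 MSS (restart slow start)
        self.threshold = max(self.cong_win // 2, 1)
        self.cong_win = 1 * MSS
```

Driving the handlers with a sequence of ACKs, a triple duplicate ACK, and a timeout reproduces the sawtooth behaviour the rules describe.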
The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK), while the receiver's advertised window (rwnd) is a receiver-side limit on the amount of outstanding data. The minimum of cwnd and rwnd governs data transmission. (Stevens, W. and Allman, M. 1998) TCP Flow Control In TCP flow control, the receiving side of the TCP connection possesses a receive buffer and a speed-matching service that matches the send rate to the receiving application's drain rate. During flow control, the receiver advertises any spare room by including the value of RcvWindow in segments, and the sender limits unACKed data to RcvWindow. TCP flow control thus ensures that the receive buffer does not overflow. Round-trip Time Estimation and Timeout The TCP timeout is usually set longer than the RTT, but the RTT varies, and too long a timeout means a slow reaction to segment loss. SampleRTT is the time measured from segment transmission until ACK receipt (ignoring retransmissions); because it varies, a "smoother" estimated RTT is wanted. Round-trip time samples arrive with new ACKs. The RTT sample is computed as the difference between the current time and a time-echo field in the ACK packet. When the first sample is taken, its value is used as the initial value for srtt. Half the first sample is used as the initial value for rttvar. (Round-Trip Time Estimation and RTO Timeout Selection) There are often problems due to timeouts, including that the sender is compelled to wait until a timeout and can do nothing during this period. Also, when the first segment in the sliding window is not ACKed, retransmission becomes necessary, waiting again one RTT before the segment flow continues. It should be noted that on receiving the later segments, the receiver sends back ACKs. 
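The smoothing just described can be written as one small update step using exponentially weighted moving averages, with the customary gains of 0.125 for the estimate and 0.25 for the deviation. The function name and the stateless calling convention are choices made for this sketch, not part of any cited implementation.

```python
ALPHA = 0.125   # gain for EstimatedRTT
BETA = 0.25     # gain for DevRTT

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    """Fold one SampleRTT into the smoothed estimate and deviation.

    Returns the new (EstimatedRTT, DevRTT, TimeoutInterval).
    """
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    timeout = estimated_rtt + 4 * dev_rtt   # safety margin of 4 deviations
    return estimated_rtt, dev_rtt, timeout
```

Feeding a stream of samples through this function shows the intended behaviour: steady samples shrink DevRTT and the timeout toward EstimatedRTT, while jittery samples widen them.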
Estimated RTT: EstimatedRTT = 0.875 * EstimatedRTT + 0.125 * SampleRTT. DevRTT: DevRTT = (1 - 0.25) * DevRTT + 0.25 * |SampleRTT - EstimatedRTT|. Timeout interval: TimeoutInterval = EstimatedRTT + 4 * DevRTT. The Integrated Services (IntServ) and Differentiated Services (DiffServ) architectures are two architectures that have been proposed for providing and guaranteeing quality of service (QoS) over the internet. Whereas the IntServ framework was developed within the IETF to provide individualized QoS guarantees to individual application sessions, DiffServ is geared towards handling different classes of traffic in different ways on the internet. These two architectures represent the IETF's current standards for provision of QoS guarantees, although neither IntServ nor DiffServ has found widespread acceptance on the web. (a) Integrated Service Architecture In computer networking, the Integrated Services (IntServ) architecture specifies the elements for guaranteeing quality of service (QoS) on the network. For instance, IntServ can be used to allow sound and video to be sent over a network to the receiver without interruption. IntServ specifies a fine-grained QoS system, in contrast to DiffServ's coarse-grained system of control. In the IntServ architecture, the idea is that every router in the system implements IntServ, and applications which require various types of guarantees have to make individual reservations. Flow specs are used to describe the purpose of the reservation, and the underlying mechanism that signals it across the network is called RSVP. TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills up with tokens, arriving at a constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. 
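The token bucket just introduced can be sketched in a few lines. The class name, the use of wall-clock time, and the one-token-per-packet simplification are choices made for this illustration; a real policer would typically charge tokens per byte and handle clock granularity more carefully.

```python
import time

class TokenBucket:
    """Illustrative token bucket: tokens arrive at a constant rate,
    up to a maximum depth; each packet consumes one token."""

    def __init__(self, rate, depth):
        self.rate = rate            # tokens added per second
        self.depth = depth          # maximum tokens the bucket can hold
        self.tokens = depth         # start full
        self.last = time.monotonic()

    def allow(self):
        """Consume one token if available; return whether the packet may go."""
        now = time.monotonic()
        # Credit tokens for elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With the video example from the text, `TokenBucket(rate=750, depth=10)` admits a burst of up to ten packets (one frame) at once, then sustains 750 packets per second.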
Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how large a burst of traffic is allowed to be. TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750Hz, and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the burst associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate, but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. However, this means the bucket depth needs to be increased to compensate for the bursts being larger. (http://en.wikipedia.org/wiki/Integrated_services) (b) Differentiated Service Architecture RFC 2475 (An Architecture for Differentiated Services) was published in 1998 by the IETF. Presently, DiffServ has widely replaced other Layer 3 Quality of Service mechanisms (such as IntServ) as the basic protocol that routers use to provide different service levels. The DiffServ (Differentiated Services) architecture is a computer networking architecture which specifies a scalable, less complex, coarse-grained mechanism for the classification and management of network traffic and for provision of QoS (Quality of Service) guarantees on modern IP networks. For instance, DiffServ can be used to provide low-latency, guaranteed service (GS) to video, voice or other critical network traffic, while ensuring simple best-effort guarantees to non-critical network services like file transfers and web traffic. 
Most of the proposed Quality of Service mechanisms which allowed these services to co-exist were complicated and did not adequately meet the demands of Internet users, because modern data networks carry various kinds of services like streaming music, video, voice, email and also web pages. It would probably be difficult to implement IntServ in the core of the internet because most of the communication between computers connected to the Internet is based on a client/server structural design. Client/server describes a structure involving the connection of one computer to another for the purpose of giving work instructions or asking it questions. In an arrangement like this, the particular computer that questions and gives out instructions is the client, while the computer that answers the questions and responds to the work instructions is the server. The same terms are used to describe the software programs that facilitate the asking and answering. A client application, for instance, presents an on-screen interface for the user to work with at the client computer; the server application welcomes the client and knows how to respond correctly to the client's commands. Any file server or PC can be adapted for use as an Internet server; however, a dedicated computer should be chosen. Anyone with a computer and modem can join this network by using a standard phone line. Dedicating the server (that is, using a computer as a server only) helps avoid some security and basic problems that result from sharing the functions of the server. To gain access to the Internet you will require an engineer to install the broadband modem. Then you will be able to use the server to share Internet access with all machines on the network. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf) TASK 5 Network security These days, computers are used for everything from shopping and communication to banking and investment. 
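The ask-and-answer client/server pattern described above can be sketched with plain sockets. The loopback address, the OS-assigned port, and the message contents are all illustrative choices for this sketch, not a real deployment.

```python
import socket
import threading

# Server side: listen on loopback; port 0 lets the OS pick a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    # Accept one client, read its question, send back an answer.
    conn, _ = srv.accept()
    with conn:
        question = conn.recv(1024)
        conn.sendall(b"answer to: " + question)

threading.Thread(target=serve, daemon=True).start()

# Client side: connect, ask a question, read the server's answer.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", port))
    c.sendall(b"what time is it?")
    reply = c.recv(1024)
```

The roles match the text exactly: the client asks and gives instructions; the server answers and responds.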
Intruders into a network system (or hackers) do not care about the privacy or identity of network users. Their aim is to gain control of computers on the network so that they can use these systems to launch attacks on other computer systems. Therefore, people who use the network for such purposes must be protected from strangers who try to read their sensitive documents, use their computers to attack other systems, send forged email, or access their personal information (such as their bank or other financial statements). Security Clauses The International Organization for Standardization's (ISO's) 17799:2005 Standard is a code of practice for information security management which provides a broad, non-technical framework for establishing efficient IT controls. The ISO 17799 Standard consists of 11 clauses that are divided into one or more security categories, for a total of 39 security categories. The security clauses of the ISO 17799:2005 Standard – Code of Practice for Information Security Management – are: Security Policy; Organizing Information Security; Asset Management; Human Resources Security; Physical and Environmental Security; Communications and Operations; Access Control; Information Systems Acquisition, Development, and Maintenance; Information Security Incident Management; Business Continuity Management; and Compliance. (http://www.theiia.org/ITAuditArchive/index.cfm?act=ITAudit.printiiid=467aid=2209) Here is a brief description of the more recent version of these security clauses: Security Policy: Security policies are the foundation of the security framework and provide direction and information on the company's security posture. This clause states that support for information security should be provided in accordance with the company's security policy. 
Organizing Information Security: This clause addresses the establishment and organizational structure of the security program, including the appropriate management framework for security policy, how information assets should be secured from third parties, and how information security is maintained when processing is outsourced. Asset Management: This clause describes best practices for classifying and protecting assets, including data, software, hardware, and utilities. The clause also provides information on how to classify data, how data should be handled, and how to protect data assets adequately. Human Resources Security: This clause describes best practices for personnel management, including hiring practices, termination procedures, employee training on security controls, dissemination of security policies, and use of incident response procedures. Physical and Environmental Security: As the name implies, this clause addresses the different physical and environmental aspects of security, including best practices organizations can use to mitigate service interruptions, prevent unauthorized physical access, or minimize theft of corporate resources. Communications and Operations: This clause discusses the requirements pertaining to the management and operation of systems and electronic information. Examples of controls to audit in this area include system planning, network management, and e-mail and e-commerce security. Access Control: This security clause describes how access to corporate assets should be managed, including access to digital and nondigital information, as well as network resources. Information Systems Acquisition, Development, and Maintenance: This section discusses the development of IT systems, including applications created by third parties, and how security should be incorporated during the development phase. 
Information Security Incident Management: This clause identifies best practices for communicating information security issues and weaknesses, such as reporting and escalation procedures. Once established, auditors can review existing controls to determine if the company has adequate procedures in place to handle security incidents. Business Continuity Management: The 10th security clause provides information on disaster recovery and business continuity planning. Actions auditors should review include how plans are developed, maintained, tested, and validated, and whether or not the plans address critical business operation components. Compliance: The final clause provides valuable information auditors can use when identifying the compliance level of systems and controls with internal security policies, industry-specific regulations, and government legislation. (Edmead, M. T. 2006 retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209iid=467) The standard, which was updated in June 2005 to reflect changes in the field of information security, provides a high-level view of information security from different angles and a comprehensive set of information security best practices. More specifically, ISO 17799 is designed for companies that wish to develop effective information security management practices and enhance their IT security efforts. Control Objectives The ISO 17799 Standard contains 11 clauses which are split into security categories, with each category having a clear control objective. There are a total of 39 security categories in the standard. The control objectives in the clauses are designed to meet the risk assessment requirements and they can serve as a practical guideline or common basis for development of effective security management practices and organisational security standards. Therefore, if a company is compliant with the ISO/IEC 17799 Standard, it will most likely meet IT management requirements found in other laws and regulations. 
However, because different standards strive for different overall objectives, auditors should point out that compliance with 17799 alone will not meet all of the requirements needed for compliance with other laws and regulations. Establishing an ISO/IEC 17799 compliance program could greatly enhance a company's information security controls and IT environment. Conducting an audit evaluation of the standard provides organizations with a quick snapshot of the security infrastructure. Based on this snapshot, senior managers can obtain a high-level view of how well information security is being implemented across the IT environment. In fact, the evaluation can highlight gaps present in security controls and identify areas for improvement. In addition, organizations looking to enhance their IT and security controls could keep in mind other ISO standards, especially current and future standards from the 27000 series, which the ISO has set aside for guidance on security best practices. (Edmead, M. T. 2006 retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209iid=467) Tree Topology Tree topologies bind multiple star topologies together onto a bus. In its most simple form, only hub devices are directly connected to the tree bus, and the hubs function as the root of the device tree. This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub ports) alone. Topologies remain an important part of network design theory. It is very simple to build a home or small business network without understanding the difference between a bus design and a star design, but understanding the concepts behind these gives you a deeper understanding of important elements like hubs, broadcasts, ports, and routes. 
(www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf) Use of the ring topology should be considered by medium-sized companies, and the ring topology would also suit small companies, because it ensures ease of data transfer. Ring Topology In a ring network, each device has two neighbors for communication purposes. Messages pass in the same direction around the ring, either clockwise or counterclockwise. If any cable or device fails, this will break the loop and can disable the entire network. Bus Topology Bus networks utilize a common backbone to connect various devices. This backbone, a single cable, functions as a shared communication medium which devices tap into or attach to with an interface connector. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf) Star Topology The star topology is used in many home networks. A star network has a central connection point, or hub, which can be an actual hub or a switch. Usually, devices connect to the switch or hub with Unshielded Twisted Pair (UTP) Ethernet cable. Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access, not the entire LAN. If the hub fails, however, the entire network also fails. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf) Relating the security clauses and control objectives to an organisation In an organisation like the Nurht's Institute of Information Technology (NIIT), the above-mentioned security clauses and control objectives provide a high-level view of information security from different angles and a comprehensive set of information security best practices. 
Also, ISO 17799 is designed for companies like NIIT, which aim to enhance their IT security and to develop effective information security management practices. At NIIT, the local network relies to a considerable degree on the correct implementation of these security practices and other algorithms, so as to avoid congestion collapse and preserve network stability. An attacker or hacker on the network can cause TCP endpoints to react more aggressively in the face of congestion by forging excessive data acknowledgments or excess duplicate acknowledgments. Such an attack could possibly cause a portion of the network to go into congestion collapse. The Security Policy clause states that "support for information security should be done in accordance with the company's security policy" (Edmead, M. T. 2006). This provides a foundation for the security framework at NIIT, and also provides information and direction on the organisation's security posture. For instance, this clause helps the company's auditors determine whether the security policy of the company is properly maintained, and also whether it is disseminated to every employee. The Organizing Information Security clause stipulates that there should be an appropriate management framework for the organisation's security policy. This covers the organizational structure of NIIT's security program, including the right security policy management framework, the securing of information assets from third parties, and the maintenance of information security during outsourced processing. At NIIT, the security clauses and control objectives define the company's stance on security and also help to identify the vital areas to consider when implementing IT controls. The ISO/IEC 17799 Standard's 11 security clauses enable NIIT to accomplish its security objectives by providing a comprehensive set of information security best practices for the company to utilize in enhancing its IT infrastructure. 
Conclusion Different businesses require different computer networks, because the type of network utilized in an organisation must suit the organisation. It is advisable for smaller businesses to use a LAN, because it is more reliable. A WAN or MAN would be ideal for larger companies, but if an organisation decides to expand, it can then change the type of network it has in use. If an organisation decides to go international, then a wireless WAN can be very useful. Also, small companies should endeavor to set up their network using a client/server approach. This would help the company be more secure and enable it to keep in touch with what others are doing. The client/server approach would be much better than a peer-to-peer network, and it would be more cost-effective. On average, most organisations have to spend a good amount of money and resources to procure and maintain a reliable and successful network that will be easy to maintain in the long run. For TCP congestion control, when CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially. If CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly. When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold; when a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS. For a Small Office/Home Office (SOHO), wireless networks are very suitable. In such a network, there is no need to run wires through walls and under carpets for connectivity. The SOHO user need not worry about plugging a laptop into a docking station every time they come into the office, or fumble with clumsy and unattractive network cabling. Wireless networking provides connectivity without the hassle and cost of wiring and expensive docking stations. 
Also, as the business or home office grows or shrinks, there is no need to wire new computers to the network. If the business moves, the network is ready for use as soon as the computers are moved. For networks where wiring is impractical, such as those found in warehouses, wireless will always be the most attractive alternative. As wireless speeds increase, these users have only brighter days ahead. (http://www.nextstep.ir/network.shtml) It is essential to note that the computer network installed in an organisation represents more than just a change in the method by which employees communicate. A particular computer network may dramatically affect the way employees in an organisation work and even the way they think.
Bibliography
Business Editors High-Tech Writers. (2003, July 22). International VoIP Council Launches Fax-Over-IP Working Group. Business Wire. Retrieved July 28, 2003 from ProQuest database.
Career Directions (2001, October). Tech Directions, 61(3), 28. Retrieved July 21, 2003 from EBSCOhost database.
Edmead, M. T. (2006). Are You Familiar with the Most Recent ISO/IEC 17799 Changes? (Retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209iid=467)
FitzGerald, J. (1999). Business Data Communications and Networking. Pub: John Wiley & Sons.
Forouzan, B. (1998). Introduction to Data Communications and Networking. Pub: McGraw-Hill.
http://www.theiia.org/itaudit
http://www.theiia.org/ITAuditArchive/index.cfm?act=ITAudit.printiiid=467aid=2209
http://www.psc.edu/networking/projects/tcpfriendly/
ISO/IEC 17799:2000 – Code of practice for information security management. Published by ISO and the British Standards Institute [http://www.iso.org/]
ISO/IEC 17799:2005, Information technology – Security techniques – Code of practice for information security management. Published by ISO [http://www.iso.org/iso/en/prods-services/popstds/informationsecurity.html]
Kurose, J. F. & Ross, K. W. (2002).
Computer Networking: A Top-Down Approach Featuring the Internet, 2nd Edition. ISBN: 0-321-17644-8 (international edition), ISBN: 0-201-97699-4. Addison-Wesley, 2002. www.awl.com/cs
Ming, D. & Sudama, R. (1992). Network Monitoring Explained: Design and Application. Pub: Ellis Horwood.
Rigney, S. (1995). Network Planning and Management: Your Personal Consultant.
Round-Trip Time Estimation and RTO Timeout Selection (retrieved from http://netlab.cse.yzu.edu.tw/ns2/html/doc/node368.html)
Shafer, M. (2001, June 11). Careers not so secure? Network Computing, 12(12), 130. Retrieved July 22, 2003 from EBSCOhost database.
Stevens, W. and Allman, M. (1998). TCP Implementation Working Group (retrieved from http://www.ietf.org/proceedings/98aug/I-D/draft-ietf-tcpimpl-cong-control-00.txt)
Watson, S. (2002). The Network Troubleshooters. Computerworld, 36(38), 54. (Retrieved July 21, 2003 from EBSCOhost database)
Wesley, A. (2000). Internet Users Guide to Network Resource Tools, 1st Ed. Pub: Netskils.
www.microsoft.co.uk www.apple.com www.apple.co.uk www.bized.com http://www.nextstep.ir/network.shtml www.novell.com www.apple.com/business www.microsoft.com/networking/e-mails www.engin.umich.edu www.microsoft.com
Network management has grown as a career that requires specialized training, and comes with management of important responsibilities, thus creating future opportunities for employment. The resulting expected increase in opportunities should be a determining and persuasive factor for graduates to consider going into network management. Computer networking is a discipline of engineering that involves communication between various computer devices and systems. In computer networking, protocols, routers, routing, and networking across the public internet have specifications that are defined in RFC documents. Computer networking can be seen as a sub-category of computer science, telecommunications, IT and/or computer engineering. Computer networks also depend largely upon the practical and theoretical applications of these engineering and scientific disciplines. In the vastly technological environment of today, most organisations have some kind of network that is used every day. It is essential that the day-to-day operations in such a company or organisation are carried out on a network that runs smoothly. Most companies employ a network administrator or manager to oversee this very important aspect of the company’s business. This is a significant position, as it comes with great responsibilities because an organisation will experience significant operational losses if problems arise within its network. Computer networking also involves the setting up of any set of computers or computer devices and enabling them to exchange information and data. Some examples of computer networks include: Local area networks (LANs) that are made up of small networks which are constrained to a relatively small geographic area. Wide area networks (WANs) which are usually bigger than local area networks, and cover a large geographic area. Wireless LANs and WANs (WLAN WWAN). 
These represent the wireless equivalents of the local area network and the wide area network. Networks are interconnected to allow communication using a variety of different kinds of media, including twisted-pair copper cable, coaxial cable, optical fiber, and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or by nearly unlimited distances (e.g. via the interconnections of the Internet). (http://en.wikipedia.org/wiki/Computer_networking)

TASK 1

TCP connection congestion control

Every application, whether small or large, should perform adaptive congestion control, because applications that perform congestion control use the network more efficiently and generally perform better. Congestion control algorithms prevent the network from entering congestive collapse, a situation in which, although the network links are heavily utilized, very little useful work is being done. Networks will increasingly require applications to perform congestion control, and applications that do not will be harshly penalized, probably in the form of having their packets preferentially dropped during times of congestion. (http://www.psc.edu/networking/projects/tcpfriendly/)

Principles of Congestion Control

Informally, congestion means that too many sources are sending too much data, too fast for the network to handle. TCP congestion control is not the same as flow control; there are several differences between the two. Other aspects of congestion control include global versus point-to-point behaviour and related orthogonal issues. Congestion manifests itself as lost packets (buffer overflow at routers) and long delays (queueing in router buffers). In end-to-end congestion control there is no explicit feedback from network routers; congestion is inferred from the loss observed by end systems.
In network-assisted congestion control, routers provide feedback to end systems, for example by indicating an explicit rate the sender may send at, or by sending a choke packet. Below are some other characteristics and principles of TCP congestion control: when CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially; when CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly; when a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to the new Threshold; and when a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

Avoidance of Congestion

The TCP sender must use the congestion avoidance and slow start algorithms to control the amount of outstanding data injected into the network. To implement these algorithms, two variables are added to the TCP per-connection state. The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK), while the receiver's advertised window (rwnd) is a receiver-side limit on the amount of outstanding data. The minimum of cwnd and rwnd governs data transmission. (Stevens, W. and Allman, M. 1998)

TCP Flow Control

In TCP flow control, the receiving side of the TCP connection maintains a receive buffer, and a speed-matching service matches the send rate to the receiving application's drain rate. During flow control, the receiver advertises any spare room by including the value of RcvWindow in the segments it sends, and the sender limits unACKed data to RcvWindow. TCP flow control thus ensures that the receive buffer does not overflow.

Round-trip Time Estimation and Timeout

The TCP timeout must be longer than the RTT, but the RTT varies: too short a timeout causes premature retransmissions, while too long a timeout causes a slow reaction to segment loss. SampleRTT is the time measured from segment transmission until ACK receipt, ignoring retransmissions. Because SampleRTT varies, we want the estimated RTT to be "smoother", and new round-trip time samples arrive with each new ACK.
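The window-adjustment rules just described can be sketched as a simplified model (illustrative Python, counting the window in MSS units; the function names are invented and this is not a real TCP implementation):

```python
MSS = 1  # measure the window in segments for simplicity

def on_new_ack(cong_win, threshold):
    """Window growth on each new ACK: exponential below Threshold
    (slow start), linear above it (congestion avoidance)."""
    if cong_win < threshold:
        return cong_win + MSS                # +1 MSS per ACK -> doubles each RTT
    return cong_win + MSS * MSS / cong_win   # roughly +1 MSS per RTT

def on_triple_dup_ack(cong_win):
    """Triple duplicate ACK: Threshold = CongWin/2, CongWin = Threshold."""
    threshold = cong_win / 2
    return threshold, threshold              # (new CongWin, new Threshold)

def on_timeout(cong_win):
    """Timeout: Threshold = CongWin/2, CongWin = 1 MSS."""
    return 1 * MSS, cong_win / 2             # (new CongWin, new Threshold)
```

Starting from a window of 1 MSS, repeated calls to on_new_ack double the window each round trip until Threshold is crossed, after which growth slows to roughly one MSS per round trip; the two loss events then shrink the window as the rules above state.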
The RTT sample is computed as the difference between the current time and a time echo field in the ACK packet. When the first sample is taken, its value is used as the initial value for srtt, and half the first sample is used as the initial value for rttvar. (Round-Trip Time Estimation and RTO Timeout Selection) Timeouts also cause problems, including restricting the sender, which is compelled to wait until the timeout expires and can do nothing during this period. Also, when the first segment in the sliding window is not ACKed, retransmission becomes necessary, and the sender waits another RTT before the segment flow continues. It should be noted that on receiving the later segments, the receiver still sends back ACKs.

Estimated RTT: EstimatedRTT = 0.875 * EstimatedRTT + 0.125 * SampleRTT

DevRTT: DevRTT = (1 - 0.25) * DevRTT + 0.25 * |SampleRTT - EstimatedRTT|

Timeout interval: TimeoutInterval = EstimatedRTT + 4 * DevRTT

The Integrated Services (IntServ) and Differentiated Services (DiffServ) architectures are two architectures that have been proposed for provisioning and guaranteeing quality of service (QoS) over the Internet. Whereas the IntServ framework was developed within the IETF to provide individualized QoS guarantees to individual application sessions, DiffServ is geared towards handling different classes of traffic in different ways on the Internet. These two architectures represent the IETF's current standards for the provision of QoS guarantees, although neither IntServ nor DiffServ has found widespread acceptance on the Internet.

(a) Integrated Service Architecture

In computer networking, the Integrated Services (IntServ) architecture specifies the elements for guaranteeing quality of service (QoS) on a network. For instance, IntServ can be used to allow sound and video to be sent over a network to the receiver without interruption.
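One update step of the estimator follows directly from the three formulas above (a Python sketch; the text leaves the order of the two updates unspecified, and here DevRTT is computed from the freshly updated EstimatedRTT):

```python
ALPHA = 0.125  # weight of the new sample in EstimatedRTT
BETA = 0.25    # weight of the new deviation in DevRTT

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    """Apply one SampleRTT to the EWMA estimators and recompute the timeout."""
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    timeout_interval = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout_interval
```

For example, with EstimatedRTT = 100 ms, DevRTT = 10 ms, and a new SampleRTT of 120 ms, one step yields EstimatedRTT = 102.5 ms, DevRTT = 11.875 ms, and a timeout of 150 ms: the estimate moves only an eighth of the way towards the sample, which is exactly the "smoothing" the text asks for.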
IntServ specifies a fine-grained quality of service system, in contrast to DiffServ's coarse-grained system of control. In the IntServ architecture, the idea is that every router in the system implements IntServ, and applications which require various types of guarantees have to make individual reservations. Flow specs are used to describe the purpose of the reservation, and the underlying mechanism that signals it across the network is called RSVP. TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills up with tokens, arriving at a constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how large a burst of traffic is allowed to be. TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750 Hz and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the burst associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences; however, this means the bucket depth needs to be increased to compensate for the bursts being larger. (http://en.wikipedia.org/wiki/Integrated_services)

(b) Differentiated Service Architecture

RFC 2475 (An Architecture for Differentiated Services) was published in 1998 by the IETF. Presently, DiffServ has widely replaced other Layer 3 quality of service mechanisms (such as IntServ) as the basic protocol that routers use to provide different service levels.
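The token bucket described above is small enough to sketch directly (illustrative Python; the class and method names are invented):

```python
class TokenBucket:
    """Tokens arrive at a constant rate; each packet consumes one token;
    the bucket depth caps how large a burst can be."""

    def __init__(self, rate, depth):
        self.rate = rate        # token arrival rate, tokens per second
        self.depth = depth      # maximum tokens the bucket can hold
        self.tokens = depth     # start with a full bucket
        self.last = 0.0         # timestamp of the previous refill

    def allow(self, now):
        """Return True if a packet may be sent at time `now` (seconds)."""
        elapsed = now - self.last
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The video example in the text maps to TokenBucket(750, 10): a full 10-packet frame can burst out at one instant, an 11th packet at the same instant is refused, and the tokens to send it accumulate within a few milliseconds.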
DiffServ (Differentiated Services) is a computer networking architecture which specifies a scalable, less complex, coarse-grained mechanism for the classification and management of network traffic and for the provision of QoS (quality of service) guarantees on modern IP networks. For instance, DiffServ can be used to provide low-latency, guaranteed service (GS) to video, voice, or other critical network traffic, while providing simple best-effort treatment to non-critical network services like file transfers and web traffic. Most of the proposed quality of service mechanisms which allowed these services to co-exist were complicated and did not adequately meet the demands of Internet users, because modern data networks carry various kinds of services, such as streaming music, video, voice, email, and web pages. It would probably be difficult to implement IntServ in the core of the Internet because most communication between computers connected to the Internet is based on a client/server structural design. Client/server describes a structure in which one computer connects to another for the purpose of giving it work instructions or asking it questions. In an arrangement like this, the computer that asks the questions and gives out instructions is the client, while the computer that answers the questions and responds to the work instructions is the server. The same terms are used to describe the software programs that facilitate the asking and answering. A client application, for instance, presents an on-screen interface for the user to work with at the client computer; the server application welcomes the client and knows how to respond correctly to the client's commands. Any file server or PC can be adapted for use as an Internet server; however, a dedicated computer should be chosen. Anyone with a computer and modem can join this network by using a standard phone line.
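The question-and-answer pattern just described can be sketched with Python's standard socket module (a minimal single-request sketch; the port number and message are arbitrary choices, not from the text):

```python
import socket
import threading

READY = threading.Event()  # set once the server is listening

def serve_once(port):
    """The server: accept one client, read its question, answer it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        READY.set()
        conn, _addr = srv.accept()
        with conn:
            question = conn.recv(1024)               # the client's "question"
            conn.sendall(b"answer to: " + question)  # the server's response

def ask(port, question):
    """The client: connect, send a question, return the server's reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(question)
        return cli.recv(1024)

# Run the server in a background thread and query it once from the client.
server = threading.Thread(target=serve_once, args=(50007,))
server.start()
READY.wait()
reply = ask(50007, b"what is the forecast?")
server.join()
```

The roles match the text exactly: the `ask` side issues the question, the `serve_once` side knows how to respond to it, and the same machine can play either role depending on which program it runs.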
Dedicating the server (that is, using a computer as a server only) helps avoid some of the security and other basic problems that result from sharing the server's functions. To gain access to the Internet, you will require an engineer to install the broadband modem; you will then be able to use the server to share Internet access with all machines on the network. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf)

TASK 5

Network security

These days, computers are used for everything from shopping and communication to banking and investment. Intruders into a network system (or hackers) do not care about the privacy or identity of network users; their aim is to gain control of computers on the network so that they can use these systems to launch attacks on other computer systems. Therefore, people who use the network must be protected from unknown strangers who try to read their sensitive documents, use their computers to attack other systems, send forged email, or access their personal information (such as bank or other financial statements).

Security Clauses

The International Organization for Standardization's (ISO's) 17799:2005 Standard is a code of practice for information security management which provides a broad, non-technical framework for establishing efficient IT controls. The ISO 17799 Standard consists of 11 clauses that are divided into one or more security categories, for a total of 39 security categories. The security clauses of the ISO 17799:2005 Standard (Code of Practice for Information Security Management) are: Security Policy; Organizing Information Security; Asset Management; Human Resources Security; Physical and Environmental Security; Communications and Operations; Access Control; Information Systems Acquisition, Development, and Maintenance; Information Security Incident Management; Business Continuity Management; and Compliance.
(http://www.theiia.org/ITAuditArchive/index.cfm?act=ITAudit.printiiid=467aid=2209) Here is a brief description of the more recent version of these security clauses. Security Policy: security policies are the foundation of the security framework and provide direction and information on the company's security posture. This clause states that support for information security should be given in accordance with the company's security policy. Organizing Information Security: this clause addresses the establishment and organizational structure of the security program, including the appropriate management framework for security policy, how information assets should be secured from third parties, and how information security is maintained when processing is outsourced. Asset Management: this clause describes best practices for classifying and protecting assets, including data, software, hardware, and utilities. The clause also provides information on how to classify data, how data should be handled, and how to protect data assets adequately. Human Resources Security: this clause describes best practices for personnel management, including hiring practices, termination procedures, employee training on security controls, dissemination of security policies, and use of incident response procedures. Physical and Environmental Security: as the name implies, this clause addresses the different physical and environmental aspects of security, including best practices organizations can use to mitigate service interruptions, prevent unauthorized physical access, and minimize theft of corporate resources. Communications and Operations: this clause discusses the requirements pertaining to the management and operation of systems and electronic information. Examples of controls to audit in this area include system planning, network management, and e-mail and e-commerce security.
Access Control: this security clause describes how access to corporate assets should be managed, including access to digital and nondigital information, as well as network resources. Information Systems Acquisition, Development, and Maintenance: this section discusses the development of IT systems, including applications created by third parties, and how security should be incorporated during the development phase. Information Security Incident Management: this clause identifies best practices for communicating information security issues and weaknesses, such as reporting and escalation procedures. Once these are established, auditors can review existing controls to determine whether the company has adequate procedures in place to handle security incidents. Business Continuity Management: the 10th security clause provides information on disaster recovery and business continuity planning. Actions auditors should review include how plans are developed, maintained, tested, and validated, and whether the plans address critical business operation components. Compliance: the final clause provides valuable information auditors can use when identifying the compliance level of systems and controls with internal security policies, industry-specific regulations, and government legislation. (Edmead, M. T. 2006, retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209iid=467) The standard, which was updated in June 2005 to reflect changes in the field of information security, provides a high-level view of information security from different angles and a comprehensive set of information security best practices. More specifically, ISO 17799 is designed for companies that wish to develop effective information security management practices and enhance their IT security efforts.

Control Objectives

The ISO 17799 Standard contains 11 clauses which are split into security categories, with each category having a clear control objective.
There are a total of 39 security categories in the standard. The control objectives in the clauses are designed to meet risk assessment requirements, and they can serve as a practical guideline or common basis for the development of effective security management practices and organisational security standards. Therefore, if a company is compliant with the ISO/IEC 17799 Standard, it will most likely meet IT management requirements found in other laws and regulations. However, because different standards strive for different overall objectives, auditors should point out that compliance with 17799 alone will not meet all of the requirements needed for compliance with other laws and regulations. Establishing an ISO/IEC 17799 compliance program could greatly enhance a company's information security controls and IT environment. Conducting an audit evaluation against the standard provides organizations with a quick snapshot of the security infrastructure. Based on this snapshot, senior managers can obtain a high-level view of how well information security is being implemented across the IT environment, and the evaluation can highlight gaps in security controls and identify areas for improvement. In addition, organizations looking to enhance their IT and security controls could keep in mind other ISO standards, especially current and future standards from the 27000 series, which the ISO has set aside for guidance on security best practices. (Edmead, M. T. 2006, retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209iid=467)

Tree Topology

Tree topologies bind multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and the hubs function as the roots of the device tree. This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices by the broadcast traffic it generates) or a star (limited by the number of hub ports) alone.
Topologies remain an important part of network design theory. It is possible to build a home or small business network without understanding the difference between a bus design and a star design, but understanding the concepts behind them gives you a deeper understanding of important elements like hubs, broadcasts, ports, and routes. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf) The ring topology should be considered for medium-sized companies, and it would also be a good topology for small companies because it ensures ease of data transfer.

Ring Topology

In a ring network, each device has two neighbors for communication purposes. Messages are passed around the ring in the same direction, effectively either counterclockwise or clockwise. If any cable or device fails, the loop is broken, which could disable the entire network.

Bus Topology

Bus networks utilize a common backbone to connect various devices. This backbone, a single cable, functions as a shared communication medium which the devices tap into or attach to with an interface connector. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf)

Star Topology

The star topology is used in many home networks. A star network consists of a central connection point, or hub, which can be an actual hub or a switch. Devices usually connect to the switch or hub with Unshielded Twisted Pair (UTP) Ethernet cable. Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access, not the entire LAN. If the hub fails, however, the entire network fails.
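The difference between a cable failure and a hub failure in a star network can be illustrated with a toy reachability check (illustrative Python; the node names are invented):

```python
def reachable(edges, start):
    """Return the set of nodes reachable from `start` over undirected links."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, stack = {start}, [start]
    while stack:
        for neighbour in graph.get(stack.pop(), ()):
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen

# A star: every PC has exactly one link, to the central hub.
star = [("hub", "pc1"), ("hub", "pc2"), ("hub", "pc3")]

# One cable fails: only pc3 loses network access; the rest of the LAN survives.
cable_cut = [e for e in star if e != ("hub", "pc3")]

# The hub fails: every link disappears and the whole network is down.
hub_dead = []
```

Running reachable over these three edge lists shows exactly the behaviour described above: the intact star connects everything, the cut cable isolates only pc3, and the dead hub isolates every machine.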
(www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf)

Relating the security clauses and control objectives to an organisation

In an organisation like the Nurht's Institute of Information Technology (NIIT), the above-mentioned security clauses and control objectives provide a high-level view of information security from different angles and a comprehensive set of information security best practices. ISO 17799 is designed for companies like NIIT, which aim to enhance their IT security and to develop effective information security management practices. At NIIT, the local network relies to a considerable degree on the correct implementation of these security practices and of the congestion-control algorithms discussed earlier, so as to avoid congestion collapse and preserve network stability. An attacker or hacker on the network can cause TCP endpoints to react more aggressively in the face of congestion by forging excessive data acknowledgments or excess duplicate acknowledgments; such an attack could cause a portion of the network to go into congestion collapse. The Security Policy clause states that "support for information security should be done in accordance with the company's security policy" (Edmead, M. T. 2006). This provides the foundation of the security framework at NIIT, and also provides information and direction on the organisation's security posture. For instance, this clause helps the company's auditors determine whether the security policy of the company is properly maintained, and whether it is disseminated to every employee. The Organizing Information Security clause stipulates that there should be an appropriate management framework for the organisation's security policy. This covers the organizational structure of NIIT's security program, including the security policy management framework, the securing of information assets from third parties, and the maintenance of information security during outsourced processing.
At NIIT, the security clauses and control objectives define the company's stand on security and also help to identify the vital areas to consider when implementing IT controls. The ISO/IEC 17799's 11 security clauses enable NIIT to accomplish its security objectives by providing a comprehensive set of information security best practices for the company to utilize in enhancing its IT infrastructure.

Conclusion

Different businesses require different computer networks, because the type of network used in an organisation must be suitable for that organisation. It is advisable for smaller businesses to use a LAN, because it is more reliable. A WAN or MAN would be ideal for larger companies, and if an organisation decides to expand, it can then change the type of network in use. If an organisation decides to go international, then a wide area network can be very useful. Also, small companies should endeavour to set up their network using a client/server approach. This would help the company to be more secure and enable it to keep track of what others on the network are doing. The client/server approach would be better than a peer-to-peer network, and it would be more cost-effective. On average, most organisations have to spend a good amount of money and resources to procure and maintain a reliable and successful network that will be easy to maintain in the long run. For TCP congestion control: when CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially; when CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly; when a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold; and when a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS. For a Small Office/Home Office (SOHO), wireless networks are very suitable.
In such a network, there won't be any need to run wires through walls and under carpets for connectivity. The SOHO user need not worry about plugging a laptop into a docking station every time they come into the office, or fumble for clumsy and unattractive network cabling. Wireless networking provides connectivity without the hassle and cost of wiring and expensive docking stations. Also, as the business or home office grows or shrinks, there is no need to wire new computers into the network, and if the business moves, the network is ready for use as soon as the computers are moved. For networks where wiring is impractical, such as those found in warehouses, wireless will often be the only attractive alternative, and as wireless speeds increase, these users have only brighter days ahead. (http://www.nextstep.ir/network.shtml) It is essential to note that the computer network installed in an organisation represents more than just a simple change in the method by which employees communicate: the impact of a particular computer network may dramatically affect the way employees work and even the way they think.

Bibliography

Business Editors & High-Tech Writers. (2003, July 22). International VoIP Council Launches Fax-Over-IP Working Group. Business Wire. Retrieved July 28, 2003 from ProQuest database.
Career Directions (2001, October). Tech Directions, 61(3), 28. Retrieved July 21, 2003 from EBSCOhost database.
Edmead, M. T. (2006). Are You Familiar with the Most Recent ISO/IEC 17799 Changes? (Retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209iid=467)
FitzGerald, J. (1999). Business Data Communications and Networking. Pub: John Wiley & Sons.
Forouzan, B.
(1998). Introduction to Data Communications and Networking. Pub: McGraw-Hill.
http://www.theiia.org/itaudit
http://www.theiia.org/ITAuditArchive/index.cfm?act=ITAudit.printiiid=467aid=2209
http://www.psc.edu/networking/projects/tcpfriendly/
ISO/IEC 17799:2000, Code of Practice for Information Security Management. Published by ISO and the British Standards Institute [http://www.iso.org/]
ISO/IEC 17799:2005, Information technology – Security techniques – Code of practice for information security management. Published by ISO [http://www.iso.org/iso/en/prods-services/popstds/informationsecurity.html]
Kurose, J. F. and Ross, K. W. (2002). Computer Networking: A Top-Down Approach Featuring the Internet, 2nd Edition. ISBN 0-321-17644-8 (international edition), ISBN 0-201-97699-4. Published by Addison-Wesley. www.awl.com/cs
Ming, D. and Sudama, R. (1992). Network Monitoring Explained: Design and Application. Pub: Ellis Horwood.
Rigney, S. (1995). Network Planning and Management: Your Personal Consultant.
Round-Trip Time Estimation and RTO Timeout Selection (retrieved from http://netlab.cse.yzu.edu.tw/ns2/html/doc/node368.html)
Shafer, M. (2001, June 11). Careers not so secure? Network Computing, 12(12), 130. Retrieved July 22, 2003 from EBSCOhost database.
Stevens, W. and Allman, M. (1998). TCP Implementation Working Group (retrieved from http://www.ietf.org/proceedings/98aug/I-D/draft-ietf-tcpimpl-cong-control-00.txt)
Watson, S. (2002). The Network Troubleshooters. Computerworld, 36(38), 54. Retrieved July 21, 2003 from EBSCOhost database.
Wesley, A. (2000). Internet Users Guide to Network Resource Tools, 1st Ed. Pub: Netskills.
www.microsoft.co.uk
www.apple.com
www.apple.co.uk
www.bized.com
http://www.nextstep.ir/network.shtml
www.novell.com
www.apple.com/business
www.microsoft.com/networking/e-mails
www.engin.umich.edu
www.microsoft.com