Saturday, August 31, 2019

E-Procurement And E-Logistics

ABSTRACT In this paper, we analyze the e-procurement and e-logistics of Dell Inc. This will include a brief overview of the company, an exploration of its customer relationship management and supply chain management, and an analysis of the various software used by Dell Inc. in promoting its relationship marketing. INTRODUCTION Today, many people have discovered the significance of e-commerce. E-commerce, also known as electronic commerce, refers to business transactions and communication via computers, especially over the internet and networks (Botha, Bothma and Geldenhuys, 2008: p.23). This involves buying and selling of services and goods, and transfer of funds, among other commercial communications, through the internet, mainly through the World Wide Web (Botha, Bothma and Geldenhuys, 2008: p.23). E-commerce takes place in different situations, such as between businesses and customers (B2C), between one business and another (B2B), and between customer and customer (C2C). It has two main parts, which are e-procurement and e-logistics. E-procurement is defined as an electronic method of conducting business transactions, while e-logistics refers to the transfer of goods sold over the internet to customers (Botha, Bothma and Geldenhuys, 2008: p.24). A well-implemented e-procurement system is highly effective in connecting businesses and their business processes with suppliers while managing all interactions between them. According to Botha, Bothma and Geldenhuys (2008: p.23), with the development and advancement of technology, many businesses now sell their products through computer technology, which is a brilliant way for companies to reduce overhead costs and reach a wide customer base. Thus, e-procurement benefits not only business owners but also customers, since they can shop without leaving their homes.
Also, customers can easily find the lowest price of products when buying their goods via the internet. In this paper, we analyze the e-procurement and e-logistics of Dell Inc. DELL INCORPORATED Dell Inc. is a computer company that was established by Michael S. Dell in 1984 (Krauss, 2003: p.7). It offers a wide range of technology product categories (Krauss, 2003: p.8). These products range from personal computers to services such as storage solutions. It also offers a variety of services, ranging from business services and configurable information technology to product-related support services, consulting, applications and infrastructure technology (Krauss, 2003: p.8). As stated by Levy (1999: p.20), Dell Inc. operates in four global business segments: Public, Large Enterprise, Consumer, and Small and Medium Business. The company designs, manufactures and markets its own products, and sells and supports a range of products and services that can be modified to the individual requirements of customers (Perret and Jaffeux, 2007: p.4). Dell Inc. is considered among the most profitable companies. The company offers some of the most innovative customer service and product custom configuration in the world (Perret and Jaffeux, 2007: p.5). For this reason, the company faces the challenge of satisfying customers' needs while maintaining a stable relationship with them. E-PROCUREMENT AT DELL Dell Inc. is widely known for selling its computers and other services through the internet to other businesses (B2B) and to individual customers (B2C) (Perret and Jaffeux, 2007: p.5). B2B refers to business transactions between one company and another, such as business customers, suppliers and distributors. B2C refers to business transactions between a company and consumers. At the beginning of the 1990s, Dell Inc. attempted to distribute its products through retail channels.
However, the management later found this method unprofitable (Gattorna, 2003: p.51). Hence, Dell Inc. decided to focus on boosting its customer support and services by allowing customers to make orders directly (Gattorna, 2003: p.52). This was considered a unique strategy for Dell customization. Recently, Dell Inc. improved its sourcing and buying processes by implementing a leading e-procurement solution known as Ariba Buyer (Krauss, 2003: p.8). Ariba Buyer is used to ease the business processes between Dell Inc. and its supplier companies, and it is quite useful in automating and streamlining sourcing (Li, 2007: p.20). In earlier years, making purchase orders at Dell was a highly laborious process, since company workers filled out forms for each purchase every time they ordered an item, which included collecting about ten approval signatures (Li, 2007: p.21). The buyers were then expected to re-enter the data into two different systems: a home-grown Access database and the legacy purchasing system. This paper-based process made it challenging for Dell to track its purchases by commodity and to analyze its purchasing patterns in terms of where, how much and from whom supplies were bought; hence the change in its procurement process. Thus, Dell Inc. implemented the Ariba Buyer e-procurement solution. E-procurement enabled Dell to streamline its supply base, which helped eliminate maverick spending and standardize the ordering processes for its suppliers (Krauss, 2003: p.8). Dell then assessed three e-procurement systems against five criteria. These criteria included a user-friendly interface, cost-effectiveness, and integration with the existing back-end system (Krauss, 2003: p.8).
Others included e-commerce links to most of Dell's supplier companies, and compatibility with the current IT policy of Dell servers (Li, 2007: p.20). According to Gattorna (2003: p.50), the personnel responsible for implementing Ariba spent close to seven months developing twenty interfaces to connect Ariba Buyer with Dell's legacy systems. They created linkages between Ariba and Dell's purchase order, catalog data, cost center, accounting code validation, and employee data systems, among others (Gattorna, 2003: p.50). This was done to ascertain that all processed orders had been validated. The resulting product, which facilitates making purchases online, is known as the Dell Internet Requisition Tool (DIREQT) (Gattorna, 2003: p.51). Currently, DIREQT makes it easy for Dell employees to complete purchase orders online by logging into the DIREQT Web site and searching for particular products, suppliers or services, with accurate status reports (Levy, 1999: p.23). Ariba Buyer immediately forwards the catalog items and requisition straight to the right manager at the cost center, who signs the order electronically. The system then automatically creates an approval chain before directing the order through the employee network (Gattorna, 2003: p.51). However, if the ordered product is not present in the catalog, Ariba Buyer brings in a Dell buyer to source the product and forwards the request for final signatures (Perret and Jaffeux, 2007: p.6). After the requisition has been approved, it is moved to the Ariba Commerce Services Network (ASCN). ASCN is a shared network infrastructure that connects buyers, suppliers and marketplaces on the Ariba Business-to-Business (B2B) Commerce Platform (Perret and Jaffeux, 2007: p.6).
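The requisition flow described above (cost-center approval first, then either direct release to the supplier network for catalog items or hand-off to a buyer for non-catalog items) can be sketched as follows. All names here (Requisition, route_requisition, the catalog contents) are hypothetical illustrations and do not correspond to Ariba Buyer's or Dell's actual systems:

```python
# Illustrative sketch of a requisition/approval flow like the one described
# above. All identifiers are hypothetical; this is not Ariba's real API.

from dataclasses import dataclass, field

# Items pre-approved in the purchasing catalog (assumed examples).
CATALOG = {"laptop-dock", "toner-cartridge"}

@dataclass
class Requisition:
    item: str
    cost_center: str
    approvals: list = field(default_factory=list)

def route_requisition(req: Requisition) -> str:
    """Build the approval chain, then decide how the order is sourced."""
    # The cost-center manager signs electronically first.
    req.approvals.append(f"manager:{req.cost_center}")
    if req.item in CATALOG:
        # Catalog items go straight out to the supplier network.
        return "sent to supplier network"
    # Non-catalog items are handed to a buyer to source before final sign-off.
    req.approvals.append("buyer:sourcing")
    return "routed to buyer for sourcing"

print(route_requisition(Requisition("toner-cartridge", "CC-100")))  # sent to supplier network
print(route_requisition(Requisition("standing-desk", "CC-100")))    # routed to buyer for sourcing
```

The design point the sketch captures is that the approval chain is assembled automatically from the requisition's own data, rather than by circulating paper forms for signatures.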
Ariba uses ASCN to communicate orders to suppliers, transmitting them via e-mail, fax, Extensible Markup Language (XML) and electronic data interchange (EDI) (Perret and Jaffeux, 2007: p.6). Moreover, Ariba Buyer also accelerates the payment process at Dell Inc. The receipts that Dell's central receiving department prepares for goods brought into the organization are matched automatically with the right invoice, which is fed into the system by the accounts payable processors (Botha, Bothma and Geldenhuys, 2008: p.25). In addition, purchasers create receipts for the services given to them, which are also matched automatically. This practice helps to avoid the earlier time-consuming routine of routing service invoices for approval. As stated by Botha, Bothma and Geldenhuys (2008: p.25), with Ariba Buyer, Dell's requisition cycle time is likely to be reduced by 62% and its operating costs by 61%. However, Dell Inc. believes that it stands to benefit on an even larger scale from the insight into the buying process gained by consolidating purchasing information. Moreover, through the use of Ariba, Dell is able to gather the information necessary to evaluate its supply base and re-evaluate key business-to-market communications services, office products and consulting, among many other kinds of expenditure (Gattorna, 2003: p.50). CUSTOMER RELATIONSHIP MANAGEMENT According to Perret and Jaffeux (2007: p.7), Customer Relationship Management (CRM) is the creation and maintenance of relations with customers. The key aim of Dell is to offer its customers technologically reliable customer service. Perret and Jaffeux (2007: p.7) argue that the software facilitating Dell's CRM includes marketing automation software, a system that supports sales, and custom-designed Web pages that contain purchase data.
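The automatic matching of receipts against invoices described above typically keys on a shared purchase-order number and amount. A minimal sketch of that idea (the record layout and PO numbers are assumed for illustration, not taken from Dell's system):

```python
# Minimal sketch of automated receipt-to-invoice matching by purchase-order
# number and amount. The data layout is assumed for illustration only.

receipts = [
    {"po": "PO-1001", "amount": 250.00},
    {"po": "PO-1002", "amount": 75.50},
]
invoices = [
    {"po": "PO-1002", "amount": 75.50},
    {"po": "PO-1003", "amount": 10.00},
]

def match_receipts_to_invoices(receipts, invoices):
    """Pair each receipt with an invoice carrying the same PO and amount."""
    index = {(inv["po"], inv["amount"]): inv for inv in invoices}
    matched, unmatched = [], []
    for rec in receipts:
        key = (rec["po"], rec["amount"])
        (matched if key in index else unmatched).append(rec["po"])
    return matched, unmatched

matched, unmatched = match_receipts_to_invoices(receipts, invoices)
print(matched)    # receipts with a matching invoice -> ['PO-1002']
print(unmatched)  # receipts needing manual review   -> ['PO-1001']
```

Receipts that find a matching invoice can be approved for payment automatically; only the unmatched remainder needs the manual review that the old paper process applied to everything.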
According to Ross (2010: p.88), today one-fifth of standards-based computers sold in the world are Dell products. The key concept of Dell Inc. is to sell computers directly to customers, which has increased its success in the computer business (Ross, 2010: p.88). Before Dell Inc. introduced the made-to-order concept, its customers used to buy its products from electronics shops and retail stores. In this case, customers interacted only with the store's salesperson and not the manufacturer. Therefore, Dell introduced the concept of interacting directly with the customer via the internet so as to fulfil the demands of its clients and deliver quality services. E-LOGISTICS AT DELL INC. For Dell Inc., e-logistics has entirely changed its way of distributing products. Traditionally, Dell used to pick up components from suppliers' warehouses, collect them in its central or regional distribution centres, and finally merge them in stock in order to deliver the final products to customers (Ross, 2010: p.88). Currently, through the implementation of e-logistics, Dell Inc. can pick up components from suppliers' warehouses and hand over the merging of components, performed in transit, to logistics service providers such as UPS or Airborne Express (Li, 2007: p.36). This has resulted in lower fixed costs for warehousing and distribution, no product technological obsolescence, and no stock-keeping units (SKUs). SUPPLY CHAIN MANAGEMENT AT DELL INC. Supply Chain Management (SCM) is a system that Dell established to ensure that the precise computer components its customers demand are available at the right time and place. SCM describes how the company manages the transformation of raw materials into end products and how products and services get to all its consumers (William, 2003: p.150). This has enabled the company to develop a tight bond with its supplier companies and consumers. In this regard, Mencarini (2003: p.19) states that Dell Inc.
has one of the most effective SCM systems in the world, and that it is focusing on creating the best SCM through i2 software. This will improve the supply chain process by connecting its suppliers and planners in order to satisfy the requirements and demands of its customers. SOFTWARE USED BY DELL IN PROMOTING RELATIONSHIP MARKETING Dell also uses a variety of software to promote relationship marketing, such as Hotlink, Premier Pages and an enhanced CRM system, among others (Gattorna, 2003: p.57). Its database software is highly efficient and effective for customer relationship management, storing tables of data used to look up customer information and establish promotional campaigns. These databases mainly include customers' information, products and interests. According to Gattorna (2003: p.57), a customer database helps to increase profits, since it contains client information that determines the most efficient and effective ways to target and segment consumers. Hotlink is an automation software program which facilitates targeting and marketing communication, monitoring of customers and market development (Mencarini, 2003: p.21). This software gives Dell a free opportunity to advertise its products through word of mouth. It also strengthens its customer base by ensuring that customers receive better services than before. Premier Pages are custom-designed Web pages forming a transparent online system that contains all of a customer's purchasing data (Gattorna, 2003: p.57). In addition, the software supports a paperless ordering process, which captures customers' technology configurations. Mencarini (2003: p.21) argues that Dell created Premier Pages in order to gather clientele details beyond those it already had and develop a more realistic win-win situation. The process starts when a client places an order for a computer, which is then built to that specification.
Another system that Dell uses is an enhanced CRM system, developed with the help of an information systems company called IS Partners (Moon, 2003: p.45). The resulting ProClarity software offers comprehensive analytical capabilities that highlight negative and positive areas of the business. Moreover, the company breaks down its sales by region, enabling each team to measure its own trends and success. ProClarity significantly benefits all the financial sections of the company. It also helps Dell staff to easily access detailed demographic information about customers. The marketing department is able to follow product sales, customer activity and marketing mixes via this software, while management can follow activities in customer accounts and act on lapsed quotes. Additionally, Dell installed the e-commerce software i2 Collaboration Planner, i2 Supply Chain Planner and i2 Factory Planner in order to meet its supply chain needs (Moon, 2003: p.45). This is applicable in the management of build-to-order procedures that span order placement and customer support. The software enables Dell Inc. to classify customers, target them through their most preferred medium, and obtain and analyze the results (Moon, 2003: p.45). Moreover, Dell Inc. has signed an agreement with Part-Miner (Gattorna, 2003: p.51). Part-Miner is a vertical portal in the electronic components industry, which provides information and helps to match the demand and supply of components. FUTURE PLANS OF DELL INC. In future, Dell plans to update its purchasing processes, for example by establishing online auctions for products and services like printing, shipping and paper (Li, 2007: p.20). The company also plans to make order status, payment information and receipts easily accessible to suppliers online. In the coming years, Dell intends to expand its catalog base and purchase choices by convincing its main suppliers to use the Ariba Business-to-Business Commerce Platform (Li, 2007: p.20).
CONCLUSION CRM-SCM integration seeks to satisfy clients through prompt delivery of products, ensuring their accessibility while maintaining the manufacturer's profits and returns. Thus, there are several lessons that can be drawn from Dell's application of e-business, and this approach can be emulated by other organizations in the industry, resulting in better services for customers. It can be seen in the way Dell Inc. uses CRM to its advantage: customer satisfaction increases customers' trust in the organization, improving its reputation. In addition, custom-building the PC desired by the client has formed a particularly strong relationship between Dell and its customers (Moon, 2003: p.50). Implementing technology in a phased fashion has also helped Dell to achieve a strong relationship with its clients. Dell set up simulated environments in order to support the i2 system in patches without affecting the live system, and it ensured that every stage of the completed process allowed for future growth of the company before developing the whole system. This reduced risk and increased efficiency. Another significant lesson from Dell is to extend the link from the customer to the supplier while maximizing operational efficiency as well as customer satisfaction (Ross, 2010: p.92). As a result, customers were able to spend less money on purchasing customized machines, because Dell passed on the savings that resulted from managing its inventories efficiently. The company was, therefore, able to share information with suppliers about customer requirements and buying patterns in real time. REFERENCES Botha, J., Bothma, C. & Geldenhuys, P. 2008. Managing E-commerce in Business. New York: Juta and Company Ltd. Gattorna, J. 2003. Gower Handbook of Supply Chain Management. Burlington: Gower Publishing Ltd. Krauss, M. 2003. Dell looks to Sears to extend buyer reach. Marketing News, April 28, 2003, Vol. 37, Issue 9. Li, L. 2007.
Supply Chain Management: Concepts, Techniques and Practices Enhancing the Value Through Collaboration. Tokyo: World Scientific. Moon, K. 2003. Dell Computers: A Leader in CRM. Retrieved February 20, 2010. Mencarini, A. 2003. E-Business: Dell Case Study. UK: Strathclyde Business School. Perret, F. & Jaffeux, C. 2007. Essentials of Logistics and Management. London: EPFL Press. Levy, R. H. 1999. The Visible Marketer: Dell's CRM model stresses transparent processes. Available from http://directmag.com/mag/marketing_visible_marketer_dells/index.html [Accessed 20 February 2012]. Ross, D. F. 2010. Introduction to Supply Chain Management Technologies. London: CRC Press. William, C. 2003. The true meaning of supply chain management. Logistics Management, June 2003, Vol. 42, Issue 6.

Beer’s Law Problem Set Essay

Beer's Law Problem Set Spring 2013 1. Calculate the absorbances corresponding to the following values of the percentage of transmitted light (provide your final answer with three decimal places): a. 95% b. 88% c. 71% d. 50% e. 17.5% f. 1% 2. A solution of a compound (1.0 mM) was placed in a spectrophotometer cuvette of light path 1.05 cm. The light transmission was 18.4% at 470 nm. Determine the molar extinction coefficient. Include units in your answer. 3. The molar extinction coefficient of reduced NADH (nicotinamide adenine dinucleotide, reduced form) at 340 nm is 6220 L/mole·cm. 3 ml of solution containing 0.2 micromoles of NADH were placed in a cuvette of 1.05 cm light path. Calculate the percentage light transmission of this sample at 340 nm. 4. 3 ml of a solution containing both the oxidized and reduced forms of nicotinamide adenine dinucleotide (NAD and NADH, respectively) was placed in a 1.0 cm spectrophotometer cuvette. The absorbance at 340 nm (at which only the reduced form absorbs) was 0.207. The absorbance at 260 nm (which measures both the oxidized and reduced forms together) was 0.900. The molar extinction coefficient of NADH at 340 nm is 6220 L/mole·cm, and the molar extinction coefficient at 260 nm is 18,000 L/mole·cm. Calculate the molar concentrations of the oxidized and reduced forms of the nucleotide in the mixed solution. 5. A mixture of ortho, meta, and para cresols dissolved in cyclohexane may be analyzed spectrophotometrically because each exhibits an absorption in a region where absorption due to the other cresols is negligible. The absorption maxima occur at 752 nm, 776 nm, and 815 nm for ortho, meta, and para cresols, respectively. To test the validity of Beer's Law for solutions of the cresols, each is made up in cyclohexane at a series of concentrations and the absorbances measured.
Data obtained are recorded below:

Ortho
  Concentration (g/100ml): 0.25   0.50   1      2
  Absorbance (at 752nm):   0.120  0.235  0.465  0.820
Meta
  Concentration (g/100ml): 0.60   1.15   2.35
  Absorbance (at 776nm):   0.115  0.220  0.460
Para
  Concentration (g/100ml): 0.50   1      2.1    3.15
  Absorbance (at 815nm):   0.09   0.2    0.415  0.60

An unknown mixture of the three cresols in cyclohexane was analyzed, and the percentage of light absorbed at 752, 776, and 815 nm was 27.5, 50, and 41, respectively. Determine the concentration of each cresol and its percentage of the final mixture. (Calculate your answer using 3 decimal places.) (Hint: if 20% of the light is absorbed by the sample, then 80% is transmitted. Percent absorbed does not equal absorbance. Also, the table of data above should be used to generate a graph. How would this help you?) Answers: 1) a. 0.022 b. 0.056 c. 0.149 d. 0.301 e. 0.757 f. 2.00 2) ~700 L/mole·cm 3) 36.7% 4) NAD = 16.7 uM, NADH = 33.3 uM 5) ortho 0.259 g/100ml (8.697%), meta 1.549 g/100ml (52.014%), para 1.170 g/100ml (39.288%)
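Problems 1-4 all turn on the same two relationships: A = -log10(T), with T the transmitted fraction, and Beer's Law A = εcl. A short script reproducing the listed answers for those problems (problem 5 additionally needs the calibration graphs above):

```python
import math

def absorbance(percent_T):
    """A = -log10(T), with T expressed as a fraction of transmitted light."""
    return -math.log10(percent_T / 100)

# Problem 1: absorbance for each %T value.
for pT in (95, 88, 71, 50, 17.5, 1):
    print(f"{pT}% T -> A = {absorbance(pT):.3f}")   # 0.022, 0.056, 0.149, 0.301, 0.757, 2.000

# Problem 2: epsilon = A / (c * l), with c = 1.0 mM and l = 1.05 cm.
epsilon = absorbance(18.4) / (1.0e-3 * 1.05)
print(f"epsilon = {epsilon:.0f} L/mole.cm")          # ~700

# Problem 3: 0.2 umol NADH in 3 ml, l = 1.05 cm, epsilon = 6220.
c3 = 0.2e-6 / 3e-3                                   # mol/L
A3 = 6220 * c3 * 1.05
print(f"%T = {100 * 10 ** (-A3):.1f}%")              # 36.7%

# Problem 4: NADH from A340 alone; total (NAD + NADH) from A260.
nadh = 0.207 / 6220                                  # mol/L
total = 0.900 / 18000                                # mol/L
nad = total - nadh
print(f"NADH = {nadh * 1e6:.1f} uM, NAD = {nad * 1e6:.1f} uM")  # 33.3 uM, 16.7 uM
```

Note the hint in problem 5: percent absorbed must first be converted to percent transmitted (T = 1 - fraction absorbed) before `absorbance` applies.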

Friday, August 30, 2019

Naturopathic Medicines over Pharmaceutical Medications Essay

For the World Health Organization (WHO), the health of an individual is not only a matter of getting rid of physical illness or pain but involves the complete mental and social wellbeing of the individual. It is multi-dimensional and involves various aspects, including the social environment of a person or a society. Still, eighty percent of people are heard complaining about one health problem or another, and only one percent of people are, according to the WHO definition, really healthy. Among the several factors responsible is the departure of people from the age-old system of naturopathic treatment toward the hyped pharmaceutical industry, a product of the Industrial Revolution. Dr. Matthias Rath, a German-based advocate of patients' rights and author of the book "Why Animals Don't Get Heart Attacks - But People Do", rightly said, "There is an entire industry with an innate economic interest to obstruct, suppress and discredit any information about the eradication of diseases" (Faseyin, 2004). He condemned the fact that millions of people are made to pay billions of dollars to the pharmaceutical industry for medicines that never cure but rather kill. The pharmaceutical industry earns more than one trillion dollars by selling drugs that promise cures for various diseases and are marketed at more than 55,000 percent of the cost of the raw materials, bringing profit to the whole pharmaceutical industry and the people attached to it, but without any concern for the health of the people. These drugs merely remove symptoms instead of curing. As a result, more people find themselves facing death even from preventable diseases. For example, a few centuries back, James Lind found that deficiency of vitamin C can cause blood loss and scurvy, but pharmaceutical companies selling medicines that promise to cure cardiovascular diseases still do not supply this information. The official RDA for vitamin C, set at 60 mg, is also not enough to prevent the disease.
And the reason is the attitude of the pharmaceutical industry, which looks at the cost-effectiveness of medicines rather than the health of citizens; naturally, it finds vitamin C an unprofitable venture. Allopathic medicines can cure acute illnesses very effectively, and pharmaceuticals and artificial respiration have also saved millions of lives. But if we count the side effects, these are more than the actual benefits. Several cases have come to light in which patients have suffered from pneumonia and acute physical dysfunction arising from the continuous use of steroid medication. Patients can also lose normal intestinal flora and develop acute digestive problems. Nature has a bounteous wealth of healing powers in its lap, which our ancient healers fruitfully utilized to the advantage of their patients. Naturopathic medicines go deep into cleansing our immune system, healing the hormonal and nervous systems, detoxifying them, and eliminating diseases from their roots, thus taking care of the patient's complete health. The medieval Jewish writer Maimonides reflected Plato's concept of health when he said, "The cure of many diseases is unknown to the physicians... because they are ignorant of the whole (body and soul) which ought to be studied also; for the part can never be well, unless the whole is well. For all good and evil, whether in the body or in the human nature originates... in the soul, and overflows from thence... and therefore if the head and body are to be well, you must begin by curing the soul; that is the first thing." (Vaux & Stenberg, 2002) The underlying principles of healing on which naturopathic medicine is based make it different from all other medical approaches. Naturopathic doctors take the individual's biochemistry, biomechanics, and emotional predispositions into account when prescribing medicines.
The body's self-healing takes into consideration various aspects of body control and strives to maintain the biological balance of the body, which is a very crucial aspect of a healthy body. The holistic or naturopathic way of healing combines the best scientific diagnostic and monitoring techniques with both ancient and innovative health promotion methods. These methods involve the use of natural diet and herbal remedies, nutritional supplements, exercise, relaxation, psycho-spiritual counseling, meditation, breathing exercises, and other self-regulatory practices, taking into consideration the history of the patient's health and his current life, including family, job, and religious life. Naturopathy believes in the basic concept that food and nutritional supplements are the best medicine. It focuses on the prevention of diseases, maintaining high-level wellness and longevity. Besides, naturopathy calls on patients to be active participants in their own healing process, rather than merely passive recipients of treatment. Naturopathy deals with specific individual needs and involves body, mind and soul in the healing process. It is quite true that to understand an illness, knowing the mere physical symptoms is not enough; the emotional aspects of the patient should also be taken care of. Naturopathic treatment is therefore also called a science of life, as it regulates and maintains chemical activities in the brain and controls the rhythm of the heart, blood pressure, the resistance power of the skin and other functions inside our body. It helps persons to overcome anxiety, depression and irritability, improve memory, create emotional stability, proves to be a healing power for our old traumatic experiences and, above all, rejuvenates our lives by giving us energy and vitality. There are several herbs that have multiple uses for the human body.
People have been growing herbs for centuries, and their medicinal properties challenge even today's practitioners of medicine. Our ancestors grew herb plants in their homes. Much evidence has come to light showing that early settlers grew herbs like parsley, anise, pennyroyal, sorrel, watercress, liverwort, wild leeks, and lavender across America and in other parts of the world as well. They are still grown in many houses all over America, and their proper use can relieve patients of a number of diseases. There are many more herbs, like ginger, which reduces the chances of heart attack and acts as a protective cover for the heart and blood vessels (Natural Herbs Guide Online). In 1983, the World Health Organization suggested incorporating naturopathic medicine into conventional health care systems. In 1994, Bastyr University of Natural Health Sciences received a grant of $1 million from the National Institutes of Health's Office of Alternative Medicine to facilitate research into alternative therapies to treat patients affected by HIV and AIDS. The diet for cancer patients recommended by the National Cancer Institute was first published in a naturopathic medical textbook in the 1940s. The government of Germany has made it mandatory for conventional doctors and pharmacists to undergo formal training in naturopathic techniques, as they are cost-effective (Morton & Morton, 1997). Graduates of naturopathic colleges have to put in more hours of study in basic and clinical science than their counterparts in Yale or Stanford medical schools, and they receive more training in therapeutic nutrition than M.D.s, osteopathic physicians, or registered dietitians.
In the United States alone, there are more than one thousand licensed naturopathic physicians, and many provinces of Canada also issue licenses to naturopathic doctors as primary care physicians; it is expected that by the end of 2010, all fifty states will start issuing licenses to naturopathic physicians (Morton & Morton, 1997). There are many more healing techniques, such as Chiropractic, Ayurvedic Medicine, Therapeutic Massage, Traditional Chinese Medicine (TCM), Acupuncture, Acupressure, Atlas Orthogonal, Chelation Therapy, Colonics, Psychotherapy/Counseling, Movement Therapies/Dance, Holistic Dentistry, Ear Candling/Ear Coning/Thermal-Auricular Therapy, Feng Shui, Flower Essences (Bach Flower Remedies), Herbalism, Hypnotherapy, Lymph Drainage Therapy, Ohashiatsu and Vitamin Therapy, whose basic principles and remedies lie in the various ingredients found in nature. So why not fully utilize what nature has given to us? Only with a healthy body is there a healthy mind, and only a healthy mind can lead the world toward healthy living. REFERENCE LIST Faseyin, A. Y. 2004. The Pharmaceutical Cartel: A Tool for Genocide. Retrieved on February 10, 2008 from http://newafrikanvodun.com/pharm.html. Grout, M. M. Allopathic Medicine. Retrieved on February 10, 2008 from http://www.crossroadsclinic.net/articles/allopathic_medicine.html. Morton, M. A. & Morton, M. 1997. Naturopathic Medicine. Retrieved on February 26, 2008 from http://www.healthy.net/asp/templates/article.asp?PageType=Article&ID=508. NaturalHerbsGuide.com. Natural Herbs, Herbal Remedies, Medicines, and Supplements Guide. Retrieved on February 26, 2008 from http://www.naturalherbsguide.com/. Vaux, K. L. and Stenberg, M. 2002. Covenants of Life: Contemporary Medical Ethics in Light of the Thought of Paul Ramsay. USA: Kluwer Academic Publishers.

Thursday, August 29, 2019

Cultural relativism Essay

The cultural background of an individual determines their moral beliefs in various ways. This results from the difficulty of changing one's beliefs and cultural practices, which determine one's moral beliefs. Culture is one of the major influences on a person's relation to different beliefs, which clearly indicates that moral beliefs are dependent on culture. While moral beliefs can be cross-cultural, it is observed that persons with similar moral beliefs share common cultural practices. For example, a person residing in Africa may hold the moral belief that it is hard to maintain a polygamous marriage, which concurs with the opinion of a person residing in Europe. This clearly shows that all cultures share some moral beliefs. The main difference between the two perspectives is this: moral beliefs that depend on the culture of the individual cannot be easily changed by environmental factors (Russ 290). On the contrary, cross-cultural moral beliefs are easily changed by the environment. The environment's impact on moral beliefs is reflected in adulthood, when the person develops a different approach to particular issues and arguments in society. In addition, the moral beliefs which are shared by all cultures tend to vary with the degree of technological advancement and modernization in a particular culture as compared to other cultures. The difference arises where the environmental influence on moral beliefs in a given culture affects all individuals in that culture, implying that the moral beliefs still depend on the culture. On the other hand, environmental impact on morals tends to vary cross-cultural moral beliefs, which minimizes the similarities and creates a larger borderline (Russ 278). Culture shock occurs when a person is introduced to a different culture. Culture shock occurs in various forms, such as new dialects, food and views. For example, a person

Wednesday, August 28, 2019

Managing Organisational Communication Essay Example | Topics and Well Written Essays - 1000 words

Managing Organisational Communication - Essay Example Movement through sequence is characterized by one or more of the parties making concessions in return for concessions being made by the other party (or parties). What the parties do is 'trade off' some part of their original negotiating position. This process continues until the parties either reach a point of agreement, i.e. they are prepared to accept the position of the opposing side, or a stalemate is reached. (Susskind and Cruikshank, 1987). Principled negotiation is arguably harder for those in a position of relative power to achieve than for those who have less power in the relationship. For example, a director heading a team of 40 sales and marketing staff has the final say when it comes to decisions - but if that decision leaves the staff feeling unfairly treated, the director has not achieved a good result for the staff, themselves or the firm. Ethics is a set of moral principles and values. Ethics is no longer a purely personal concern. Nor is it something that organizational leaders can take for granted. Today, a well-tuned sense of the ethical has become a 'must have' for those in business wishing to create and belong to sustainable enterprises, as well as for the average person in the street who is concerned about who they work for, who they buy from and who they invest in. Therefore we have written this primer. WHAT IS CSR The ethic of corporate social responsibility has been described as "the alignment of business operations with social values. CSR consists of integrating the interests of stakeholders--all of those affected by a company's conduct--into the company's business policies and actions." Fundamentally, socially responsible behavior internalizes all external consequences of an action, both its costs and benefits. Ultimately, the corporation is only a reflection of consumers' demands and priorities; true social change necessarily involves changes in consumers' demands. 
Voluntary CSR is really nothing more than corporate advertising that makes consumers aware of new products with features for which they are willing to pay. Although CSR advocates portray a profit-centric corporation as socially irresponsible, the opposite is true. A profit-centric firm provides the optimal amount of socially responsible behavior. Although concern with ethics and CSR has always been a part of doing business, business leaders today are beginning to think about ethics as a set of principles and guides of behavior rather than a set of rigid rules. In this sense, business ethics is not only an attempt to set a standard by which all of the employees of a company can know what is expected, but it is also an attempt to encourage employees, managers, and board members to think about and make decisions through the prism of a shared set of values (Coors & Wayne, 2005). Q7 Part (b) A discourse that seeks to persuade or convince is not made up of an accumulation of disorderly arguments, indefinite in number; on the contrary, it requires an organization of selected arguments

Tuesday, August 27, 2019

A genetically modified organism Research Paper Example | Topics and Well Written Essays - 1750 words

A genetically modified organism - Research Paper Example Thus, in this paper I am going to examine the process of GMO production and storage in order to assess the risks connected with their consumption. Genetically modified organisms are organisms (bacteria, viruses, plants, animals) whose genetics were changed in order to acquire new functions. As genes are responsible for carrying information in the sequences and structures of DNA, they define particular characteristics of organisms. Advances in biotechnology now permit scientists to extract, change, and add various genes to organisms. It is even possible to transfer genes between unrelated organisms. Most often scientists add genes to plants in order to make them resistant to certain viruses (GMOs, 2010). Genetically modified organisms are used in medicine, agriculture, biology, and textile production. Usually when people speak about genetically modified organisms they mean genetically modified crops, which have become a part of the everyday life of consumers around the world. Tobacco was the first plant to receive additional genes to resist herbicides. Later it was modified to be capable of resisting insects, and the ripening qualities of the crops were also changed. In 1995 the Food and Drug Administration approved commercial usage of GM potato, corn, soy, and tomato, and the variety of plants with additional genes increased significantly (Swanson, 2013). People usually underestimate the quantity of GM crops that they consume. However, by the end of 2012 more than 144 kinds of plants had received access to the market in the United States of America. So an impressive part of the crops consumed by Americans in the following years was genetically modified: according to USDA statistics, 93% of all soy, 88% of all corn, and 94% of cotton (Swanson, 2013). Today such products as tomatoes and cantaloupes with advanced ripening characteristics, beets and soybeans with improved herbicides

Monday, August 26, 2019

William Hill portfolio diligence Essay Example | Topics and Well Written Essays - 7500 words - 1

William Hill portfolio diligence - Essay Example The purpose, findings, and research questions that will guide the study are generated from the effects on shareholders and the motives for acquisition. The data for this study will be secondary data from the Journal of Financial Economics. In the background study, I analyse the reasons for William Hill's takeover and set out the post-takeover performance of the company. The motives for the acquisition of Stanley Leisure and the effect on shareholders' value are critiqued in the following project. In this project, I have used the capital asset pricing model (CAPM) in the methodological analysis and OLS regression for the data sources. I can determine whether William Hill's merger was worthwhile through liquidity-based explanations. Mergers and acquisitions involve the amalgamation of two or more firms or a purchase directed at a current firm within a foreign country. Whiting (1976) established that acquisitions are effected through capital transfer, the use of marketing skills, and the presence of management skill to increase the efficiency of the companies concerned. The development of better information systems in global trade can enable a company to increase its level of performance and meet its customers' needs better. I will discuss in detail the research questions that will assist William Hill in the acquisition of Stanley Leisure; the due diligence needed by William Hill is to provide the shareholders with a more adequate concept of the underlying William Hill acquisition portfolio than the prevailing market allocation of betting services.

Sunday, August 25, 2019

Ethical Study Review Assignment Example | Topics and Well Written Essays - 1250 words

Ethical Study Review - Assignment Example This paper will discuss the scenario given according to the guidelines provided. Objective analysis is paramount in understanding an ethical dilemma. According to the scenario, we are told that the 96-year-old suffers from liver cancer. There are no other complications mentioned in the details. Moreover, one is not able to assert the advancement of the disease. The National Cancer Institute points out that the symptoms associated with liver cancer are unusually severe. Some of the most regular symptoms that are presented by patients who are diagnosed with liver cancer include pain in the upper abdomen, lumps in the upper abdomen, loss of appetite, yellow skin and eyes, fever, fatigue, and weight loss. According to the details given, the 96-year-old patient depicts these symptoms. The second person of interest is the daughter of the patient who is a naturopathic physician. Naturopathic physicians are trained in naturopathy practices that are regarded as traditional approaches to health-related issues. The daughter insists that she has the capability of healing her father with some smelly tarry substance. Research shows that chemical poisoning symptoms are similar to those depicted by liver cancer patients. The fact that the patient is in pain can draw several suggestions. Firstly, the daughter’s drugs can be reacting with other treatments; secondly, the daughter could probably be poisoning her father; lastly, since his liver is not functioning appropriately, there is a probability that the patient has an accumulation of toxins in his body. There is an ethical dilemma according to the scenario presented. An ethical dilemma can be defined as a complex circumstance which involves an apparent mental disagreement between moral imperatives. 
For example, human beings have certain complex relationships that cannot be avoided: if a person tries to murder another individual, there is a high prospect that the would-be murderer is mentally disturbed. Therefore, the best method of resolving ethical dilemmas is through ethical decision-making techniques. In most cases, ethical decision making involves five chief steps. First, those involved should be in a position to recognize the dilemma as an ethical issue, which requires knowledgeable individuals. Secondly, the individuals involved should be able to gather all the facts that correspond to the ethical dilemma. Thirdly, they should appropriately evaluate some of the optional actions they can employ when addressing the ethical dilemma. The most prevalent approaches to ethical dilemmas include the utilitarian approach, rights-based approach, virtue approach, common good approach, and justice approach. Fourthly, the individuals have the right to test the decision that they have taken, and, lastly, they are able to resolve their ethical dilemma with the decision made. Ethical decision-making is not easy, but one can arrive at the right decisions by following these five procedural steps. Moreover, the individuals should not overlook the consequences that might be brought about by the resolution of the ethical dilemma. This ensures that the resolution of the dilemma does not inflict any form of harm on any of the parties involved. In relation to the scenario, the other hospital attendants are seemingly upset, since they think that the daughter is hastening her father's death. Furthermore, there is no clear license that depicts the

Saturday, August 24, 2019

See Below Essay Example | Topics and Well Written Essays - 250 words - 17

See Below - Essay Example This administrative task needs to be accomplished in the first step of the implementation of the Act. This is so because this expenditure is tied together with the Cap insurance company. The third administrative task is calculating and enforcing the refundable tax credits for Americans with incomes of between 100% and 400% of the federal poverty line (FPL). The hard part of this task comes in because the tax credit is calculated on a reducing scale basis (a sliding scale). Another administrative task comes in for those who are already covered under other insurance schemes. This is true, for example, for those under 18 years who are covered under their parents or grandparents but who, on turning 18, will need to be moved to independent coverage. The last administrative task is linking the insurance that this health care Act advocates with the hospitals which will be handling the patients; these will need to coordinate with the other stakeholders before any costs and charges can be made or deducted on the part of the hospital. This will also require those in charge of the Act to be quick in making the payments once the hospital forwards the hospital

Friday, August 23, 2019

Desert Exile Essay Example | Topics and Well Written Essays - 1000 words

Desert Exile - Essay Example She describes the hard life of the Japanese Americans during the Depression and after they were forced to live in the internment camp. The author’s father came to the United States in 1906, and her mother came later to marry him. Belonging to a fairly well-off family, Uchida did not experience the hardships her friends did during the Depression. She describes her angst during her childhood and her mother’s sensitivity and her father’s kindness and hospitality. The book throws light on some of the customs and ways of life of the first generation Japanese Americans. Although the book is well written and is full of insights, Yoshiko Uchida, who belongs to the community of Japanese Americans herself, does not seem to give a balanced view of the experiences of her community. According to a critic, Uchida "is too close to her subjects and does not have enough critical distance to give a balanced and accurate account of the internment and experience of Japanese Americans in the years before the internment." Writing about the deluge of Japanese visitors they had when she was a child, Uchida writes, "I felt as though our house was the unofficial alumni headquarters of Doshisha, and I one of its most reluctant members." (Uchida p. 11) As the author was but a young child at the time, the number of visitors must have seemed enormous to her. In chapter 2, the author speaks about her insecurities which continued into her adult life. According to her, the insecurity was probably caused by the feeling of being different. "Perhaps it was the constant sense of not being as good as the hakujin (white people)" (Uchida 27). She concedes that although they spoke Japanese at home and observed Japanese customs, her family was more liberal than many of the other Japanese families. "As a result, our upbringing was less strict than that of some of my Nisei

Literature review Essay Example | Topics and Well Written Essays - 1750 words

Literature review - Essay Example The next part talks about the business structure and the major products and services offered by the company. The company's history has also been reviewed, highlighting the major achievements of the company over the past years of its business operations. The existing management structure and the corporate governance mechanisms followed by the organisation have also been studied. The business strategies followed by the company and its financial performance in recent years have also been reviewed in this report. Company Overview EasyJet plc is a UK-based organisation which operates its business in the airlines industry. The company is headquartered in Luton, UK and was founded in the year 1995. The airline operations of the company include 600 routes and offer services to around 130 airports located in 30 countries around the world (easyJet plc, 2012). Apart from carrying passengers, the company is also engaged in the business of leasing and trading aircraft. It has been estimated that the company's total fleet included around 200 aircraft as of 24 August 2012. The shares of EasyJet are publicly traded on the London Stock Exchange (LSE) with the ticker symbol "EZJ" (Yahoo Finance, 2012). Business Description Airline services are offered by EasyJet plc within Europe. The company runs its business, along with its subsidiaries, on point-to-point, short-haul routes. The company operates across 130 airports spread over 30 countries and has more than 580 routes at present. The company's fleet mainly consists of Airbus aircraft and some Boeing aircraft as well. The engines of the aircraft flown by EasyJet plc are supplied by IAE (International Aero Engines) and CFM International. The maintenance of engines and aircraft is mostly undertaken by Virgin, SRT, GE, Aeroton, MTU, Lufthansa Technik, and BF Goodrich. 
Aircraft are also obtained on lease by EasyJet from various organisations such as BOC Aviation, AWAS, GECAS, Royal Bank of Scotland, Nomura Babcock & Brown, Santander, and Sumisho. The purchase of aircraft by EasyJet plc is mainly financed by financial institutions such as Bank of Tokyo-Mitsubishi, Alliance & Leicester, BNP Paribas, HSH Nordbank, Caylon, KfW, PK AirFinance, Natixis, Sumitomo Mitsui Banking Corporation, Royal Bank of Scotland, and WestLB. The major insurers of the company include La Reunion, Global, Canada Life, AIG, QBE, Houston Casualty, and Chubb. EasyJet Switzerland and EasyJet Airline Company Ltd. are the two subsidiary companies of EasyJet plc engaged in airline operations. The aircraft leasing and trading activities are carried on by the company's other subsidiaries, such as EasyJet Sterling Ltd., EasyJet Aircraft Co. Ltd., and EasyJet Leasing Ltd. Apart from scheduled airline and in-flight services, the other associated services offered by EasyJet include online reservations, hotel rooms and car hire facilities. Company History Stelios Haji-Ioannou is credited as the founder of EasyJet, and his objective in setting up this company was to offer low-cost scheduled airline services within Europe. The first aircraft wholly owned by the company was provided in the year 1996. The company's website was launched in the year 1997. Since its inception the company has continued to expand its operations all over Europe and started implementing various acquisitions

Thursday, August 22, 2019

Reviews on Financial Risk Management Essay Example for Free

Reviews on Financial Risk Management Essay I. Introduction II. The definition and types of financial risk III. Risk management and the theoretical foundation IV. The process of financial risk management V. The challenges faced by the modern financial risk management theories [Abstract] Financial risks are exposures to uncertainty for the participants in the financial market. Financial risks can be divided into four categories: market risk, credit risk, liquidity risk and operational risk. Risk management has become more and more crucial for a market participant to survive in the highly competitive market. With the development of the global financial market, there are many phenomena that cannot be explained by traditional financial risk management theories. These phenomena have accelerated the development of behavioral finance and econophysics. Financial risk management theories have already improved a lot over the past decades, but they still face some challenges. Therefore, this report will review some important issues in financial risk management, introduce some of the theoretical foundations of financial risk management, and discuss the challenges faced by modern financial risk management. I. Introduction Financial risk is one of the basic characteristics of the financial system and financial activities. Financial risk management has been an important component of the economic and financial system since the emergence of finance in human society. Over the past few decades, economic globalization spread across the world with the collapse of the Bretton Woods system. Against this background, the financial markets have become even more unstable due to some significant changes. 
Many events happened during these decades, including the "Black Monday" of 1987, the stock crisis in Japan in 1990, the European monetary crisis in 1992, the Asian financial storm of 1997, the bankruptcy of Long-Term Capital Management in 1998, and the most recent global financial crisis triggered in 2008. All these events brought enormous destruction to the smooth development of the world economy and the financial market. At the same time, they also helped people realize the necessity and urgency of financial risk management. Why did these crises happen, and how can risk be avoided as much as possible? These questions have taken on greater significance for the further development of the economy. Therefore, this report will review some important issues in financial risk management, introduce some of the theoretical foundations of financial risk management, and discuss the challenges faced by modern financial risk management. II. The Definition and Types of Financial Risk The word "risk" itself is neutral, which means we cannot call risk a good thing or a bad one. Risk is one of the internal features of human behavior, and it comes from the uncertainty of future results. Therefore, briefly speaking, risk can be defined as exposure to uncertainty. In the definition of risk, there are two extremely important factors. The first is uncertainty. Uncertainty can be considered as the distribution of the possibility of one or more results. To study risk, we need a precise description of the possibility of the risk. However, from the point of view of a risk manager, the possible future results and the characteristics of the possibility distribution are usually unknown, so subjective factors are frequently needed when making decisions. The second factor is the exposure to uncertainty. Different human activities are affected to different degrees by the same uncertainty. 
For example, the future weather is uncertain to everyone, but the influence it has over agriculture can be far deeper than that over the finance industry or other industries. Based on the above description of risk, we can give a clearer definition of financial risk. Financial risk is the exposure to uncertainty of the participants in financial market activities. The participants mainly refer to financial institutions and non-financial institutions, usually not including individual investors. Financial risk arises through countless transactions of a financial nature, including sales and purchases, investments and loans, and various other business activities. It can arise as a result of legal transactions, new projects, mergers and acquisitions, debt financing, the energy component of costs, or through the activities of management, stockholders, competitors, foreign governments, or weather. (Karen A. Horcher). Financial risk can be divided into the following types according to the different sources of risk. A. Market risk. Market risk is the risk that the value of a portfolio, whether an investment portfolio or a trading portfolio, will decrease due to changes in the value of the market risk factors. The four standard market risk factors are stock prices, interest rates, foreign exchange rates, and commodity prices. The influence these market factors have over financial participants can be both direct and indirect, for example through competitors, suppliers or customers. B. Credit risk. Credit risk is an investor's risk of loss arising from a borrower who does not make payments as promised. Such an event is called a default. Almost all financial transactions involve credit risk. In recent years, with the development of the internet financial market, the problem of internet finance credit risk has also become prominent. C. Liquidity risk. Liquidity risk is the risk that a given security or asset cannot be traded quickly enough in the market to prevent a loss. 
Liquidity risk arises from situations in which a party interested in trading an asset cannot do so because nobody in the market wants to trade that asset. Liquidity risk becomes particularly important to parties who are about to hold or currently hold an asset, since it affects their ability to trade. D. Operational risk. Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. Nowadays, the study and management of operational risk is getting more attention. Organizations are trying to perfect their internal controls to minimize the possibility of risk. At the same time, the mature theories of other disciplines, such as operational research methods, are also being introduced into the management of operational risk. Overall, financial risk management is a process for dealing with the uncertainty resulting from financial markets. It involves assessing the financial risks facing an organization and developing management strategies consistent with internal priorities and policies. Addressing financial risks proactively may provide an organization with a competitive advantage. It also ensures that management, operational staff, stockholders, and the board of directors are in agreement on key issues. III. Risk Management and the Theoretical Foundation Financial market participants' attitudes towards risk can be basically divided into the following categories. A. Avoid risk. It is irrational for some companies to think that they can avoid financial risks through careful management, for the following reasons. First of all, risk is an internal feature of human activities. Even when it has no direct influence, it can generate indirect influence through competitors, suppliers or customers. Moreover, sometimes it may be a better choice for the manager of the company to accept risk. 
For example, when the profit margin of the company is higher than the market profit margin, the manager can increase the value of the company by using the financial leverage principle. Obviously, it will be harder to increase the value of a company if the manager always uses a risk-avoidance strategy. B. Ignore risk. Some participants tend to ignore the existence of risks in their financial activities, and thus take no measures to manage the risk. According to research by Loderer and Pichler, almost all the Swedish multinational companies ignored the exchange rate risk that they were facing. C. Diversify risk. Many companies and institutions choose to diversify risk by putting their eggs into different baskets, which means lowering risk by holding assets of different types and low correlation. The cost of doing so is relatively low. However, for small corporations or individuals, diversifying risk is somewhat unrealistic. Meanwhile, modern portfolio theory also tells us that diversification can only lower unsystematic risk, not systematic risk. D. Manage risk. Presently, most people have realized that financial risk cannot be eliminated, but it can be managed through financial theory and tools. For instance, participants can break down the risk they are exposed to by using financial engineering methods. After keeping some necessary risk, they can transfer the rest to others by using derivatives. But why do we need financial risk management? In other words, what is the theoretical foundation for the existence of financial risk management? Early financial theory argued that financial risk management is not necessary. The Nobel Prize winners Miller and Modigliani pointed out that in a perfect market, financial measures like hedging cannot influence the firm's value. Here the perfect market refers to a market without tax or bankruptcy cost, in which the market participants possess complete information. 
Therefore, the managers do not need to worry about financial risk management. A similar theory also says that even though there will be slight moves in the short run, in the long run the economy will move relatively stably. So risk management that is used to prevent short-term losses is just a waste of time and resources. Namely, since there is no financial risk in the long run, financial risk management in the short run will just offset the firm's profits, and therefore reduce the firm's value. However, in reality, financial risk management has attracted more and more attention. The need for risk management theory and measures has soared to unprecedented heights for both the regulators and the participants of the financial market. Those who think risk management is necessary argue that the need for it is mainly based on the imperfection of the market and on risk-averse managers. Since the real economy and the financial market are not perfect, the manager can increase a firm's value by managing risk. The imperfection of the financial market shows in the following aspects. First, various types of tax exist in the real market. These taxes influence the earnings flow of the firm, and hence the firm's value. So the Modigliani and Miller theory does not hold for the real economy. Secondly, there are transaction costs in the real market, and the smaller the transaction, the higher the relative cost. Last but not least, financial market participants cannot obtain complete information. Therefore, firms can benefit from risk management. First, the firm can maintain a stable cash flow, and thus avoid the external financing costs caused by cash flow shortages, decrease the fluctuation range of its stock and keep a good credit record. Secondly, a stable cash flow can guarantee that a company can invest successfully when an opportunity occurs. 
This gives it a competitive advantage over firms that do not have stable cash flow. Thirdly, a firm possesses more resources and knowledge than an individual, which means it can have more complete information and manage financial risks more efficiently. If the manager of a firm is risk-averse, financial risk management can also improve the manager's utility. Much research shows that financial risk management activities are closely related to managers' aversion to risk. For example, Tufano studied the risk management strategies of the American gold industry, and found that the risk management of firms in that industry is closely related to the reward and punishment contracts that the managers had signed. Managers and employees are full of enthusiasm about risk management because they put a great amount of invisible capital into the firm. This invisible capital includes human capital and specific skills. So the financial risk management of firms becomes a natural reaction to protect their invested assets. In conclusion, although controversy continues about financial risk management, there is no doubt that its theory and tools are adopted and used by market participants, and continue to be enriched and innovated. IV. The Process of Financial Risk Management The process of financial risk management comprises strategies that enable an organization to manage the risks associated with financial markets. Risk management is a dynamic process that should evolve with an organization and its business. It involves and impacts many parts of an organization including treasury, sales, marketing, tax, commodity, and corporate finance. A company's financial risk management can be divided into three major steps, namely identifying or confirming risk, measuring risk and managing risk. Let us illustrate this using market risk as an example. 
First, confirm the market risk factors that have a significant influence on the company, and then measure those risk factors. At present, the frequently used approaches to measuring market risk can be divided into relative measures and absolute measures. A. The relative measure method. It mainly measures the sensitivity relationship between market factor fluctuations and financial asset price changes, such as duration and convexity. B. The absolute measure methods. These include the variance or standard deviation, the absolute deviation indicator, minimax, and value at risk (VaR). VaR originated in the 1980s and is defined as the maximum loss that may occur within a certain confidence level. In mathematics, VaR is expressed as the α-quantile of the profit and loss distribution of an investment vehicle or portfolio, stated as follows: Pr(ΔP ≤ −VaR) = α, where ΔP is the change in the investment's value over the holding period and (1 − α) is the confidence level. For example, if the 10-day VaR of a company is 100 million U.S. dollars at the 95% confidence level, this means that over the next 10 days the probability of a loss exceeding 100 million U.S. dollars is only 5%. Through this quantitative measure, a company can clarify its risks and thus carry out targeted quantitative risk management activities in the next step. (Guanghui Tian) The last step is managing risk. Once the company has identified its major risks and gained a quantitative grasp of them through risk-measurement methods, it can use various tools to manage the risk quantitatively. There are different types of risk for different companies, and even for the same company at different stages of development. So optimizing risk management strategies requires attention to specific conditions. In general, when a company considers its risk exposure more than it can bear, the following two methods can be used to manage the risk. 
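The VaR formula above can be sketched in code. The following is a minimal illustration, not from the essay, of a parametric (variance-covariance) VaR under the common assumption that returns are normally distributed over the holding period; the portfolio size and volatility are made-up numbers.

```python
import math
from statistics import NormalDist

def parametric_var(position, sigma_daily, horizon_days, confidence=0.95):
    """Parametric VaR sketch: the loss not exceeded with probability
    `confidence` over `horizon_days`, assuming normally distributed
    daily returns with zero mean and volatility `sigma_daily`."""
    # z-score of the lower alpha-quantile, roughly -1.645 at 95%
    z = NormalDist().inv_cdf(1 - confidence)
    # scale daily volatility to the horizon by the square root of time
    sigma_h = sigma_daily * math.sqrt(horizon_days)
    # worst expected return at the confidence level, reported as a positive loss
    return -position * z * sigma_h

# Illustrative portfolio: a $2bn position with 1.5% daily volatility
var_10d = parametric_var(position=2_000_000_000, sigma_daily=0.015, horizon_days=10)
print(f"10-day 95% VaR: ${var_10d:,.0f}")
```

Read the result as in the essay's example: the loss over the next 10 days exceeds this VaR figure with a probability of only 5%.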
The first way is changing the company's operating mode to bring the risk back to a sustainable level. This method is also known as "operational hedging". Companies can adjust the supply channels of raw materials, set up production plants in the sales regions directly, or adjust the volumes of foreign-exchange inflows and outflows, among other methods, to achieve this purpose. The second way is to adjust the company's risk exposure through the financial markets. Companies can take advantage of the financial markets' wide range of products and tools to hedge risk, that is, to offset the risk the company may face by holding a contrary position. Various financial derivative instruments now provide a sufficient and diverse selection of products. Derivative products are financial instruments whose value is derived from some underlying asset. The underlying may be interest rates, exchange rates, bonds, stocks, stock indices or commodity prices, but it can also be credit, the weather, or even the snowfall at some ski resort. Common derivatives include forward contracts, swaps, futures and options, and so on. V. The Challenges Faced by Modern Financial Risk Management Theory Over recent years, as the focus of risk management shifts from a control function to one of global financial optimization, the concern shifts from modeling the behavior of engineered contracts in selected markets to modeling the evolution of the entire economy. This change of focus calls for a vastly improved ability to model the time evolution of economic quantities. (Sergio Focardi) While those who do risk management are interested in predicting whether assets will go up or down, the overriding interest is in the relationships among the movements of different assets. Though linear methods such as variance-covariance help to explain the co-movements of markets, a different set of tools is necessary to manage risk better.
(Jose Scheinkman) Paradigms such as learning, nonlinear dynamics and statistical mechanics will affect how risk, from market and credit risk to operational risk, is managed. While the first attempts to use some of these tools focused on predicting market movements, it is now clear that these methodologies might positively influence many other aspects of economics. For instance, they could be useful in understanding phenomena such as price formation, the emergence of bankruptcy chains, or patterns of boom-and-bust cycles. Lars Hansen, Homer J. Livingston professor of economics at the University of Chicago, remarks that these new paradigms will bring to asset pricing and risk management an enhanced understanding once the implicit underlying fundamentals are better understood. He says, "What is needed is a formal specification of the market structure, the microeconomic uncertainty, and the investor preferences that is consistent with the posited nonlinear models." Commenting on the need to bring together the pricing of financial assets and the real economy, he notes that an understanding of what is behind pricing leads to a better understanding of how assets behave. "For risk management decisions that entail long-run commitments," he observes, "it is particularly important to understand, beyond a purely statistical model, what is governing the underlying movements in security prices." Blake LeBaron, professor of economics at the University of Wisconsin-Madison, observes that there is now more interest in macro moves than in individual markets. But traditional macroeconomics typically provides only point forecasts of macro aggregates. In the risk management context, a simple point forecast is not sufficient; a complete validated probabilistic framework is needed to perform operations such as hedging or optimization. What one is after is an entire statistical decision-making process. The big issue is the distinction between forecasts and decisions.
(Blake LeBaron) Arriving at an entire statistical decision-making process implies reaching a better scientific explanation of economic reality. New theories are attempting to do so through models that reflect empirical data more accurately than traditional models. These models will improve our ability to forecast economic and financial phenomena. The endeavor is not without its challenges. Our ability to model the evolution of the economy is limited. Prof. Scheinkman notes that unlike a physical system, where better data and more computing power can lead to better predictions, in social systems agents start to use new methods whenever a new level of understanding is gained. Prof. Scheinkman says, "Less ambitious goals have to be set. Gaining an understanding of the broad features of how the structure of an economic system evolves, or of relationships between parts of the system, might be all that can be achieved." Prof. Scheinkman remarks that we might have to concentrate on finding those patterns of economic behavior that are not destroyed, at least not in the short run, by the agents' learning process. VI. Conclusion The theoretical foundation of modern financial risk management is the Efficient Markets Hypothesis, which holds that the financial market is a linear, balanced system. In this system investors are rational, and they make their investment decisions with rational expectations. The hypothesis implies that future changes in the prices of financial assets bear no relation to historical information, and that asset returns should obey a normal distribution. However, the study of econophysics shows that the financial market is a very complicated nonlinear system. At the same time, behavioral finance tells us that investors are not all rational when making decisions. They usually cannot completely understand the situation they face, contrary to what the hypothesis assumes.
Most of the time they exhibit cognitive biases, because they use experience or intuition as the basis for making decisions. Reflected in investment behavior, this leads to irrational phenomena such as overreaction and underreaction. Therefore, it will be meaningful to study how to improve the existing financial risk management tools, and especially how to introduce nonlinear science and behavioral studies into the measurement of financial risk.
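Returning to the VaR measure from section IV, the definition can be illustrated numerically. A minimal sketch, assuming normally distributed returns and the standard parametric (variance-covariance) VaR formula; the portfolio figures below are hypothetical, not taken from the text:

```python
from statistics import NormalDist

def parametric_var(value, mu_daily, sigma_daily, confidence=0.95, horizon_days=10):
    """Parametric (variance-covariance) VaR over a multi-day horizon.

    Assumes i.i.d. normally distributed daily returns, so volatility scales
    with the square root of time. Returns the loss threshold that is
    exceeded with probability (1 - confidence).
    """
    z = NormalDist().inv_cdf(confidence)              # ~1.645 at 95%
    horizon_sigma = sigma_daily * horizon_days ** 0.5  # square-root-of-time rule
    return value * (z * horizon_sigma - mu_daily * horizon_days)

# Hypothetical $2bn portfolio, zero mean daily return, 1% daily volatility:
var_10d = parametric_var(2_000_000_000, 0.0, 0.01)
print(f"10-day 95% VaR: {var_10d / 1e6:.0f} million USD")
```

With these made-up inputs, the 10-day 95% VaR comes to roughly 100 million dollars, matching the shape of the example in the text: a loss larger than that figure should occur over the next 10 days with probability of only 5%.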

Wednesday, August 21, 2019

Ocean Thermal Energy Conversion Otec Environmental Sciences Essay

Ocean Thermal Energy Conversion Otec Environmental Sciences Essay The oceans cover a little more than 70 percent of the earth's surface. This makes them the world's largest solar energy collector and energy storage system. On an average day, 60 million square kilometers of tropical seas absorb an amount of solar radiation equal in heat content to about 250 million barrels of oil. Throughout its history, mankind has depended upon its ability to conquer the forces of nature and to utilize those forces to serve its needs. Energy technology is certainly one of the most important factors in the emergence of mankind as the dominant species of this planet. The invention of the practical steam engine by James Watt brought about the development of large factories, steam ships and the steam locomotive. First wood was used, then coal. At about the same time, the use of coal instigated advances in metallurgy. Petroleum from natural seepage has been used since ancient times for lighting, lubrication and waterproofing; the introduction of drilling for oil greatly increased its supply, and the industrial revolution switched into high gear. One problem is that these natural resources are limited and will be used up within a relatively few years. The development of nuclear power was touted as the answer to all mankind's energy woes. It has not turned out that way. The elimination of government subsidies for nuclear power plants has made them quite unaffordable, and no insurer in the world will write disaster coverage for a nuclear power plant. The concept of OTEC (ocean thermal energy conversion) has existed for over a century, fantasised by Jules Verne in 1870 and conceptualised by the French physicist Jacques Arsene d'Arsonval in 1881. Despite this, an operating OTEC power facility was not developed until the 1920s. 2.2 WHAT IS OTEC OTEC, ocean thermal energy conversion, is an energy technology that converts solar radiation to electric power.
OTEC systems use the ocean's natural thermal gradient: the temperature difference of about 20°C between the warm surface water and the cold deep water below 600 metres. Given this difference, an OTEC system can produce a significant amount of power. The oceans are thus a vast renewable resource. The cold deep water used in the OTEC process is also rich in nutrients, and it can be used to culture both marine organisms and plant life near the shore or on land. The total influx of solar energy into the earth is thousands of times as great as mankind's total energy use. All of our coal, oil and natural gas are the result of the capture of solar energy by life of the past. There have been many projects for harnessing solar energy, but most have not been successful because they attempt to capture the energy directly. The idea behind OTEC is to use a natural collector, the sea, instead of an artificial collector. 2.3 HOW OTEC WORKS Warm water is collected on the surface of the tropical ocean and pumped by a warm-water pump. The water is pumped through the boiler, where it heats the working fluid, usually propane or some similar material. The propane vapour expands through a turbine coupled to a generator that generates electric power. Cold water from the bottom is pumped through the condenser, where the vapour returns to the liquid state, and the fluid is pumped back into the boiler. Some small fraction of the power from the turbine is used to pump the water through the system and to power other internal operations, but most of it is available as net power. There are two different kinds of OTEC power plants: the land-based and the floating plant. First, the land-based pilot plant will consist of a building containing the heat exchangers, turbines, generators and controls. It will be connected to the ocean via several pipes, and to an enormous fish farm (the size of 100 football fields) by other pipes.
Warm water is collected through a screened enclosure close to the shore. A long pipe laid on the slope collects cold water. Power and fresh water are generated in the building by the equipment. Used water is first circulated into the marine culture pond (fish farm) and then discharged by a third pipe into the ocean, downstream from the warm-water inlet. This is done so that the outflow does not re-enter the plant, since re-use of warm water would lower the available temperature difference. The other kind of OTEC power plant is the floating power plant, which works in the same way as the land-based plant; the apparent difference is that it floats. As for where OTEC can be used: OTEC can be sited anywhere across about 60 million square kilometres of tropical oceans, anywhere there is deep cold water lying under warm surface water. This generally means between the Tropic of Cancer and the Tropic of Capricorn. Surface water in these regions, warmed by the sun, generally stays at 25 degrees Celsius or above. Ocean water more than 1000 meters below the surface is generally at about 4 degrees Celsius. 2.4 TYPES OF OTEC There are three types of OTEC designs: open cycle, closed cycle and hybrid cycle. Closed cycle Closed-cycle systems use a fluid with a low boiling point, such as ammonia, to rotate a turbine to generate electricity. Here is how it works: warm surface seawater is pumped through a heat exchanger where the low-boiling-point fluid is vaporized. The expanding vapour turns the turbo-generator; then cold, deep seawater pumped through a second heat exchanger condenses the vapour back into a liquid, which is then recycled through the system. Open cycle Open-cycle OTEC uses the tropical ocean's warm surface water to make electricity. When warm seawater is placed in a low-pressure container, it boils. The expanding steam drives a low-pressure turbine attached to an electrical generator.
The steam, which has left its salt behind in the low-pressure container, is almost pure fresh water. It is condensed back into a liquid by exposure to cold temperatures from deep ocean water. Hybrid cycle A hybrid system combines the features of both the closed-cycle and the open-cycle systems. In a hybrid system, warm seawater enters a vacuum chamber where it is flash-evaporated into steam, similar to the open-cycle evaporation process. The steam vaporizes a low-boiling-point fluid that drives a turbine to produce electricity. 2.5 ADVANTAGES AND DISADVANTAGES OF OTEC The advantages of OTEC: first, OTEC uses a clean, renewable, natural resource; warm surface seawater and cold water from the ocean depths replace fossil fuels to produce electricity. Second, suitably designed OTEC plants will produce little or no carbon dioxide or other polluting chemicals. Third, OTEC systems can produce fresh water as well as electricity, a significant advantage in island areas where fresh water is limited. Fourth, there is enough solar energy received and stored in the warm tropical ocean surface layer to provide most, if not all, of present human energy needs. Last, the use of OTEC as a source of electricity will help reduce the state's almost complete dependence on imported fossil fuels. The disadvantages of OTEC: first, OTEC-produced electricity at present would cost more than electricity generated from fossil fuels at their current costs. Second, OTEC plants must be located where a difference of about 20°C occurs year round, and ocean depths must be available fairly close to shore-based facilities for economic operation; floating plant ships could provide more flexibility. Third, no energy company will put money into this project because it has only been tested at very small scale. Last, the construction of OTEC plants and the laying of pipes in coastal waters may cause localised damage to reefs and near-shore marine ecosystems.
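The 20°C requirement above is fundamental: with surface water near 25°C and deep water near 4°C (the figures quoted earlier in this essay), the Carnot limit caps the efficiency of any OTEC heat engine at only a few percent. A quick sketch of that textbook calculation:

```python
def carnot_efficiency(t_hot_celsius, t_cold_celsius):
    """Maximum (Carnot) efficiency of a heat engine operating between two
    reservoirs; temperatures are converted from Celsius to kelvin."""
    t_hot_k = t_hot_celsius + 273.15
    t_cold_k = t_cold_celsius + 273.15
    return 1.0 - t_cold_k / t_hot_k

# OTEC reservoirs: ~25 C tropical surface water, ~4 C deep water
eff = carnot_efficiency(25, 4)
print(f"Theoretical maximum efficiency: {eff:.1%}")
```

The result is about 7%, and real plants achieve less; this is why OTEC plants must pump enormous volumes of water to deliver useful net power.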
2.6 ENVIRONMENTAL IMPACTS OF OTEC OTEC systems are, for the most part, environmentally benign. Although accidental leakage of closed-cycle working fluids can pose a hazard, under normal conditions the only effluents are the mixed seawater discharges and the dissolved gases that come out of solution when seawater is depressurized. Although the quantities of outgassed species may be significant for large OTEC systems, with the exception of carbon dioxide these species are benign. Carbon dioxide is a greenhouse gas and can affect global climate; however, OTEC systems release one or two orders of magnitude less carbon dioxide than comparable fossil fuel power plants, and those emissions may be sequestered easily in the ocean or used to stimulate marine biomass production. OTEC mixed seawater discharges will be at lower temperatures than seawater at the ocean surface. The discharges will also contain high concentrations of nutrients brought up with the deep seawater and may have a different salinity. It is important, therefore, that release back into the ocean is conducted in a manner that minimizes unintended changes to the ocean mixed-layer biota and avoids inducing long-term surface temperature anomalies. Analyses of OTEC effluent plumes suggest that discharge at depths of 50-100 m should be sufficient to ensure minimal impact on the ocean environment. Conversely, the nutrient-rich OTEC discharges could be exploited to sustain open-ocean mariculture.

Tuesday, August 20, 2019

Comparative Analysis of Rank Techniques

Comparative Analysis of Rank Techniques Abstract There is a vast amount of web data available in the form of web pages on the World Wide Web (WWW). So whenever a user makes a query, a lot of search results with different web links corresponding to the user's query are generated, of which only some are relevant while the rest are irrelevant. The relevancy of a web page is calculated by search engines using page ranking algorithms. Most page ranking algorithms use web structure mining and web content mining to calculate the relevancy of a web page. Most of the ranking algorithms given in the literature are either link- or content-oriented and do not consider user usage trends. The algorithm called PageRank was introduced by Google at the beginning. It was considered the standard page rank algorithm, because no other page rank algorithm was in existence. Later, extensions of the PageRank algorithm were incorporated with different variations, such as considering weights as well as visits of links. This paper presents a comparison among the original PageRank algorithm and its various variations. Keywords: inlinks, outlinks, search engine, web mining, World Wide Web (WWW), PageRank, Weighted PageRank, VOL I. Introduction The World Wide Web is a vast resource of hyperlinked and varied information including text, image, audio, video and metadata. It is anticipated that the WWW has expanded by about 2000% since its inception and is doubling in size every six to ten months. With the swift expansion of information on the WWW and the mounting requirements of users, it is becoming complicated to manage web information and comply with user needs. So users have to employ information retrieval techniques to find, extract, filter and order the desired information. The technique used filters the web pages according to the query generated by the user and creates an index. This indexing is related to the rank of a web page.
The lower the index value, the higher will be the rank of the web page. 1. Data Mining over Web 1.1 Web Mining Data mining, which facilitates knowledge discovery from large data sets by extracting potentially useful new patterns in the form of human-understandable knowledge and structuring the same, can also be applied over the web. The application, named Web Mining, thus becomes a technique for extracting useful information from a large, unstructured, heterogeneous data store. Web mining is quite an immense area with dozens of developments and technological enhancements. 1.2 Web Mining Categories According to the literature, there are three categories of web mining: Web Content Mining (WCM), Web Structure Mining (WSM) and Web Usage Mining (WUM). WCM includes the web page information. In it, the actual content pages, whether semi-structured hypertext or multimedia information, are used for searching purposes. WSM uses the central linkage that flows through the entire web. The linkage of web content is called a hyperlink. This hyperlinked structure is used for ranking the retrieved web pages on the basis of the query generated by the user. WUM returns dynamic results with respect to users' navigation. This methodology uses the server logs (the logs that are created during user navigation via searching). WUM is also called Web Log Mining because it extracts knowledge from usage logs. 1.3 Page Rank Algorithm (By Google) This is the original PageRank algorithm, postulated by Lawrence Page and Sergey Brin. The formula is PR(A) = (1 - d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), where PR(A) is the PageRank of page A, PR(Ti) is the PageRank of the pages Ti which link to page A, C(Ti) is the number of outbound links on page Ti, and d is a damping factor having a value between 0 and 1. The PageRank algorithm is used to determine the rank of a web page individually; it is not meant to rank a web site. Moreover, the PageRank of a page, say A, is recursively defined by the PageRanks of those pages which link to page A.
The PageRank of the pages which link to page A does not influence the PageRank of page A uniformly. In the PageRank algorithm, the PageRank of a page T is always weighted by the number of outbound links C(T) on page T: the more outbound links a page T has, the less page A benefits from a link to it on page T. The weighted PageRanks of the pages Ti are then added up, so an additional inbound link for page A will always increase page A's PageRank. In the end, the sum of the weighted PageRanks of all pages is multiplied by a damping factor d, which can be set between 0 and 1. Thus, the extent of the PageRank benefit a page receives from another page linking to it is reduced. The authors view PageRank as a model of user behaviour, where a surfer clicks on links at random irrespective of content. The random surfer visits a web page with a certain probability which is given solely by the number of links on that page. Thus, one page's PageRank is not completely passed on to a page it links to, but is divided by the number of links on the page. So the probability of the random surfer reaching one page is the sum of the probabilities of the random surfer following links to this page. This probability is then diminished by the damping factor d. Sometimes the user does not follow the links of a page directly, but instead jumps to some other page at random. This probability for the random surfer is captured by the damping factor d (also called the degree of probability, having a value between 0 and 1). Regardless of inbound links, the probability of the random surfer jumping to a page is always (1 - d), so a page always has a minimum PageRank. A revised version of the PageRank algorithm is given by Lawrence Page and Sergey Brin. In this algorithm, the PageRank of page A is given as PR(A) = (1 - d)/N + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), where N is the total number of all pages on the web. This revised version of the algorithm is basically equivalent to the original one.
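The formula can be computed iteratively. A minimal sketch, using the first version of the formula and a hypothetical four-page graph chosen only for illustration (it is not the example graph analysed later in this paper):

```python
def pagerank(links, d=0.85, iters=50):
    """Iterative PageRank: PR(A) = (1 - d) + d * sum(PR(T)/C(T)) over
    pages T linking to A, where C(T) is T's outbound-link count."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}  # start every page at rank 1
    for _ in range(iters):
        new = {}
        for p in pages:
            new[p] = (1 - d) + d * sum(
                pr[q] / len(links[q]) for q in pages if p in links[q]
            )
        pr = new
    return pr

# Hypothetical graph: A -> B, C;  B -> C;  C -> A;  D -> C
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

In this toy graph, page D has no inbound links, so its rank stays at the minimum (1 - d), while C, with three inbound links, ends up ranked highest.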
Regarding the random surfer model, this version gives the actual probability of a surfer reaching a page after clicking on many links, and the sum of the PageRanks of all pages will be one, since they form a probability distribution over all web pages. These versions of the algorithm do not differ fundamentally from each other: a PageRank calculated by using the second version has to be multiplied by the total number of web pages to get the corresponding PageRank that would have been calculated by using the first version. 1.4 Dangling Nodes A node is called a dangling node if it does not contain any outgoing link, i.e., if its out-degree is zero. The hypothetical web graph taken in this paper has a dangling node, namely node D. II Research Background Brin and Page (Algorithm: Google PageRank) The authors came up with the idea of using the link structure of the web to calculate the rank of web pages. This algorithm is used by Google on the results produced by keyword-based search. It works on the principle that if a web page has significant links towards it, then its links to other pages are also considered imperative. Thus, it depends on the backlinks to calculate the rank of web pages. The page rank is calculated by the formula given in equation 1: PR(u) = c * sum over v in B(u) of PR(v)/Nv (1), where u represents a web page, PR(u) and PR(v) represent the page ranks of web pages u and v respectively, B(u) is the set of web pages pointing to u, Nv represents the total number of outlinks of web page v, and c is a factor used for normalization. The original PageRank algorithm was modified considering that not all users follow direct links on the web. Thus, the modified formula for calculating page rank is given in equation 2: PR(u) = (1 - d) + d * sum over v in B(u) of PR(v)/Nv (2), where d is a dampening factor which represents the probability of the user following direct links, and it can be set between 0 and 1. Wenpu Xing and Ali Ghorbani (Algorithm: Weighted PageRank) The authors gave this method by extending the standard PageRank.
It works on the theory that if a page is vital, it has many inlinks and outlinks. Unlike standard PageRank, it does not distribute the page rank of a page equally among its outgoing linked pages. The page rank of a web page is divided among its outgoing linked pages in proportion to their importance or popularity (their numbers of inlinks and outlinks). Win(v,u), the popularity from the number of inlinks, is calculated from the number of inlinks of page u and the number of inlinks of all reference pages of page v, as given in equation 3: Win(v,u) = Iu / sum over p in R(v) of Ip (3), where Iu and Ip are the numbers of inlinks of pages u and p respectively, and R(v) represents the set of web pages pointed to by v. Wout(v,u), the popularity from the number of outlinks, is calculated from the number of outlinks of page u and the number of outlinks of all reference pages of page v, as given in equation 4: Wout(v,u) = Ou / sum over p in R(v) of Op (4), where Ou and Op are the numbers of outlinks of pages u and p respectively, and R(v) represents the set of web pages pointed to by v. The page rank using the Weighted PageRank algorithm is calculated by the formula given in equation 5: WPR(u) = (1 - d) + d * sum over v in B(u) of WPR(v) * Win(v,u) * Wout(v,u) (5). Gyanendra Kumar et al. (Algorithm: PageRank with Visits of Links (VOL)) This methodology includes the browsing behaviour of the user. The prior algorithms were based either on WSM or on WCM, but this one includes page ranking based on visits of links (VOL). It modifies the basic page ranking algorithm by considering the number of visits of inbound links of web pages, which helps to prioritize web pages on the basis of the user's browsing behaviour. In this algorithm the rank values are assigned in proportion to the number of visits of links: the higher rank value is assigned to the link most visited by users. The page rank based on visits of links (VOL) can be calculated by the formula given in equation 6.
PRVOL(u) = (1 - d) + d * sum over v in B(u) of Lu * PRVOL(v) / TL(v) (6), where PRVOL(u) and PRVOL(v) represent the page ranks of web pages u and v respectively, d is the dampening factor, B(u) is the set of web pages pointing to u, Lu is the number of visits of the link pointing from v to u, and TL(v) is the total number of visits of all links from v. Neelam Tyagi and Simple Sharma (Algorithm: Weighted PageRank Algorithm Based on Number of Visits of Links of Web Page) The authors combine the Weighted PageRank algorithm and the number of visits of links (VOL). This algorithm assigns more rank to the outgoing links having high VOL. It is based on inlink popularity, ignoring outlink popularity. In this algorithm, the number of visits of inbound links of web pages is taken into consideration in addition to the weights of the pages. The rank of a web page using this algorithm can be calculated as given in equation 7: WPRVOL(u) = (1 - d) + d * sum over v in B(u) of WPRVOL(v) * (Lu / TL(v)) * Win(v,u) (7), where WPRVOL(u) and WPRVOL(v) represent the page ranks of web pages u and v respectively, d is the dampening factor, B(u) is the set of web pages pointing to u, Lu is the number of visits of the link pointing from v to u, TL(v) is the total number of visits of all links from v, and Win(v,u) represents the popularity from the number of inlinks of u. Sonal Tuteja (Algorithm: Enhancement in Weighted Page Rank Using Visits of Link (VOL)) The author incorporated the weights of link (v,u) calculated from the visits of inlinks and outlinks; the popularity from the number of visits of inlinks and outlinks is used to calculate the value of the page rank. WinVOL(v,u) is the weight of link (v,u) calculated from the number of visits of inlinks of page u and the number of visits of inlinks of all reference pages of page v, as given in equation 8: WinVOL(v,u) = Viu / sum over p in R(v) of Vip (8), where Viu and Vip represent the incoming visits of links of pages u and p respectively, and R(v) represents the set of reference pages of page v. WoutVOL(v,u) is the weight of link (v,u) calculated from the number of visits of outlinks of page u and the number of visits of outlinks of all reference pages of page v, as given in equation 9.
WoutVOL(v,u) = Vou / sum over p in R(v) of Vop (9), where Vou and Vop represent the outgoing visits of links of pages u and p respectively, and R(v) represents the set of reference pages of page v. These values are then used to calculate the page rank using equation 10: WPRVOL(u) = (1 - d) + d * sum over v in B(u) of WPRVOL(v) * WinVOL(v,u) * WoutVOL(v,u) (10), where d is a dampening factor, B(u) is the set of pages that point to u, WPRVOL(u) and WPRVOL(v) are the rank scores of pages u and v respectively, WinVOL represents the popularity from the number of visits of inlinks, and WoutVOL represents the popularity from the number of visits of outlinks. III Numerical Analysis of Various Page Rank Algorithms To demonstrate the working of page rank, consider a hypothetical web structure: a web graph having four web pages, i.e. A, B, C, D. 1. Page Rank (By Brin and Page) Using equation 2, the ranks for pages A, B, C and D are calculated. With d = 0.25, 0.5 and 0.85, the page ranks become:

Dampening Factor  PR(A)  PR(B)  PR(C)  PR(D)
0.25              0.9    0.975  1.22   0.99
0.5               0.8    0.9    1.35   0.95
0.85              0.85   0.829  1.53   0.357

From the results, it is concluded that PR(C) > PR(D) > PR(B) > PR(A). 2. Iterative Method of Page Rank It is easy to solve the equation system to determine page rank values for a small set of pages, but the web consists of billions of documents, and it is not possible to find a solution by inspection. In iterative calculation, each page is assigned a starting page rank value of 1, as shown in the table below. These rank values are iteratively substituted into the page rank equations to find the final values. In general, many iterations may be needed for the page ranks to converge.

d = 0.25
Iteration  PR(A)  PR(B)  PR(C)  PR(D)
0          1      1      1      1
1          1      1      1.25   1
2          0.875  0.97   1.21   0.99
3          0.90   0.975  1.22   0.99
...

d = 0.5
Iteration  PR(A)  PR(B)  PR(C)  PR(D)
0          1      1      1      1
1          1      1      1.5    1
2          0.875  0.94   1.44   0.97
3          0.86   0.93   1.4    0.965
...

d = 0.85
Iteration  PR(A)  PR(B)  PR(C)  PR(D)
0          1      1      1      1
1          1      0.5    1.425  0.575
2          0.75   0.788  1.46   0.82
3          0.77   0.80   1.48   0.83
...
From the results, it is concluded that PR(C) > PR(D) > PR(B) > PR(A). 3. Page Rank with Visits of Links (VOL) (Gyanendra Kumar) Using equation 6 and the link-visit counts of the graph, the ranks for pages A, B, C and D are calculated. With d = 0.25, 0.5 and 0.85, the page ranks become:

Dampening Factor  PR(A)   PR(B)  PR(C)   PR(D)
0.25              0.83    0.82   1.23    0.818
0.5               0.635   0.606  0.808   0.6
0.85              0.2478  0.22   0.3449  0.1123

From the results, it is concluded that PR(C) > PR(A) > PR(B) > PR(D). 4. Weighted Page Rank (Wenpu Xing and Ali Ghorbani) Using equation 5, the ranks for pages A, B, C and D are calculated. The weights of the incoming as well as outgoing links are calculated from the graph, for example Win(C,A) = IA/(IA + IC) = 1/(1 + 2) = 1/3 and Wout(C,A) = OA/OA = 1. With d = 0.25, 0.5 and 0.85, the page ranks become:

Dampening Factor  PR(A)   PR(B)   PR(C)   PR(D)
0.25              0.8526  0.8210  1.2315  0.75
0.5               0.7059  0.6176  1.235   0.5
0.85              0.3380  0.2458  0.6636  0.15

From the results, it is concluded that PR(C) > PR(A) > PR(B) > PR(D). 5. Weighted Page Rank Based on Visits of Link (VOL) (Neelam Tyagi and Simple Sharma) Using equation 7, the ranks for pages A, B, C and D are calculated, along with the weights of the incoming links, the numbers of visits of links and the total numbers of visits of all links. With d = 0.25, 0.5 and 0.85, the page ranks become:

Dampening Factor  PR(A)   PR(B)   PR(C)   PR(D)
0.25              0.8061  0.7836  1.015   0.8153
0.5               0.5981  0.5498  0.8825  0.5916
0.85              0.1734  0.1735  0.3469  0.1994

From the results, it is concluded that PR(C) > PR(D) > PR(A) > PR(B). 6.
Enhancement in Weighted Page Rank Using Visits of Link (VOL) (Sonal Tuteja) Using equation 10, the ranks for pages A, B, C and D are calculated, with intermediate values such as IA/IA = 1 and OA/OA = 1. With d = 0.25, 0.5 and 0.85, the page ranks become:

Dampening Factor  PR(A)   PR(B)   PR(C)   PR(D)
0.25              0.7226  0.7951  1.029   0.75
0.5               0.9557  0.6195  0.9115  0.5
0.85              1.911   0.5561  1.116   0.15

From the results, it is concluded that PR(C) > PR(B) > PR(D) > PR(A). Comparison chart of various ranking algorithms: Algorithm | Page Rank | Page Rank with VOL | Weighted Page Rank | WPRV | EWPRV
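The Weighted PageRank scheme of equations 3-5 can also be sketched as code. The four-page graph below is hypothetical (it is not the paper's example graph), and the helper names w_in and w_out are this sketch's own:

```python
def weighted_pagerank(in_links, d=0.85, iters=50):
    """WPR(u) = (1-d) + d * sum over v in B(u) of WPR(v)*Win(v,u)*Wout(v,u),
    with link weights derived from in-link and out-link counts (eqs. 3-5)."""
    # Derive out-link lists from the in-link map.
    out_links = {p: [q for q in in_links if p in in_links[q]] for p in in_links}

    def w_in(v, u):   # eq. 3: share of v's reference pages' in-links held by u
        total = sum(len(in_links[p]) for p in out_links[v])
        return len(in_links[u]) / total if total else 0.0

    def w_out(v, u):  # eq. 4: share of v's reference pages' out-links held by u
        total = sum(len(out_links[p]) for p in out_links[v])
        return len(out_links[u]) / total if total else 0.0

    pr = {p: 1.0 for p in in_links}
    for _ in range(iters):
        pr = {u: (1 - d) + d * sum(pr[v] * w_in(v, u) * w_out(v, u)
                                   for v in in_links[u])
              for u in in_links}
    return pr

# in_links[u] = pages linking to u (hypothetical graph: D is a dangling target)
in_links = {"A": ["C"], "B": ["A"], "C": ["A", "B", "D"], "D": []}
wpr = weighted_pagerank(in_links)
```

Because D has no inbound links, its rank stays at the minimum (1 - d) = 0.15, while the remaining ranks reflect the in/out-link weighting rather than an equal split.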

Monday, August 19, 2019

No Child Left Behind Act Essay -- School Education Learning Essays Pap

No Child Left Behind Act The No Child Left Behind Act of 2001, President George W. Bush's education reform bill, was signed into law on Jan. 8, 2002. The No Child Left Behind Act says that states will develop and apply challenging academic standards in reading and math. It also sets annual progress objectives to make sure that all groups of students reach proficiency within 12 years. And the act says that children will be tested annually in grades 3 through 8, in reading and math, to measure their progress. The test results will be made public in annual report cards on how schools and states are progressing toward their objectives. States will have until the 2005-06 school year to develop and apply their tests. Once the tests are in place, schools will be required to show "adequate yearly progress" toward their statewide objectives. This means that they must demonstrate through their test scores that they are on track to reach 100 percent proficiency for all groups of students within 12 years. Schools that fall behind may have school improvement, corrective action, or restructuring measures imposed by the state. The No Child Left Behind Act has many positive and negative aspects, and many school teachers and community members are starting to challenge many of its features. Many people feel that the law was developed too quickly and that it was pushed through Congress. For many years, both Democrats and Republicans h...

Sunday, August 18, 2019

The Search for America in Rip Van Winkle and The Legend of Sleepy Hollo

The Search for America in Rip Van Winkle and The Legend of Sleepy Hollow In the early to mid-1800's, Washington Irving was an immensely popular writer heralded as one of the 'great' American writers. Irving's importance lies especially in "Rip Van Winkle" and "The Legend of Sleepy Hollow," the sketches in which he creates the vision of the alternate America(n). His critique of American society through his main characters-Rip and Ichabod-and the towns in which they live gives shape to an America not usually acknowledged by his contemporaries, and thus crucial to American literary studies today. J. Hector St. John De Crevecoeur, who created the most definitive statement of "American" circa Irving's time, certainly would not acknowledge it. Indeed, it is Crevecoeur's type of America that Irving opposes. When viewed against the backdrop of Crevecoeur's definition of America, Irving's sketches portray a very different America-the other America. Irving will be compared with Crevecoeur in five main sections: "Building the European," in which Crevecoeur claims that traces of Europe can be found throughout American society; "The Melting Pot," in which Crevecoeur states that the European influences are assimilated into an American whole, creating a new society; "The American Stranger," in which Crevecoeur claims that no one is a stranger in America; "American Industry," which looks at the spirit of industry found in Americans; and finally, "People of the Soil," which deals with Americans' ties with the land. In all of these sections, Crevecoeur's mainstream view of America will serve to show Irving's unique America. I. Building on the European When defining 'American,' Crevecoeur is quick to point out ... ...ary on the Works of Washington Irving, 1860-1974. Ed. Andrew B. Myers. Tarrytown, NY: Sleepy Hollow Restorations, 1976. 330-42. Pochmann, Henry A. "Irving's German Tour and its Influence on His Tales." PMLA 45 (1930): 1150-87.
Ringe, Donald A. "New York and New England: Irving's Criticism of American Society." American Literature 38 (1967): 455-67. Rpt. in A Century of Commentary on the Works of Washington Irving, 1860-1974. Ed. Andrew B. Myers. Tarrytown, NY: Sleepy Hollow Restorations, 1976. 398-411. Rourke, Constance. American Humor: A Study of the National Character. Garden City, NY: Doubleday, 1931. Rubin-Dorsky, Jeffrey. "The Value of Storytelling: 'Rip Van Winkle' and 'The Legend of Sleepy Hollow' in the Context of The Sketch Book." Modern Philology 82 (1985): 393-406.