Thursday, October 31, 2019

The 180 Day Plan Essay Example

The overall focus for this phase is to teach students appropriate behavior for the classroom. For September, the focus is on instruction. During this month, students are introduced to the expectations of their teacher and their school. The teacher's role during this month is to instruct students in appropriate classroom behavior. A management strategy that is essential during this month is to teach only the skills that students need to progress, such as how to transition or how to turn in work. A recommendation that the plan has for teachers is to organize the classroom to decrease the chances of disruptive behavior.

For October, the focus is on reinforcing and strengthening skills taught during the previous month. Teachers should continue to teach desired behavior, but they should spend more time helping students establish appropriate patterns for their behavior. The teacher's role during this month is to review rules with students and preteach expected behaviors that students continue to struggle with. A management strategy for this month is to not only reinforce students who follow classroom rules, but to also reinforce them for other appropriate behaviors. ...

A management strategy for this month is to upgrade instructional procedures and increase the amount of time for practicing daily skills for students who are not consistently meeting the teacher's expectations. A recommendation that the plan has for teachers is to begin to raise the standard for acceptable performance. Instead of reinforcing students each time they follow a classroom rule, the teacher should reinforce students who comply with the rules in difficult circumstances or for longer periods of time.

The retention phase takes place during the months of December, January and February. The overall focus for this phase is to help students master behavioral skills and academic competencies so that they can become independent learners.
For December, the focus is to help students gain mastery over material while maintaining appropriate school and classroom behavior. The teacher's role during this month is to teach and reinforce independent and self-reliant behavior in the students. A management strategy for this month is to reinforce students who are trying to perform independently. A recommendation that the plan has for teachers is to look for students who are showing appropriate, independent behavior and reinforce them heavily.

January can be a difficult time for students and teachers. Students have been on winter break for weeks, and some may forget expected behavior or may not retain material that they learned during the previous year. For January, the focus is to reintroduce previously learned rules and routines while, at the same time, helping students to regain their mastery of the academic content they learned the previous year.

Tuesday, October 29, 2019

Annotated Bibliography Essay Example

In order for NHS to satisfy its requirement to transmit large medical imaging files in a timely and secure manner, it must be able to subscribe to circuits of the appropriate bandwidth at each remote office to address local needs. Unfortunately, the remoteness of some of these locations has limited the network connectivity options. A cost-effective alternative to expensive, high-bandwidth internet circuits is therefore a WAN optimization solution. A WAN optimization solution consists of a network appliance at each location that focuses on increasing network performance. It accomplishes this through a combination of data compression, content and object caching, data deduplication and protocol optimization. A WAN optimization appliance works in conjunction with the available bandwidth at a location. The host site would have an appliance that builds ‘acceleration tunnels’ to each of the appliances located at the remote sites. The appliances at the remote sites would be sized based on the number of users and the available bandwidth at that location.

This solution has a number of advantages. First, it is a very cost-effective approach. Higher-bandwidth circuits in remote areas tend to be expensive. The purchase of network appliances is normally a capital expenditure that can be amortized over the life of the product, typically 3 to 5 years. The addition of larger circuits, on the other hand, is an operational expenditure that incurs a higher recurring cost on a monthly basis. Secondly, these appliances are transparent to the end user. They do not require additional software on users’ computers or any special setup on a per-user basis. NHS would very likely fall into the Early Adopters category of the technology adoption curve for this solution (Rogers, 2003). These individuals quickly buy into an idea once the possibility of real benefits has been established.
They are primarily concerned with finding a strong match between their needs and the expected benefits (Moore, 1999). The use of WAN optimization appliances would be an excellent fit for NHS and would be readily accepted by NHS management, given its ability to provide an optimal, technically sound and cost-effective resolution to the issue facing the remote locations. This solution would give them the means to meet their requirement to transmit large data files.

References
Rogers, Everett (2003). Diffusion of Innovations (5th ed.). New York, NY: Free Press.
Moore, Geoffrey (1999). Crossing the Chasm. United States: Harper Business Essentials.
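The data deduplication these appliances rely on can be illustrated with a short sketch. This is a simplified model, not any vendor's implementation: real WAN optimizers use variable-size (rolling-hash) chunking and keep a chunk store on both ends of the tunnel, but the idea is the same — send a short fingerprint instead of any chunk the far side has already seen.

```python
import hashlib

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Split data into fixed-size chunks and report the fraction of
    unique chunks, i.e. the share of the payload that would actually
    have to cross the WAN link if duplicates are replaced by hashes."""
    seen = set()
    total = 0
    for i in range(0, len(data), chunk_size):
        seen.add(hashlib.sha256(data[i:i + chunk_size]).hexdigest())
        total += 1
    return len(seen) / total if total else 1.0

# A payload whose blocks repeat heavily dedupes well:
payload = b"A" * 4096 * 8 + b"B" * 4096 * 2
print(dedup_ratio(payload))  # 2 unique chunks out of 10 -> 0.2
```

Sending only the unique chunks is what lets a modest remote circuit behave like a much larger one for repetitive traffic such as medical imaging file sets.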

Sunday, October 27, 2019

High Performance Liquid Chromatography Experiment

INTRODUCTION

Pharmaceutical analysis may be defined as the application of analytical procedures to determine the purity, safety and quality of drugs and chemicals. The term is otherwise called quantitative pharmaceutical chemistry. Pharmaceutical analysis includes both qualitative and quantitative analysis of drugs and pharmaceutical substances, from bulk drugs to the finished dosage forms. In the modern practice of medicine, analytical methods are used to analyze the chemical constituents found in the human body, whose altered concentrations during disease states serve as diagnostic aids, and also to analyze medicinal agents and their metabolites found in biological systems. Qualitative inorganic analysis seeks to establish the presence of a given element or inorganic compound in a sample. Qualitative organic analysis seeks to establish the presence of a given functional group or organic compound in a sample. Quantitative analysis seeks to establish the amount of a given element or compound in a sample.

The term quality, as applied to a drug product, has been defined as the sum of all factors which contribute directly or indirectly to the safety, effectiveness and reliability of the product. These properties are built into drug products through research and during processing by procedures collectively referred to as quality control. Quality control guarantees, within reasonable limits, that a drug product:
- is free of impurities;
- is physically and chemically stable;
- contains the amount of active ingredient stated on the label; and
- provides optimal release of active ingredient when the product is administered.

Most modern analytical chemistry is categorized by two different approaches: by analytical target or by analytical method.
INTRODUCTION TO CHROMATOGRAPHY:

High performance liquid chromatography (HPLC) is a process that separates mixtures containing two or more components under high pressure. The stationary phase is packed in a column, one end of which is attached to a source of pressurized liquid mobile phase. HPLC is the fastest-growing analytical technique for the analysis of drugs. Its simplicity, high specificity and wide range of sensitivity make it ideal for the analysis of many drugs in both dosage forms and biological fluids. HPLC is also known as high-pressure liquid chromatography. It is essentially a form of column chromatography in which the stationary phase consists of a small-particle (3-5 µm) packing contained in a column with a small bore (2-5 mm), one end of which is attached to a source of pressurized liquid eluent (mobile phase).

Different Types of Principles: According to the phases involved, HPLC can be classified into several types:
1. Normal Phase Chromatography (NPC)
2. Reverse Phase Chromatography (RPC)
3. Liquid-Solid (adsorption) HPLC
4. Liquid-Liquid (partition) HPLC
5. Ion Exchange HPLC
6. Size Exclusion (gel permeation or steric exclusion) HPLC

1. Normal Phase Chromatography (NPC): In normal phase chromatography, the stationary phase is more polar than the mobile phase. The mobile phase is a mixture of organic solvents without added water (e.g. isopropanol with hexane), and the column packing is either an inorganic adsorbent (silica) or a polar bonded phase (cyano, diol, amino) on a silica support. Sample retention in normal phase chromatography increases as the polarity of the mobile phase decreases; solutes are eluted in order of increasing polarity.

2. Reverse Phase Chromatography (RPC): In reverse-phase chromatography, the stationary phase is less polar than the mobile phase, and the mobile phase is a mixture of organic and aqueous phases.
Reverse-phase chromatography is typically more convenient and rugged than the other forms of liquid chromatography and is more likely to result in a satisfactory final separation. High performance RPC columns are efficient, stable and reproducible. Here, the solutes are eluted in order of decreasing polarity. The packings are prepared by treating the surface silanol groups of silica with an organic chlorosilane reagent.

INSTRUMENTATION:

[Schematic diagram of an HPLC system, ending in a recorder]

a. Pumps: Pumps are required to deliver a constant flow of mobile phase at pressures ranging from 1 to 550 bar. Pumps capable of pressures up to 6000 psi provide a wide range of mobile phase flow rates, typically 0.01-10 ml min-1. Low flow rates (10-100 µl min-1) are used with microbore columns, intermediate flow rates (0.5-2 ml min-1) are used with conventional analytical HPLC columns, and fast flow rates are used for preparative or semi-preparative columns and for slurry packing techniques. Mechanical pumps of the reciprocating piston type deliver a pulsating supply of mobile phase. A damping device is therefore required to smooth out the pulses so that excessive noise at high levels of sensitivity, or at low pressure, does not detract from the detection of small quantities of sample. This type of pump is the most widely used. Dual-piston reciprocating pumps produce an almost pulse-free flow because the two pistons are carefully phased so that as one is filling, the other is pumping. These pumps are more expensive than single-piston pumps but are of benefit when using a flow-sensitive detector such as an ultraviolet or refractive index detector.

b. Injection Systems: Injection ports are of two basic types: (A) those in which the sample is injected directly into the column, and (B) those in which the sample is deposited before the column inlet and then swept into the column by the mobile phase through a valving action.

c. Columns: HPLC columns are made of high-quality stainless steel, polished internally to a mirror finish. Standard analytical columns are 4-5 mm internal diameter and 10-30 cm in length. Shorter columns (3-6 cm) containing a smaller-particle-size packing material (3 or 5 µm) produce similar or better efficiencies, in terms of the number of theoretical plates (about 7000), than those of 20 cm columns containing 10 µm irregular particles, and are used when short analysis times and high sample throughput are required. Microbore columns of 1-2 mm internal diameter and 10-25 cm in length have the advantages of lower detection limits and lower consumption of solvent, the latter being important if expensive HPLC-grade solvents are used. HPLC is also carried out on semi-preparative and preparative scales, using columns of 7-10 mm or 20-40 mm internal diameter respectively.

d. Detectors: The most widely used detectors for liquid chromatography are:

Detector | Analytes | Solvent requirements | Comments
UV-Visible | Any with chromophores | UV-grade, non-UV-absorbing solvents | Has a degree of selectivity and is useful for many HPLC applications
Fluorescence | Fluorescent compounds | UV-grade, non-UV-absorbing solvents | Highly selective and sensitive; often used to analyze derivatized compounds
Refractive index | Compounds with an RI different from the mobile phase | Cannot run mobile phase gradients | Limited sensitivity
Conductivity | Charged or polar compounds | Mobile phase must be conducting | Excellent for ion exchange compounds
Electrochemical | Readily oxidized or reduced compounds, especially biological samples | Mobile phase must be conducting | Very selective and sensitive
Mass spectrometer | Broad range of compounds | Must use volatile solvents or volatile buffers | Highly sensitive; many modes available; needs a trained operator

Theoretical principles of HPLC:

a. Retention time: The time required between the injection point and the peak maximum is called the retention time, denoted Rt.
Retention time is mainly useful in qualitative analysis for the identification of compounds.

b. Capacity factor: The capacity factor represents the molar ratio of the compound in the stationary phase to that in the mobile phase. It is independent of column length and mobile phase flow rate. It is denoted k and should be kept between 1 and 10. If k values are too low, the solutes may not be adequately resolved; for high k values, the analysis time is too long. It is calculated as

k = (tr - t0) / t0

where tr = retention time and t0 = dead time.

c. Tailing factor: Closer study of a chromatogram shows that the Gaussian form is usually not completely symmetrical. The peak spreads out to a greater or lesser extent, forming a tail. Tailing reduces the column plate number, which in turn influences the resolution. Tailing is mainly due to a deteriorated column, an overloaded column, extra-column volumes, or incompatibility of the sample with the standard and/or mobile phase. In practice it is determined at 10% of the total peak height, and it must not be greater than 2.0.

d. Resolution: The degree of separation of one component from another is described by the resolution, generally denoted Rs. It is measured as the difference in retention times divided by the arithmetic mean of the two peak widths:

Rs = (tr2 - tr1) / (0.5 (w1 + w2))

where tr1 and w1 are the retention time and width of the first peak, and tr2 and w2 are those of the second peak.

e. Theoretical plates: The plate number is an important property of the column. It reflects the quality of separation and the ability to produce sharp, narrow peaks and good resolution of peaks. It is denoted N and can be estimated as

N = 3500 x L (cm) / dp (µm)

where L = length of the column in cm and dp = particle diameter in µm. It follows that if the exchange is fast and efficient, the theoretical plates will be small in size and there will be a large number of plates in the column.

f. Height equivalent to a theoretical plate (HETP): The number of plates is directly proportional to the column length (L) and inversely proportional to the particle diameter (dp). The value of H is a criterion for the quality of a column: the lower the HETP, the higher the efficiency of the column. Its value depends on particle size, flow rate and the viscosity of the mobile phase:

H = L / N

where L = length of the column and N = number of theoretical plates.

HPLC method development: The wide variety of equipment, columns, eluents and operational parameters involved makes HPLC method development seem complex. The main objective of method development is to obtain a good separation with minimum time and effort; development proceeds from the goal of the separation. The steps involved are:
1. Gather information on the sample; define the separation goals.
2. Determine the need for special HPLC procedures, sample pretreatment, etc.
3. Choose the detector and detector settings.
4. Choose the LC method and carry out a preliminary run; estimate the best separation conditions.
5. Optimize the separation conditions.
6. Check for problems or requirements for special procedures.
7. Validate the method for release to the routine laboratory.

The following must be considered when developing an HPLC method:
- Keep it simple.
- Try the most common columns and stationary phases first.
- Thoroughly investigate binary mobile phases before going on to ternary.
- Think of the factors that are likely to be significant in achieving the desired resolution. Mobile phase composition, for example, is the most powerful way of optimizing selectivity, whereas temperature has a minor effect and would only achieve small selectivity changes. pH will only significantly affect the retention of weak acids and bases.

VALIDATION OF ANALYTICAL METHODS IN PHARMACEUTICAL ANALYSIS: Validation is the documented evidence, completed to ensure that an analytical method is accurate, reproducible and robust over the specified range.
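The quantities defined above (k, Rs, N and H) are simple arithmetic on the chromatogram. They can be collected into a short sketch using the symbols from the text; the numbers in the example are illustrative, not measured data.

```python
def capacity_factor(tr, t0):
    """k = (tr - t0) / t0 ; dimensionless measure of retention."""
    return (tr - t0) / t0

def resolution(tr1, w1, tr2, w2):
    """Rs = (tr2 - tr1) / (0.5 * (w1 + w2)); peak 2 elutes after peak 1."""
    return (tr2 - tr1) / (0.5 * (w1 + w2))

def plates(length_cm, dp_um):
    """N = 3500 * L(cm) / dp(um), the rule of thumb quoted in the text."""
    return 3500 * length_cm / dp_um

def hetp(length_cm, n_plates):
    """H = L / N ; lower H means a more efficient column."""
    return length_cm / n_plates

# Illustrative example: a 25 cm column packed with 5 um particles
N = plates(25, 5)                     # 17500 plates by the rule of thumb
print(N, hetp(25, N))
print(capacity_factor(tr=3.0, t0=1.0))        # within the 1-10 window
print(resolution(tr1=2.0, w1=0.2, tr2=3.0, w2=0.2))
```

A k of 2 sits comfortably in the recommended 1-10 window, and an Rs of 5 indicates baseline separation.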
The quality of the analytical data is a key factor in the success of a drug development program, and the process of method development and validation has a direct impact on the quality of these data.

Method validation: Method validation is the process of confirming that the analytical procedure employed for a specific test is suitable for its intended use. Methods need to be validated or revalidated:
- before their introduction into routine use;
- whenever the conditions for which the method was validated change, e.g. an instrument with different characteristics; and
- whenever the method is changed and the change is outside the original scope of the method.

Depending on the use of the assay, different parameters will have to be measured during validation. ICH and several regulatory bodies and pharmacopoeias have published information on the validation of analytical procedures.

METHOD VALIDATION PARAMETERS: specificity, accuracy, precision, linearity, robustness, solution stability.

The goal of the validation process is to challenge the method and determine the limits of allowed variability for the conditions needed to run the method. The following statistical parameters are determined to validate the developed method.

1. Correlation coefficient (r): When changes in one variable are associated with or followed by changes in the other, this is called correlation. The numerical measure of correlation is called the coefficient of correlation and is defined by the relation

r = Σ(x - x̄)(y - ȳ) / √[ Σ(x - x̄)² Σ(y - ȳ)² ]

2. Regression equation: Y = I + aC, where the slope a = (Y2 - Y1) / (X2 - X1) and the intercept I is expressed as a percentage of the mean absorbance.

3. Standard deviation:

S = √[ Σ(X - X̄)² / (N - 1) ]

where X = observed values, X̄ = arithmetic mean = ΣX / N, and N = number of observations. For practical interpretation it is more convenient to express S as a percentage of the mean of the analytical values; this is called the coefficient of variation (C.V.) or percent relative standard deviation (%RSD):

C.V. or %RSD = 100 S / X̄

Criteria for validation of the method:

Characteristic | Acceptable range
Specificity | No interference
Accuracy | Recovery 98-102%
Precision | RSD
Linearity | Correlation coefficient (r) > 0.99
Range | 80-120%
Stability | > 24 h or > 12 h

DRUG PROFILE

RIZATRIPTAN BENZOATE:
Chemical name: N,N-dimethyl-5-(1H-1,2,4-triazol-1-ylmethyl)-1H-indole-3-ethanamine monobenzoate
Molecular formula: C15H19N5.C6H5COOH
Molecular weight: 391.47
Description: White crystalline powder
Melting point: 178-180 °C
Solubility: Sparingly soluble in water and methanol
Storage: In an airtight container, protected from light
Drug category: Anti-migraine drug

THERAPEUTIC RATIONALE FOR RIZATRIPTAN BENZOATE:

CLINICAL PHARMACOLOGY:

Mechanism of action: Rizatriptan binds with high affinity to human 5-HT1B and 5-HT1D receptors, leading to cranial blood vessel constriction.

Pharmacokinetics:
Absorption: Completely absorbed from the GI tract; absolute bioavailability is 45%. Peak plasma concentration is attained within 1-1.5 hours (conventional tablet) or 1.6-2.5 hours (orally disintegrating tablet) after oral administration.
Distribution: Crosses the placenta and is distributed into milk in animals; no studies in pregnant or nursing women.
Metabolism: Metabolized principally via oxidative deamination by MAO-A to an inactive indole acetic acid metabolite.
Elimination: Excreted principally in urine (14% of the dose as unchanged drug and 51% as the indole acetic acid metabolite).

Adverse effects: Dry mouth; dizziness; pain, tightness or pressure in the neck, throat or jaw.
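The precision and linearity statistics above reduce to a few lines of code. A minimal sketch follows, assuming the commonly quoted acceptance criteria (%RSD not more than 2, r greater than 0.99); the replicate peak areas and calibration points are invented for illustration, not taken from the study.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    """S = sqrt( sum((x - x_bar)^2) / (N - 1) ), as defined in the text."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def percent_rsd(xs):
    """%RSD = 100 * S / x_bar, the precision statistic from the text."""
    return 100 * sample_sd(xs) / mean(xs)

def correlation(xs, ys):
    """r = sum((x-x_bar)(y-y_bar)) / sqrt(sum((x-x_bar)^2) sum((y-y_bar)^2))"""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Invented replicate peak areas (precision) and a calibration line (linearity)
areas = [1520000, 1518500, 1522300, 1519800, 1521100, 1517900]
conc = [5, 10, 15, 20, 25]          # ug/ml
resp = [102, 201, 305, 398, 503]    # detector response
print(round(percent_rsd(areas), 3))
print(round(correlation(conc, resp), 4))
```

With tight replicates the %RSD lands well inside the limit, and a near-linear calibration gives r above 0.99.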
Nausea; chest pain; paresthesia; fatigue.

Dosage and administration: The dose range of rizatriptan benzoate is 10-30 mg orally once daily. Rizatriptan benzoate can be administered as an orally disintegrating tablet without regard to meals.

LITERATURE REVIEW

Sasmitha Kumar et al. developed a UV spectroscopic method for the estimation of rizatriptan benzoate. The drug shows maximum absorption at 277 nm and 281 nm and obeys the Beer-Lambert law in the concentration ranges of 0.5-20 µg/ml at 277 nm and 0.5-80 µg/ml at 281 nm respectively. The percentage recovery was found to be 97-100%.

Madhukar et al. developed a reverse phase high performance liquid chromatographic method for the determination of rizatriptan benzoate. The proposed method utilized an L1 Inertsil ODS-3V column, 250 mm x 4.6 mm, with a particle size of 5 µm. The mobile phase comprised acetonitrile (A) and buffer at pH 6.5 (B), with UV detection at 225 nm. The method shows a recovery of 96.64-97.71%.

Sachin Jagthap et al. developed a stability-indicating reversed phase high performance liquid chromatographic method for the determination of rizatriptan benzoate in bulk powder and in pharmaceutical formulations.
The method utilizes a C18 column with dimensions of 250 mm x 4.6 mm and a particle size of 5.0 µm, using a mobile phase of 0.01 M sodium dihydrogen phosphate buffer and methanol at a flow rate of 1 ml/min at ambient temperature, with detection at 225 nm; the method was validated according to ICH guidelines.

Quizi Zhang et al. developed a high performance liquid chromatographic method for the determination of rizatriptan benzoate in human plasma, using a single-step liquid-liquid extraction with methyl tert-butyl ether. The analytes were separated using a mobile phase consisting of 0.05% v/v triethylamine in water, adjusted to pH 2.75 with 85% phosphoric acid, and acetonitrile. Fluorescence detection was performed at an excitation wavelength of 225 nm and an emission wavelength of 360 nm. The linearity for rizatriptan was within the concentration range of 0.5-50 ng/ml.

Rajendra Kumar et al. developed and validated a stability-indicating high performance liquid chromatographic method for rizatriptan benzoate. Forced degradation studies were performed on a bulk sample of rizatriptan benzoate. The method utilizes a Zorbax SB-CN column with dimensions of 250 mm x 4.6 mm and 5 µm particles. The mobile phase consists of a mixture of aqueous potassium dihydrogen orthophosphate (pH 3.4), acetonitrile and methanol.

Rauza Bagh et al. developed a spectroscopic method for the analysis of rizatriptan benzoate in bulk and tablet dosage form. Rizatriptan benzoate shows maximum absorbance at 225 nm, and the Beer-Lambert law was obeyed in the concentration range of 1-10 µg/ml.

AIM AND PLAN OF WORK

The present aim is to develop a new, simple and rapid analytical method to estimate rizatriptan benzoate. The plan of the proposed work includes the following steps:
- Undertake solubility studies for the analytical work on rizatriptan benzoate.
- Develop initial chromatographic conditions.
- Set up initial chromatographic conditions for the assay of rizatriptan benzoate.
- Optimize the initial chromatographic conditions.
- Validate the developed HPLC analytical method according to the ICH method validation parameters.

EXPERIMENTAL

NEW RP-HPLC METHOD FOR THE ESTIMATION OF RIZATRIPTAN BENZOATE IN TABLET DOSAGE FORM

A simple reverse phase HPLC method was developed for the determination of rizatriptan benzoate in tablet dosage form. A Zorbax Eclipse XDB C18 (250 mm x 4.6 mm) column was used in isocratic mode with a mobile phase of buffer (pH 5.0) : methanol (80:20), the buffer pH being adjusted with triethylamine and orthophosphoric acid. The flow rate was 1.0 ml/min with UV detection at 225 nm. The retention time was 3.0 min. The proposed method was also validated.

1. Instrumentation:
- Shimadzu LC-10A HPLC
- Gelman Science vacuum pump
- Elico SL-164 double beam UV-Visible spectrophotometer
- Ultrasonicator, 3.5 L 100 (pci)

2. Chemicals:
- Water, HPLC grade
- Methanol, HPLC grade (Merck)
- Potassium dihydrogen orthophosphate (AR grade)
- Triethylamine (AR grade)

5.1 OPTIMIZATION:

1. Selection of wavelength: After a solubility study, a solvent was selected and appropriate concentrations of rizatriptan benzoate standard in the solvent were prepared. The solutions were then scanned using a double beam UV-Visible spectrophotometer over the range 200-400 nm. The overlain spectra were observed and the wavelength of maximum absorbance was selected.

2. Selection of mobile phase: To develop a precise and robust HPLC method for the determination of rizatriptan benzoate, its standard solutions were injected into the HPLC system. Following a literature survey and solubility data, different mobile phase compositions at different flow rates were employed in order to determine the best conditions for effective separation of the drug.

3. Selection of column: Initially, different C8 and C18 columns were tried with the selected mobile phase composition, and the quality of the peaks was observed.
Finally, the column was fixed based on satisfactory results for various system suitability parameters such as column efficiency, retention time, and tailing factor / peak asymmetry. Other parameters such as flow rate and column temperature were selected by varying their values up to certain levels and observing the results; the values giving satisfactory results were selected for the method. The final chromatographic conditions were as follows.

Optimized chromatographic conditions:

Preparation of buffer (pH 5.0): Dissolve 2.76 g of potassium dihydrogen orthophosphate in 1000 ml of HPLC-grade water, add 5.0 ml of triethylamine, mix, and adjust to pH 5.0 with orthophosphoric acid. Filter through a 0.45 µm nylon filter.

Preparation of mobile phase: The mobile phase was prepared by mixing buffer and methanol (80:20). The solution was then filtered through a 0.45 µm membrane filter and sonicated.

Preparation of standard stock solution: A standard solution of the pure drug was prepared by dissolving 73.0 mg of rizatriptan benzoate in a 100 ml volumetric flask, using the mobile phase as diluent. Add about 50 ml of diluent and sonicate to dissolve, then make up the volume with diluent and mix well. Further dilute 5.0 ml of this solution to 250 ml with diluent and mix well.

Preparation of sample solution: Weigh and transfer 10 intact tablets into a 100 ml volumetric flask. Add about 50 ml of diluent and sonicate for 15 min, then make up the volume with diluent and mix well. Filter through a 25 mm, 0.45 µm nylon filter, discarding the first 4 ml of filtrate. Further dilute 5 ml of the solution to 250 ml with diluent and mix well.

CONCLUSION

The evaluation of the obtained values suggests that the proposed HPLC method provides a simple, precise, rapid and robust quantitative analytical method for the determination of rizatriptan benzoate in tablet dosage form. The mobile phase is simple to prepare and economical.
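The two-step dilution in the standard preparation determines the working concentration. The arithmetic can be checked with a small helper (the function name is mine, invented for illustration):

```python
def working_conc_ug_per_ml(mass_mg, flask1_ml, aliquot_ml, flask2_ml):
    """Dissolve mass_mg in flask1_ml, then dilute aliquot_ml of that
    stock to flask2_ml. Returns the final concentration in ug/ml."""
    # Ordered to keep the float arithmetic exact for round numbers
    return mass_mg * aliquot_ml * 1000 / (flask1_ml * flask2_ml)

# 73.0 mg of rizatriptan benzoate into 100 ml, then 5.0 ml diluted to 250 ml
print(working_conc_ug_per_ml(73.0, 100, 5.0, 250))  # -> 14.6
```

So the working standard is 14.6 µg/ml, which is what gets injected for the assay.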
After validating the proposed method as per ICH guidelines and correlating the obtained values with the standard values, satisfactory results were obtained. Hence, the method can be easily and conveniently adopted for routine estimation of rizatriptan benzoate in tablet dosage form.

Friday, October 25, 2019

A Biography of Nelson Mandela

Nelson Rolihlahla Mandela is judged to be one of the greatest political leaders of modern times. Among his many accomplishments are the 1993 Nobel Peace Prize for his dedication to the fight against racial oppression in South Africa and for establishing democracy there, and becoming the president of South Africa in 1994 following the country's first multiracial elections. Nelson was born as the foster son of a Thembu chief in Umtata (now in the province of Eastern Cape) and raised in a traditional tribal culture within the grips of apartheid, a powerful system of black oppression that existed in South Africa. After years as a poor student and law clerk in Johannesburg, he assumed an important role in the African National Congress (ANC), a civil rights group. He also helped form the ANC Youth League. He was accused of treason in 1956 but was acquitted in 1961. From 1960 to 1962, Mandela led the ANC's paramilitary wing, known as Umkhonto we Sizwe, which translates to "Spear of the Nation." He was arrested in August of 1962 and sentenced to five years in prison; while incarcerated, he was again convicted, of sabotage and treason, and sentenced to life imprisonment in June 1964 at the famous Rivonia Trial. During his twenty-seven years in prison, Nelson Mandela became a symbol throughout the world of resistance to the white-dominated government of South Africa.

After complex negotiations, Mandela was finally released from prison by President F.W. de Klerk in February 1990, after the lifting of the long ban on the ANC. Mandela's release from prison marked the beginning of the end of apartheid in South Africa, as he once again became the head of the ANC. He began the process of forming a new constitution for South Africa which would extend political power to the black majority. Finally, in 1991, the South African government repealed the laws that had upheld apartheid.
In May 1994, Nelson Mandela became South Africa's first black president after the country's first multiracial elections were held. His goal was to provide for economic and social growth for the black majority that had been oppressed for so long by the system of apartheid.

Thursday, October 24, 2019

Dutch and English Essay

The economic and political success of the Dutch and the English between 1570 and 1766

How the Dutch and English became successful was not only through trading but through being merchants and bankers as well. When the other countries were busy fighting each other, the Dutch specialized in trading with them. Out of 20,000 trading vessels, 16,000 were Dutch ships, and in the early 1600s two thirds of them were based in Amsterdam. The English and Dutch went to war over trading not once but three times: the first was fought in 1652-54, the second in 1665-67, and the third and final in 1672-74, with the Dutch the victors at Solebay in 1672. The most important thing to the Dutch was their trading; they even came up with maritime insurance, so that people did not lose out on their profit. When the ships left port and went out to sea, nobody knew what would happen or whether they would make it back until the ship sailed into the harbor once again. The Dutch even designed a ship able to carry more goods and fewer people: a large bulk-carrying vessel called a flute or flyboat. The Dutch had trading stations and supply depots in many ports, among them Norway, Ceylon, Java, Sumatra and Formosa, which they controlled by 1641. They were also the first to dominate the Baltic trade route and the trade between Spain, France, and England. The Dutch were able to pay a higher price for goods and also to extend credit; because of this, even when the crops were not ready yet, the farmers made sure they had something to sell to the Dutch. This meant a lower profit margin, but the Dutch were able to profit since they had so much trade. There was even a market for Dutch paintings: the Dutch were the first to paint everyday citizens doing everyday things, from standing at the market, to celebrations, to just having a good time. The colors and dimensions of the paintings are what made them more lifelike.
Paintings also showed how clean the homes and alleys were, and the farms; the Dutch weren't afraid to show everyone what they were like, and took pride in showing people how they lived. Mapmaking was another thing the Dutch did well: they were able to lay the map flat instead of in a cylinder shape. This way they were able to write on it and redraw the different countries, and it was easier to measure how far you had come or still needed to go. Education was available to anyone who wanted to learn, women and children included, and not just the rich but everyone who wanted to learn. Pictures show that the schools were like an out-of-control daycare. Women were able to help run a business and draw up contracts; women were just about equal to the men, except they couldn't sit on things like town councils and the like. Women were still expected to get married and become mothers; that was considered more important than running a business. What was interesting was that women were able to go out by themselves and feel safe; they didn't need to worry about being abused, and the townspeople looked after each other's children and cared what happened to each other. The Dutch people seemed to always be talking about anything and anyone; it didn't matter where they were, and people from other countries were surprised by this. The Dutch had an opinion on things, from their own country to any other that might cross their mind. While other countries were fighting religious wars, the Dutch were more laid back; each religion had its place. At the same time, many people were moving to the Dutch Netherlands because they were able to study things like mathematics, even astrology. The Dutch showed the world what a middle-class family was and that they didn't need to starve or be poor; the farmers were able to sell their livestock or crops at the markets, and they made a good living; they didn't dress in rags or go about dirty.
In fact the Dutch were very clean; their homes were neat, and even the back alleys were kept clean and in order. The Dutch were very curious; they wanted to know how the human body looked on the inside, and they even painted themselves performing an autopsy of the era. They were all about advanced technology: they used the ocean to power water wheels, digging canals to the wheels so they could power the machinery, and even the wind was used for the windmills. The Dutch currency stayed the same, and to help with trading the first stock market was started in the Amsterdam town square. Not long after that, the Bank of Amsterdam was founded in 1609. Merchants were able to give credit and finance people. "Around 1700 the Dutch Netherlands was ruled by the merchants, mostly in Amsterdam; it was the richest province." So while the rest of the world was taking from everyone and fighting over which religion was the right one, the Dutch were trying to improve themselves; trading was where the money was, and even if they had to spend money to make it, they came away richer for it. They used the elements around them, the ocean and the wind. Their countrymen were their equals; the country worked together. Things like money and finance were agreed on. The Dutch were traders, farmers, fishermen, merchants, bankers, even slave traders; they did what they could to succeed in life while everyone else was fighting. Agriculture was an important economic factor in England: by the sixteenth century they had improved through better breeding of their livestock and better drainage of the lower farmlands. People even came to England to learn how to farm. England imported other crops from different areas, like rice from Asia. Trading was economically favorable, though not as successful as the Dutch trade. Some of the English shipments were things like "timber, flax, and pitch, which were the first of the Baltic trade." England and the Dutch first went to war in 1652-54, and the English were the victors. Some of England's wealth came from its skilled craftsmen.
They were inventive and came up with new techniques. "Two centuries of gunnery had brought mining and metallurgy to a high pitch." In the year 1558 England got a new queen named Elizabeth, the daughter of Henry the Eighth and Anne Boleyn. Before she became queen, her half-sister imprisoned her in 1554, then finally put her in exile on May 23, 1554. When Elizabeth came to the throne by a turn of fate, she didn't have the very men killed who had tried to have her killed. Instead she bade them, in effect, to place her in their hearts and have trust in her. She talked to them as if they were her equals, not just her royal subjects. At her coronation she asked nothing more of her subjects than to think of the good of England; she stated that the commonwealth of England comes first. Instead of fighting amongst themselves, she wanted them to stand together to fight their enemies. Spain and the Dutch were England's greatest enemies. After the defeat of the Spanish in the Gulf of Mexico by John Hawkins, he was appointed treasurer of the navy in 1577. The English came up with a prototype of a ship in 1569 that was faster and easier to maneuver and had a better chance of hitting its target even in turbulent weather. By 1588 the English had eighteen of them built. The year 1588 also saw another sea battle with Spain, and during the next few years England would war with Spain three more times. Religion was a major political issue in England, it seems, from the start: you had the Crusades, fought in the Holy Lands not once but at least three times. Everyone was trying to convert anyone they could to the true church. You had Roman Catholics like John Knox, an ordained priest, and John Calvin, who broke from the Roman Catholic Church and became a Protestant around 1530; Calvinism and Lutheranism were the outcome. Lutheranism was named after a man called Martin Luther, who was an Augustinian monk.
Because of him and his belief that we didn't all need to follow one religion, we today have many different faiths and are able to choose freely what we are and what we believe. Between the years 1562 and 1598 there were at least nine religious wars fought. Even under Elizabeth, Catholics died because they were judged to be traitors. In 1694 the Bank of England was founded; the merchants were able to give credit and finance, there was a rise in the use of paper currency, and instead of using bullion the cheque was invented. "Joint stock companies generated another form of negotiable security, their own shares." In the seventeenth century the coffee houses were being taken over by the start of the London Stock Exchange. Financiers started to offer the public life insurance for the first time. The English became merchants and bankers when it was apparent that more money was to be made if they were more involved in the trading. The economic gains of both the English and the Dutch were closely related: the trading and the banking. They soon realized that they would have to spend money to make it, and that is how the Dutch came up with the stock market, in which everyone was able to have a share. Both countries were into slave trading; the Dutch started their West Indies Company solely for the trading of slaves. This was an important economic factor for both countries. Another of England's successes was the colonizing of many countries and the discovery of even more. New York was a Dutch colony before England took it over. The other reason the English were successful was that they didn't wait for things to come to them; they went out and took them. Whether it was trading, farming, or banking, both England and the Dutch kept trying to improve what they had in life. They went out to make something of themselves, discovering new countries and learning from their mistakes. One of the biggest successes the Dutch had was their trading routes.
They didn't just stop at a few; they went on establishing many new shipping ports. They even designed ships that were able to carry larger bulk items with less manpower. They were able to give better bargains, which made people want to trade with them. Back then it was important that you didn't lose your product, or that was the end of your money until the next season. The Dutch were so successful with their trading that they had control over the Baltic trade routes; Spain, France, and England were just a few of the ports they traded with. The Dutch even had the environment working for them: they had advanced their technology to where the ocean powered their water wheels, which powered the machinery. If the water wheels were inland, they dug canals to where they needed the water. The wind was even utilized with the windmills that were springing up across the countryside. How they worked to maintain the shipping trade was that everyone who could afford it bought stock in the company, and that is how the first stock market was invented. The English, on the other hand, were a close second in trading; one of their biggest commodities was the slave trade. The import trade was just as important: it brought to England better breeding of livestock and different crops like rice, and then you had other shipments such as cotton and rubber, just to name a few. Not only were the English traders, but they became merchants; industries were gaining a foothold, like the brewing establishments and the wool merchants. Mechanical and engineering skill grew; clocks were made with mechanical instruments. Brewing and textile establishments were also a growing business, alongside the cloth and wool merchants that were spreading across the countryside. When Elizabeth became queen she tried to improve England's political standing: instead of the people fighting each other, she stated that the commonwealth of England should come first.
She talked to the people as if they were her equals, not just her royal subjects. The economic success of both the English and the Dutch came from learning how to advance their trading, what things worked and what didn't. They applied and designed different techniques; the farmland and the crops were improving with every century and generation. Religion was always an issue in the political arena: you had your Roman Catholics and your Protestants. Then, when Luther was making an issue of which god and belief was the true one, two more faiths came into play, the Lutherans and the Calvinists, the latter because of a man named John Calvin. The Bible didn't come into print until 1455, and that was the Gutenberg Bible. After that the people were able to read the words of God for themselves.

Tuesday, October 22, 2019

Concert Orchestra experience Essay

I went to the UNT Concert Orchestra on Wednesday, October 3rd, 2012. It was held in Winspear Hall at the Murchison Performing Arts Center at 8:00 pm. The concert was led by conductor Clay Couturiaux and featured soloist Christopher Deane, who played the marimba. The first piece was Variations on a Theme of Tchaikovsky, Op. 35a (1894) by Anton Arensky (1861-1906). The piece was written in 1894, in tribute to Pyotr Il'yich Tchaikovsky (1840-1893). It was based on the theme from the poem "Legend," written by Richard Henry Stoddard (1825-1903). This poem portrays the crucifixion of Christ. Arensky admired Tchaikovsky so much that he used the theme of "Legend" for a set of variations in the second movement of his Second String Quartet. This piece's style is theme and variations. Its instrumentation includes Cello solo, 2 Flutes, 2 Oboes, 2 Clarinets (A), 2 Bassoons, 2 Horns (F), Violins I, Violins II, Violas, Cellos, and Double Basses. The second piece was Concerto for Marimba and Orchestra, Op. 34 (1957) by Robert Kurka (1921-1957). This piece introduced the marimba, proving to the musical world that it could contend with instruments that had long been used in orchestras while also providing a unique sound to the traditional orchestra heard in regular concerts. This piece's style is solo concerto. Its instrumentation includes the marimba and the orchestra. The third piece was Pictures at an Exhibition (1874) by Modest Mussorgsky (1839-1881). This piece was inspired by the paintings of the artist Viktor Hartmann (1834-1873). This piece's style is an orchestral suite. Its instrumentation includes 3 Flutes (2nd and 3rd doubling Piccolos), 3 Oboes (3rd doubling Cor Anglais), 2 Clarinets in A and Bb, Bass Clarinet in A and Bb, Alto Saxophone, 2 Bassoons, Double Bassoon, 4 Horns in F, 3 Trumpets in C, 3 Trombones, Tuba, Timpani, Percussion (xylophone, triangle, rattle, whip, side drum, bass drum, cymbals, suspended cymbal), 2 Harps, Celesta, and Strings.
The pieces I picked were Variations on a Theme of Tchaikovsky, Op. 35a (1894) by Anton Arensky and Concerto for Marimba and Orchestra, Op. 34 (1957) by Robert Kurka. Both of these pieces were distinctly different from one another. The piece by Arensky depicts a sense of deep sadness and despair as a whole. It starts out containing elements of intimacy and moves towards a slow-moving harmony. The structure of the music matched the structure of the original poem. The variations of sounds expressed many shifting moods, such as a dialogue between instruments. The mood changed quickly throughout the piece and showed different parts of the melody, from increments of joy, to sadness, to a deep sorrow. The rhythm continued seamlessly throughout the piece, serving each of the different themes described in its construction. The piece by Kurka produced a new and different type of classical music that is unique to the orchestra. The use of the marimba stood out from the traditional orchestral instruments. The first movement begins with an alternation between the marimba and the orchestra. Its upbeat sound resonates in a catchy chiming manner whose rhythm is clear yet unexpected. It provides a playful side to a usually stern and focused orchestra. As the second movement begins, it is as if the marimba is communicating with the orchestra itself, as if it is trying to fit in with these classic types of instruments through its unique dynamics and resounding tone. It seems to clash with its orchestral counterparts. By the third movement, it seems as if all the instruments reach an agreement on the legitimacy of the marimba through its colorful and exciting solo. Although both pieces are completely different from one another, they both exhibit emotion. Arensky exhibits cruel-sounding music that discusses the importance of religion and a series of events that affects a wide variety of people. It evokes a sense of despair that expresses a deep-sounding melody.
Kurka exhibits a different type of music that discusses the marimba's rise to becoming a part of the classical orchestra. Its colorful timbre expresses a joyful and unique melody that pleases the human ear. Anton Arensky (12 July 1861 - 25 February 1906) was a Russian composer of Romantic classical music, a pianist, and a professor of music. Pyotr Tchaikovsky was the greatest influence on Arensky's musical compositions. Indeed, Rimsky-Korsakov said, "In his youth Arensky did not escape some influence from me; later the influence came from Tchaikovsky. He will quickly be forgotten." The perception that he lacked a distinctive personal style contributed to long-term neglect of his music, though in recent years a large number of his compositions have been recorded. His own values are seemingly non-existent because of the major influence of Tchaikovsky and the absence of his own personal style. Throughout the performance I did perceive a strong sense of historical value, which defines not who Arensky was but his role model Tchaikovsky, and how his music conveyed a strong sense of religious value. Kurka's Concerto for Marimba and Orchestra was the first marimba work to enjoy both widespread public appeal and widespread recognition of having a high level of musical sophistication fit for the concert hall. It debuted during the modern style period. It provided important historical value, with Kurka finally representing everything that early marimba composers set out to do in one piece: create a sophisticated and serious musical work that is both challenging to the performer and has widespread public appeal. I perceived an ongoing struggle throughout the piece, but as the performance continued it conveyed the struggle the instrument went through in order to become a prominent part of the classical orchestra.

A General Approach to the Air-Conditioning System essays

Human beings are born into a hostile environment, but the degree of hostility varies with the season of the year and with the geographical locality. To remove these effects of the environment, and to change outdoor conditions into conditions we find comfortable, we use mechanical systems. The air conditioning system is the most developed system used for this purpose. Automotive air conditioning systems are used for controlling the condition of the air in vehicles. After investigation, the negative effects of this system on the automobile have been minimized, and with the use of safety devices the range of applications of the system has broadened. Now automotive air conditioning systems are among the most needed systems in automobiles. Full air conditioning implies the automatic control of an atmospheric environment, either for the comfort of human beings or animals or for the proper performance of some industrial or scientific process. The adjective "full" demands that the purity, movement, temperature and relative humidity of the air be controlled, within the limits imposed by the design specification. Air conditioning is always associated with refrigeration, and refrigeration accounts for the high cost of air conditioning. The ability to counter sensible and latent heat gains is, then, the essential feature of an air conditioning system and, by common usage, the term air conditioning means that refrigeration is involved. Human beings are born into a hostile environment, but the degree of hostility varies with the season of the year and with the geographical locality. This suggests that the arguments for air conditioning might be based solely on climatic considerations, but although these may be valid in tropical and subtropical areas, they are not for temperate climates with industrialized social structures and rising standards of living. Air conditioning is necessary for the following reasons: heat gains from human bodies, sunlight and el...

Sunday, October 20, 2019

Man with the Movie Camera and the Male Gaze Essays

Man with the Movie Camera: The Male Gaze. Between every audience and a film there will always lie a camera; this camera may seem transparent or invisible, but nevertheless there is a camera and a cameraperson filming the scenes. Laura Mulvey, in her essay Visual Pleasure and Narrative Cinema, coins the term "male gaze," whereby the intermediary, the camera, is metaphorically transformed into the eyes of a male, changing how we view cinema, as well as both the men and women immortalized on the silver screen. Dziga Vertov, a Soviet director, wrote and directed an avant-garde silent documentary film called Man with the Movie Camera in 1929. Despite being famous for its anti-narrative cinematic elements, the film includes a number of narrative developments of human movement in the Soviet Union, which portray power struggles between the government, men, and women. Vertov's Man with the Movie Camera reflects Mulvey's psychoanalytic male gaze by abstaining from the use of a visible subject or actors, by its use of a wide and unusual variety of cinematic camera techniques, and by its male perspective. Man with the Movie Camera lacks a clear or constant visible subject or actor, and thus supports Mulvey's theory of the male gaze in cinema. The film, instead of having recognizable characters or actors, attempts to capture the life of a cameraman, very much from the camera's perspective. Vertov includes shots of the titular cameramen within the film, but many of the scenes are montage or unstaged clips of daily life.
By not utilizing strongly developed characters, the audience does not have a particular perspective from which to view the film, other than that of the exclusively male cameramen. But by including the cameramen, with their cameras, filming within the film, as well as having the audience view another audience watching the same movie, Vertov brings attention to the gaze itself: that there is, in this case, a man looking through the camera and creating the scene. Mulvey says that "There are circumstances in which looking itself is a source of pleasure, just as, in the reverse formation, there is pleasure in being looked at" (200). The male gaze in the example of scenes of cameramen filming within the film represents this pleasure of looking and of capturing a moment. Mulvey goes on to say: "At first glance, the cinema would seem to be remote from the undercover world of the surreptitious observation of an unknowing and unwilling victim. What is seen of the screen is so manifestly shown. But the mass of mainstream film, and the conventions within which it has consciously evolved, portray a hermetically sealed world which unwinds magically, indifferent to the presence of the audience, producing for them a sense of separation and playing on their voyeuristic fantasy" (201). Man with the Movie Camera seems to counteract the illusion of cinema by drawing attention to the act of filming and to the cameramen themselves, and by its lack of characters. Furthermore, because the film is a silent documentary (though an orchestral soundtrack was produced to accompany it), the characters that are present have no voice or audible connection to the audience. Thus, without a consistency of characters or a voice attached to any of the subjects within the film, the audience becomes aware that the camera is ultimately an intermediary between the cameramen and them, and the illusion of narrative cinema is lost.
Mulvey states that in film women are typically the objects, rather than the possessors, of the gaze, because the control of the camera, and thus the gaze, comes from the assumption of heterosexual men as the default target audience for most film genres; in this case, as a result of the male cameramen present in the film (200). Though there are no consistent human characters in Man with the Movie Camera, the camera itself seems to become a subject. In the opening scene of the movie, one of the various cameramen is positioned, by being superimposed, on top of another large, mountainous camera. In later scenes, Vertov seeks to emphasize the power of the visual reach of the camera: it can go anywhere and be anywhere. For example, Vertov creates scenes in which the film superimposes a cameraman inside a glass, women waking up and getting dressed, a woman giving birth, and the baby being bathed. In another scene the camera is subject to simple animation, in which it even mimics human movement like its cameramen. These scenes portray the gaze of the camera, and thus the gaze of the man behind the camera, a literal male gaze, as having the power to film and objectify anything; from this the camera itself becomes the subject amongst a lack of actors. Man with the Movie Camera utilizes an unusually broad range of cinematic technique and staging, which reflects Mulvey's male gaze of cinema. A majority of the scenes in the film appear to be completely unstaged, as the audience is aware that the cameramen being filmed are simply attempting to capture shots of the people of the Soviet Union in their everyday life and routine. By creating a seemingly realistic shot, Vertov changes "the function of film… to reproduce as accurately as possible the so-called natural conditions of human perception. Camera technology… and camera movements…, combined with invisible editing… all tend to blur the limits of screen" (204).
In one clip, Mikhail Kaufman, one of the cameramen as well as Vertov's editor, sets his camera up in a train car to film the passengers sitting there. Despite the people in the train car appearing staged, one child waves to the camera shyly, making the scene lose its formal, undisturbed feeling. In a similar way to voyeurism and the male gaze, Mulvey describes "that of the spectator in direct scopophilic contact with the female form displayed for his enjoyment (connoting male fantasy) and that of the spectator fascinated with the image of his like set in an illusion of natural space, and through him gaining control and possession of the woman within the diegesis" (204). In the case of Kaufman filming what we presume is a diegesis of natural space, according to Mulvey the male gaze of the camera, the cameramen, and the audience creates a spectacle of the natural, or unstaged, world, which, as Mulvey puts it in Freudian terms, creates a voyeuristic male fantasy. The film itself does contain sexual imagery, concurrent with the male fantasy. In several scenes a camera set up in a room continuously films women waking up and getting dressed, then undressed later, which literally fulfills the voyeuristic male fantasy. Similar to the concept of the 'peeping Tom,' Man with the Movie Camera creates an unstaged world which entertains the male gaze. Among other cinematic techniques are many scenes involving tracking shots. Tracking shots, so named because the camera is usually set along a track in order to control its movement, mirror a gentle progression of movement not entirely unlike human walking or running. In this sense, the film once again recreates a natural world through comparably human movement.
Other techniques, such as extreme close-ups, for example of people sitting in the theater audience viewing the same film within the movie, present the viewer with another scene associating the audience with both the active subject, the camera and its gaze (or the gaze of the titular characters), and the passive, objectified individuals, as well as the masses. Lastly, Vertov's Man with the Movie Camera reflects Mulvey's male gaze through its portrayal of men and women through objectification. The basis of this argument comes from the assumption that the audience will take the perspective of the cameramen seen filming within the movie, who are the only consistent characters; thus the audience will take on the gaze of the male. Mulvey says "the man controls the film fantasy and also emerges as the representative of power in a further sense: as the bearer of the look of the spectator, transferring it behind the screen to neutralize the extra diegetic tendencies… as spectacle" (204). One of the first scenes in which women are visible on screen is a montage of footage of cameramen working to achieve difficult or risky shots, such as sitting in front of a moving train or filming in a moving vehicle, spliced with scenes of women putting on pantyhose and brassieres. This reflects the male gaze by objectifying women through the comparison between men working with cameras and taking dangerous shots, on the one hand, and women's legs on the other. In these shots the men are usually facing the camera, or their faces are at the very least visible to the viewer whilst they are filming, yet the women's faces are never visible throughout this montage, only their bodies.
This works on different levels to support a male gaze; it solidifies the association of the audience with the male by showing both the men's faces, and thus their gaze, and their relationship with the camera; the women are not shown to be even capable of a gaze, nor able to be equals with the male gaze by meeting it with their own. According to Mulvey, the male gaze is based upon the theory that "the paradox of phallocentrism in all its manifestations is that it depends on the image of the castrated woman to give order and meaning to its world. An idea of woman stands as a linchpin to the system: it is her lack [of a phallus] that produces the phallus as a symbolic presence; it is her desire to make good the lack that the phallus signifies" (198). Through this reasoning, Man with the Movie Camera, however artificial this montage may be interpreted as being, objectifies women as both a threat of castration and sexual objects, and portrays men as both the men behind the camera and those connected to the actively looking audience. Despite much of the film being unstaged, Man with the Movie Camera contains a few scenes in which the events are staged or choreographed. The scene mentioned earlier, of the women getting dressed, is one of the few obvious examples of staging within the film, as is a scene in which chess pieces are being collected in the middle of the chess board. By having scenes that are obviously staged or choreographed, especially amongst a vast majority of film that is natural, or unstaged, Vertov emphasizes such objectification. Dziga Vertov's Man with a Movie Camera comes to the viewer as a reflection of Laura Mulvey's psychological male gaze by having no consistent characters or narrative development, by its unusual cinematic and plot techniques, and by utilizing an objectifying male gaze.
Vertov's film, much like the majority of silver-screen film from Hollywood's day and age, clearly had examples of a male gaze, a theory from Mulvey, a much more contemporary writer, despite the film's non-traditional, anti-narrative structure.

Saturday, October 19, 2019

Comprehensive Strategic Management Case Study Example | Topics and Well Written Essays - 1750 words

Comprehensive Strategic Management - Case Study Example Haiti's low salary-pegged tourism marketing strategy is a viable cost advantage and results in increasing Haiti's competitiveness in tourism (Witcher 4). Haiti's pro-poor tourism programs train the residents to be good tourist guides. The government's inclusion of poor residents in the nation's policymaking decisions ensures the poor have better chances of finding jobs, especially tourist-guide jobs. The tourism programs include the St. Lucia Heritage Tourism Programme (Kolbe et al. 6). Haiti gained a differentiation advantage (Freeman 85). In 2013, research showed that Haiti has unique products for tourists, offered in diversified packaging. Haiti's culture and lifestyle have attracted and retained many tourists who desire to come back for holidays every year. The uniqueness and hospitality of the Haitians dwarf those of the neighboring countries that are regarded as Haiti's competitors. For instance, the Dominican Republic's culture differs from Haiti's culture (Tiudor 5). Research shows that what Haiti visitors recall as the most meaningful portion of their visit is the friendly and accommodating attitude of the residents. The majority of the tourists, including visiting working-class and middle-class visitors, felt at home and enjoyed the warm hospitality of the Haitian residents. In 2013, another differentiation strategy was inviting friends and relatives of Haitians to visit. Recent research showed 62 percent of Haiti tourists visited their friends and family members (Kolb et al. 10). There are future strategies that will help gain future cost advantage (Hitt, Ireland, and Hoskisson 81). The government can involve its citizens by encouraging people to invite their friends and relatives to visit Haiti in 2014 and beyond. Such encouragement will reduce the government's spending on tourism promotion activities, which are very expensive in terms of labor and advertising.
This can be done by advertising Haiti’s Catholic fiestas to the world starting in

Friday, October 18, 2019

Konica Minolta business solutions customer service training plan Coursework

Konica Minolta business solutions customer service training plan - Coursework Example The assessment will help in the following ways. Organizational analysis examines the areas where training is required and the explicit conditions under which the training will be conducted (Altschuld & Kumar, 2010). It will identify the abilities, skills, and knowledge that employees will need in the future to meet the organization's goal of providing substantial services to its loyal customers while helping health care, legal, and educational customers embrace rapid information movement, reduce costs, improve quality, and enhance security. HR data will be analyzed to indicate areas where the introduction of training will boost performance. Among these are departments with high absenteeism rates, high turnover, and poor performance (Noe, 2010). Changes in automation, technology, or equipment will also need to be identified. After a thorough analysis, appropriate training will be developed, and the management will need to offer the required financial support to ensure the success of the assessment. Customer complaints and employee grievances will also be considered in order to cover the needs of the organization effectively. Factors to be considered include the labor pool, future skills needs, and alterations in laws and conventions (Hawthorne, 2007). Individual analysis will target the employees of Konica Minolta Business Solutions and how they perform. Employees will be reviewed to reveal any deficiencies that will aid in the formulation of an effective and efficient training plan. Additionally, employees will be interviewed (both casually and formally), surveyed, or tested to ascertain their training needs. Employees will be at liberty to indicate the various problems that they have and recommend possible solutions. Task analysis will begin with a comparison of employees' knowledge and skills to

Competing Value Framework Research Paper Example | Topics and Well Written Essays - 1250 words

Competing Value Framework - Research Paper Example Similarly, other tools like the Managerial Behavioral Instrument and the Organizational Culture Assessment Instrument give a concrete path to analyze the organization's position, define where it should be, and assist in realigning the business from the organization's entire culture down to the individual level (Yu & Wu, 2009). Discussion Confucius's techniques, also known as The Great Learning, hold that to develop a great nation one must first attend to one's state; to build a great state, one must first attend to one's family; to develop a great family, one must first cultivate oneself; and to cultivate oneself, one must devote oneself to learning (Blocker & Starling, 2001). These techniques were written in the 5th century B.C. for aspiring leaders and are still accepted and credited today. Therefore, if an organization wants to compete effectively on a global scale, it needs a relevant and appropriate culture in which to execute effective strategies, and we need to examine ourselves before entering this type of transformation. The Competing Values Framework is an essential and effective tool that assists in determining culture not only at the individual level but also at the organizational level. It also assists in charting the path for the change in organizational culture that the strategies to be implemented require (Cameron, 2006). Common models of leadership have divided this popular area into different competing categories. There are various examples of such comparisons in the leadership literature, for instance, task theory versus socio-emotional theory; Theory X versus Theory Y; transactional versus transformational leadership; and participative versus autocratic leadership (Van & Suino, 2012). 
It has also been found that these theories cannot be used for larger comparisons, and no other work compares large mixtures of behaviors to define the required leader behavior and the extent to which it is required (Hart & Quinn, 1993). These traditional models merely make us think about leadership, and their limits lead to inefficiency in defining leadership effectively (Bensimon et al., 1989). However, Robert Quinn was among those who argued that leadership effectiveness requires the simultaneous and balanced mastery of seemingly paradoxical or contradictory abilities: reflectiveness and decisiveness; incremental adjustments and bold moves; and people orientation as well as performance orientation (Hart & Quinn, 1993). Quinn's model is based on the CVF for the analysis of organizations. Initially, it was developed from research conducted to identify the factors of effective organizations. Quinn and Rohrbaugh in 1983 identified, based on their statistical outcomes, two main dimensions essential for organizational effectiveness. The first dimension refers to the focus of the organization, ranging from an internal focus on individuals' development and well-being within the organization to an external focus on the organization's development and well-being.

The Most Appropriate Way of Analyzing and Representing Data Article

The Most Appropriate Way of Analyzing and Representing Data - Article Example This is meant to help a researcher arrive at a simpler way of data analysis without following the rigid linear method. A major problem existed when it came to coding, and five major approaches were identified for the purpose of data analysis, as discussed below. For the purposes of chronology, the steps involved in this case are organizing data files, creating initial codes, description, interpretation, and presentation of the data. As such, this method can be said to be used appropriately for qualitative research. However, not all aspects of the data analysis methods were clearly justified by Creswell. The addition of other elements, such as beginning scrutiny by focusing on a distinct element, could also have made the method more effective. Grounded theory, also known as the constant comparison method, has more detailed stages that include organizing the data, getting to know the data, open coding, axial coding, and checking the results of the analysis. This method has been effectively used to study recovery from child abuse, according to the text; thus it has been used appropriately for research analysis. Creswell clearly gives a step-by-step process before a hypothesis is made. However, the results of the analysis were not presented in the study above, and thus not all processes were justified. The best way to correct this, according to Miles and Huberman, would be to present sub-stages in the presentation of the analysis. Used successfully for analyzing personal experiences, the structured steps include describing the experience, stating significant statements, grouping significant statements, answering the questions of what and how, and lastly writing a description of the phenomenon.

Thursday, October 17, 2019

Accomplishing Life Essay Example | Topics and Well Written Essays - 1250 words

Accomplishing Life - Essay Example Most of the time all we had was each other, and that is why I surprised my family when I told them that I was going to join the Army. My brothers and I did not have any money to go to college when we graduated, and I didn't want to go to college right away anyway. I graduated in 2005, and a year later I joined the United States Army. At the time I didn't feel like I had accomplished anything, even though I did OK in school. I played sports and was part of school activities, but I still had certain goals I wanted to reach, and I knew I had to start somewhere. I wanted to get away from home and see what was out in the real world for me. The Army was the way to go, and then maybe I could start accomplishing my life goals. The journey toward my Army accomplishment started in June 2006, when I was sent off to Fort Jackson, SC for basic training. It was nine weeks of physical and mental training, with a lot of people telling you what to do all the time. There were four platoons of 50-60 people each, and the four platoons made one company. I was in fourth platoon, one of the greatest platoons you could want to be in. I started meeting a lot of great people and did a lot of team building. We ran miles and miles and ruck marched through woods and sand, which was hard when you carried 30 lbs on your back while holding a weapon. We learned about many different weapons and how to shoot them.

Case summary Essay Example | Topics and Well Written Essays - 500 words - 3

Case summary - Essay Example For instance, Paula may be unfamiliar with the hotel's new bedding standard. Another possible reason is that Paula finds it difficult to adapt to this standard because it challenges her usual pace of work. Lisa does not consider these options at all; her evaluation of Paula's work seems too subjective. I believe that Lisa treats Paula differently from the other housekeepers. In her opinion, cleaning is physically hard and even younger employees "are challenged". Despite the high quality of Paula's work, Lisa speaks about Paula's age negatively. If I were a manager, I would not believe Lisa, because her evaluation is discriminatory. All people have to be treated equally at work regardless of their age, sex, gender, religion, etc. In order to make the right decision, I would check whether Paula really cannot meet the hotel's standard. To resolve the issue, I would advise Lisa to talk with Paula about the new bedding standards and her performance. As Paula's line manager, Lisa has to give her constructive feedback about her performance. She has to mention both good and bad aspects of Paula's work to show that her contribution to the team is appreciated. Moreover, Lisa has to make sure that Paula is familiar with the new bedding standard; if Paula finds it difficult to meet, Lisa can offer her a training program. There are younger housekeepers who are also challenged by the new bedding standard. They can join Paula and learn from her how to boost the quality of their work. Younger employees can benefit from cooperation with Paula because she has profound work experience and well-developed skills. At the same time, Paula can adapt to the new norms more quickly if she is assisted by someone from her team. It is obvious that there are some serious issues with teamwork among the housekeepers. Paula's performance can get worse because she is treated as an outsider by her team. As the


Tuesday, October 15, 2019


Cocoa solids Essay Example for Free

Cocoa solids Essay Chocolate! The name brings memories of a sugary and scrumptious sweet in your mouth. Every person in the world, whatever his or her age or sex, loves this delicious sin. In fact, chocolate is one of the most preferred gifts on every occasion: birthday or anniversary, Valentine's Day or Christmas, wedding or farewell. Whether it is your wife or your boyfriend, your kids or your in-laws, you can present chocolates to almost everyone. While eating a chocolate, have you ever thought about how it came into being? If you have been ignorant of the origin of chocolate until now, read the interesting information on its background given below. History of Chocolate The oldest records related to chocolate date back to somewhere around 1500-2000 BC. The high rainfall, soaring temperatures, and great humidity of the Central American rain forests created the perfect climate for the cultivation of the cacao tree. During that time, the Mayan civilization flourished in that region. The Mayans worshipped the cacao tree, believing it to be of divine origin. They also roasted and pounded the seeds of the tree with maize and capsicum (chilli) peppers to brew a spicy, bittersweet drink. The drink was consumed either in ceremonies or in the homes of the wealthy and the religious elite. It is said that the word 'Cacao' was corrupted by the early European explorers into 'Cocoa'. The Aztecs of Central Mexico are believed to have acquired the beans through trade and/or the spoils of war. In fact, cacao beans were considered so prized by the Aztecs that they began using them as a type of currency. They also made a drink similar to the one made by the Mayans and called it 'Xocolatl', a name later corrupted to 'Chocolat' by the Spanish conquistadors. The further corruption of the word, which finally gave it its present form 'Chocolate', was done by the English. Entry into Europe Xocolatl, or chocolate, was brought to Europe by Cortez. 
It was here that sugar and vanilla were added to the Aztec brew to offset its spicy bitterness. The commercialization of chocolate started in Spain, where the first chocolate factories were opened. Spanish treasure fleets brought back dried fermented beans from the New World, roasting and grinding them to make chocolate powder. This powder was used to make a European version of the 'Aztec' drink and was then exported to the other countries of Europe. Within a few years, Spain's drink became popular throughout the continent, and it was around 1520 that it came to England. However, it was only in 1657 that the first chocolate house in England was opened, in London. The popularity of the drink led to a string of other chocolate houses. Since cocoa was so expensive, the houses served as elite clubs, where the wealthy and the business community met to smoke a clay pipe of tobacco, conduct business, and socialize over a cup of chocolate. It's America Again Chocolate came to the place of its birth once again. This time, it was the English colonists who carried chocolate, along with coffee, to the colonies in North America. These colonies later consolidated into the United States of America and Canada. Despite the changes in territorial boundaries, chocolate continued to be a favorite of Americans of every age, sex, and group. To date, the status quo has not changed, and hot chocolate is still one of the favorite drinks of Americans. Modern Chocolate The chocolate of today, in its solid form, took root in England. It was around the mid-1600s that English bakers started adding cocoa powder to cakes. Seeking to make the chocolate drink smoother and more palatable, Van Houten, a Dutch chemist, invented a technique for extracting the bitter-tasting fat (cocoa butter) from the roasted ground beans in 1828. With this, he paved the way for chocolate in its present form. 
It was in 1847 that solid chocolate, as we know it today, was made by Fry & Sons of Bristol, England, by mixing sugar with cocoa powder and cocoa butter. The first milk chocolate was made in 1875 by Daniel Peter, a Swiss manufacturer, who mixed cocoa powder and cocoa butter with sugar and dried milk powder. The rest, as they say, is history! Today, chocolate is made across the globe and liked by almost every person in the world.

Monday, October 14, 2019

Reasoning in Artificial Intelligence (AI): A Review

Reasoning in Artificial Intelligence (AI): A Review 1: Introduction Artificial Intelligence (AI) is one of the developing areas in computer science that aims to design and develop intelligent machines that can demonstrate a high level of resilience in complex decision-making environments (López, 2005[1]). The computations that at any time make it possible to assist users to perceive, reason, and act form the basis for effective Artificial Intelligence (National Research Council Staff, 1997[2]) in any given computational device (e.g., computers, robots). This makes it clear that AI in a given environment can be accomplished only through the simulation of real-world scenarios into logical cases with associated reasoning, in order to enable the computational device to deliver the appropriate decision for the given state of the environment (López, 2005). Reasoning is thus one of the key elements that contribute to the collection of computations for AI. It is also interesting to note that the effectiveness of the reasoning in the world of AI has a significant bearing on the ability of the machine to interpret and react to the environmental status or the problem it is facing (Ruiz et al, 2005[3]). In this report a critical review of the application of reasoning as a component of effective AI is presented to the reader. The report first presents a critical overview of the concept of reasoning and its application in Artificial Intelligence programming for the design and development of intelligent computational devices. This is followed by a critical review of selected research material on the chosen topic, before presenting an overview of the topic including progress made to date, key problems faced, and future directions. 
2: Reasoning in Artificial Intelligence 2.1: About Reasoning Reasoning is deemed the key logical element that provides the ability for human interaction in a given social environment, as argued by Sincák et al (2004)[4]. The key aspect associated with reasoning is the fact that the perception of a given individual is based on the reasons derived from the facts relative to the environment, as interpreted by the individual involved. This makes it clear that in a computational environment involving electronic devices or machines, the ability of the machine to deliver a given reason depends on the extent to which the social environment is quantified into logical conclusions with the help of a reason or combination of reasons, as argued by Sincák et al (2004). A major aspect of human reasoning is that it is accompanied by introspection, which allows the individual to interpret the reason through self-observation and the reporting of consciousness. This naturally provides the ability to develop resilience to exceptional situations in the social environment, thus allowing a non-feeble-minded human to react in one way or another to a situation that is unique in the given environment. It is also critical to appreciate that reasoning in the mathematical perspective mainly corresponds to the extent to which a given environmental status can be interpreted using probability, in order to help predict the reaction or consequence in any given situation through a sequence of actions, as argued by Sincák et al (2004). The aforementioned corresponds to the case of uncertainty in the environment, which challenges the normal reasoning approach of deriving a specific conclusion or decision by the individual involved. The introspective nature developed in humans and some animals provides the ability to cope with uncertainty in the environment. 
This adaptive nature of the non-feeble-minded human is the key ingredient that provides the ability to interpret the reasons for a given situation, as opposed to merely following the logical path that results from the reasoning process. Reasoning in AI, which aims to develop the aforementioned capability in electronic devices so they can perform complex tasks with minimal human intervention, is presented in the next section. 2.2: Reasoning in Artificial Intelligence Reasoning is deemed one of the key components for enabling effective artificial intelligence programs to tackle complex decision-making problems using machines, as argued by Sincák et al (2004). This is naturally because the logical path followed by a program to derive a specific decision is mainly dependent on the ability of the program to handle exceptions in the process of delivering the decision. The effective use of logical reasoning to define the past, present, and future states of the given problem, alongside plausible exception handlers, is therefore the basis for successfully delivering the decision for a given problem in the chosen environment. The key areas of challenge in the case of reasoning are discussed below (National Research Council Staff, 1997). Adaptive Software – This is the area of computer programming under Artificial Intelligence that faces the major challenge of enabling effective decision-making by machines. The key aspect associated with adaptive software development is the need for effective identification of the various exceptions and the ability to enable dynamic exception handling based on a set of generic rules, as argued by Yuen et al (2002)[5]. The concepts of fuzzy matching and de-duplication, which are popular in software tools used for data cleansing in the business environment, follow the above-mentioned concept of adaptive software. 
This is the case where the ability of the software to decide the best possible outcome for a given situation is programmed using a basic set of directory rules, which are further enhanced using references to a variety of combinations that comprise the database of logical combinations of reasons applicable to a given situation (Yuen et al, 2002). The concept of fuzzy matching is also deemed a major breakthrough in the implementation of adaptive programming of machines and computing devices in Artificial Intelligence. This is naturally because the program is able not only to refer to a set of rules and associated references but also to interpret the combination of reasons derived relative to the given situation prior to arriving at a specific decision. From the aforementioned it is evident that the effective development of adaptive software for an AI device, in order to perform effective decision-making in the given environment, mainly depends on the extent to which the software is able to interpret the reasons prior to deriving the decision (Yuen et al, 2002). Adaptive software programming in artificial intelligence is thus not only an area of challenge but also one with extensive scope for development, enabling the simulation of complex real-world problems using Artificial Intelligence. It is also critical to appreciate that adaptive software programming in Artificial Intelligence is mainly focused on the ability not only to identify and interpret the reasons using a set of rules and combinations of outcomes but also to demonstrate a degree of introspection. In other words, the adaptive software in Artificial Intelligence is expected to enable the device to become a learning machine as opposed to a mere efficient exception handler, as argued by Yuen et al (2002). 
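The fuzzy matching and de-duplication idea described above can be sketched in a few lines. The following is a minimal illustration, not any particular commercial data-cleansing tool: it uses Python's standard-library SequenceMatcher as the similarity measure, and the 0.85 threshold and the sample names are arbitrary assumptions made for the example.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Score how alike two strings are, from 0.0 (disjoint) to 1.0 (identical)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def deduplicate(records, threshold=0.85):
    """Keep the first record of each cluster of near-duplicates.

    A record is dropped when it is at least `threshold`-similar to a record
    already kept -- a crude stand-in for rule-based de-duplication.
    """
    kept = []
    for rec in records:
        if all(similarity(rec, k) < threshold for k in kept):
            kept.append(rec)
    return kept

names = ["Jon Smith", "John Smith", "Jane Doe", "Mary Major"]
print(deduplicate(names))  # "John Smith" collapses into "Jon Smith"
```

A production system would layer the "directory rules" the essay mentions (domain-specific normalization, phonetic keys, field weighting) on top of such a raw similarity score.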
This further opens room for exploring knowledge management as part of the AI device, to accomplish a certain degree of introspection similar to that of a non-feeble-minded human. Speech Synthesis/Recognition – This area of Artificial Intelligence can be deemed a derivative of adaptive software, whereby the device deciphers the captured speech/audio stream and performs the appropriate task (Yuen et al, 2002). Speech recognition in AI poses key issues of matching, reasoning to enable access control and decision-making, and exception handling, on top of the traditional issues of noise filtering and isolating the speaker's voice for interpretation. These issues arise in speech recognition, whilst in speech synthesis using computers the major issue is decision-making, as only the decision reached through logical reasoning can produce the appropriate response to be synthesised into speech by the machine. Speech synthesis, as opposed to speech recognition, depends only on the adaptive nature of the software involved, as argued by Yuen et al (2002). This is because the reasons derived from the interpretation of the captured input, using the decision-making rules and combinations for fuzzy matching, form the basis for the actual synthesis of the sentences that comprise the speech. The grammar associated with the sentences so framed, and their reproduction, depends heavily on the initial decision of the adaptive software using the logical reasons identified for the given environmental situation. Hence the complexity of speech synthesis and recognition poses a great challenge for effective reasoning in Artificial Intelligence. Neural Networks – This is deemed to be yet another key challenge faced by Artificial Intelligence programming using reasoning. 
This is because neural networks aim to implement the local behaviour observed in the human brain, as argued by Jones (2008)[6]. The layers of perception, and the complexity associated with the interaction between different layers, combine with decision-making through logical reasoning (Jones, 2008). Computation of a decision using the neural-network strategy is thus aimed at solving highly complex problems subject to a greater level of external influence from uncertainties that interact with, or depend significantly on, one another. The adaptive software approach to developing reasoned decision-making in machines therefore forms the basis for neural networks, with a significant level of complexity and dependency involved (reference 8). The Single Layer Perceptrons (SLPs) discussed by Jones (2008), and the representation of Boolean expressions using SLPs, further make it clear that the effective deployment of neural networks can help simulate complex problems and also provide the ability to develop resilience within the machine. The learning capability, and the extent to which knowledge management can be incorporated as a component of the AI machine, can be defined successfully through the identification and simulation of SLPs and their interaction with each other in a given problem environment (Jones, 2008). Neural networks also open the possibility of handling multi-layer perceptrons as part of adaptive software programming, by independently programming each layer before enabling interaction between the layers as part of the reasoning for decision-making (Jones, 2008). The key influential element for the aforementioned is the ability of the programmer(s) to identify the key input and output components for generating the reasons that facilitate decision-making. 
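The representation of Boolean expressions by single-layer perceptrons can be made concrete with a short sketch. The weights and thresholds below are hand-chosen for illustration, not values taken from Jones (2008):

```python
def slp(weights, bias, inputs):
    """Single-layer perceptron: a weighted sum plus bias, fed through a step function."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# Hand-picked weights realize Boolean AND and OR over binary inputs.
def AND(x1, x2):
    return slp([1.0, 1.0], -1.5, [x1, x2])

def OR(x1, x2):
    return slp([1.0, 1.0], -0.5, [x1, x2])

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"{x1} {x2} -> AND={AND(x1, x2)} OR={OR(x1, x2)}")
```

XOR, famously, cannot be realized by any single weighted threshold of this kind, which is exactly what motivates the multi-layer networks and backpropagation training the essay goes on to describe.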
The backpropagation, or backward error propagation, algorithm deployed in neural networks is a salient feature that helps a computer program achieve the major aspect of learning from mistakes and errors, as argued by Jones (2008). The backpropagation algorithm in multi-layer networks is one of the major areas where the adaptive capabilities of an AI application can be strengthened to reflect the real-world problem-solving skills of the non-feeble-minded human (Jones, 2008). From the aforementioned it is clear that neural-network implementations of AI applications can be achieved to a sustainable level using the backpropagation error-correction technique. This self-correcting and learning capability of the neural-network approach is one of the major elements that can help implement the simulation of complex problems using AI applications. The case of reasoning discussed earlier in the light of neural networks shows that the effective use of the layer-based approach to simulate problems, allowing for interaction between layers, will help achieve reliable AI application development methodologies. The discussion presented also reveals that reasoning is one of the major elements that can help simulate real-world problems using computers or robotics, regardless of the complexity of the problems. 2.3: Issues in the philosophy of Artificial Intelligence The first and foremost issue faced in AI implementations that simulate complex real-world problems is the need to replicate the real-world environment in the computer world, so that the device can compute the reasons and arrive at a decision. This is naturally because the simulation process involved in replicating the environment for a real-world problem cannot always account for exceptions that arise due to unique human behaviour in the interaction process (Jones, 2008). 
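The backward error propagation idea can be illustrated with a deliberately tiny network. The sketch below trains a 2-2-1 sigmoid network on XOR by gradient descent; the layer sizes, starting weights, learning rate, and epoch count are arbitrary choices for the example, not values taken from Jones (2008):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training set: the classic task a single-layer perceptron cannot represent.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 0]

# A 2-2-1 network; each unit's weights are [w_x1, w_x2, bias].
# Starting values are small arbitrary numbers, fixed for reproducibility.
hidden = [[0.5, -0.5, 0.1], [-0.5, 0.5, 0.1]]
out = [0.5, 0.5, -0.3]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in hidden]
    o = sigmoid(out[0] * h[0] + out[1] * h[1] + out[2])
    return h, o

def train_epoch(lr=0.5):
    """One backpropagation pass over the data; returns the summed squared error."""
    sse = 0.0
    for x, t in zip(X, T):
        h, o = forward(x)
        sse += (t - o) ** 2
        # Error terms, using sigmoid'(s) = s * (1 - s).
        d_out = (o - t) * o * (1 - o)
        d_hid = [d_out * out[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Backward pass: adjust each weight against its error gradient.
        for j in range(2):
            out[j] -= lr * d_out * h[j]
            hidden[j][0] -= lr * d_hid[j] * x[0]
            hidden[j][1] -= lr * d_hid[j] * x[1]
            hidden[j][2] -= lr * d_hid[j]
        out[2] -= lr * d_out
    return sse

first = train_epoch()
for _ in range(5000):
    last = train_epoch()
print(f"summed squared error: {first:.3f} -> {last:.3f}")
```

The printed error falls as training proceeds; in general, backpropagation is only guaranteed to follow the error gradient, not to escape local minima, which is part of the difficulty the essay attributes to reasoning in neural networks.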
The lack of this facility, and the fact that the simulated environment cannot alter itself fundamentally but changes only when the state of the entities interacting within it changes, makes this a major hurdle for effective AI application development. Apart from real-world environment replication, a further issue faced by AI programmers is that the reasoning processes, and the exhaustiveness of the reasoning, are limited by the knowledge and skills of the analysts involved. The process of reasoning, being dependent on a non-feeble-minded human's response to a given real-world problem, varies from one individual to another. Hence only the fundamental logical reasons can be simulated into the AI application; the complex derivation of combinations of reasons, which is dependent on the individual, cannot be replicated effectively in a computer, as argued by López (2005). Finally, reasoning in the world of Artificial Intelligence is expected to provide a mathematical combination that delivers the desired results, which cannot be accomplished in many cases owing to the uniqueness of the decision made by the non-feeble-minded individual involved. This poses a great challenge to the successful implementation of AI in computers and robotics, especially for complex problems that have various possible results to choose from.

3: Critical Summary of Research

3.1: Paper 1 – Programs with Common Sense by Dr McCarthy

The rather ambitious paper presented by Dr McCarthy aims to provide an AI application that can help overcome the issues in speech recognition and logical reasoning that pose significant hurdles to AI application development. However, the delivery of the aforementioned in the form of an advice taker is a rather feeble approach to the AI representation of a solution to a problem of this magnitude.
Even though the paper aims to provide an Artificial Intelligence application for verbal reasoning processes that are simple in nature, the interpretation of verbal reasoning in the light of a given problem, relative to an environment, is not a simple component to simulate prior to achieving the desired outcome, as discussed in section 2. "One will be able to assume that the advice taker will have available to it a fairly wide class of immediate logical consequences of anything it is told and its previous knowledge" (Dr McCarthy, p. 2). This statement provides room for the argument that the advice taker program proposed by Dr McCarthy is intended to deliver an AI application using knowledge management as a core component of logical reasoning, since it implies that the program will reach its decisions through access to a wide range of immediate logical consequences of anything it is told and its previous knowledge. The advice taker is therefore not a non-viable approach, as the knowledge management strategy for logical reasoning is a component under both debate and development across a wide range of scientific problems simulated using AI. The Two Stage Fuzzy Clustering based on knowledge discovery presented by Qain in Da (2006)[7] is a classical example of the aforementioned. It is also interesting to note that the knowledge management aspect of artificial intelligence programming depends mainly on the speed of access to, and processing of, information in order to deliver the appropriate decision for the given problem (Yuen et al, 2002). A classical example of the aforementioned would be the use of fuzzy matching, on a real-time basis, for validation or suggestion-list generation in an Online Transaction Processing (OLTP) application.
This is the scenario where a portion of the data provided by the user is interpreted using fuzzy matching to arrive at a set of concrete choices for the user to choose from (Jones, 2008). The process of choosing the appropriate option from the given suggestion list, normally performed by the individual user, is the component being replaced by Artificial Intelligence so that the machine chooses the best fit for the given problem. The aforementioned is evident in the advice taker program, which aims to respond to the verbal reasoning processes in the day-to-day life of a non-feeble-minded individual. The author's objective 'to make programs that learn from their experience as effectively as humans do' makes it clear that a knowledge management approach is required, with the program able to use database-style storage to store and access its knowledge and previous experiences as part of the process. The advice taker may therefore be a viable option if the processing speed for storing and retrieving information from a database of such magnitude, which will grow in size at an exponential rate, can be made available to the AI application. This could be achieved through the use of grid computing technology and other processing capabilities, given the availability of electronic components at affordable prices on the market. The major issue, however, is the design of such an application and the logical reasoning processes for retrieving such information to arrive at a decision for a given problem. From the discussion presented in section 2 it is evident that a higher level of complexity in logical reasoning results in a higher level of computation to account for external variants and so provide the decision appropriate to the given problem. This cannot be accomplished without the ability to process the existing logical reasons from the application's knowledge base.
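The fuzzy-matching suggestion list described above can be sketched using Python's standard-library difflib. This is an analogous illustration, not the specific technique Jones (2008) implements, and the catalogue of values is hypothetical.

```python
import difflib

# A hypothetical catalogue of valid values the OLTP application matches against.
catalogue = ["London", "Londonderry", "Long Beach", "Luton", "Leeds"]

def suggest(partial_input, limit=3):
    """Return a short suggestion list of concrete choices for the user,
    ranked by similarity to the (possibly misspelt) partial input."""
    return difflib.get_close_matches(partial_input, catalogue, n=limit, cutoff=0.4)

print(suggest("Lundon"))
```

The step that AI seeks to automate is the final selection from this list, which here is still left to the user.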
Hence the processing speed and efficiency of computation, in terms of both the hardware architecture and the software capability, is a question that must be addressed to implement such a system. Although the advice taker is viable from a hardware architecture perspective, the hurdle is the software component, which must be capable of delivering the level of abstraction discussed by the author. The ability to change the behaviour of the system merely through verbal commands from the user is the main challenge faced by AI application developers, because it can be achieved only with effective use of the speech recognition and logical reasoning already available to the software, incorporating the new logical reason as an improvement or correction to the application's existing set-up. This is the major hurdle, and it also poses the challenge of identifying which speech patterns are to be treated as corrective commands rather than as statements providing information to the application. From the above arguments it can be concluded that the author's statement – "If one wants a machine to be able to discover an abstraction, it seems most likely that the machine must be able to represent this abstraction in some relatively simple way" – is not a task that is easily realisable. It is also necessary to note that the abstractions realisable by a user can be realised by an AI application only if the application already has a set of reasons, or room to learn new reasons from existing ones, prior to decision-making. This can be accomplished only through complex algorithms, including the error propagation algorithms discussed in section 2.3.
This makes it clear that realising the advice taker's capability to represent any abstraction in a relatively simple way is far-fetched without the appropriate implementation of self-corrective and learning algorithms. Learning that comes not only from capturing the application's previous actions in similar scenarios, but also from generating logical reasons based on new information provided by users, is an aspect of AI that is still under development, yet it is a necessary ingredient for the advice taker. However, considering the timeline of Dr McCarthy's research and the developments since, one can say that AI application development has advanced considerably in interpreting information from the user to provide an appropriate decision using the logical reasoning approach. The author's method for enabling a machine to learn arbitrary behaviour – simulating the possible arbitrary behaviours and trying them out – is used extensively in twenty-first-century implementations of artificial intelligence for computers and robotics. The knowledge developed in machines programmed using AI is built mainly from the simulated arbitrary behaviours whose results are loaded into the machine as logical reasons for the AI application to consult when faced with a given problem. The author's five features necessary for an AI application remain viable in the current AI development environment, although the ability of the system to create subroutines that can be included into procedures as units is still a complex task. The magnitude of the required processor speed, and the related demands on the hardware architecture, is the problem faced by developers, as opposed to the actual development of such a system.
The author's statement that 'In order for a program to be capable of learning something it must first be capable of being told it' concerns one of the many components of AI application development that have seen tremendous development since the dawn of the twenty-first century (Jones, 2008). The multiple-layer processing strategy used in current AI development to address complex real-world problems, with influential variants in both the input and the output, is consistent with the above statement by Dr McCarthy. The neural networks for adaptive behaviour presented in great detail by Pfeifer and Scheier (2001)[8] further justify the aforementioned. This also opens room for discussion on the extent to which the advice taker could learn from experience through neural networks, used as an adaptive-behaviour component for programming robots and other devices facing complex real-world problems. This is the kind of adaptive behaviour represented by the advice taker, which Dr McCarthy described nearly half a century ago. Using neural networks to take commands in the form of sentences (imperative or declarative) is plausible with the adaptive-behaviour strategy described above. Finally, the construction of the advice taker described by the author can be met in the current AI development environment, although it would have been an enormous challenge at the time the paper was published. In the twenty-first-century AI environment, the advice taker could be constructed using a combination of computers and robotics, or either one as the sole operating environment, depending on the delivery scope of the application and its operational environment.
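The multi-layer networks trained by backward error propagation, referred to throughout this section, can be sketched in miniature as follows. The XOR task, the network sizes, the learning rate and the epoch count are all illustrative choices, not taken from the papers under discussion.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

HIDDEN = 3
# Weights for a tiny 2-input -> 3-hidden -> 1-output network.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
W2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(HIDDEN)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(HIDDEN)) + b2)
    return h, y

def train_step(x, target, lr=0.5):
    """One forward pass, then propagate the error backward layer by layer."""
    global b2
    h, y = forward(x)
    delta_out = (y - target) * y * (1.0 - y)  # error term at the output unit
    for j in range(HIDDEN):
        # Hidden-layer error term, derived from the output error (backprop).
        delta_h = delta_out * W2[j] * h[j] * (1.0 - h[j])
        W2[j] -= lr * delta_out * h[j]
        for i in range(2):
            W1[j][i] -= lr * delta_h * x[i]
        b1[j] -= lr * delta_h
    b2 -= lr * delta_out
    return (y - target) ** 2

xor_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
before = sum(train_step(x, t, lr=0.0) for x, t in xor_data)  # lr=0: measure only
for _ in range(5000):
    for x, t in xor_data:
        train_step(x, t)
after = sum((forward(x)[1] - t) ** 2 for x, t in xor_data)
print(before, after)
```

The network corrects itself from its own errors, which is precisely the learning-from-mistakes capability the section attributes to backpropagation.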
Some of the hurdles faced, however, would be speech recognition and the ability to distinguish imperative sentences from declarative sentences. The second issue for the advice taker is the scope of application: simulating the various instances needed to generate the knowledge database is plausible only within the defined scope of the application's target environment, as opposed to the non-feeble human mind, which can interact with multiple environments with ease. The multiple-layer neural networks approach may help tackle this problem only to a certain level, as distinguishing between different environments represented as layers is not easily plausible without knowledge of their interpretation stored within the system. Finally, a self-corrective AI system is plausible in the twenty-first century, but self-learning from the logical reasons provided is still scarce and requires a greater level of design resilience to account for input and output variants of the system. The stimulus-response forms described by the author are realisable using a multiple-layer neural networks implementation, with the limitation that the advice taker's scope is restricted to a specific problem or set of problems. The adaptive behaviour simulated using the neural networks mentioned earlier supports the ability to achieve the aforementioned.

3.2: Paper 2 – A Logic for Default Reasoning

Default reasoning in twenty-first-century AI applications is one of the major elements contributing to the effective functioning of systems without their terminating unexpectedly when unable to handle an exception raised by a particular combination of logic, as argued by Pfeifer and Scheier (2001).
This is naturally because the effective use of default reasoning in the current AI development environment aims to provide a fall-back even when the simulated reasons and rule combinations are managed as exhaustively as possible. Moreover, the definition, or perception, of an exhaustive list for a given environment is limited by the number of simulations the designers can develop at the time of AI application design and by the adaptive capabilities of the AI system after implementation (Pfeifer and Scheier, 2001). Effective use of default reasoning in AI development can therefore be achieved only by handling the wide variety of exceptional conditions that arise in the normal operating environment of the problem being simulated (Pfeifer and Scheier, 2001). In the light of the above arguments, the author's characterisation of default reasoning as beliefs which may well be modified or rejected by subsequent observations holds true in the current AI development environment. The default reasoning strategy described by the author is a critical component of AI application development, mainly because defaulting reasons not only prevent unhandled exceptions leading to abnormal termination of the program but also support the learning-from-experience strategy implemented within the application. The learning from experience described in section 2, and the discussion presented in section 3.1, reveal that assigning a default reason in an adaptive AI application provides room for identifying the exceptions that occur in the course of solving problems, thus capturing new exceptions that can replace the existing default value.
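A default conclusion that is assumed in the absence of information to the contrary, and withdrawn when a contradicting observation arrives, can be sketched as follows. The bird/penguin example is the classic illustration of default reasoning, used here for concreteness; the names and data are illustrative, not drawn from the paper's text.

```python
# Exceptions known at design time; new ones can be added as they are observed,
# mirroring beliefs that "may be modified or rejected by subsequent observations".
known_exceptions = {"penguin", "ostrich"}

def flies(bird, observations=frozenset()):
    """Default conclusion: birds fly, unless information to the contrary exists."""
    if bird in known_exceptions or ("injured", bird) in observations:
        return False          # the default is blocked by a known exception
    return True               # consistent to assume, so the default applies

print(flies("sparrow"))                            # default holds
print(flies("penguin"))                            # blocked by an exception
print(flies("sparrow", {("injured", "sparrow")}))  # default withdrawn
```

Note the non-monotonic character: adding the observation removes a conclusion that was previously held, which ordinary monotonic logic cannot express.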
Furthermore, the effective use of the default reasoning strategy also limits the learning capabilities of the application in cases where the adaptive behaviour of the system is not effective, even though the default reason prevents abnormal termination of the system. The logical representation of exceptions and defaults, and the author's interpretation of the phrase 'in the absence of any information to the contrary' as 'consistent to assume', justify the aforementioned. It is further evident from the author's arguments that creating a default reason, and implementing it in the neural network as a set of logical reasons, is more complex than the typical case-wise conditional analysis of whether a given condition holds true for the situation at hand. Another interesting factor is that the definition of the conditions must incorporate room for partial success, since the typical logical dichotomy of success or failure does not always apply to AI application problems. Hence it is necessary to ensure that the application can accommodate partial success, as well as produce a concrete result for the given problem, in order to generate an appropriate decision. The discussion of the non-monotonic character of the logic underlines the need to formulate the condition for default reasoning deliberately, rather than merely defaulting because the system fails to accommodate changes in the environment, as argued by Pfeifer and Scheier (2001). Carbonell (1980)[9] further argues that type hierarchies and their influence on the AI system have a significant bearing on the default reasoning strategies defined for a given AI application.
This is naturally because introducing type hierarchies into an AI application enables it not only to interpret the problem against the set of rules and the reference data stored as reasons, but also to place the problem within the hierarchy in order to identify whether a default reason can viably be applied to it. The arguments of Carbonell (1980) on Single-Type and Multi-Type inclusion, with either strict or non-strict partitioning, justify the above argument. It is further critical to appreciate that the effective implementation of a type hierarchy in a logical reasoning environment gives the AI application a greater level of granularity in defining and interpreting the reasons pertaining to a given problem (Pfeifer and Scheier, 2001). It is this state of the AI application that can help achieve a significant level of independence and the ability to interact effectively in the environment with minimal human intervention. The discussion of inheritance mechanisms presented by Carbonell (1980), alongside the use of inheritance properties as the basis for implementing twenty-first-century AI systems (Pfeifer and Scheier, 2001), further justifies the need for default reasoning as an interactive component, as opposed to a problem-solving constant used merely to prevent abnormal termination.
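The interaction between type hierarchies, inheritance and defaults described above can be sketched directly with class inheritance: a default attached to a general type is inherited down the hierarchy and overridden at a more specific node. The types and values here are illustrative, not examples from Carbonell (1980).

```python
class Vehicle:
    wheels = 4                # default reason attached to the general type

class Car(Vehicle):
    pass                      # inherits the default unchanged

class Motorcycle(Vehicle):
    wheels = 2                # a more specific type overrides the default

print(Car().wheels, Motorcycle().wheels)
```

Placing a problem instance at the right node of the hierarchy thus determines which default applies to it, which is the granularity the section attributes to type hierarchies.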