

Looking beyond Infinity

Aleph Naught Club


I have often pondered over the roles of knowledge or experience, on the one hand, and imagination or intuition, on the other, in the process of discovery. I believe that there is a fundamental conflict between the two, and knowledge, by advocating caution, tends to inhibit the flight of imagination. Therefore a certain naiveté, unburdened by conventional wisdom, can sometimes be a positive asset. -- Harish-Chandra

Foreword

DISHA was conceptualized to give the students of Mathematics a glimpse of the several dimensions in which mathematicians think, ponder and do research. The effort was quite fruitful, with some really fascinating topics of Mathematics being taken up. On behalf of the faculty of the Mathematics Department, School of Sciences and Humanities, Jain University, I would like to congratulate the editorial team, and I sincerely hope that this magazine shall inspire our young students to develop a research aptitude and an aesthetic sense for Mathematics.

Prof. Arathi Sudarshan
Head, Department of Mathematics
School of Sciences and Humanities, JU.

EDITORIAL COMMITTEE Chief Editor J.V.Ramana Raju

Associates Puneeth Gowda Puneeth Krishna Prathik Kumar Sunil.M.N Heena Karnavat Yoshitha Sushmitha.Diwakar

From the Desk of the Editor

Isaac Barrow once said, "Mathematics is the unshaken Foundation of Sciences, and the plentiful Fountain of Advantage to human affairs." This quote tells the whole story; one cannot afford to underestimate the power of mathematical intuition. This magazine, conceived as an opportunity for young budding scientists to learn about the many facets of mathematical knowledge, has probably served its purpose. From early thoughts about the invention of zero by the Indian mathematical legends to the complex arena of using mathematics in graphics systems and cryptography, there are many interesting topics worthy of mention. To give a sample of themes: Suraj from the MEC combination has explored the fascinating topic of curved manifolds and the geometry of black holes, while Harshitha explains how mathematics helps in hiding information collected during surveys. Codes and ciphers are taken up by Heena and Amrutha respectively. It was the opinion of several mathematicians that certain areas like algebra and number theory were too pure to be touched by mundane human tasks. But the application of polynomials and cyclic groups in today's communication systems has made us believe that any piece of mathematics has the potential to generate commercial utilities. The three pillars of Mathematics, namely Algebra, Analysis and Geometry, have now given rise to around 104 subtopics according to the American Mathematical Society's subject classification of 2010. So there is a high probability that two mathematicians sharing a dais may not be knowledgeable about each other's chosen specializations. I would like to state that the canvas is very wide and there is a huge opportunity for today's youngsters to do cutting-edge research in various themes of Mathematics.
Not many know that Indian mathematics has been experiencing a resurgence, with quite a good number of mathematicians of international repute working in some of the premier research institutes. India hosted the 2010 ICM, the International Congress of Mathematicians, a prestigious event held once every four years. It is at this event that the Fields Medal, the "Nobel Prize" of mathematics, is given away. I wish all the readers a very lively 'stroll' through this Mag!

J.V.Ramana Raju
Department of Mathematics

Mathematics of RSA Cryptosystem
Amrutha K, 5th Semester

Cryptography is the practice and study of hiding information; modern cryptography draws heavily on mathematics. Number theory may be one of the purest branches of mathematics, but it has turned out to be one of the most useful when it comes to computer security. Sensitive data exchanged between a user and a Web site needs to be encrypted to prevent it from being disclosed to or modified by unauthorized parties. The encryption must be done in such a way that decryption is only possible with knowledge of a secret decryption key, and the decryption key should be known only to authorized parties. In traditional cryptography, such as was available prior to the 1970s, the encryption and decryption operations are performed with the same key. This means that the party encrypting the data and the party decrypting it need to share the same key, and establishing a shared key between the parties is an interesting challenge. If two parties already share a secret key, they can easily distribute new keys to each other by encrypting them with prior keys. Researchers then suggested that perhaps encryption and decryption could be done with a pair of different keys rather than with the same key. The decryption key would still have to be kept secret, but the encryption key could be made public without compromising the security of the decryption key. This concept was called public-key cryptography because the encryption key could be made known to anyone. Two basic facts and one conjecture in number theory, the ease of prime generation and the apparent hardness of integer factorization, prepare the way for today's RSA public-key cryptosystem.
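Before turning to the details, here is a toy sketch of the full RSA pipeline with tiny primes. All numbers and names here are our own illustrative choices; real keys use primes of hundreds of digits.

```python
# Toy RSA sketch (illustrative only; never use such small primes in practice).

def make_keys(p, q, e=17):
    """Build a toy RSA key pair from primes p and q."""
    n = p * q
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    d = pow(e, -1, phi)              # private exponent: e*d = 1 (mod phi)
    return (n, e), d

def encrypt(m, public):
    n, e = public
    return pow(m, e, n)              # c = m^e mod n

def decrypt(c, d, public):
    n, _ = public
    return pow(c, d, n)              # m = c^d mod n

public, private = make_keys(61, 53)  # n = 3233; factoring n recovers the key
c = encrypt(42, public)
assert decrypt(c, private, public) == 42
```

The security rests on exactly the premise discussed in this article: anyone can compute n = pq, but recovering p and q (and hence the private exponent) from n alone is believed to be computationally hard for large primes.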
The basic premise behind the RSA cryptosystem is that although multiplying two numbers is a simple process, factoring the product back into the original two numbers is much more difficult to do computationally. The difficulty increases as we use larger and larger numbers. In order to encrypt a message using the RSA cryptosystem, one must first choose two large prime numbers p and q, usually of about 50 digits each.

Conclusions: Public-key cryptography finds its strongest application when parties who have no prior relationship want to exchange sensitive data with each other. Equally important was the fact that the mathematical operations in public-key cryptography required considerable computational resources relative to computer performance at the time.

REED SOLOMON CODES
Heena Karnavat, 5th Sem, BSc PMCs

Introduction: A code is a mapping from a vector space of dimension m over a finite field K (denoted by Vm(K)) into a vector space of higher dimension n > m over the same field (Vn(K)). K is usually taken to be the field of two elements Z2, in which case it is a mapping of m-tuples of binary digits (bits) into n-tuples of binary digits. Such mappings

are used to encode data so that errors that creep in during transit through the channel can easily be detected and corrected. Irving S. Reed and Gustave Solomon invented a coding scheme called Reed-Solomon codes. In coding theory, RS codes are non-binary cyclic error-correcting codes. They described a systematic way of building codes that could detect and correct multiple random symbol errors. An RS code can be used as an erasure code and can correct up to t known erasures. In RS coding, source symbols are viewed as coefficients of a polynomial p(x) over a finite field. RS codes can be viewed as cyclic BCH codes (a class of parameterized error-correcting codes). The code C, which is a vector space, can be given as C = {(f(x1), f(x2), ..., f(xn)) : f ∈ F[x], deg(f) < k}. Here (x1, x2, ..., xn) is the input sequence of n distinct symbols taken from a finite field F. Classic View: In practice, the encoding symbols are viewed as the coefficients of an output polynomial s(x) constructed by multiplying the message polynomial p(x) of maximum degree k-1 by a generator polynomial g(x) of degree t = n-k. The generator polynomial g(x) is defined by having the successive powers of a primitive element α as its roots, i.e.
g(x) = (x − α)(x − α^2) ··· (x − α^t) = g_0 + g_1 x + g_2 x^2 + ··· + g_{t−1} x^{t−1} + x^t


The transmitter sends the n coefficients of s(x) = p(x)g(x), and the receiver can use polynomial division of the received polynomial by g(x) to determine whether the message is in error; a non-zero remainder means that an error was detected. Historically, RS codes were first practically implemented in 1977 in the Voyager program (spacecraft equipment), in the form of concatenated codes. In 1982 the first commercial application of RS codes appeared with the Compact Disc, where two interleaved RS codes are used. Later, bar codes, satellite transmission and broadcasting also began using RS codes.

Conclusion: Reed-Solomon codes are a powerful class of non-binary block codes, particularly used for correcting burst errors. Since coding efficiency increases with code length, RS codes have a special attraction: they can be configured with long block lengths (in bits) with less decoding time than other codes of similar length. This is because the decoder logic in RS codes works with symbol-based rather than bit-based arithmetic. Hence, for 8-bit symbols, the arithmetic operations would all be at the byte level. This increases the complexity of the logic compared with binary codes of the same length, but it also increases the throughput.
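The "evaluation" view of RS codes mentioned above, C = {(f(x1), ..., f(xn))}, can be sketched in a few lines. This is a simplified illustration over the small prime field GF(101) (our own choice; real codecs work over GF(2^8)), showing that any k surviving symbols recover the k message symbols, which is exactly erasure correction.

```python
# Minimal Reed-Solomon sketch in the "evaluation" view over GF(101).
P = 101  # field modulus (chosen small for illustration)

def rs_encode(msg, xs):
    """Evaluate the message polynomial (coefficients msg) at points xs."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in xs]

def rs_recover(points):
    """Lagrange-interpolate any k surviving (x, y) pairs to rebuild
    the k message coefficients."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        # basis polynomial l_i(x) = prod_{j != i} (x - xj) / (xi - xj)
        num = [1]                     # coefficients, lowest degree first
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # multiply num by (x - xj)
            num = [(a - xj * b) % P for a, b in zip([0] + num, num + [0])]
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P
        for t in range(k):
            coeffs[t] = (coeffs[t] + scale * num[t]) % P
    return coeffs

msg = [5, 17, 42]                      # k = 3 source symbols
code = rs_encode(msg, list(range(7)))  # n = 7 encoded symbols
# erase any 4 symbols; any 3 survivors still determine the message
survivors = [(0, code[0]), (3, code[3]), (6, code[6])]
assert rs_recover(survivors) == msg
```

A degree-(k−1) polynomial is determined by any k of its values, so up to n − k erasures can be tolerated, matching the "up to t known erasures" claim in the text.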

Fractal Geometry
Sundar M.N, 5th sem, B.Sc, PCM

Nature loves symmetry, and hence we observe symmetry in anything and everything around us. Mathematics, geometry in particular, loves analysing these symmetrical systems so as to help us understand Nature and its elements in a better manner. A mathematician uses several tools in geometry to get a hold on the study of structures, and one such tool is the concept of FRACTALS. A fractal is a fragmented geometric shape that can be split into parts, each of which has the shape of the original. In other words, it is a geometrical shape that has self-similarity. The word 'fractal' was derived from the Latin fractus, meaning "broken" or "fractured". Though the study of fractals was started by the mathematician and philosopher Gottfried Leibniz, who was working on recursive self-similarity, Benoît Mandelbrot is honoured as the father of fractal geometry, since the field was categorised and the name coined by him in the year 1975. In the meantime, several mathematicians like Karl Weierstrass, Georg Cantor, Felix Hausdorff and Helge von Koch unknowingly worked with fractals while studying functions that were continuous but not differentiable. We observe that a mathematical fractal is based on an equation that undergoes iteration, and its roots can be traced back to functions that are continuous but not differentiable. The chronology of the development of fractals is as below: Gottfried Leibniz, mathematician and philosopher, considered recursive self-similarity in the 17th century, thus starting fractal geometry. Karl Weierstrass gave an example of a function with the non-intuitive property of being everywhere continuous but nowhere differentiable in 1872, the first fractal. Helge von Koch in 1904 gave a more geometric definition of a similar function, which is now called the Koch curve.
Later, many mathematicians created several shapes using functions, like the triangle and the carpet constructed by Waclaw Sierpinski in 1915. The idea of self-similar curves was taken further by Paul Pierre Lévy, whose 1938 paper described a new fractal curve, the Lévy C curve. Georg Cantor also gave examples of subsets of the real line with unusual properties, called Cantor sets, that are fractals.

Every geometrical structure has a set of characteristic features which help mathematicians to classify these structures and analyse them. Similarly, fractals also have some characteristic features, which are listed below: very fine structure, great irregularity in the geometric sense, self-similarity, and a dimension that is often a fraction. One can list several real-life applications of fractals: for instance, understanding coastline complexity, the kinetics of enzymes, election results, the analysis of music, and computer games and graphics, to name a few.
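The "equation that undergoes iteration" and the "dimension often a fraction" can both be sketched for the Koch curve: each iteration replaces every segment by four segments one-third as long, so the length grows by 4/3 per step, and the self-similarity dimension is log 4 / log 3.

```python
import math

# Koch curve: each iteration multiplies the total length by 4/3.
def koch_length(iterations, base=1.0):
    return base * (4 / 3) ** iterations

# Self-similarity dimension: N copies scaled by 1/s gives log N / log s.
koch_dimension = math.log(4) / math.log(3)   # about 1.26, a fraction

assert koch_length(0) == 1.0
assert abs(koch_length(2) - 16 / 9) < 1e-12
assert 1 < koch_dimension < 2     # more than a line, less than a plane
```

The dimension strictly between 1 and 2 is exactly the "fractional dimension" feature listed above, and the unbounded growth of the length explains why coastline measurements depend on the ruler used.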

MATHEMATICAL BIOLOGY
Srima Ghamire, Vth Semester B.Sc

Mathematics, a language of symbols, is the only language shared by all human beings; with this universal language, all of us, no matter what our background, can become literate in the shared language of maths. With this language we can explain the mysteries of the universe or the secrets of DNA, or give an overview of mathematical concepts as related to biological science. Mathematical biology is an interdisciplinary scientific research field with a range of applications in biology, medicine and biotechnology. Mathematical biology aims at the mathematical representation, treatment and modelling of biological processes, using a variety of applied mathematical techniques and tools. For example, the recent development of mathematical tools such as chaos theory helps to understand complex, non-linear mechanisms in biology. A model of a biological system is converted into a system of equations. The solution of these equations, by either analytical or numerical means, describes how the biological system behaves at equilibrium. There are many different types of equations, and the type of behaviour that can occur depends on both the model and the equations used. The model often makes assumptions about the system, and the equations may also make assumptions about the nature of what may occur. Mathematical biology has both practical and theoretical applications in biological, biomedical and biotechnology research. In biology, the areas of mathematics being applied are calculus, probability theory, statistics, linear algebra, abstract algebra, graph theory, combinatorics, algebraic geometry, topology, dynamical systems, differential equations and coding theory.
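The "model converted into equations, solved to find equilibrium" workflow can be sketched with the classic logistic growth model dN/dt = rN(1 − N/K), our own choice of example; r, K and the step size below are purely illustrative.

```python
# Logistic growth solved numerically (explicit Euler); the population
# should settle at the carrying capacity K, the model's equilibrium.
def simulate_logistic(n0, r, k, dt=0.01, steps=5000):
    n = n0
    for _ in range(steps):
        n += dt * r * n * (1 - n / k)   # Euler step of dN/dt = rN(1 - N/K)
    return n

final = simulate_logistic(n0=10.0, r=0.5, k=1000.0)
# numerical equilibrium matches the analytical steady state N = K
assert abs(final - 1000.0) < 1.0
```

Here the numerical solution confirms what setting dN/dt = 0 gives analytically: the non-trivial equilibrium is N = K, exactly the kind of equilibrium behaviour the text describes.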
Applying mathematics to biology has a long history, but recently there has been an explosion of interest in the field: mathematical biology offers an approach to cancer, helps explain the double helix of DNA and, last but not least, exposes students to diverse quantitative concepts that provide the conceptual foundation for studying biological systems. Mathematics is the gateway and the key to the other sciences.

KNOT THEORY
Sindhuja, 5th Semester B.Sc

In topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In precise mathematical

language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. Knots can be described in various ways. Given a method of description, however, there may be more than one description that represents the same knot. For example, a common method of describing a knot is a planar diagram called a knot diagram. Any given knot can be drawn in many different ways using a knot diagram. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot. A complete algorithmic solution to this problem exists, but its complexity is unknown. In practice, knots are often distinguished by using a knot invariant, a "quantity" which is the same when computed from different descriptions of a knot. Important invariants include knot polynomials, knot groups, and hyperbolic invariants. The original motivation for the founders of knot theory was to create a table of knots and links, which are knots of several components entangled with each other. Over six billion knots and links have been tabulated since the beginnings of knot theory in the 19th century. To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces, and objects other than circles can be used. Higher-dimensional knots are n-dimensional spheres in m-dimensional Euclidean space. The generalized Poincaré conjecture states that every simply connected, closed n-manifold is homeomorphic to the n-sphere. Every n-dimensional knot can therefore be stretched into a trivial n-sphere.
N-dimensional knots are generally not decomposable into 2-dimensional knots, though they can be projected to superpositions of lower-dimensional knots.

Application of Calculus
Vinutha M, 5th sem, BSc (PMCs)

Introduction: Calculus is a very versatile and valuable tool. It is a form of mathematics which was developed from algebra and geometry. It is made up of two interconnected topics, differential calculus and integral calculus. Differential calculus is the mathematics of motion and change, whereas integral calculus covers the accumulation of quantities, such as area under a curve. Calculus is deeply integrated in every branch of the physical sciences. It is found in computer science, statistics, engineering, economics, business, and medicine. Modern developments such as architecture, aviation and other technologies all make use of what calculus can offer.

Right from the concept of the limiting process to more intricate calculations involving functionals, series representations etc., calculus concepts are very much applied in disciplines like business, economics and engineering, and of late even in biology and medicine. We list a few exciting applications of calculus.

Practical applications of Calculus:

Finding the Slope and Length of a Curve: Calculus gives us a generalized method of finding the slope of a curve. The slope of a line is elementary; using some basic algebra it can be found. With regular maths you can do the straight-incline problem, whereas with calculus you can do the curving-incline problem. Calculus allows us to find out how steeply a curve will tilt at any given point. This can be very useful in any area of study. For example, you can determine the length of a cable hung between two towers that has the shape of a catenary (which is different, by the way, from a simple circular arc or a parabola). Knowing the exact length is of obvious importance to a power company planning hundreds of miles of new electric cable.

Calculating the Area of Any Shape: Calculus can be used to find the area of any shape. Even though we have some standard methods to do this, calculus makes it much easier. You can calculate the area of the flat roof of a home with regular math. With calculus you can compute the area of a complicated, non-spherical shape like the dome of the Houston Astrodome. Architects designing such a building need to know the dome's area to determine the cost of materials and to figure the weight of the dome. The weight, of course, is needed for planning the strength of the supporting structure.

Conclusion: Calculus is one of the greatest inventions of modern science. The success of calculus has been extended over time into various other important topics in mathematics. Some are: differential equations, vector calculus, calculus of variations, complex analysis and differential topology.
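The catenary cable calculation mentioned above can be sketched numerically. The parameters a and b below are made up for illustration; the arc length comes from integrating sqrt(1 + (dy/dx)^2), and for y = a·cosh(x/a) calculus gives the closed form 2a·sinh(b/a).

```python
import math

# Length of a cable y = a*cosh(x/a) hung between x = -b and x = b,
# computed by summing sqrt(1 + (dy/dx)^2) dx with the midpoint rule.
def catenary_length(a, b, n=100000):
    dx = 2 * b / n
    total = 0.0
    for i in range(n):
        x = -b + (i + 0.5) * dx            # midpoint of each slice
        total += math.sqrt(1 + math.sinh(x / a) ** 2) * dx
    return total

a, b = 10.0, 15.0                          # illustrative cable parameters
numeric = catenary_length(a, b)
exact = 2 * a * math.sinh(b / a)           # closed form from calculus
assert abs(numeric - exact) < 1e-6
```

The agreement between the numeric sum and the closed form is the whole point of the section: the integral that calculus evaluates exactly is the same quantity the power company would otherwise have to approximate.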
Every process occurring in nature can be analyzed by calculus methods; one can visualize such processes by graphs, using the concepts of slope and the second derivative. The whole edifice of Newtonian mechanics and celestial mechanics is understood by integrating the concepts of geometry and calculus. Hence calculus is one of the greatest inventions of modern science.

Banach-Tarski Theorem
Bhavani, 5th Semester BSc

Can a ball be decomposed into a finite number of point sets and reassembled into two balls identical to the original? The Banach-Tarski paradox is a theorem in set-theoretic geometry which states that a solid ball in 3-dimensional space can be split into a finite number of non-overlapping pieces, which can then be put back together in a different way to yield two identical

copies of the original ball. The reassembly process involves only moving the pieces around and rotating them, without changing their shape. However, the pieces themselves are complicated: they are not usual solids but infinite scatterings of points. A stronger form of the theorem implies that given any two "reasonable" solid objects (such as a small ball and a huge ball), solid in the sense of the continuum, either one can be reassembled into the other. This is often stated colloquially as "a pea can be chopped up and reassembled into the Sun". The reason the Banach-Tarski theorem is called a paradox is that it contradicts basic geometric intuition. "Doubling the ball" by dividing it into parts and moving them around by rotations and translations, without any stretching, bending, or adding new points, seems to be impossible, since all these operations preserve the volume, but the volume is doubled in the end. Unlike most theorems in geometry, this result depends in a critical way on the axiom of choice in set theory. This axiom allows for the construction of non-measurable sets, collections of points that do not have a volume in the ordinary sense and require an uncountably infinite number of arbitrary choices to specify. Stefan Banach and Alfred Tarski gave a construction of such a "paradoxical decomposition", based on earlier work by Giuseppe Vitali concerning the unit interval and on the paradoxical decompositions of the sphere by Felix Hausdorff, and discussed a number of related questions concerning decompositions of subsets of Euclidean spaces in various dimensions. They proved the following more general statement, the strong form of the Banach-Tarski paradox: Given any two bounded subsets A and B of a Euclidean space in at least three dimensions, both of which have a non-empty interior, there are partitions of A and B into a finite number of disjoint subsets, A = A1 ∪ A2 ∪ ... ∪ Ak and B = B1 ∪ B2 ∪ ... ∪ Bk, such that for each i between 1 and k, the sets Ai and Bi are congruent.
Now let A be the original ball and B the union of two translated copies of the original ball. Then the proposition means that you can divide the original ball A into a certain number of pieces and then rotate and translate these pieces in such a way that the result is the whole set B, which contains two copies of A. Any two bounded subsets of 3-dimensional Euclidean space with non-empty interiors are thus equidecomposable. While apparently more general, this statement is derived in a simple way from the doubling of a ball by using a generalization of the Schröder-Bernstein theorem due to Banach: if A is equidecomposable with a subset of B and B is equidecomposable with a subset of A, then A and B are equidecomposable. The Banach-Tarski paradox can be put in context by pointing out that for the two sets in the strong form of the paradox, there is always a bijective function that can map the points in one shape into the other in a one-to-one fashion. In the language of Georg Cantor's set theory, these two sets have equal cardinality. The proof involves set-theoretic axioms, including the Zermelo-Fraenkel axioms.

Mathematics of Game Theory
Stalin F, 5th Semester

Basic concepts of game theory

Game: A conflict in interest among n individuals or groups (players). There exists a set of rules that define the terms of exchange of information and pieces, the conditions under which the game begins, and the possible legal exchanges in particular conditions. The entirety of the game is defined by all the moves to that point, leading to an outcome.

Move: The way in which the game progresses between states through exchange of information and pieces. Moves are defined by the rules of the game and can be made in alternating fashion, occur simultaneously for all players, or continue for a single player until he reaches a certain state or declines to move further. Moves may be made by choice or by chance. For example, choosing a card from a deck or rolling a die is a chance move with known probabilities, whereas asking for cards in blackjack is a choice move.

Information: A state of perfect information is when all moves are known to all players in a game. Games without chance elements, like chess, are games of perfect information, while games with chance involved, like blackjack, are games of imperfect information.

Strategy: A strategy is the set of best choices for a player for an entire game. It is an overlying plan that cannot be upset by occurrences in the game itself.

Payoff: The payoff or outcome is the state of the game at its conclusion. In games such as chess, the payoff is defined as a win or a loss. In other situations the payoff may be material (i.e. money) or a ranking, as in a game with many players.

Extensive and Normal Form: Games can be characterized as extensive or normal. An extensive-form game is characterized by rules that dictate all possible moves in a state. It may indicate which player can move at which times, the payoffs of each chance determination, and the conditions of the final payoffs of the game to each player. Each player can be said to have a set of preferred moves based on eventual goals and the attempt to gain the maximum payoff, and the extensive form of a game lists all such preference patterns for all players. Games involving some level of determination are examples of extensive-form games. The normal form of a game is a game where computations can be carried out completely.
This stems from the fact that even the simplest extensive-form game has an enormous number of strategies, making preference lists difficult to compute. More complicated games such as chess have more possible strategies than there are molecules in the universe. A normal-form game already has a complete list of all possible combinations of strategies and payoffs, thus removing the element of player choices. In short, in a normal-form game, the best move is always known.

Types of games:

One-Person Games: A one-person game has no real conflict of interest; only the interest of the player in achieving a particular state of the game exists. Single-person games are not interesting from a game-theory perspective because there is no adversary making conscious choices that the player must deal with. However, they can be interesting from a probabilistic point of view in terms of their internal complexity.

Zero-Sum Games: In a zero-sum game the total of the payoffs at the end is zero, since the amounts won or lost are equal. Von Neumann and Oskar Morgenstern demonstrated mathematically that an n-person non-zero-sum game can be reduced to an (n + 1)-person zero-sum game, and that such (n + 1)-person games can be generalized from the special case of the two-person zero-sum game. Another important theorem by Von Neumann, the minimax theorem, characterizes the maximal and minimal strategies that are part of all two-person zero-sum games. Thanks to these discoveries, such games are a major part of game theory.

Two-Person Games: Two-person games are the largest category of familiar games. A more complicated game derived from 2-person games is the n-person game. These games are extensively analyzed by game theorists. However, in extending these theories to n-person games, a difficulty arises in predicting the interactions possible among players, since opportunities arise for cooperation and collusion.

Applications: Though at first glance the idea of game theory sounds trivial, applications of game theory are extensive. Von Neumann and Morgenstern originally applied their models of games to economic analysis. Each factor in the market, such as seasonal preferences, buyer choice, changes in supply and material costs, and other such market factors, can be used to describe strategies to maximize the outcome and thus the profit. However, game theory can also be used simply to study the economics of the past and the interactions of different factors in a market. It can also be used to investigate matters such as monetary distributions and their effects on other outcomes. Military strategists have turned to game theory to play "war games." Usually, such games are not zero-sum games, for losses to one side are not won by the other, and they have been criticized as a potentially dangerous oversimplification of necessary factors.
Economic situations are also more complicated than zero-sum games, but those factors only require readjustments to the strategy over time. Sociologists have taken an interest in game theory, and have developed an entire branch dedicated to group decision making. Immunization procedures and vaccine or other medication tests are analyzed by epidemiologists using game theory. The properties of n-person non-zero-sum games can be used to study different aspects of the social sciences as well. Matters such as the distribution of power, interactions between nations, the distribution of classes and their effects on government, and many other matters can be easily investigated by breaking the problem down into smaller games, each of whose outcomes affects the final result of a larger game.
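The minimax reasoning described earlier can be sketched on a small, made-up two-person zero-sum game in normal form, where payoff[i][j] is what the column player pays the row player when row picks strategy i and column picks strategy j.

```python
# A made-up zero-sum payoff matrix with a saddle point (illustrative).
payoff = [
    [3, 2, 4],
    [1, 0, 2],
]

# Row player: maximize the worst case over the opponent's replies.
maximin = max(min(row) for row in payoff)

# Column player: minimize the best the row player can extract.
cols = list(zip(*payoff))
minimax = min(max(col) for col in cols)

# Here maximin == minimax == 2: a saddle point, so these pure
# strategies are optimal for both players (the game's "value" is 2).
assert maximin == 2 and minimax == 2
```

When maximin and minimax differ, no saddle point exists in pure strategies, and Von Neumann's minimax theorem guarantees equality only once mixed (randomized) strategies are allowed.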

Black-Holes and Geometry

Suraj 5th Semester

Black holes are very exotic objects, in the sense that we cannot apply common sense to deduce the behavior of anything which is close to a black hole. This, however, makes them all the more interesting. If common sense does not help us, we must use mathematical formulas and simulations to see what happens. The mathematical formulas are provided by Einstein's famous general theory of relativity. In essence, it reformulates the laws of gravity in terms of space-time geometry. The principle of equivalence, the starting point of general relativity, states that in any local (that is, sufficiently small) region in space-time, it is possible to formulate the equations governing physical laws such that the effect of gravitation can be neglected. Take, for a concrete example, the astronauts in a satellite orbiting the Earth. They are constantly under the influence of gravity, and yet, in the satellite, things behave as if there were no gravity; you see on TV how cups float in the air and so on. With gravity eliminated from the picture, something else had to be changed to account for its influence.

SOME FACTS ABOUT CURVED GEOMETRY

To help the understanding of the new approach to gravity and the universe, scientists often resort to embedding diagrams, which show curved surfaces as examples of Einstein's curved geometry. Of course, not all 4 dimensions (3 spatial and 1 temporal) can be drawn on paper (unless you do it artistically). Often, these diagrams suppress one or two spatial coordinates to give a clearer picture. There can be three types of curved surfaces, according to the value of the curvature: positive, negative or zero. We have been referring to curved geometry, curvature and so on, but what is curvature? In Euclidean geometry, the term refers to the inverse of the radius of the circle tangent to the surface. Here, curvature is given a physical sense: it refers to the trajectories in space-time of two rays of light which start out parallel to each other.
Do parallel lines intersect? Not in flat space, on which the Euclidean geometry we learn in school is based. But in positively curved spacetime they do, while in negatively curved spacetime they get further and further apart. This should give you a sense of what curvature refers to. Another way of looking at curvature (this time considering also its value, not only its sign) is to think of GEODESICS. A geodesic is the shortest path between two events, or points, in 4D. What is the shortest path between two points? On a curved surface, a straight line which has every point contained in the surface simply does not exist. (Try drawing a straight line on a sphere!) Depending on how curved a surface is, the

geodesics get more and more curved, thus longer and longer compared to the straight line connecting the points.

DOWN THE BENT AXIS, AS DEEP AS WE CAN GO

Now that we went through all the trouble to explain this new geometry, how does it relate to black holes? Like all massive bodies, black holes produce curvatures in space-time, according to Einstein. How? First of all, let us see how a massive star curves spacetime. The most widely used analogy is that of a heavy ball placed on a rubber sheet (though it can be misleading in many ways): the curved spacetime around a massive body looks similar to this, although one spatial coordinate is replaced with the temporal one. To properly and quantitatively describe events in curved spacetime, one would like to have a metric, a recipe that shows one how to work with the four coordinates. If you are familiar with the scalar product of two vectors, a metric is easily explained: it gives the scalar product of two four-vectors, that is, vectors with 4 components instead of 3. One of the biggest mathematical challenges was to modify the special-relativistic metric to suit gravitational influence, to take into account the new geometrical constraints posed by general relativity. The first solution was offered by the German scientist Karl Schwarzschild, who assumed spherical symmetry of the solution in order to simplify the equations. Due to the spherical symmetry, he transformed the special relativity invariant from Cartesian to polar coordinates, ds^2 = -c^2 dt^2 + dr^2 + r^2 (dθ^2 + sin^2 θ dφ^2). Then, he assumed a more general invariant would suit the General Theory of Relativity.

TENSORS
Saideep B.T.R, 5th sem, B.Sc, PCM

To start off, I'll assume that the readers of this article are familiar with the concepts of scalars and vectors, barring which the following few paragraphs are not going to make much sense!! The multiplication of a scalar with a vector results in another vector of the same direction but a different magnitude. The cross product of two vectors is not sufficient to change the direction freely, since it restricts the result to a direction perpendicular to both vectors. Thus arises the necessity of defining a new kind of physical quantity: the TENSOR. To define a tensor, let us consider two vectors, say u and v. Their product, taken component by component, defines the resultant tensor; it is very similar to the way we multiply two algebraic expressions involving more than one variable. The result is a distinct entity: it is neither a scalar nor a vector. Let the resultant tensor be called T.

Now, having described the basic definition of tensors, let us try to understand them further, starting with their classification. Just as scalars and vectors can be defined by a set of numbers (and unit vectors to represent the direction of the vectors), tensors can be defined by an array of numbers in more than one dimension. These arrays of numbers are called the scalar components, or just components, of the tensor. Such a tensor is denoted by the name of the tensor followed by its indices, each index denoting a position in the array. The total number of indices is called the order, or rank, of the tensor. In fact, scalars and vectors are also tensors: a scalar is a tensor of order 0, and a vector is a tensor of order 1. In three dimensions the number of components of a tensor is 3^n, where n is the order or rank of the tensor. Thus it follows that a scalar has only one component (its magnitude) and a vector has three components. On this basis, tensors consisting of different numbers of components can be defined. A tensor of order 2 is called a dyad and has 9 components; similarly, a tensor of order 3 has 27 components and is called a triad, and so on. To illustrate tensors further, consider an example from classical electrodynamics. The magnetic flux density B and the magnetic field H are related by B = μH. But in some cases, such as the hysteresis loops of a steel cube subjected to a magnetic field along each of its three directions, peculiar characteristics appear: the direction of the magnetization differs from that of the applied field. This contradicts the above equation, which says that B and H point in the same direction, with μ a scalar constant (for free space, μ0 = 4π × 10^-7 H/m). In such cases we need to modify the equation slightly by replacing the scalar μ with a tensor of rank 2. Now the vectors B and H can differ from each other in both magnitude and direction.
Therefore the modified equation relating the magnetic flux density and the field is B_i = μ_ij H_j, with μ now a tensor of rank 2.

Applications of Tensors to Different Fields!!!

The tool called tensors has revolutionised perceptions in science. Diffusion tensor imaging is a technology that is very important in studying the rate of diffusion of water in the muscle fibres of the heart, which varies with the direction from which the observer is watching. Einstein's field equations (EFE) use tensors as an important and necessary tool to show that gravitational interaction is due to the curvature of spacetime caused by matter and energy. Tensors are a very abstract concept that requires a lot of imagination and dedication to be understood, but their understanding gives huge insights into complex scientific phenomena, making them an enormously important subject of study.
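The dyad construction described in this article can be sketched in a few lines of code. The helper below is purely illustrative (the two vectors are arbitrary): the outer product of two 3-vectors yields a rank-2 tensor with 3^2 = 9 components, matching the component count discussed above.

```python
def outer(u, v):
    """Outer (dyadic) product of two 3-vectors: a rank-2 tensor T with T[i][j] = u[i]*v[j]."""
    return [[ui * vj for vj in v] for ui in u]

# Two arbitrary example vectors.
T = outer([1, 2, 3], [4, 5, 6])
for row in T:
    print(row)

# A dyad has 3**2 = 9 components; a triad would have 3**3 = 27.
print(sum(len(row) for row in T))  # 9
```

Each entry of T is the product of one component of u with one component of v, which is exactly the "multiply two expressions term by term" picture used in the article.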

Music and Mathematics

Sagar, 5th Semester BSc

Music theorists often use mathematics to understand music. Indeed, mathematics is "the basis of sound", and sound itself "in its musical aspects... exhibits a remarkable array of number properties", simply because nature itself "is amazingly mathematical". Though the ancient Chinese, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans of ancient Greece are the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of number". From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. The first person to make the connection between mathematics and music was Pythagoras of Samos, a famous philosopher and cult leader who lived most of his life in southern Italy in the sixth century BC. Among his claims to fame is the oldest known proof of what we call the "Pythagorean Theorem". If you have never heard of him, he is one of western civilization's strangest but most influential thinkers. For Pythagoras, ratios were everything. He believed every value could be expressed as a fraction (he was wrong, but that is a whole different story). He is also the first known to have believed that mathematics is everywhere. One bit of evidence of underlying rational numbers was in Greek music. At the time, music was not as complicated as it is today: the Greek octave had a mere five notes. Pythagoras pointed out that each note was a fraction of a string. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory. Some composers have incorporated the golden ratio and Fibonacci numbers into their work.

Harmonics

Why do a flute and a violin sound different when they play the same note?
The answer is harmonics. Harmonics are also why scales have different feels to them. Most of what follows was discovered by the German scientist Hermann Helmholtz in the 19th century, yet surprisingly many musicians are unaware of this hidden connection between mathematics and music. When you play a note on a flute, you produce only that particular tone. On an old Moog synthesizer, you can do the same thing by using a sine wave to produce the note. When you play a note on a violin, you produce not only that tone but numerous harmonic tones as well; the Moog synthesizer uses a sawtooth wave to do the same thing. In physics, harmonics are waves at proportional frequencies and at inversely proportional amplitudes. If we play an "A" (440 Hz) with full harmonics, we will hear not only the 440 Hz tone, but also an 880 Hz tone at half the volume (the first overtone), a 1320 Hz tone at a third of the volume (the second overtone), a 1760 Hz tone at a quarter of the volume (the third overtone), and so on, until the frequencies get too high or the volumes get too low to be heard.
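The pattern just described (frequency k times the fundamental, at 1/k of the volume) can be tabulated with a tiny script. The note and the number of overtones below are arbitrary choices for illustration:

```python
def harmonic_series(fundamental, n):
    """Frequencies k*f with relative amplitudes 1/k, for k = 1..n."""
    return [(k * fundamental, 1.0 / k) for k in range(1, n + 1)]

# The "A" at 440 Hz used in the text, with its first few overtones.
for freq, amp in harmonic_series(440.0, 4):
    print(f"{freq:6.0f} Hz at {amp:.2f} of full volume")
```

Running this reproduces the numbers in the paragraph above: 880 Hz at half volume, 1320 Hz at a third, 1760 Hz at a quarter.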


Recent research paints a new picture of the debt that we owe to Arabic/Islamic mathematics. Certainly many of the ideas which were previously thought to have been brilliant new conceptions due to European mathematicians of the sixteenth, seventeenth and eighteenth centuries are now known to have been developed by Arabic/Islamic mathematicians around four centuries earlier. In many respects the mathematics studied today is far closer in style to the Arabic/Islamic contribution than to that of the Greeks. There is a widely held view that, after a brilliant period for mathematics when the Greeks laid the foundations for modern mathematics, there was a period of stagnation before the Europeans took over where the Greeks had left off at the beginning of the sixteenth century. The common perception of the period of 1000 years or so between the ancient Greeks and the European Renaissance is that little happened in the world of mathematics except that some Arabic translations of Greek texts were made, preserving Greek learning so that it was available to the Europeans at the beginning of the sixteenth century. Before we proceed, it is worth trying to define the period that this article covers and to give an overall description of the mathematicians who contributed. The period is easy to describe: it stretches from the end of the eighth century to about the middle of the fifteenth century. Describing the mathematicians who contributed, however, is much harder. Some works speak of "Islamic mathematics" or of the "Muslim contribution to mathematics"; other authors prefer the description "Arabic mathematics". However, certainly not all the mathematicians we wish to include were Muslims; some were Jews, some Christians, some of other faiths. Nor were all these mathematicians Arabs, but for convenience we will call our topic "Arab mathematics".
The region from which the "Arab mathematicians" came was centred on Iran/Iraq, but it varied with military conquest during the period. At its greatest extent it stretched to the west through Turkey and North Africa to include most of Spain, and to the east as far as the borders of China. A remarkable period of mathematical progress began with al-Khwarizmi's work and the translations of Greek texts. This period begins under the Caliph Harun al-Rashid, the fifth Caliph of the Abbasid dynasty, whose reign began in 786. He encouraged scholarship, and the first translations of Greek texts into Arabic, such as Euclid's Elements by al-Hajjaj, were made during al-Rashid's reign.

Perhaps one of the most significant advances made by Arabic mathematics began at this time with the work of al-Khwarizmi, namely the beginnings of algebra. It is important to understand just how significant this new idea was. It was a revolutionary move away from the Greek concept of mathematics, which was essentially geometry. Algebra was a unifying theory which allowed rational numbers, irrational numbers, geometrical magnitudes, etc., all to be treated as "algebraic objects". It gave mathematics a whole new development path, much broader in concept than that which had existed before, and provided a vehicle for the future development of the subject. Another important aspect of the introduction of algebraic ideas was that it allowed mathematics to be applied to itself in a way which had not happened before. As Rashed writes: "Al-Khwarizmi's successors undertook a systematic application of arithmetic to algebra, algebra to arithmetic, both to trigonometry, algebra to the Euclidean theory of numbers, algebra to geometry, and geometry to algebra. This was how the creation of polynomial algebra, combinatorial analysis, numerical analysis, the numerical solution of equations, the new elementary theory of numbers, and the geometric construction of equations arose."

CATEGORY THEORY

Priyanka B.G, 5th sem, B.Sc, PCM

Category theory is an area of mathematics that examines, in an abstract way, the properties of particular mathematical concepts by formalizing them as collections of objects and arrows satisfying certain basic conditions. It occupies a central position in contemporary mathematics and theoretical computer science, and in the general mathematical theory of structures and of systems of structures. Category theory has many faces. Homological algebra is category theory in its aspect of organizing and suggesting manipulations in abstract algebra. Diagram chasing is a visual method of arguing with abstract arrows joined in diagrams. Functors and natural transformations are the key concepts of category theory. A functor associates to every object of one category an object of another category, and to every morphism in the first category a morphism in the second. A natural transformation is a relation between two functors; natural transformations describe "natural" constructions and "natural" homomorphisms between two such constructions. In 1942-1945, Samuel Eilenberg and Saunders Mac Lane introduced categories, functors, and natural transformations.

We can define the empty set without referring to elements by characterizing such objects in terms of their relations to other objects, as given by the morphisms of the respective categories. Two categories can be considered essentially the same if the theorems of one category are readily transformed into theorems of the other. Every theorem of category theory has a dual, obtained essentially by reversing all the arrows. If we consider a morphism between two objects as a process taking us from one object to another, then higher-dimensional categories allow us to profitably generalize this by considering higher-dimensional processes; here equivalences of categories, adjoint functor pairs (a functor can be left (or right) adjoint to another functor that maps in the opposite direction) and functor categories are used. A category C can be described as a collection Ob, whose members are the objects of C, satisfying three conditions: morphism, identity, and composition. Category theory unifies mathematical structures in two different ways. First, every set-theoretically defined mathematical structure, with the appropriate notion of homomorphism, yields a category. Second, once a type of structure has been defined, it is imperative to determine how new structures can be constructed out of the given ones, and how different kinds of structures are related to one another. Category theory has become an autonomous field of research, and pure category theory could be developed. It grew rapidly in its applications, namely in algebraic topology and homological algebra, and in algebraic geometry. The very first applications outside algebraic geometry were in set theory, where various independence results were recast in terms of toposes, and these are used to investigate models of various aspects of intuitionism. Category theory is also used in theoretical computer science, in the development of new logical systems, and in quantum groups.
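The three conditions mentioned above (morphisms, identities, composition) can be made concrete in the familiar category whose objects are sets and whose arrows are functions. The sketch below is only a toy check on a few arbitrary sample functions, not a general proof:

```python
def compose(g, f):
    """The composite arrow g . f (apply f first, then g)."""
    return lambda x: g(f(x))

identity = lambda x: x

# Three arbitrary arrows on the integers.
f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x - 3

for x in range(-5, 6):
    # Identity law: id . f = f = f . id
    assert compose(identity, f)(x) == f(x) == compose(f, identity)(x)
    # Associativity: h . (g . f) = (h . g) . f
    assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)

print("identity and associativity hold on the sample points")
```

These two laws are exactly what a category demands of its arrows; the point of category theory is that the same axioms hold when the arrows are group homomorphisms, continuous maps, and so on.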
The history of category theory offers a rich source of information to explore and take into account for a historically sensitive epistemology of mathematics. It can also be used to characterize a specific mathematical domain. Category theory thus offers many philosophical challenges, challenges which will hopefully be taken up in years to come.

Convergence of random variables

Chaitra, 5th Semester BSc

In probability theory there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory and in its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence, and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behaviour that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behaviour can be characterised: two readily understood behaviours are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution.

Background

"Stochastic convergence" formalizes the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle into a pattern. The types of pattern that may arise are reflected in the different types of stochastic convergence that have been studied. The main types of convergence are: convergence in distribution, convergence in probability, almost sure convergence, convergence in mean, etc.

Convergence in distribution

With this mode of convergence, we increasingly expect to see the next outcome in a sequence of random experiments becoming better and better modeled by a given probability distribution. Convergence in distribution is the weakest form of convergence, since it is implied by all the other types of convergence mentioned in this article. However, convergence in distribution is very frequently used in practice; most often it arises from application of the central limit theorem.

Convergence in probability

The basic idea behind this type of convergence is that the probability of an "unusual" outcome becomes smaller and smaller as the sequence progresses. The concept of convergence in probability is used very often in statistics. For example, an estimator is called consistent if it converges in probability to the quantity being estimated. Convergence in probability is also the type of convergence established by the weak law of large numbers.
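The weak law of large numbers mentioned above can be watched in action with a small simulation. The sketch below estimates, for fair coin flips, the probability that the sample mean deviates from 1/2 by more than 0.1; the sample sizes, trial counts and tolerance are arbitrary illustrative choices:

```python
import random

def deviation_probability(n, trials=1000, eps=0.1):
    """Estimate P(|mean of n fair coin flips - 0.5| > eps) by simulation."""
    bad = 0
    for _ in range(trials):
        mean = sum(random.random() < 0.5 for _ in range(n)) / n
        if abs(mean - 0.5) > eps:
            bad += 1
    return bad / trials

random.seed(0)  # for reproducibility
for n in (10, 100, 1000):
    print(n, deviation_probability(n))

# The estimated probability shrinks toward 0 as n grows: this is
# convergence in probability of the sample mean to 1/2.
```

For n = 10 the deviation is still fairly likely, while for n = 1000 it is essentially never observed, exactly the behaviour the weak law asserts.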

History of the Four Colour Theorem

Kiran, 5th Semester BSc

The four-color theorem states that any map in a plane can be colored using four colors in such a way that regions sharing a common boundary (other than a single point) do not share the same color. This problem is sometimes also called Guthrie's problem after F. Guthrie, who first conjectured the theorem in 1852. The conjecture was then communicated to de Morgan and thence into the general community. In 1878, Cayley wrote the first paper on the conjecture. Fallacious proofs were given independently by Kempe (1879) and Tait (1880). Kempe's proof was accepted for a decade until Heawood showed an error using a map with 18 faces (although a map with nine faces suffices to show the fallacy). Six colors can be proven to suffice for the genus-zero case, and this number can easily be reduced to five, but reducing the number of colors all the way to four proved very difficult. This result was finally obtained by Appel and Haken (1977), who constructed a computer-assisted proof that four colors were sufficient. However, because part of the proof consisted of an exhaustive analysis of many discrete cases by a computer, some mathematicians do not

accept it. However, no flaws have yet been found, so the proof appears valid. A shorter, independent proof was constructed by Robertson et al. (1996; Thomas 1998). In December 2004, G. Gonthier of Microsoft Research in Cambridge, England (working with B. Werner of INRIA in France) announced that they had verified the Robertson proof by formulating the problem in the Coq proof assistant and confirming the validity of each of its steps (Devlin 2005, Knight 2005). Martin Gardner (1975) played an April Fool's joke by (incorrectly) claiming that a certain map of 110 regions requires five colors and constitutes a counterexample to the four-color theorem. However, the coloring of Wagon (1998; 1999), obtained algorithmically using Mathematica software, clearly shows that this map is, in fact, four-colorable.

Cox's Theorem: Probability and Philosophy

Deepashree, 5th Semester B.Sc

Cox's theorem provides a theoretical basis for using probability theory as a general logic of plausible inference. The theorem states that any system for plausible reasoning that satisfies certain qualitative requirements, intended to ensure consistency with classical deductive logic and correspondence with commonsense reasoning, is isomorphic to probability theory. However, the requirements used to obtain this result have been the subject of much debate. When philosophical conclusions are argued from formal mathematical results, one should look very carefully at the assumptions of the arguments in question, for any such argument cannot rest on the formal result alone; there must be some philosophical premise, and this is often illicitly smuggled through the back door.

Cox, philosophy and probability: There are two ways in which an agent can be uncertain about the state of a system. The first is familiar. This is where there is uncertainty about some underlying fact of the matter: system S is either in the given state or it is not, but agent A does not know which.
A might be in possession of some probabilistic information about the state of S, either numerical (the probability that S is in the state is x) or non-numerical (it is more likely that S is in the state than not). Call this epistemic uncertainty. Now compare this with a second, quite different kind of uncertainty: uncertainty where there is no fact of the matter about whether system S is in the state or not. Indeed, here the uncertainty arises because there is no underlying fact of the matter. Call this second kind of uncertainty non-epistemic uncertainty. It follows that if there are any instances of non-epistemic uncertainty, an agent could not be in possession of probabilistic information in such cases.

Cox's theorem has come to be used as one of the justifications for the use of Bayesian probability. For example, in Jaynes (2003) it is discussed in detail in chapters 1 and 2 and is a cornerstone for the rest of the book. Probability is interpreted as a formal system of logic, the natural extension of Aristotelian logic (in which every statement is either true or false) into the realm of reasoning in the presence of uncertainty. It has been debated to what degree the theorem excludes alternative models for reasoning about uncertainty. For example, if certain "unintuitive" mathematical assumptions were dropped, then alternatives could be devised, e.g., an example provided by Halpern (1999a). However, Arnborg and Sjödin (1999, 2000a, 2000b) suggest additional "common sense" postulates which would allow the assumptions to be relaxed in some cases.
Fibonacci Numbers and Nature

Rekha B, 5th Sem, B.Sc-PMCs

In mathematics, the Fibonacci numbers are the numbers in the following integer sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ... By definition, the first two Fibonacci numbers are 0 and 1, and each subsequent number is the sum of the previous two. In mathematical terms, the sequence Fn of Fibonacci numbers is defined by the recurrence relation Fn = Fn-1 + Fn-2, with seed values F0 = 0 and F1 = 1. Fibonacci numbers are closely related to Lucas numbers in that they are a complementary pair of Lucas sequences. They are intimately connected with the golden ratio; for example, the closest rational approximations to the ratio are 2/1, 3/2, 5/3, 8/5, ... Applications include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruitlets of a pineapple, the flowering of an artichoke, an uncurling fern and the arrangement of a pine cone. In addition, numerous poorly substantiated claims of Fibonacci numbers or golden sections in nature are found in popular sources, e.g. the family tree of honeybees, the spirals of shells, and the curve of waves. The Fibonacci numbers are also found in the BREEDING OF RABBITS.

Fibonacci's Rabbits

The original problem that Fibonacci investigated (in the year 1202) was about how fast rabbits could breed in ideal circumstances. Suppose a newly born pair of rabbits, one male, one female, is put in a field. Rabbits are able to mate at the age of one month, so that at the end of its second month a female can produce another pair of rabbits. Suppose that our rabbits never die and that the female always produces one new pair (one male, one female) every month from the second month on. The puzzle that Fibonacci posed was...

How many pairs will there be in one year?
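Using the recurrence Fn = Fn-1 + Fn-2 with seeds F0 = 0 and F1 = 1 stated earlier, a few lines of code answer this before we count month by month:

```python
def fib(n):
    """n-th Fibonacci number, with seeds F0 = 0 and F1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib(12))                      # 144 pairs after twelve months
```

The month-by-month count below follows exactly this recurrence: each month's total is last month's total plus the new pairs, which equal the total from two months ago.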

1. At the end of the first month, they mate, but there is still only 1 pair.
2. At the end of the second month the female produces a new pair, so now there are 2 pairs of rabbits in the field.
3. At the end of the third month, the original female produces a second pair, making 3 pairs in all in the field.
4. At the end of the fourth month, the original female has produced yet another new pair, and the female born two months ago produces her first pair also, making 5 pairs.

The number of pairs of rabbits in the field at the start of each month is 1, 1, 2, 3, 5, 8, 13, 21, 34, ... At the end of the nth month, the number of pairs of rabbits is equal to the number of new pairs (which is the number of pairs in month n - 2) plus the number of pairs alive last month (month n - 1). This is the nth Fibonacci number.

GAME THEORY AND APPLICATIONS

Vinay B.S, 5th Semester BSc

Introduction: In mathematics, game theory models strategic situations, or games, in which an individual's success in making choices depends on the choices of others (Myerson, 1991). It is used in the social sciences (most notably in Economics, Management, Operations Research, Political Science, and Social Psychology) as well as in other formal sciences (Logic, Computer Science, and Statistics). While initially developed to analyze competitions in which one individual does better at another's expense (zero-sum games), it has been expanded to treat a wide class of interactions, which are classified according to several criteria. Today, "game theory is a sort of umbrella or 'unified field' theory for the rational side of social science, where 'social' is interpreted broadly, to include human as well as non-human players (computers, animals, plants)." Traditional applications of game theory define and study equilibria in these games. In an equilibrium, each player of the game has adopted a strategy that cannot improve his outcome, given the others' strategies. Many equilibrium concepts have been developed (most famously the Nash equilibrium) to describe aspects of strategic equilibria. These equilibrium concepts are motivated differently depending on the area of application, although they often overlap or coincide. This methodology has received criticism, and debates continue over the appropriateness of particular equilibrium concepts, the

appropriateness of equilibria altogether, and the usefulness of mathematical models in the social sciences.
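To make the notion of equilibrium concrete, the sketch below finds pure-strategy Nash equilibria of a small two-player game by brute-force best-response checking. The payoff matrices are the textbook Prisoner's Dilemma, chosen here only for illustration:

```python
# Row player's and column player's payoffs; index 0 = cooperate, 1 = defect.
payoff1 = [[-1, -3],
           [ 0, -2]]
payoff2 = [[-1,  0],
           [-3, -2]]

def pure_nash(p1, p2):
    """All strategy pairs (i, j) where neither player can gain by deviating alone."""
    eq = []
    for i in range(len(p1)):
        for j in range(len(p1[0])):
            best_row = all(p1[i][j] >= p1[k][j] for k in range(len(p1)))
            best_col = all(p2[i][j] >= p2[i][k] for k in range(len(p1[0])))
            if best_row and best_col:
                eq.append((i, j))
    return eq

print(pure_nash(payoff1, payoff2))  # [(1, 1)]: mutual defection
```

The single equilibrium (defect, defect) illustrates the definition above: given the other player's strategy, neither player can improve his own payoff by changing strategy unilaterally, even though both would prefer mutual cooperation.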

Mathematical game theory had its beginnings with some publications by Émile Borel, which led to his 1938 book Applications aux Jeux de Hasard. However, Borel's results were limited, and his conjecture about the non-existence of mixed-strategy equilibria in two-person zero-sum games was wrong. The modern epoch of game theory began with the statement of the theorem on the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. Eight game theorists have won the Nobel Prize, while John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.

Jean Baptiste Fourier and his Contributions

Aishwarya, 5th Semester B.Sc.

Fourier, (Jean Baptiste) Joseph, Baron (pronounced "Fooryay") (1768-1830), the son of a tailor, was born in Auxerre, France, and is considered a French mathematician and physicist. Although initially trained for the priesthood, he turned to mathematics and became a teacher by the age of 16. Fourier was a professor between 1795 and 1798 at the École Polytechnique in Paris. He accompanied Napoleon I to Egypt, and in 1808 he was made a baron.
He died in Paris in 1830; Fourier never married.

Contributions: Fourier had a very distinguished career and is famous for showing how the conduction of heat in solid bodies can be analyzed in terms of infinite mathematical series, which is

called the 'Fourier Series'. Fourier's was the first correct theory of heat diffusion. His scientific writings are contained in two volumes. Published in 1822, his work shows how a mathematical series of sine and cosine terms can be used to analyze heat conduction in solid bodies. Fourier came upon his idea in connection with the problem of the flow of heat in solid bodies, including the earth. The formula x/2 = sin x - (sin 2x)/2 + (sin 3x)/3 - ... was published by Leonhard Euler (1707-1783) before Fourier's work began, so you might like to ponder the question why Euler did not receive the credit for Fourier's series. (The Fourier Transform and Its Applications, by Ronald N. Bracewell, McGraw-Hill, 1986, pp. 462-464.) In his work Fourier was guided as much by his sound grasp of physical principles as by purely mathematical considerations. His motto was: "Profound study of nature is the most fertile source of mathematical discoveries." This brought him biting criticism from such purists as Lagrange, Poisson, and Biot, who attacked his lack of rigor; one suspects, however, that political motivations and personal rivalry played a role as well. Ironically, Fourier's work in mathematical physics would later lead to one of the purest of all mathematical creations: Cantor's set theory.

Extensions: The importance of Fourier's theorem, of course, is not limited to music: it is at the heart of all periodic phenomena. Fourier himself extended the theorem to nonperiodic functions, regarding them as limiting cases of periodic functions whose period approaches infinity. The Fourier series is then replaced by an integral that represents a continuous distribution of sine waves over all frequencies. This idea proved of enormous importance to the development of quantum mechanics early in the twentieth century. The mathematics of Fourier's integral is more complicated than that of the series, but at its core are the same two functions that form the backbone of all trigonometry: the sine and the cosine.
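The Euler series quoted above can be checked numerically. The sketch below sums sin x - (sin 2x)/2 + (sin 3x)/3 - ... and compares the partial sum with x/2; the identity holds for -π < x < π, and the evaluation point and number of terms are arbitrary:

```python
import math

def euler_partial_sum(x, n_terms):
    """Partial sum of sin x - (sin 2x)/2 + (sin 3x)/3 - ..."""
    return sum((-1) ** (k + 1) * math.sin(k * x) / k
               for k in range(1, n_terms + 1))

x = 1.0
print(euler_partial_sum(x, 10000), "vs", x / 2)  # the partial sum approaches 0.5
```

The convergence is slow (the terms decay only like 1/k), which is one reason a rigorous treatment of such series was so contentious in Fourier's time.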
Lilavati

Vikas P, 5th Semester BSc
Lilavati is the first part of Bhaskaracharya's work Siddhantashiromani, which he wrote at the age of 36. Siddhantashiromani consists of four parts, namely 1) Lilavati, 2) Algebra, 3) Planetary motions and 4) Astronomy. Lilavati has an interesting story associated with how it got its name. Bhaskaracharya created a horoscope for his daughter Lilavati, stating exactly when she needed to get married. He placed a cup with a small hole in it in a tub of water, and the time at which the cup sank was the optimum time Lilavati was to get married. Unfortunately, a pearl fell into the cup, blocking the hole and keeping it from sinking. Lilavati was then doomed never to wed, and her father Bhaskara wrote

her a manual on mathematics in order to console her, and named it Lilavati. This appears to be a myth associated with this classical work. Lilavati was used as a textbook in Sanskrit schools in India for many centuries; even now, it is used in some Sanskrit schools. Lilavati mainly deals with what we call 'Arithmetic' in today's mathematical parlance. It consists of 279 verses written in Sanskrit in poetic form (terse verses). There are certain verses which deal with mensuration (measurement of various geometrical objects), volumes of pyramids, cylinders, heaps of grain, etc., wood cutting, shadows, trigonometric relations, and also with certain elements of algebra, such as finding an unknown quantity subject to certain constraints using the method of supposition. The first verse of Lilavati is an invocatory verse on Lord Ganesha. Following the invocation, the next 9 verses deal with definitions of measurement units for various things. Firstly, he defines the various units of money which were in vogue during those days. This is followed by measures of gold, units of length, measures of grain in volume and, lastly, the measure of time. The next two verses define the all-important positional notation of digits and their values. It is clearly stated that the value of digits increases by a factor of ten from right to left. The highest value defined is parardha. The next few verses describe the method (or technique, or algorithm) for adding two numbers in the positional system. Bhaskaracharya gives 5 different methods for multiplication. They involve various tricks, like splitting the multiplier into two convenient parts, multiplying the multiplicand by each of the two parts and adding the results; or factoring the multiplier, multiplying by each factor and then summing up the results, etc. Bhaskaracharya seems to have known the importance of zero, not just in positional notation but also as a number. He has special verses describing the peculiar properties of zero.
He lists eight rules for operations involving zero. The interesting aspect of this verse is the definition of infinity, or Khahara, as a fraction whose denominator is zero. Bhaskara deals with simple interest with apparent ease, though he does not say anything about compound interest. He also gives a thorough treatment of arithmetic progressions (A.P.) and geometric progressions, but not harmonic progressions. He gives the direct and inverse formulae for finding the sum of a series, the last term, and the common difference of an A.P. Bhaskara starts with the definition of the sides of a right-angled triangle. He then states the Pythagoras theorem (without proof). The author claims that it was known in India since the time of the Sulvasutrakaras (3000-800 B.C.), whereas Pythagoras published it in 560 B.C. Bhaskaracharya gives a method for finding an approximate square root of a number which is not a perfect square. Bhaskara gives a formula for finding the area of a quadrilateral (both cyclic and acyclic), as well as a formula for finding the diagonal of a quadrilateral. He also deals with the trapezium, disk, sphere etc., and treats chords of a circle extensively. Bhaskara gives methods for determining the volume of a pyramid and its frustum, and the volume of a prism. He also gives practical applications: finding the cost of cutting wood into a particular shape (the frustum of a cone) and calculating its area, and methods to calculate the volume of a heap of grain. Indeterminate analysis is the problem of finding integer solutions x and y of the equation ax + by = c, where a, b and c are all integers. The method for doing this was called Kuttaka, which means "to beat the problem into powder". In other words, the solution, developed by Brahmagupta and later by Bhaskaracharya, involved successively simplifying the problem in an iterative process and then solving it.
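In modern notation the same problem, a·x + b·y = c over the integers, is handled by the extended Euclidean algorithm, which likewise grinds the pair (a, b) down step by step. A hedged sketch (the function names are illustrative, not Bhaskaracharya's own procedure):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # each recursive step "powders" the problem into a smaller one
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, c):
    """Return one integer solution (x, y) of a*x + b*y = c, or None."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None  # no integer solutions exist
    return x * (c // g), y * (c // g)

# Example: 100x + 63y = 1, a classic kuttaka-style problem
print(solve_linear_diophantine(100, 63, 1))  # → (-17, 27)
```

A solution exists exactly when the G.C.D. of a and b divides c, which is why the Euclidean algorithm sits at the heart of the method.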
The name evokes powdering a larger object into smaller pieces first, before powdering these pieces into finer pieces, and so on. Such equations are also known as Diophantine equations. Bhaskaracharya provides a method for finding the solution which makes use of the Euclidean Algorithm for finding the
Greatest Common Divisor (G.C.D.). The Kuttaka method is regarded as an important contribution of Bhaskaracharya.

Hilbert's Paradox

Vishalaxmi Ray 5th Semester B.Sc Hilbert's paradox of the Grand Hotel is a mathematical veridical paradox (a non-contradictory result that is strongly counter-intuitive) about infinite sets presented by the German mathematician David Hilbert (1862-1943). The Paradox: Consider a hypothetical hotel with countably infinitely many rooms, all of which are occupied; that is to say, every room contains a guest. One might be tempted to think that the hotel would not be able to accommodate any newly arriving guests, as would be the case with a finite number of rooms. Finitely many new guests: Suppose a new guest arrives and wishes to be accommodated in the hotel. Because the hotel has infinitely many rooms, we can move the guest occupying room 1 to room 2, the guest occupying room 2 to room 3, and so on, and fit the newcomer into room 1. By repeating this procedure, it is possible to make room for any finite number of new guests. Infinitely many new guests: It is also possible to accommodate a countably infinite number of new guests: just move the person occupying room 1 to room 2, the guest occupying room 2 to room 4, and in general the guest in room n to room 2n; then all the odd-numbered rooms will be free for the new guests. Infinitely many coaches with infinitely many guests each: It is possible to accommodate countably infinitely many coach-loads of countably infinite passengers each. The possibility of doing so depends on the seats in the coaches being already numbered (alternatively, the hotel manager must have the axiom of countable choice at his or her disposal). First empty the odd-numbered rooms as above, then put the first coach's load in rooms 3^n for n = 1, 2, 3, ..., the second coach's load in rooms 5^n for n = 1, 2, ..., and so on; for coach number i we use the rooms p^n, where p is the (i + 1)-st prime number. You can also solve the problem by looking at the license plate numbers on the coaches and the seat numbers of the passengers (if the seats are not numbered, number them).
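These room-shifting arguments are just explicit injections on the natural numbers, so they can be written out directly. A minimal sketch (function names are illustrative):

```python
def shift_by_one(room):
    """'One new guest': the guest in `room` moves to room + 1, freeing room 1."""
    return room + 1

def double_up(room):
    """'Infinitely many new guests': room n's guest moves to room 2n,
    freeing every odd-numbered room."""
    return 2 * room

def newcomer_room(k):
    """The k-th newcomer (k = 1, 2, 3, ...) takes the k-th odd room."""
    return 2 * k - 1

# No collisions: the old guests land on even rooms, the newcomers on odd rooms.
old = {double_up(n) for n in range(1, 1000)}
new = {newcomer_room(k) for k in range(1, 1000)}
print(old.isdisjoint(new))  # → True
```

The point of the sketch is that every guest still gets exactly one room and no two guests share one, which is all "accommodating" means here.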
Regard the hotel as coach #0, and the initial room numbers as the seat numbers on this coach. Interleave the digits of the coach numbers and the seat numbers to get the room numbers for the guests. The hotel (coach #0) guest in seat (original room) number 1729 moves to room 01070209 (i.e., room 1,070,209). The passenger on seat 4935 of coach 198 goes to room 4199385 of the hotel. In general, any pairing function can be used to solve this problem. Analysis: These cases demonstrate the 'paradox', by which we mean not that it is contradictory, but rather that a counter-intuitive result is provably true: the situations "there is a guest in every room" and "no more guests can be accommodated" are not equivalent when there are infinitely many rooms. Some find this state of affairs profoundly counterintuitive. The properties of infinite "collections of things" are quite different from those of finite "collections of things". In an ordinary (finite) hotel with more than one room, the number of odd-numbered rooms is obviously smaller than the total number of rooms. However, in Hilbert's aptly named Grand Hotel, the quantity of odd-numbered rooms is as great as the total quantity of rooms. In mathematical terms, the cardinality of the subset containing the odd-numbered rooms is the same as the cardinality of the set of all rooms. Indeed, infinite sets are characterized as sets that

have proper subsets of the same cardinality. For countable sets, this cardinality is called ℵ0 (aleph-null). Rephrased: for any countably infinite set, there exists a bijective function which maps the countably infinite set to the set of natural numbers, even if the countably infinite set contains the natural numbers.

MATHEMATICS IN COMPUTER TECHNOLOGY Ranjini.R, 5th SEM BSc PMCs

Introduction: Computers & Mathematics with Applications provides a medium of exchange for those engaged in fields where there exists a non-trivial interplay between mathematics and computers. The Use of Mathematics in Computer Games: The purpose of this article is to look at how mathematics is used in computer games. I'll use examples from computer games you've probably already played. There are lots of different types of computer games, and I'll talk about how maths is used in some of the following examples: The First Person Shooter (FPS) is a type of game where you run around 3D levels carrying a big gun shooting stuff. Examples of this sort of game include Doom, Quake, Half-Life, Unreal and GoldenEye. There are other games that look very similar but aren't first person shooters, for instance Zelda: Ocarina of Time or Mario 64. The most amazing thing about FPS games is their incredible graphics. They look almost real; none of this would have been possible without the use of advanced maths. Here are some pictures from the early games (Wolfenstein) to the most recent games (Quake III Arena). To begin to explain how these games work, you need to know a bit about geometry, vectors and transformations. Geometry is the study of shapes of various sorts. The simplest shape is the point. Another simple shape is a straight line. A straight line is just the simplest shape joining two points together. A plane is a more complicated shape; it is a flat sheet, like a piece of paper or a wall. There are more complicated shapes, called solids, like a cube or a sphere.
If you have a line and a plane, you can find the point where the line cuts through the plane; we call this point the intersection of the line and the plane. (Sometimes you can't find the intersection, because the line and the plane don't meet, and sometimes the line lies inside the plane, so that they meet at every point of the line, but this doesn't happen in the cases we're interested in.) A vector is a mathematical way of representing a point. A vector is 3 numbers, usually called x, y and z. You can think of these numbers as how far you have to go in 3 different directions to get to a point.
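With vectors in hand, the intersection described above can be computed directly: write the plane as n·p = d and the line through points a and b as p = a + t(b − a), then solve for t. A hedged sketch (names are illustrative):

```python
def intersect_line_plane(a, b, normal, d):
    """Intersection of the line through points a and b with the plane n.p = d.
    Returns the intersection point, or None if the line is parallel to the plane."""
    direction = tuple(bi - ai for ai, bi in zip(a, b))
    denom = sum(n * v for n, v in zip(normal, direction))
    if denom == 0:
        return None  # parallel (possibly lying inside the plane)
    t = (d - sum(n * ai for n, ai in zip(normal, a))) / denom
    return tuple(ai + t * vi for ai, vi in zip(a, direction))

# The line from (0,0,0) to (2,2,2) meets the plane z = 1 at (1, 1, 1):
print(intersect_line_plane((0, 0, 0), (2, 2, 2), (0, 0, 1), 1))  # → (1.0, 1.0, 1.0)
```

This single calculation is the workhorse behind the painting construction described in the next paragraphs.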

A transformation moves a point (or an object, or even an entire world) from one place to another. For instance, I could move it to the right by 4 metres; this type of transformation is called a translation. Another type of transformation is rotation. If you take hold of an object (a pen, for instance) and twist your wrist, you have rotated that object. 3D Graphics: The basic idea of 3D graphics is to turn a mathematical description of a world into a picture of what that world would look like to someone inside the world. The mathematical description could be in the form of a list, for instance: there is a box with centre (2, 4, 7) and sides of length 3, and the color of the box is a bluish grey. To turn this into a picture, we also need to describe where the person is and what direction they are looking in, for instance: there is a person at (10, 10, 10) looking directly at the centre of the box. From this we can construct what the world would look like to that person. Imagine there is a painter whose eyes are at the point P, and that he has a glass sheet which he is about to paint on. In the room he is painting there is a wooden chest. One of the corners of the chest is at point A, and the painter wants to know where that corner of the chest should be on his glass sheet. The way he works it out is to draw a line L from his eyes (P) to the corner of the chest (A); then he works out where this line goes through the canvas, at a point B. He can do this because the glass sheet is a plane, and, as mentioned above, you can find the intersection of a line and a plane. This point B is where the corner of the chest should be in his painting. He follows this rule for every bit of the chest, and ends up with a picture which looks exactly like the chest. Here are two pictures: the first one shows the painting when he has only painted the one corner of the chest, and the second one shows what it looks like when he has painted the entire chest.
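In the convenient coordinates where the painter's eye sits at the origin and the glass sheet is the plane x = 1, this construction collapses to a single division (the classic perspective divide). An illustrative sketch under that assumption:

```python
def project_to_sheet(point):
    """Project a 3D point onto the glass sheet x = 1, as seen from an eye
    at the origin looking along the x-axis.
    Returns the (y, z) position on the sheet, or None if the point is not in front."""
    x, y, z = point
    if x <= 0:
        return None  # behind (or level with) the eye; not visible
    # the line from the eye to the point hits x = 1 after scaling by 1/x
    return (y / x, z / x)

# A chest corner at (4, 2, 1) appears at (0.5, 0.25) on the sheet:
print(project_to_sheet((4, 2, 1)))  # → (0.5, 0.25)
```

Dividing by x is exactly why distant objects look smaller: the further away the corner, the closer its image sits to the centre of the sheet.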
What I just described above is similar to what the computer is doing (50 times a second!) every time you run around shooting hideous monsters in Quake, although the details are slightly different. In computer games (at the moment) the description of the world is just a list of triangles and colors. The newest computer games use more complicated descriptions of the world, with curved surfaces, NURBS and other strange-sounding things; however, in the end it always reduces to triangles. For instance, a box can be made using triangles as illustrated below. The reason for using triangles is that they are a very simple shape, and if you make sure that everything is made from only one type of shape, you don't have to write a separate program for each type of shape in the game. Each time the computer draws a picture of the world, it goes through the following steps: Firstly, it transforms the world (by rotating and translating) so that the person is at position (0,0,0) and the centre of the glass sheet (the centre of the screen in computers) is at (1,0,0); this makes the rest of the calculations much easier. Secondly, it removes all the triangles you can't see, so that it can forget about them, for instance the triangles that are behind you or the ones that are so far away that you can't see them. Thirdly, for every remaining triangle, it works out what it would look like when painted on the glass sheet (or drawn on the screen in computers). Finally, it puts the picture it has drawn on the screen. Nowadays, computers are so fast that they can draw hundreds of thousands of triangles every second, making the pictures more and more realistic, as you can see from the pictures at the beginning of this section.

Of course, there is a lot more to it than just that: there is lighting, fog, animation, textures and hundreds of other things. Most of these use maths and physics to a large extent.

MATHEMATICS IN HUMAN BODY Govinda P. 5th Semester B.Sc

Mathematics is essential to the sciences. One important function of mathematics in science is the role it plays in the expression of scientific models. Observing and collecting measurements, as well as hypothesizing and predicting, often require extensive use of mathematics. Arithmetic, algebra, geometry, trigonometry and calculus, for example, are all essential to physics. Virtually every branch of mathematics has applications in science, including "pure" areas such as number theory and topology. HUMAN BRAIN: A sense of cause and effect: much of mathematics depends on "if this, then that" reasoning, an abstract form of thinking about causes and their effects. A mathematical proof is a highly abstract version of a causal chain of facts. Spatial-reasoning ability: this includes the ability to recognize shapes and to judge distances, both of which have obvious survival value for many animals. A mathematical model for human brain cooling during cold-water near-drowning: a two-dimensional mathematical model was developed to estimate the contributions of different mechanisms of brain cooling during cold-water near-drowning. Mechanisms include
conductive heat loss through tissue to the water at the head surface and in the upper airway, and circulatory cooling to aspirated water via the lung and via venous return from the scalp. The model accounts for changes in boundary conditions, blood circulation, respiratory ventilation of water, and head size. Results indicate that conductive heat loss through the skull surface or the upper airways is minimal, although a small child-sized head will conductively cool faster than a large adult-sized head. METHODS: The head is simplified to a hemisphere consisting of the brain and uniformly thick layers of bone and soft surface tissue (see the schematic representation of the brain cooling model). The assumed hemispherical geometry of the head and its uniformly thick outer boundaries allows us to express the energy balance in the two-dimensional spherical coordinate system by a partial differential equation.

The heat transfer from the soft surface tissues to the cold water is described by another PDE. The heat transfer at boundary 2 depends on whether cold water is present in the upper airways. Two possible conditions are considered separately. Insulated: boundary 2 is insulated, and there is no water in the upper airway. Heat transfer into cold water: boundary 2 is assumed to be totally exposed to cold water in the upper airway (see Conductive cooling). The surface area across boundary 2 is ~168 cm^2, whereas the real surface of the nasopharynx is ~15-20 cm^2. Initial Condition: The initial (presubmersion) condition is a steady-state temperature distribution at thermoneutral conditions. Boundary 1 is exposed to air at 29 °C with a heat transfer coefficient of 8 W m^-2 °C^-1, and boundary 2 is insulated. Under this condition the predicted steady state in the head gives a reasonable carotid blood temperature of 36.7 °C and an average brain temperature of 37.0 °C.

USE OF POLYNOMIALS IN COMPUTING LAVANYA.B V SEM B.SC [MECs]

Did you know that every time you pick up a calculator and press a function key like the square root key, an exponent function, or another scientific function, you are calculating a polynomial approximation? A calculator is really a computer, and computers can only add, subtract, multiply, and divide. So how would the calculator evaluate the square root of 1.2? Or find 3.124? It can't store up all the answers, since there are an infinite number of potential answers. Instead, the calculator uses what is called a TAYLOR POLYNOMIAL APPROXIMATION. So what is a Taylor polynomial approximation? Given a differentiable function f(x), the Taylor polynomial approximation at a value x = c is:

f(x) ≈ f(c) + f'(c)(x - c) + f''(c)/2! (x - c)^2 + ... + f^(n)(c)/n! (x - c)^n + ...

where f'(c) is the first derivative evaluated at x = c, f''(c) is the second derivative evaluated at x = c, and f^(n)(c) is the nth derivative evaluated at x = c. The fewer terms used, the less accurate this approximation is.
Only an infinite number of terms provides the exact value of f(x). The 4th-degree Taylor polynomial approximation of y = √x at x = 1 is:

√x ≈ 1 + (1/2)(x - 1) - (1/8)(x - 1)^2 + (1/16)(x - 1)^3 - (5/128)(x - 1)^4
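Evaluating this polynomial and comparing it with a library square root shows how accurate four terms already are near x = 1. A minimal sketch:

```python
import math

def sqrt_taylor4(x):
    """4th-degree Taylor polynomial of sqrt(x) about x = 1."""
    h = x - 1
    return 1 + h/2 - h**2/8 + h**3/16 - 5*h**4/128

# Near x = 1 the polynomial agrees with the true square root to ~5 decimal places
print(sqrt_taylor4(1.2), math.sqrt(1.2))
```

Real calculators use refinements of this idea (range reduction plus a short polynomial), but the principle is the same: reduce a "hard" function to additions and multiplications.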

The Impact of Mathematics on Cellular and Molecular Biology Nidhi, 5th Semester B.Sc

The application of mathematics to cellular and molecular biology is so pervasive that it often goes unnoticed. The determination of the dynamic properties of cells and enzymes, expressed in the form of enzyme kinetic measurements or receptor-ligand binding, is based on mathematical concepts that form the core of quantitative biochemistry. Molecular biology itself can trace its origins to the infusion of physical scientists into biology, with the inevitable infusion of mathematical tools. The utility of the core tools of molecular biology was validated through mathematical analysis. Examples include the quantitative estimation of viral titers, measurement of recombination and mutation rates, the statistical validation of radioactive decay measurements, and the quantitative measurement of genome size and informational content based on DNA (i.e., base sequence) complexity. Differential geometry is the branch of mathematics that applies the methods of differential calculus to study the differential invariants of manifolds. Topology is the mathematical study of shape. It defines and quantifies properties of space that remain invariant under deformation. These two fields have been used extensively to characterize many of the basic physical and chemical properties of DNA. Geometric concepts of tilt, roll, shear, propeller twist, etc. have been used to describe the secondary structure of DNA (i.e., the actual helical stacking of the bases that forms a linear segment of DNA). In addition, these concepts can be used to describe the interaction of DNA with ligands such as intercalating drugs.
Using differential geometry and topology, both molecular biologists and mathematicians have been able to explain many of the properties of these molecules from two basic characteristics of the linking number: first, that it is invariant under deformations; second, that it is the sum of two geometric quantities, twist and writhe. Among the major applications are: a. the explanation for and extent of supercoiling in a variety of closed DNAs (Bauer 1978); b. the analysis of the enzymes that change the topology of a DNA chain (Cozzarelli 1980, Wasserman and Cozzarelli 1986); c. the estimation of the extent of winding in nucleosomes (Travers and Klug 1987); d. the determination of the free energy associated with supercoiling (Depew and Wang 1975); e. the quantitative analysis of the binding of proteins and of small ligands to DNA (Wang et al. 1983); f. the determination of the helical repeat of DNA in solution and of DNA wrapped on protein surfaces (White et al. 1988); and g. the determination of the average structure of supercoiled DNA in solution (Boles et al. 1990).

Computational Biology - A Primer Niti.D 5th Semester

We first define mathematics as 'the study of the measurement, properties, and relationships of quantities and sets, structure, space, and change, using numbers and symbols': arithmetic, algebra, geometry, calculus and other branches of mathematics. The computational nature of some recently developed areas of biological study shall be discussed in this article. We define biology as the natural science concerned with the study of life and living organisms, including their structure, function, growth, origin, evolution, distribution, and taxonomy.

So what is computational biology? Computational biology aims at the mathematical representation, treatment and modeling of biological processes, using a variety of applied mathematical techniques and tools. It has both theoretical and practical applications in biological, biomedical and biotechnology research. Significance: Applying mathematics to biology has a long history, but only recently has there been an explosion of interest in the field. Some reasons for this include: A - the explosion of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools; B - the recent development of mathematical tools such as chaos theory to help understand complex, nonlinear mechanisms in biology; C - an increase in computing power which enables calculations and simulations to be performed that were not previously possible; D - an increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research. Application of mathematics in genetic mapping: Genetic mapping deals with the inheritance of certain "genetic markers" within the pedigree of families. In essence, the genetic map uses probability to get the most likely structure, and such structures are constructed using the observed data. Only a few years ago our knowledge of the mathematics involved, and the computational complexity of algorithms based on that mathematics, allowed us to analyze no more than five or six markers. Progress in this area has been based on mathematical areas such as combinatorics, graph theory and statistics.

PASCAL'S TRIANGLE DEEPTHEE M.R. BSc, PMCS, 5th Semester B.Sc

How Pascal's Triangle is Constructed: At the tip of Pascal's Triangle is the number 1, which makes up the zeroth row. The first row (1 & 1) contains two 1's, both formed by adding the two numbers above them to the left and the right, in this case 1 and 0 (all numbers outside the Triangle are 0's).
Do the same to create the 2nd row: 0+1=1; 1+1=2; 1+0=1. And the third: 0+1=1; 1+2=3; 2+1=3; 1+0=1. In this way, the rows of the triangle go on infinitely. A number in the triangle can also be found by nCr (n choose r), where n is the number of the row and r is the position of the element in that row. For example, in row 3, 1 is the zeroth element, 3 is element number 1, the next 3 is the 2nd element, and the last 1 is the 3rd element. The formula for nCr is nCr = n!/((n - r)! r!). The sum of the numbers in any row is equal to 2 to the nth power, or 2^n, where n is the number of the row. For example: for the zeroth row, 1 = 2^0; for the first row, 1 + 1 = 2 = 2^1; for the second row, 1 + 2 + 1 = 4 = 2^2; for the third row, 1 + 3 + 3 + 1 = 8 = 2^3; and so on. Hockey Stick Pattern: If a diagonal of numbers of any length is selected, starting at any of the 1's bordering the sides of the
triangle and ending on any number inside the triangle on that diagonal, the sum of the numbers inside the selection is equal to the number below the end of the selection that is not on that same diagonal. Magic 11's: If a row is made into a single number by using each element as a digit of the number (carrying over when an element itself has more than one digit), the number is equal to 11 to the nth power, or 11^n, where n is the number of the row the multi-digit number was taken from.

Row 0: 11^0 = 1 (row: 1)
Row 1: 11^1 = 11 (row: 1 1)
Row 2: 11^2 = 121 (row: 1 2 1)
Row 3: 11^3 = 1331 (row: 1 3 3 1)
Row 4: 11^4 = 14641 (row: 1 4 6 4 1)
Row 5: 11^5 = 161051 (row: 1 5 10 10 5 1)
Row 6: 11^6 = 1771561 (row: 1 6 15 20 15 6 1)
Row 7: 11^7 = 19487171 (row: 1 7 21 35 35 21 7 1)
Row 8: 11^8 = 214358881 (row: 1 8 28 56 70 56 28 8 1)

Fibonacci's Sequence: The Fibonacci sequence can also be located in Pascal's Triangle. The sums of the numbers along the shallow diagonals shown in the diagram are the numbers of the Fibonacci sequence. The sequence can also be formed in a more direct way, very similar to the method used to form the triangle, by adding two consecutive numbers in the sequence to produce the next number. This creates the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, etc. The Fibonacci sequence can be found in the Golden Rectangle and in the lengths of the segments of a pentagram, appears throughout nature, and describes a curve which can be found in string instruments, such as the curve of a grand piano. Triangular Numbers: Triangular numbers are just one type of polygonal numbers. (See the section on Polygonal Numbers for an explanation of polygonal and triangular numbers.) The triangular numbers can be found in the diagonal starting at row 3, as shown in the diagram. The first triangular number is 1, the second is 3, the third is 6, the fourth is 10, and so on. Similarly one can find other polygonal numbers, square numbers, etc.
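All of these patterns are easy to check mechanically. A minimal sketch (the function name is illustrative) that builds each row by the "add the two numbers above" rule and then tests the nCr, row-sum, magic-11 and hockey-stick patterns:

```python
from math import comb

def pascal_row(n):
    """Row n of Pascal's triangle, built by the adding rule."""
    row = [1]
    for _ in range(n):
        # pad with the 0's that lie outside the triangle
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

print(pascal_row(4))                   # → [1, 4, 6, 4, 1]
print(pascal_row(4)[2] == comb(4, 2))  # the nCr formula gives each entry directly

# Row sums are powers of 2
print(sum(pascal_row(6)), 2**6)

# Magic 11's: evaluate the row as base-10 digits (carrying handled by arithmetic)
print(sum(d * 10**i for i, d in enumerate(reversed(pascal_row(5)))), 11**5)

# Hockey stick: 1 + 3 + 6 + 10 along the C(n, 2) diagonal equals C(6, 3) = 20
print(sum(pascal_row(n)[2] for n in range(2, 6)), pascal_row(6)[3])
```

The magic-11 line also explains why the pattern holds: reading a row as digits is just evaluating (10 + 1)^n by the binomial theorem.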

Exotic spheres, or why 4-dimensional space is a crazy place Vijay Shashank 5th Semester

The world we live in is strictly 3-dimensional: up/down, left/right, and forwards/backwards; these are the only ways to move. For years, scientists and science fiction writers have contemplated the possibilities of higher dimensional spaces. What would a 4- or 5-dimensional universe look like? Or might it even be true, as some have suggested, that we already inhabit such a space, that our 3-dimensional home is no more than a slice through a higher dimensional realm, just as a slice through a 3-dimensional cube produces a 2-dimensional square? Just as a 3-dimensional object can be projected onto a 2-dimensional plane, so a 4-dimensional object can be projected onto 3-dimensional space. One such image comes from the projection of a 4-dimensional hypersphere: the curves are the projections of the hypersphere's parallels (red), meridians (blue) and so-called hypermeridians. According to the early 20th century horror writer H. P. Lovecraft, these higher dimensions do indeed exist, and are home to all manner of evil creatures. In Lovecraft's mythology, the most terrible of these beings goes by the name of Yog-Sothoth. Interestingly, on the rare occasions that Yog-Sothoth appears in the human realm, it takes the form of "a congeries of iridescent globes... stupendous in its malign suggestiveness". Lovecraft had some interest in mathematics, and indeed used ideas such as hyperbolic geometry to lend extra strangeness to his stories. But he could not have known how fortunate was the decision to represent Yog-Sothoth in this manner. Strange spheres really are the keys to higher dimensional worlds, and our understanding of them has increased greatly in recent years. Over the last 50 years a subject called differential topology has grown up, and revealed just how alien these places are. Higher dimensions and hyperspheres: Do higher dimensions exist?
Mathematics provides a surprisingly emphatic answer to this question. Just as a 2-dimensional plane can be described by pairs of coordinates such as (5, 6) with reference to a pair of axes, so 3-dimensional space can be described by triples of numbers such as (5, 6, 3). Of course we can continue this line of thought: 4-dimensional space, for a mathematician, is identified with the set of quadruples of real numbers, such as (5, 6, 3, 2). This procedure extends to all higher dimensions. Of course, this does not answer the physicist's question of whether such dimensions have any objective physical existence. But mathematically, at least, as long as you believe in numbers, you don't have much choice but to believe in 4-dimensional space too. Well, that is fine, but how can such spaces be imagined? What does the lair of Yog-Sothoth actually look like? This is a much harder question to answer, since our brains
are not wired to see in more dimensions than three. But again, mathematical techniques can help, firstly by allowing us to generalise the phenomena that we do see in more familiar spaces. An important example is the sphere. If you choose a spot on the ground, and then mark all the points which are exactly 1 cm away from it, the shape that emerges is a circle, with radius 1 cm. If you do the same thing in 3-dimensional space, you get an ordinary sphere or globe. Now comes the exciting part, because exactly the same trick works in four dimensions, and produces the first hypersphere. What does this look like? Well, when we look at the circle from close up, each section looks like an ordinary 1-dimensional line (so the circle is also known as the 1-sphere). The difference between the circle and the line is that, when viewed from afar, the whole circle curves back to connect to itself, and has only finite length. In the same way, each patch of the usual sphere (that is to say, the 2-sphere) looks like a patch of the 2-dimensional plane. In 1956, John Milnor was investigating 7-dimensional manifolds when he found a shape which seemed very strange. On the one hand, it contained no holes, and so it seemed to be a sphere. On the other hand, the way it was curved around was not like a sphere at all. Initially Milnor thought that he had found a counterexample to the 7-dimensional version of the Poincaré conjecture: a shape with no holes which was not a sphere. But on closer inspection, his new shape could morph into a sphere (as the Poincaré conjecture insists it must be able to do), but, remarkably, it could not do so smoothly. So, although it was topologically a sphere, in differential terms it was not. Milnor had found the first exotic sphere, and he went on to find several more in other dimensions. In each case, the result was topologically spherical, but not differentially so.
Another way to say the same thing is that the exotic spheres represent ways to impose unusual notions of distance and curvature on the ordinary sphere. In dimensions one, two, and three, there are no exotic spheres, just the usual ones, because the topological and differential viewpoints do not diverge in these familiar spaces. Similarly, in dimensions five and six there are only the ordinary spheres, but in dimension seven, suddenly there are 28. In higher dimensions the number flickers around between 1 and arbitrarily large numbers. So, is the smooth Poincaré conjecture true? Most mathematicians lean towards the view that it is probably false, and that 4-dimensional exotic spheres are likely to exist. The reason is that 4-dimensional space is already known to be a very weird place, where all sorts of surprising things happen. A prime example is the discovery in 1983 of a completely new type of shape in 4 dimensions, one which is completely unsmoothable. As discussed above, a square is not a smooth shape because of its sharp corners. But it can be smoothed: that is to say, it is topologically identical to a shape which is smooth, namely the circle. In 1983, however, Simon Donaldson discovered a new class of 4-dimensional manifolds which are unsmoothable: they are so full of essential kinks and sharp edges that there is no way of ironing them all out. Beyond this, it is not only spheres which come in exotic versions. It is now known that 4-dimensional space itself (or R4) comes in a variety of flavours. There is the usual flat space, but alongside it are the exotic R4s. Each of these is topologically identical to ordinary space, but not differentially so. Amazingly, as Clifford Taubes showed in 1987, there are actually infinitely many of these alternative realities. In this respect, the fourth dimension really is an infinitely stranger place than every other domain: for all other dimensions n, there is only ever one version of Rn. Perhaps, after all, the fourth dimension is the right mathematical setting for the weird worlds of science fiction writers' imaginations.

Mathematicians with Extraordinary Powers of Memory Srilakshmi.H 5th Semester B.Sc

We look here at a few mathematicians who have shown extraordinary powers of memory and calculating. First we mention John Wallis (born 23 Nov 1616 in Ashford, Kent, England; died 28 Oct 1703 in Oxford, England), whose calculating powers are described thus: Wallis occupied himself in finding (mentally) the integral part of the square root of 3 × 10^40, and several hours afterwards wrote down the result from memory. This fact having attracted notice, two months later he was challenged to extract the square root of a number of 53 digits; this he performed mentally, and a month later he dictated the answer, which he had not meantime committed to writing. Next, von Neumann (born 28 Dec 1903 in Budapest, Hungary; died 8 Feb 1957 in Washington D.C., USA), whose feats of memory are described by Herman Goldstine as follows: As far as I could tell, von Neumann was able on once reading a book or article to quote it back
verbatim; moreover he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how the 'Tale of Two Cities' started. Whereupon, without pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes.

Von Neumann's ability to do mental arithmetic is the source of a large number of stories, which no doubt have grown more impressive with the telling. It is hard to decide between fact and fiction. However, it is clear that multiplying two eight-digit numbers in his head was a task that he could accomplish with little effort. Again it would appear that von Neumann's 'almost perfect' memory played a large part in his ability to calculate. Only one mathematician has ever described in detail how he was able to perform incredible feats of memory and calculating: A. C. Aitken (born 1 April 1895 in Dunedin, New Zealand; died 3 Nov 1967 in Edinburgh, Scotland). He could instantly give the product of two four-digit numbers, but hesitated if both numbers exceeded 10,000. Among questions asked him at this time were to raise 8 to the 16th

power; in a few seconds he gave the answer 281,474,976,710,656, which is correct. He worked less quickly when asked to raise numbers of two digits like 37 or 59 to high powers. Asked for the factors of 247,483 he replied 941 and 263; asked for the factors of 171,395 he gave 5, 7, 59 and 83; asked for the factors of 36,083 he said there were none. He did, however, find it difficult to answer questions about numbers higher than 1,000,000. Another mathematician, George Parker Bidder, was born in 1806 at Moreton Hampstead in Devonshire, England. He was not one to lose his skills when educated, and wrote an interesting account of his powers in calculating. Again it is worth noting that other members of his family had exceptional powers of memory and calculating.
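Aitken's answers quoted above are easy to confirm today with ordinary integer arithmetic; a small sketch:

```python
# Sanity-check the feats quoted above with modern (machine) arithmetic.
assert 8 ** 16 == 281_474_976_710_656    # Aitken's value for 8 to the 16th
assert 941 * 263 == 247_483              # his factors of 247,483
assert 5 * 7 * 59 * 83 == 171_395        # his factors of 171,395

def is_prime(n):
    """Trial division; perfectly adequate for six-digit numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(36_083))  # -> True: 36,083 indeed has no factors
```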

Sum of Four Squares Neetha.R 5th Semester The problem of expressing any given whole number as a sum of squares dates back to Diophantus, who lived in the 3rd century. The problem was revisited by Fermat in the 17th century, after he received a copy of Diophantus' Arithmetica through Bachet. It seems that Leonardo of Pisa (Fibonacci) also worked on the sum of squares problem, but all of them failed to give a proof. It was Lagrange who solved this problem, using his theory of quadratic forms, in 1770. Examples: 3 = 1^2 + 1^2 + 1^2 + 0^2; 31 = 5^2 + 2^2 + 1^2 + 1^2; 10 = 2^2 + 2^2 + 1^2 + 1^2; 127 = 11^2 + 2^2 + 1^2 + 1^2. This works for any whole number. Though the original proof uses the theory of quadratic forms, one can prove the theorem in three steps using elementary number theory. Step 1: Every prime number can be expressed as a sum of four squares. This step requires the analysis of two cases: primes of the form 4k+1 and primes of the form 4k+3. Step 2: If two numbers are each expressible as a sum of four squares, so is their product. This is proved using the norm of quaternions, or equivalently Euler's four-square identity. Step 3: The fundamental theorem of arithmetic says that every integer can be expressed as a product of prime numbers. Combining the three statements above gives the proof of Lagrange's theorem. Lagrange's four-square theorem is a special case of the Fermat polygonal number theorem and of Waring's problem. Conclusion: number-theoretic theorems are often easy to state but difficult to prove, and Lagrange's sum of four squares theorem is one such theorem.
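Lagrange's theorem is easy to test by brute force for small numbers; a minimal Python sketch (an illustration, of course, not part of the proof):

```python
from itertools import product

def four_squares(n):
    """Find (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n by exhaustive search."""
    limit = int(n ** 0.5) + 1
    for quad in product(range(limit), repeat=4):
        if sum(v * v for v in quad) == n:
            return quad
    return None  # never reached, by Lagrange's theorem

for n in (3, 10, 31, 127):
    a, b, c, d = four_squares(n)
    print(f"{n} = {a}^2 + {b}^2 + {c}^2 + {d}^2")
```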

Fermat's Last Theorem Shwetha.B.V 5th Semester 1. The Pythagorean identity is x^2 + y^2 = z^2 ----(1), where x, y are the two legs of a right-angled triangle and z is its hypotenuse. 2. If one tries to generalize equation (1) to x^n + y^n = z^n and solve it, it turns out that no positive integer solutions exist at all for n >= 3. This observation was made by Fermat, in the margin of his copy of the Arithmetica, in the year 1637. Fermat's original statement was as follows: IT IS IMPOSSIBLE FOR A CUBE TO BE THE SUM OF TWO CUBES, A FOURTH POWER TO BE THE SUM OF TWO FOURTH POWERS, OR IN GENERAL FOR ANY NUMBER THAT IS A POWER GREATER THAN THE SECOND TO BE THE SUM OF TWO LIKE POWERS. He also claimed that he had a marvellous proof, but that the margin was too narrow to contain it. Many great mathematicians tried over the subsequent two centuries, but the proof was not to be found. Fermat himself proved the non-existence of integers x, y, z with x.y.z != 0 satisfying x^4 + y^4 = z^4. So a proof was required for the remaining exponents, in fact just for the odd primes n = 3, 5, 7, ... INTERMEDIATE STEPS: If an odd prime p divides n, then n = mp for some integer m. Therefore x^n + y^n = z^n implies (x^m)^p + (y^m)^p = (z^m)^p. Taking X = x^m, Y = y^m, Z = z^m, we get X^p + Y^p = Z^p ----(3). If no odd prime divides n, then n is a power of 2; since n >= 3 it is then divisible by 4, and Fermat's own result for exponent 4 applies. Therefore it is sufficient to look at solutions of (3) for odd primes p. Several intermediate conditions were discovered for the solutions of (3), but a complete proof was not found. A few people presented calculations claiming to have proved FLT, but these were all flawed arguments. Computer verifications showed only that FLT is true for all exponents up to 4 x 10^6; a true proof is one which establishes FLT for all integers n, or equivalently for all primes p. ANDREW WILES: The Mathematician who solved FLT. Andrew Wiles had, in 1963, as an 11-year-old schoolboy, dreamt of proving this conjecture of Fermat.
Once, while browsing in a library, he read about the problem of Fermat (it was then called Fermat's last conjecture, since a proof had not yet been given). He dreamt that he would be the mathematician to solve Fermat's problem, and indeed he went on to prove it around 30 years later.
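As an aside, the impossibility Fermat claimed can be probed, though never proved, by brute force; a quick sketch for the cube case:

```python
# Exhaustive check: no positive x, y, z with x, y <= 50 satisfy x^3 + y^3 = z^3.
# A finite search proves nothing in general; it merely illustrates the claim.
N = 50
cubes = {z ** 3 for z in range(1, 2 * N)}   # covers every possible z here
solutions = [(x, y) for x in range(1, N + 1)
             for y in range(x, N + 1)
             if x ** 3 + y ** 3 in cubes]
print(solutions)  # -> [] : no counterexample in this range
```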

Wiles earned a Bachelor's degree from Oxford University in 1974, while still thinking about a possible proof of Fermat's problem. Luckily, he was a Maths major, and later took up a PhD at Cambridge in the mathematical specialization of number theory. He chose the right kind of specialization, one that would enable him to fulfil his childhood dream of proving Fermat's last conjecture. THE MAIN IMPETUS: The main impetus in proving FLT was the theory of modular forms and elliptic curves. G. Frey, a German mathematician now at the Institute for Experimental Mathematics, University of Duisburg-Essen, constructed a hypothetical set of geometric objects now called Frey curves. Frey's argument was that if there were three non-zero whole numbers a, b, c and a whole number n greater than 2 such that a^n + b^n = c^n, then this would lead to an elliptic curve y^2 = x(x - a^n)(x + b^n), known as a Frey curve, with very strange properties. This was an interesting direction towards proving FLT. Andrew Wiles used this argument, building on the work of Ken Ribet, Taniyama, Shimura and other mathematicians; his goal was to show that Frey curves cannot exist in reality, which he achieved in a really long proof. ANDREW WILES' VICTORY: Wiles's road to victory was not entirely smooth. After a first draft, experts found a flaw in one of his arguments, and he had to toil for another year to fix the problem. In fact he was practically confined to his attic for almost a year until he emerged victorious. CONSEQUENCES AND CONCLUSIONS: The efforts leading to FLT enriched mathematics, both within pure mathematics and in the applications of mathematics to the world outside. For example, new domains in number theory took birth: Kummer's ideal numbers and, essentially, class field theory came up during the efforts to prove FLT.
Also, the theory of elliptic curves has been used in modern cryptology. The Mountains of Pi. Rumana Kousar 5th SEM BSc (PMCs) Story on Pi: Did you know that Archimedes was the first mathematician to pin down the value of pi rigorously, trapping it between the bounds 223/71 and 22/7? And who was the first mathematician to give the approximate value of pi which is commonly accepted today? It was the Indian Aryabhata (he gave the value of pi as 3.1416).

Notes on Pi: Pi is the most famous ratio in mathematics, and one of the most ancient numbers known to humanity. Pi is approximately 3.14: the number of times that a circle's diameter will fit around the circle. Pi goes on forever and cannot be calculated to perfect precision: 3.14159265358979323846264338327950288419716939937510... This is known as the decimal expansion of pi. No apparent pattern emerges in the succession of digits: a predestined yet unfathomable code. The digits never repeat periodically; they seem to pop up by blind chance, lacking any perceivable order, rule, reason or design: random integers, ad infinitum. In 1991, the Chudnovsky brothers in New York, using their computer, calculated pi to two billion two hundred sixty million three hundred twenty-one thousand three hundred sixty-three digits (2,260,321,363). They halted the program that summer. Pi has had various names through the ages, and all of them are either words or abstract symbols, since pi is a number that cannot be shown completely and exactly in any finite form of representation. Pi slips away from all rational methods to locate it; it is indescribable and cannot be fully pinned down. Ferdinand Lindemann, a German mathematician, proved the transcendence of pi in 1882, i.e. pi is a transcendental number. Pi possibly first entered human consciousness in Egypt. The earliest known reference to pi occurs in a Middle Kingdom papyrus scroll, written around 1650 BCE by a scribe named Ahmes. He titled it The Entrance Into the Knowledge of All Existing Things, and remarks in passing that he composed the scroll in likeness to writings made of old. Towards the end of the scroll, which is composed of various mathematical problems and their solutions, the area of a circle is found using a rough sort of pi. Around 250 BCE, Archimedes of Syracuse found that pi is somewhere about 3.14 (in fractions; the Greeks did not have decimals). Knowledge of pi then bogged down until the 17th century, when pi was called the Ludolphian number, after Ludolph van Ceulen.
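The digits quoted above are easy to reproduce today. One classical method, Machin's 1706 formula pi = 16*arctan(1/5) - 4*arctan(1/239) (far slower than the Chudnovskys' series, but simple), can be coded with plain integer arithmetic; a sketch:

```python
def arctan_recip(x, prec):
    """arctan(1/x), scaled by 10**(prec + 10), via the Gregory series."""
    one = 10 ** (prec + 10)          # 10 guard digits absorb truncation error
    term = total = one // x
    x2, divisor = x * x, 1
    while term:
        term //= x2
        divisor += 2
        if divisor % 4 == 1:         # signs alternate: +1/x - 1/(3x^3) + ...
            total += term // divisor
        else:
            total -= term // divisor
    return total

def pi_digits(prec):
    """The integer floor(pi * 10**prec), using Machin's formula."""
    scaled = 16 * arctan_recip(5, prec) - 4 * arctan_recip(239, prec)
    return scaled // 10 ** 10        # drop the guard digits

print(pi_digits(30))  # -> 3141592653589793238462643383279
```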
The Greek letter for the number was introduced by William Jones, an English mathematician, who coined it in 1706. Physicists have noted the ubiquity of pi in nature. Pi is obvious in the disks of the moon and the sun. The double helix of DNA revolves around pi. Pi hides in the rainbow, and sits in the pupil of the eye, and when a raindrop falls into water pi emerges in the spreading rings. Pi can be found in waves and ripples and spectra of all kinds, and therefore pi occurs in colours and music. Pi has lately turned up in tables of death, in what is known as a Gaussian distribution of deaths in a population; that is, when a person dies, the event feels pi. It is one of the great mysteries why nature seems to know mathematics.


TRIGONOMETRY is an important branch of MATHEMATICS. It was first compiled by the Greek mathematician Hipparchus. The word Trigonometry has been derived from the Greek words Tri (three), Goni (angles) and Metron (measurement); literally it means MEASUREMENT OF TRIANGLES. The mathematician Bartholomaeus Pitiscus coined the word Trigonometry. Trigonometry was developed for use in sailing, as a navigation method used with astronomy. Trigonometry historically evolved through the following civilizations: Egypt, Mesopotamia and the Indus valley civilization. The ancient Sinhalese in Sri Lanka, when constructing reservoirs in the Anuradhapura kingdom, used trigonometry to calculate the gradient of the water flow. Archaeological research also provides evidence of trigonometry used in other unique hydrological structures dating back to the 4th century BCE. The Indian mathematician Aryabhata in 499 AD gave tables of half chords, which are now known as sine tables, along with cosine tables. He used jya for sine, kotijya for cosine, and otkram-jya for inverse sine, and also introduced the versine. Another Indian mathematician, Brahmagupta, in 628 AD used an interpolation formula to compute values of sines, up to the second order of the Newton-Stirling interpolation formula. Detailed methods for constructing a table of sines for any angle were given by the Indian mathematician Bhaskara, who also developed SPHERICAL TRIGONOMETRY. The Persian mathematician Omar Khayyam (1048-1131) combined trigonometry and approximation theory to provide methods of solving algebraic equations by geometrical means. Khayyam solved the cubic equation x^3 + 200x = 20x^2 + 2000 and found a positive root of this cubic by considering the intersection of a rectangular hyperbola and a circle. An approximate numerical solution was then found by interpolation in trigonometric tables. The 13th-century Persian mathematician Nasir al-Din al-Tusi, in his Treatise on the Quadrilateral, was the first to list the six distinct cases of a right triangle in spherical trigonometry.
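Khayyam found his root geometrically and then refined it from trigonometric tables. Today the same root takes a few lines of bisection; a sketch (assuming the cubic x^3 + 200x = 20x^2 + 2000, rearranged so that the root is a zero):

```python
def f(x):
    # Khayyam's cubic, rewritten as x^3 - 20x^2 + 200x - 2000 = 0
    return x ** 3 - 20 * x ** 2 + 200 * x - 2000

lo, hi = 0.0, 100.0          # f(0) < 0 < f(100), and f is increasing (f' > 0)
while hi - lo > 1e-10:
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
print(round(root, 3))  # -> 15.437, the positive root Khayyam approximated
```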
In the 14th century, the Persian mathematician al-Kashi and the Timurid mathematician Ulugh Beg (grandson of Timur) produced tables of trigonometric functions as part of their studies of astronomy. TRIGONOMETRY TODAY: Trigonometry is used in the design and construction of buildings, cars, planes and many other objects. Trigonometry is used in physics and engineering whenever forces, waves, fields and vectors are involved. Trigonometry is used in music and acoustics to design speakers, instruments and concert halls. Trigonometry is used to coordinate launches of space shuttles. Trigonometry is used to navigate ships and planes. Nearly every part of modern life uses TRIGONOMETRY in some way.

Conformal transformations and applications Viraja 5th Semester B.Sc

Conformal mapping is a mathematical technique used to convert (or map) one mathematical problem and its solution into another. It involves the study of complex variables. Complex variables are combinations of real and imaginary numbers, which are taught in secondary school; the use of complex variables to perform a conformal mapping is taught in college. Under some fairly restrictive conditions, we can define a complex mapping function that will take every point in one complex plane and map it onto another complex plane. The mapping is represented by the red lines in the figure. Many years ago, the Russian mathematician Joukowski developed a mapping function that converts a circular cylinder into a family of airfoil shapes. If points in the cylinder plane are represented by the complex coordinates x for the horizontal and y for the vertical, then every point z is specified by z = x + iy. Similarly, in the airfoil plane, the horizontal coordinate is B and the vertical coordinate is C, and every point A is specified by A = B + iC. Then Joukowski's mapping function, which relates points in the airfoil plane to points in the cylinder plane, is given by A = z + 1/z. The mapping function also converts the entire flow field around the cylinder into the flow field around the airfoil. We know the velocity and pressures in the plane containing the cylinder; the mapping function gives us the velocity and pressures around the airfoil. Knowing the pressure around the airfoil, we can then compute the lift. The computations are difficult to perform by hand, but can be solved quickly on a computer. Computerized texture design, brain mapping in medical visualization, and potential theory in physics are some of the applications of these transformations. A large number of problems arising in fluid mechanics, electrostatics, heat conduction, and many other physical situations can be mathematically formulated in terms of Laplace's equation.
i.e., all these physical problems reduce to solving Laplace's equation. This fundamental technique is used to obtain closed-form expressions for the characteristic impedance and dielectric constant of different types of waveguides. A series of conformal mappings is performed to obtain the characteristics for a range of different geometric parameters. Conformal mapping has various applications in the field of medical physics. For example, conformal mapping is applied to brain surface mapping. This is based on the fact that any genus-zero surface (the genus of a connected, orientable surface is an integer representing the maximum number of cuttings along closed simple curves without rendering the resultant manifold disconnected; a sphere, disk or annulus has genus zero) can be mapped conformally onto the sphere, and any local portion thereof onto a disk. Conformal mapping can also be used in scattering and diffraction

problems. For the scattering and diffraction of plane electromagnetic waves, the mathematical problem involves finding a solution to a scalar wave function which satisfies both the boundary condition and the radiation condition at infinity.
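The Joukowski map A = z + 1/z described above is easy to experiment with numerically. A small sketch (the circle centre and radius below are illustrative values, not taken from the article):

```python
import cmath

def joukowski(z):
    """The Joukowski transformation A = z + 1/z."""
    return z + 1 / z

# The unit circle maps onto the straight segment [-2, 2]:
# z = e^{it} gives z + 1/z = 2 cos(t), which is purely real.
for k in range(8):
    t = 2 * cmath.pi * k / 8
    w = joukowski(cmath.exp(1j * t))
    assert abs(w.imag) < 1e-12 and abs(w.real) <= 2 + 1e-12

# An offset circle enclosing z = -1 and passing near z = 1 produces
# an airfoil-like outline.
center, radius = complex(-0.08, 0.08), 1.1   # illustrative parameters
airfoil = [joukowski(center + radius * cmath.exp(2j * cmath.pi * k / 100))
           for k in range(100)]
print(len(airfoil))  # 100 boundary points of the mapped shape
```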

Proof by Induction and the Power Rule Kamaljit 5th Semester B.Sc
Introduction: Imagine, if you will, the following scenario. It's a beautiful Sunday afternoon and you're enjoying a rare moment of inactivity with your loved one on the riverbank, perhaps admiring the gothic façade of the church which has stood on the opposite side of that river for centuries. All of a sudden, your partner brings up the subject of the product rule of differentiable functions! Needless to say, you are intrigued, if not a little bewildered at the motives behind their introduction of such a topic into the conversation. You say that this does indeed sound like a very useful rule, but, being a conscientious member of the scientific community, you feel compelled to express your concern over how they can be certain that this rule holds true for all cases. Upon doing so, your partner fixes you with a contemptuous look and informs you that they have seen and heard this rule quoted by innumerable respected and accomplished mathematicians and, furthermore, they themselves have never come across any circumstances which have given good reason to doubt that the product rule does indeed hold true. Now, being a budding applied mathematician or physical scientist (not an engineer though, their kind is incapable of love and so this situation could never arise for them), this seems like a perfectly reasonable justification and you happily accept what they say and gratefully assimilate said product rule into your mathematical knowledge base. It all sounds innocent enough, does it not? And it would be, if only it ended there. The following week, you're sitting in a pleasantly run-down café, contemplatively munching on a sausage and once again accompanied by your loved one. Your beloved looks in your direction and whispers the power rule of differentiable functions in your direction. Your response is similar to the one you issued last time something like this happened. The object of your passion heaves a heavy sigh and shakes their head sorrowfully at your impudence.
Once again they tell you that it's simply an established mathematical fact, this time going as far as to list the mathematical achievements of the lecturer who first imparted this knowledge to them. You duly accept that their justification is a good one, even if their manner is slightly upsetting, and thank them for being so generous with their knowledge, only for a moment noticing the flicker of pure malevolence in their eyes as you do so. This scenario repeats itself in a myriad of different forms for weeks and months; each time you feel less and less inclined to question what you are being taught. You don't even notice that, as you become more and more accepting of what you are told, your partner's claims become wilder, more outrageous, and progressively further removed from the world of Science and Mathematics. Within a year you find yourself wearing a leather mask, crawling around on all fours with weights attached to your erogenous zones, and barking like a dog for the pleasure and amusement of your significant other, all because they told you to and apparently have it on good authority that this is the correct way to conduct yourself in their presence. How you envy the pure mathematician who, from the very start, is immune to such schemes! How you envy even the engineer, who simply meets the demands of his biologically predetermined breeding partner with a feral growl and a threatening wave of his fire stick! Then, it hits you: envying engineers... now you truly know that all hope has been extinguished in your darkened world. The Power Rule of Differentiable Functions: Let n be an element of the set of natural numbers, and let f be the function defined by f(x) = x^n.
Then f is differentiable at any real x, and f'(x) = n.x^(n-1). We shall not, at the present moment, seek to prove that f is differentiable at any real x, or the truth of the product rule (upon which the proof we are about to see depends), as this would require a discussion of limits and continuity and would take us away from the main topic of the article. Instead we shall leave such discussion for the future and, as much as it pains me to do so, and as contrary as it is to the spirit of this article, assume that the product rule is indeed true, and that f is differentiable at any real x. Now, we wish to prove that f'(x) = n.x^(n-1). Being naturally observant people, we notice the variable n in this statement and are immediately reminded of the principle of proof by induction.

i) Let n = 1. Then f(x) = x, and f'(x) = x^0 = 1, which we know to be true.
ii) Now consider the product rule. This states that if two functions f and g are differentiable at some number a, then f.g is also differentiable at a, and (f.g)'(a) = f'(a).g(a) + f(a).g'(a). Since we are dealing with powers of the single function x, we need not worry about the differentiability of various functions at a, as long as we know x to be differentiable at a. Let us assume that the statement is true for n = k, i.e. (x^k)' = k.x^(k-1). Then, for n = k + 1:
(x^(k+1))' = (x . x^k)' = 1.x^k + x.(k.x^(k-1)) (by the product rule)
= x^k + k.x^k = (k + 1).x^k,
which proves the power rule for all natural n.
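The induction above can be spot-checked numerically: a central-difference approximation to the derivative of x^n should agree with n.x^(n-1). A small sketch:

```python
def numeric_derivative(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Compare the numerical derivative of x^n with the power rule n * x^(n-1).
for n in range(1, 7):
    for x in (0.5, 1.0, 1.7, 2.3):
        approx = numeric_derivative(lambda t: t ** n, x)
        exact = n * x ** (n - 1)
        assert abs(approx - exact) < 1e-4, (n, x)
print("power rule confirmed at sample points")
```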

However, this proves the power rule only for positive integer values of n. For negative exponents one need only modify the statement to read: if f(x) = x^(-n), then f'(x) = -n.x^(-(n+1)); the above proof can then be repeated, using in addition the derivative of 1/x.

Conclusion: I can see you're rather excited, perhaps a little too excited, by what you have read so far, so I shall leave it at that for the moment and give you a chance to calm down. Don't worry, I understand. I just hope that you will take it all on board, as it deserves to be, seek out other methods of proof, and begin to question every single mathematical statement that comes your way. Perhaps you will see that, as G. H. Hardy noted, "Pure mathematics is on the whole distinctly more useful than applied. For what is useful above all is technique, and mathematical technique is taught mainly through pure mathematics."

Madhava's Contributions to Mathematics Nainesh Kothari 5th semester B.Sc

It seems that the ancient mathematicians in India knew about the properties of right-angled triangles. There is considerable evidence of this, and here we discuss Madhava's contributions. The most remarkable contribution from this period was by Madhava, who anticipated Taylor-type series and rigorous mathematical analysis in some inspired contributions. Madhava was from Kerala, and his work there inspired a school of followers such as Nilakantha and Jyeshthadeva. Madhava's result which gave a series for pi, translated into the language of modern mathematics, reads pi.R = 4R - 4R/3 + 4R/5 - ... This is also known as the Madhava-Leibniz series. Some of his contributions:

Prominent Hindu mathematician-astronomer from the town of Irinjalakkuda, near Cochin, Kerala, India [known as Sangamagrama at the time, meaning "union village"]. Founder of the Kerala school of astronomy and mathematics.

Described as the greatest mathematician-astronomer of medieval India, and as the founder of mathematical analysis. Although most of Madhava's original work is lost, he is referred to in the work of subsequent Kerala mathematicians (his disciples) as the source for several infinite series expansions. Developed the table of sines and cosines. The Kerala school, founded by Madhava, flourished for two hundred years after him.

Madhava's sine table: The image below gives Madhava's sine table in Devanagari, as reproduced in Cultural Foundations of Mathematics by C. K. Raju. The first twelve lines constitute the entries in the table. The last word in the thirteenth line indicates that these are "as told by Madhava".
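The series quoted at the start of this article converges, though very slowly; a minimal sketch of its partial sums:

```python
import math

def madhava_pi(terms):
    """Partial sum of the Madhava-Leibniz series pi = 4 - 4/3 + 4/5 - ..."""
    return sum((-1) ** k * 4 / (2 * k + 1) for k in range(terms))

# Convergence is painfully slow: the error after n terms is of order 1/n,
# so tens of thousands of terms yield only a handful of correct digits.
for n in (10, 1000, 100000):
    print(n, madhava_pi(n), "error:", abs(madhava_pi(n) - math.pi))
```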

On Questioning the Proof Nithason Yumnam, 5th Semester Introduction: Fermat's Last Theorem, a problem that had been around since 1637, when Pierre de Fermat wrote it into the margin of one of his books, was finally proved in 1993 by Andrew Wiles. But only a handful of people in the entire world can understand the proof; to the rest of us it is utterly incomprehensible, and yet we are quite happy to announce that "Fermat's Last Theorem has been proved". We have to believe the experts who tell us it has been, because we cannot tell for ourselves. In no other field of science would this be good enough. If a physicist told us that light rays are bent by gravity, as Einstein did, then we would insist on experiments to back up the theory. Mathematics therefore occupies a special place, where we believe anyone who claims to have proved a theorem on the say-so of just a few people - that is, until we hear otherwise. The point here is authority. The proofs of Mathematical and

Scientific discoveries can be so complicated, or so difficult to reconstruct, that we have to place our trust in the hands of the scientists and mathematicians behind the proof. This leads to so-called accepted knowledge: we hear from the experts that Columbus discovered America, until the contrary is proven. The problem then is, as we have seen, that from the wrong premises, wrong results can arise. As science has its empirical elements, mathematics has its founding axioms, the self-evident truths. One major difference between the disciplines of mathematics and philosophy on truth, proof and meaning is mathematics' assumption that there is a truth to study through rational abstractions; in philosophy, assumptions about the nature of truth itself are questioned. The contemporary philosophy of mathematics is built upon a myth. The myth asserts that all of mathematics can be reduced to certain foundations; the philosophy of mathematics is thus reduced to the analysis of these foundations - to the unholy triumvirate of logicism, formalism, and intuitionism. This critique draws on a 1931 paper by Kurt Gödel on the Incompleteness Theorem, which establishes the non-existence of a perfect axiomatic system. However, viewing mathematics as a chain of deductions from these axioms, we must admit that mathematics deals with perfect truth, and that two mathematicians can have an abstract conversation that is completely objective. Philosophers, too, must return to the realm of explanation and validation to prove their arguments about the nature of understanding. While mathematicians and philosophers share similar interests in developing and proving truth, they utilize different approaches developed from their distinct traditions. This is precisely the difference between accepted proofs in mathematics and in science: scientific proof craves reproducibility, verifiability and logical reasoning, while the work of a mathematician requires only the latter.
One other difference is that results in science, or even philosophy, are based on sense perception and therefore often on cultural interference. For example, a statistical survey could obtain different evidence for differently worded questions. The proof of the legendary Four Colour Theorem caused controversy in its day because of its computational nature: it was proved by running all possible configurations through a computer. It was rejected at first, but today several other mathematical conjectures are checked by computer. Are these mathematical proofs, or empirical data? Can a theorem be proved when no one can survey the proof? This is the main argument of the protestors against the FCT proof: the concept of surveyability. Thus, in a way, like the proofs of the natural sciences, the proofs of mathematics should be verifiable and repeatable. A mathematical truth is considered eternal, and thus its proof should not depend on any historical circumstances. Experiments cannot guarantee that the original result will always be obtained if the experiment were to be repeated, whereas proofs do: proofs are not temporally relative; they are eternal truths in all possible worlds. Mathematical elegance: The Riemann Hypothesis, probably the biggest problem in mathematics today, still awaits proof. This problem consists of proving that all the non-trivial zeros of the so-called zeta function lie on the critical line Re(s) = 1/2. A reward of $1,000,000 is promised for the first proof - but a disproof, i.e. finding a counterexample by computer, is interestingly not rewarded. Why? Conclusion: Accepted (or empirical) proofs are, on the other hand, based on reasoning, empirical evidence or, equivalently, experience and informal knowledge. Accepted proofs can be influenced by authorities and cultural background, contrary to ultimate proofs. As we have seen, if the majority of the people supports the validity of a statement, the statement is considered proved.

Constructive mathematics
Radhika, 5th Semester BSc. Before the world awoke to its own finiteness and began to take the need for recycling seriously, one of the quintessential images of the working mathematician was a waste-paper basket full of crumpled pieces of paper. The mathematician sits behind a large desk, furrowed brow resting on one hand, the other hand holding a stalled pencil over yet another sheet of paper soon to be crumpled and discarded. Imagine now a curious twist on this scene as, in 1909 or thereabouts, the Dutch mathematician L. E. J. Brouwer sits, perhaps amidst mounds of crumpled paper, working on a theorem about, well, crumpled paper. We can imagine his brow to be even more furrowed than most, because while paper played on his 28-year-old mind, he had already begun to reject the type of proof he was building. Over the remaining 57 years of his life he would create a revolution in our view of mathematics and logic. Brouwer can be called the father of constructivist mathematics, for while others have influenced his child, he has had the largest influence. To understand what this movement stands for, we first need to look at his crumpled paper. A fixed point theorem: Take first a flat sheet of paper and, imagining the most fiendishly impossible maths homework or your worst enemy (if these two are different), crumple it into a ball any way you please. Now lay this crumpled mess back on the spot where the original, uncrumpled sheet of paper lay previously, and press it down (rather than uncrumple it) until it's absolutely flat. Unbelievable as it may seem, there is a point on the crumpled sheet of paper which is back in exactly the same spot as it was before you did the crumpling.
This result is known as a fixed point theorem, since it shows that when one thing is transformed into another - in this case, the flat sheet of paper into the crumpled one - there is a point which is "fixed under the transformation": a point which does not change its position when the transformation is performed. In fact Brouwer's result was not the only fixed point theorem, but it was a very powerful one. It applies to any plane area which can be continuously transformed into a disc - in other words, any closed shape in the plane in a single piece and without bits missing from it. Any such shape has a fixed point when it is continuously transformed to itself. Here a continuous transformation means that some parts can be stretched or compressed, or even folded onto themselves, but no tears or holes can be made. To be transformed to itself means that the resulting shape can be made to occupy the same space (or part of it) as the untransformed shape did. Indeed, Brouwer showed that the same is true in any dimension. This means that when a sphere is transformed onto itself, it has a fixed point: when a child takes a glob of playdough and rolls it into a snake, at least one point remains fixed. And this is true of a 27-dimensional child making a 27-dimensional snake from a 27-dimensional glob of playdough. It definitely exists... but where? Surprising this result may be, but it seems hardly likely to start a revolution. Surprising results occur surprisingly often in mathematics - a surprising result in itself, you might say. Even Brouwer's proof used a standard and unsurprising approach, not considered controversial at the time and not often considered
controversial today. However, it was this type of proof, known as an existence proof, which Brouwer and his movement came to reject. In this type of proof, the object you are considering (in this case, a fixed point) is shown to exist with certainty. However, the proof says nothing whatsoever about where the object exists or how to find it: the treasure is certainly buried, but there is no X to mark the spot. This is like saying that if a paper aeroplane hits the teacher, she is certain that someone in the class threw it, but she has no way of finding out who the culprit is. The notion of proving only the existence of an object was soon rejected outright by Brouwer, and he started to claim that only those objects which we can actually construct should be believed. By "construct" Brouwer did not mean a model made from wire and papier-mâché, but rather a mathematical algorithm, a recipe for determining all of the features of an object. In so doing, you also get existence for free, of course, because if you construct an object then it certainly exists. A constructive proof of a fixed point theorem would tell you exactly where the fixed point is, exactly where X should mark the spot. If another theorem deals with a certain number, say the biggest element of a set of numbers, a constructivist proof would not merely say that the number exists but would rather tell you exactly what that number is, or how to calculate it. This is usually acknowledged to be the birth of constructivism or intuitionism in mathematics (both names are used interchangeably, even though neither gives the whole picture). Constructivism is a movement in the foundations of mathematics which holds that only those objects which can be constructed in the human mind from undeniably true thought processes are to be believed. In this way, it harks back to Immanuel Kant (1724-1804) and his claim that intuition alone tells us that the basics of mathematics are true.
Indeed, the constructivist school has since been built on the foundation of what is known as intuitionistic logic, to distinguish it from traditional (or "classical") logic.
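In one dimension, Brouwer's theorem reduces to the intermediate value theorem: a continuous map f of [0,1] into itself satisfies f(0) - 0 >= 0 and f(1) - 1 <= 0, so f(x) = x somewhere. There the constructivist demand can (almost) be met, since bisection locates the fixed point to any desired accuracy; strictly speaking, constructive mathematics guarantees only such approximate fixed points, which is exactly what the sketch below computes. The function name and the choice of cos as an example are ours, not part of Brouwer's argument.

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection on g(x) = f(x) - x for a continuous f mapping [lo, hi]
    into itself: g(lo) >= 0 and g(hi) <= 0, so a sign change is trapped."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid               # the fixed point lies to the right of mid
        else:
            hi = mid               # the fixed point lies to the left of mid
    return (lo + hi) / 2

# cos maps [0, 1] into itself; bisection pins down its unique fixed point.
x = fixed_point(math.cos)
```

Each bisection step halves the interval, so the fixed point is located to nine decimal places in about thirty steps, a fully explicit "X marks the spot".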

VAMPIRE NUMBERS AND FERMAT NUMBERS

Sahithi, 5th Semester BSc

VAMPIRE NUMBERS

A vampire number is a composite natural number v, with an even number of digits n, that can be factored into two integers x and y, each with n/2 digits and not both with trailing zeroes, such that v contains precisely all the digits of x and of y, in any order, counting multiplicity. The factors x and y are called the fangs. Vampire numbers first appeared in 1994, introduced by Clifford A. Pickover.

FOR EXAMPLE: 1827 can be written as a product of two numbers, 21 and 87:

1827 = 21 × 87

Here the digits 1, 8, 2, 7 appear in the factors the same number of times as in 1827, but in a different order. 21 and 87 are called fangs.

Examples of vampire numbers are: 1260, 1395, 1435, 1530, 1827, 2187, 6880, 102510, 104260, 105210, 105264, 105750, 108135, 110758, 115672, 116725, 117067, 118440, 120600, 123354, 124483, 125248, 125433, 125460, 125500, ...

There are many known sequences of infinitely many vampire numbers following a pattern, such as:

1530 = 30 × 51, 150300 = 300 × 501, 15003000 = 3000 × 5001, ...

Multiple fang pairs

A vampire number can have multiple distinct pairs of fangs. The first of infinitely many vampire numbers with 2 pairs of fangs:

125460 = 204 × 615 = 246 × 510

The first with 3 pairs of fangs:

13078260 = 1620 × 8073 = 1863 × 7020 = 2070 × 6318

The first with 4 pairs of fangs:

16758243290880 = 1982736 × 8452080 = 2123856 × 7890480 = 2751840 × 6089832 = 2817360 × 5948208
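The definition translates directly into a search procedure: try every candidate fang x up to the square root of v and check the digit condition. The following Python sketch (the function name is our own) returns all fang pairs of a number.

```python
def is_vampire(v):
    """Return all fang pairs (x, y) of v: x * y == v, each fang has half the
    digits of v, the fangs do not both end in zero, and together their
    digits are exactly the digits of v, counted with multiplicity."""
    s = str(v)
    if len(s) % 2:                 # vampire numbers have an even digit count
        return []
    half = len(s) // 2
    pairs = []
    x = 10 ** (half - 1)           # smallest integer with `half` digits
    while x * x <= v:
        if v % x == 0:
            y = v // x
            if (len(str(y)) == half
                    and not (x % 10 == 0 and y % 10 == 0)
                    and sorted(str(x) + str(y)) == sorted(s)):
                pairs.append((x, y))
        x += 1
    return pairs
```

Running it on the examples in the text reproduces the fangs listed there, for instance two pairs for 125460.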
FERMAT NUMBERS

In mathematics, a Fermat number, named after Pierre de Fermat who first studied them, is a positive integer of the form F_n = 2^(2^n) + 1, where n is a non-negative integer. The first few Fermat numbers are: 3, 5, 17, 257, 65537, 4294967297, 18446744073709551617, ...

If 2^n + 1 is prime and n > 0, it can be shown that n must be a power of two. In other words, every prime of the form 2^n + 1 is a Fermat number, and such primes are called Fermat primes. The only known Fermat primes are F_0, F_1, F_2, F_3, and F_4.

F_0 = 2^1 + 1 = 3 is prime
F_1 = 2^2 + 1 = 5 is prime
F_2 = 2^4 + 1 = 17 is prime
F_3 = 2^8 + 1 = 257 is prime
F_4 = 2^16 + 1 = 65,537 is the largest known Fermat prime
F_5 = 2^32 + 1 = 4,294,967,297 = 641 × 6,700,417
F_6 = 2^64 + 1 = 18,446,744,073,709,551,617 = 274,177 × 67,280,421,310,721
F_7 = 2^128 + 1 = 340,282,366,920,938,463,463,374,607,431,768,211,457 = 59,649,589,127,497,217 × 5,704,689,200,685,129,054,721
F_8 = 2^256 + 1 = 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,937 = 1,238,926,361,552,897 × 93,461,639,715,357,977,769,163,558,199,606,896,584,051,237,541,638,188,580,280,321
F_9 = 2^512 + 1 = 13,407,807,929,942,597,099,574,024,998,205,846,127,479,365,820,592,393,377,723,561,443,721,764,030,073,546,976,801,874,298,166,903,427,690,031,858,186,486,050,853,753,882,811,946,569,946,433,649,006,084,097 = 2,424,833 × 7,455,602,825,647,884,208,337,395,736,200,454,918,783,366,342,657 × 741,640,062,627,530,801,524,787,141,901,937,474,059,940,781,097,519,023,905,821,316,144,415,759,504,705,008,092,818,711,693,940,737
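The defining formula and the first few facts above are easy to verify by machine; the short Python sketch below (helper names are ours) computes F_n directly, checks that F_0 through F_4 are prime by trial division, and confirms Euler's factor 641 of F_5.

```python
def fermat(n):
    """The n-th Fermat number F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

def is_prime(m):
    """Naive trial division; adequate for numbers up to F_5's small factor."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

# F_0 .. F_4 are the only known Fermat primes; F_5 is Euler's counterexample.
for n in range(5):
    assert is_prime(fermat(n))
assert fermat(5) == 641 * 6700417    # so 4,294,967,297 is composite
```

Trial division is hopeless for F_6 and beyond; the factorizations listed above required far heavier machinery.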

L'Hôpital's Rule

Santosh, 5th Semester BSc

INTRODUCTION

L'Hôpital's full name was Guillaume François Antoine de l'Hôpital. His father was Anne-Alexandre de l'Hôpital, a Lieutenant-General in the King's Army; Guillaume's mother was Elisabeth Gobelin, the daughter of Claude Gobelin, who was an Intendant in the King's Army and a Councillor of State. As a child, l'Hôpital had no talent for subjects like Latin, but he developed strong mathematical abilities and a real passion for the subject. Guillaume de l'Hôpital followed a military career and served as a captain in a cavalry regiment. He entered into the service, but without giving up his dearest passion: he studied geometry even in his tent. It is not just that he retired there to study; it was also to hide his application to study. However, he resigned from the army because of nearsightedness, being unable to see beyond ten paces.

His book on Analysis begins with two definitions:

Definition 1. Variable quantities are those that increase or decrease continuously, while a constant quantity remains the same while others vary.

Definition 2. The infinitely small part by which a variable quantity increases or decreases continuously is called the differential of that quantity.

There follow two axioms:

Axiom 1. Grant that two quantities whose difference is an infinitely small quantity may be taken (or used) indifferently for each other; or (what is the same thing) that a quantity
which is increased or decreased only by an infinitesimally small quantity may be considered as remaining the same.

Axiom 2. Grant that a curved line may be considered as the assemblage of an infinite number of infinitely small straight lines; or (what is the same thing) as a polygon with an infinite number of sides, each of infinitely small length, such that the angle between adjacent lines determines the curvature of the curve.

L'Hôpital's rule is sometimes also called Bernoulli's rule. The rule is used when dealing with indeterminate forms. Suppose f(a)/g(a) = 0/0 or f(a)/g(a) = ∞/∞ at a certain point a; then L'Hôpital's rule helps us to determine the limit of such quotients. The rule states that if f(x) and g(x) approach 0 as x approaches a, and f'(x)/g'(x) approaches L as x approaches a, then the ratio f(x)/g(x) also approaches L.

Geometrically, consider the curve in the plane whose x-coordinate is given by g(t) and whose y-coordinate is given by f(t), i.e. t maps to (g(t), f(t)). Suppose f(c) = g(c) = 0. The limit of the ratio f(t)/g(t) as t → c is the slope of the tangent to the curve at the point (0, 0). The tangent direction to the curve at the parameter value t is given by (g'(t), f'(t)).
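The rule is easy to check numerically. Taking f(x) = e^x - 1 and g(x) = sin x, both of which vanish at 0, the sketch below (an illustrative choice of functions, not from the article) compares f(x)/g(x) with f'(x)/g'(x) as x shrinks; both ratios approach f'(0)/g'(0) = 1/1 = 1.

```python
import math

# Indeterminate form 0/0 at x = 0: both f and g vanish there.
f  = lambda x: math.exp(x) - 1   # f(0) = 0
g  = math.sin                    # g(0) = 0
fp = math.exp                    # f'(x) = e^x
gp = math.cos                    # g'(x) = cos x

for x in (0.1, 0.01, 0.001):
    print(f"x = {x:>6}:  f/g = {f(x)/g(x):.6f}   f'/g' = {fp(x)/gp(x):.6f}")
# Both columns approach f'(0)/g'(0) = 1 as x -> 0.
```

Note that the derivative ratio settles toward the limit at the same rate, which is exactly what the geometric tangent picture above predicts.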

Differential geometry of surfaces

Keerthi, 5th Semester BSc

INTRODUCTION:

Differential geometry of surfaces deals with smooth (differentiable) surfaces carrying various additional structures, most often a Riemannian metric. Surfaces have been extensively studied from two perspectives: extrinsically, relating to their embedding in Euclidean space, and intrinsically, reflecting the properties determined solely by distances within the surface as measured along curves on the surface. One of the fundamental concepts investigated is the Gaussian curvature, first studied in depth by Carl Friedrich Gauss, who showed that curvature is an intrinsic property of a surface, independent of its isometric embedding in Euclidean space. Surfaces naturally arise as graphs of functions of a pair of variables, and sometimes appear in parametric form or as loci associated to space curves. An important role in their study has been played by Lie groups, namely the symmetry groups of the Euclidean plane, the sphere and the hyperbolic plane. These Lie groups can be used to describe surfaces of constant Gaussian curvature; they also provide an essential ingredient in the modern approach to intrinsic differential geometry through connections. On the other hand, extrinsic properties, relying on an embedding of a surface in Euclidean space, have also been extensively studied. This is well illustrated by the non-linear Euler-Lagrange equations in the calculus of variations.
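For a parametrised surface r(u, v), the Gaussian curvature can be computed extrinsically from the first and second fundamental forms as K = (LN - M^2)/(EG - F^2), where E, F, G are inner products of the first partial derivatives and L, M, N are inner products of the second partials with the unit normal. The sketch below (helper names are ours; derivatives are taken by central finite differences rather than symbolically) checks the classic fact that a sphere of radius R has constant curvature 1/R^2.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gaussian_curvature(r, u, v, h=1e-4):
    """Gaussian curvature K = (LN - M^2) / (EG - F^2) of a parametrised
    surface r(u, v), with derivatives taken by central finite differences."""
    def diff(pts, w):  # weighted combination of sampled surface points
        return tuple(sum(c * p[i] for c, p in zip(w, pts)) for i in range(3))
    ru  = diff([r(u+h, v), r(u-h, v)], (1/(2*h), -1/(2*h)))
    rv  = diff([r(u, v+h), r(u, v-h)], (1/(2*h), -1/(2*h)))
    ruu = diff([r(u+h, v), r(u, v), r(u-h, v)], (1/h**2, -2/h**2, 1/h**2))
    rvv = diff([r(u, v+h), r(u, v), r(u, v-h)], (1/h**2, -2/h**2, 1/h**2))
    ruv = diff([r(u+h, v+h), r(u+h, v-h), r(u-h, v+h), r(u-h, v-h)],
               (1/(4*h**2), -1/(4*h**2), -1/(4*h**2), 1/(4*h**2)))
    n = cross(ru, rv)
    norm = math.sqrt(dot(n, n))
    n = tuple(x / norm for x in n)               # unit normal
    E, F, G = dot(ru, ru), dot(ru, rv), dot(rv, rv)
    L, M, N = dot(ruu, n), dot(ruv, n), dot(rvv, n)
    return (L*N - M*M) / (E*G - F*F)

# A sphere of radius R has constant Gaussian curvature 1/R^2.
def sphere(u, v, R=2.0):
    return (R*math.cos(u)*math.cos(v), R*math.cos(u)*math.sin(v), R*math.sin(u))
```

The Theorema Egregium says this same number could in principle be recovered purely from distance measurements within the surface, with no reference to the embedding used here.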

History of surfaces

Surfaces of revolution were already known to Archimedes. The development of calculus in the seventeenth century provided a more systematic way of studying them. Curvature of general surfaces was first studied by Euler. In 1760 he proved a formula for the curvature of a plane section of a surface, and in 1771 he considered surfaces represented in a parametric form. Monge laid down the foundations of their theory in his classical memoir L'application de l'analyse à la géométrie, which appeared in 1795. The defining contribution to the theory of surfaces was made by Gauss in two remarkable papers written in 1825 and 1827. This marked a new departure from tradition, because for the first time Gauss considered the intrinsic geometry of a surface, the properties which are determined only by the geodesic distances between points on the surface, independently of the particular way in which the surface is located in the ambient Euclidean space. The crowning result, the Theorema Egregium of Gauss, established that the Gaussian curvature is an intrinsic invariant, i.e. invariant under local isometries. This point of view was extended to higher-dimensional spaces by Riemann and led to what is known today as Riemannian geometry. The nineteenth century was the golden age for the theory of surfaces, from both the topological and the differential-geometric point of view, with most leading geometers devoting themselves to their study. The presentation below largely follows Gauss, but with important later contributions from other geometers. For a time Gauss was cartographer to George III of Great Britain; this royal patronage could explain why these papers contain practical calculations of the curvature of the earth based purely on measurements on the surface of the planet. The sphere, the ellipsoid, the hyperboloid and the torus are easy examples of surfaces. Physics has also motivated the study of surfaces.
For instance, the paths traversed by magnetic field lines, electromagnetic potentials and so on can be understood through geometry. The theory of Riemann surfaces has aided physicists a great deal; for instance, compact Riemann surfaces play a very important role in the understanding of string theory.

Mathematics Through Problem Solving

The Role of Problem Solving in Teaching Mathematics as a Process

Emphasis is now laid on teaching mathematics through problems rather than teaching the mathematics of problem solving. Problem solving is an important component of mathematics education because it is the single vehicle which seems able to achieve, at school level, all three of the values of mathematics listed at the outset of this article: functional, logical and aesthetic. Let us consider how problem solving is a useful medium for each of these. It has already been pointed out that mathematics is an essential discipline because of its practical role for the individual and society. Through a problem-solving approach, this aspect of mathematics can be developed. Presenting a problem and developing the skills needed to solve that problem is more motivational than teaching the skills without a context. Such motivation gives problem solving special value as a vehicle for learning new concepts and skills or reinforcing skills already acquired (Stanic and Kilpatrick, 1989; NCTM, 1989). Approaching mathematics through problem solving can create a context which simulates real life and therefore justifies the mathematics rather than treating it as an end in itself. The National Council of Teachers of Mathematics (NCTM, 1980) recommended that problem solving be the focus of mathematics teaching because, they say, it
encompasses skills and functions which are an important part of everyday life. Furthermore, it can help people to adapt to changes and unexpected problems in their careers and other aspects of their lives. More recently the Council endorsed this recommendation (NCTM, 1989) with the statement that problem solving should underlie all aspects of mathematics teaching in order to give students experience of the power of mathematics in the world around them. They see problem solving as a vehicle for students to construct, evaluate and refine their own theories about mathematics and the theories of others. According to Resnick (1987), a problem-solving approach contributes to the practical use of mathematics by helping people to develop the facility to be adaptable when, for instance, technology breaks down. It can thus also help people to transfer into new work environments at this time when most are likely to be faced with several career changes during a working lifetime (NCTM, 1989). Resnick expressed the belief that school should focus its efforts on preparing people to be 'good adaptive learners, so that they can perform effectively when situations are unpredictable and task demands change'. Cockcroft (1982) also advocated problem solving as a means of developing mathematical thinking as a tool for daily living, saying that problem-solving ability lies 'at the heart of mathematics' because it is the means by which mathematics can be applied to a variety of unfamiliar situations. Problem solving is, however, more than a vehicle for teaching and reinforcing mathematical knowledge and helping to meet everyday challenges. It is also a skill which can enhance logical reasoning. Individuals can no longer function optimally in society by just knowing the rules to follow to obtain a correct answer.
They also need to be able to decide, through a process of logical deduction, what algorithm, if any, a situation requires, and sometimes need to be able to develop their own rules in a situation where an algorithm cannot be directly applied. For these reasons problem solving can be developed as a valuable skill in itself, a way of thinking (NCTM, 1989), rather than just as the means to an end of finding the correct answer. Many writers have emphasised the importance of problem solving as a means of developing the logical thinking aspect of mathematics. 'If education fails to contribute to the development of the intelligence, it is obviously incomplete. Yet intelligence is essentially the ability to solve problems: everyday problems, personal problems ...' (Polya, 1980, p.1). Modern definitions of intelligence (Gardner, 1985) talk about practical intelligence, which enables 'the individual to resolve genuine problems or difficulties that he or she encounters' and also encourages the individual to find or create problems, 'thereby laying the groundwork for the acquisition of new knowledge'. As was pointed out earlier, standard mathematics, with its emphasis on the acquisition of knowledge, does not necessarily cater for these needs. Resnick (1987) described the discrepancies which exist between the algorithmic approaches taught in schools and the 'invented' strategies which most people use in the workforce in order to solve practical problems which do not always fit neatly into a taught algorithm. As she says, most people have developed 'rules of thumb' for calculating, for example, quantities, discounts or the amount of change they should give, and these rarely involve standard algorithms. Training in problem-solving techniques equips people more readily with the ability to adapt to such situations. A further reason why a problem-solving approach is valuable is as an aesthetic form.
Problem solving allows the student to experience a range of emotion associated with various stages in the solution process. Mathematicians who successfully solve problems say that the experience of having done so contributes to an appreciation for the 'power and beauty of mathematics' (NCTM, 1989, p.77), the "joy of banging your head against a mathematical wall, and then discovering that there might be ways of either going around or over that wall" (Olkin and Schoenfeld, 1994, p.43). They also speak of the willingness or even desire to engage with a task for a length of time which causes the task to cease being a 'puzzle' and allows it to become a problem. However, although it is this engagement which initially motivates the solver to pursue a problem, it is still necessary for certain techniques to be available for the involvement to continue successfully. Hence more needs to be understood about what these techniques are and how they can best be made available. Conclusion It has been suggested in this chapter that there are many reasons why a problem- solving approach can contribute significantly to the outcomes of a mathematics education. Not only is it a vehicle for developing logical thinking, it can provide students with a context for learning mathematical knowledge, it can enhance transfer of skills to unfamiliar situations and it is an aesthetic form in itself. A problem-solving approach can provide a vehicle for students to construct their own ideas about mathematics and to take responsibility for their own learning. There is little doubt that the mathematics program can be enhanced by the establishment of an environment in which students are exposed to teaching via problem solving, as opposed to more traditional models of teaching about problem solving. 
The challenge for teachers, at all levels, is to develop the process of mathematical thinking alongside the knowledge and to seek opportunities to present even routine mathematics tasks in problem-solving contexts.

Dynamical Systems

Abhishek Singh, B.Sc [CsMS] V Semester

Introduction

The aim of this article is to show how to gain information about a dynamical system. A dynamical system is an evolving system which repeatedly comes under the influence of an action or force. There are mathematical theories by which one understands the system without solving the state equations. The theory that deals with this is known as Dynamical Systems theory, and it has become increasingly important in recent years. It is important to realize that most differential equations cannot be solved algebraically, and so it becomes necessary to gain information about the solutions via an alternative method. This method can involve a lot of high-level mathematical theory; however, the practical application of these methods is relatively easy. A dynamical system is mathematically a system of first order differential equations that depend on time (explicitly or implicitly). The system usually describes a physical process; for example, Newton's law of gravitation, Van der Pol's equation and the Lotka-Volterra equations represent certain dynamical systems. The aim is to gain all the information you require about the long term behaviour of a system without ever having to solve the state equations. This is incredibly important, as it turns out that most dynamical systems do not have closed form solutions; that is, they cannot be solved algebraically. So how do we go about gaining information about the long term behaviour of a system if, in general, it cannot be solved algebraically? There is a large body of quite advanced mathematical techniques that allow us to gain all the information we require. In particular, one wants to know the points at which the system remains in equilibrium.

Conclusion

This article is a very simple introduction to Dynamical Systems theory. There are far reaching consequences of this theory which are useful to a variety of pure and applied researchers.
For instance, economists use dynamical systems to draw conclusions regarding the evolution of markets. In pure mathematics there are studies relating to ergodicity, topological dynamics and probabilistic evolution, all of which are based on certain dynamical systems.

Brahmagupta the Indian genius

Sushmita.D

This article is an exposition of one of the great Indian mathematicians and astronomers, Brahmagupta, who wrote some important works on both mathematics and astronomy. He was from the state of Rajasthan in northwest India (he is often referred to as Bhillamalacarya, the teacher from Bhillamala). He later became the head of the astronomical observatory at Ujjain in central India.

It seems likely that Brahmagupta's works, especially his most famous text, the Brahmasphutasiddhanta, were brought by the 8th-century Abbasid caliph Al-Mansur to his newly founded centre of learning at Baghdad on the banks of the Tigris, providing an important link between Indian mathematics and astronomy and the nascent upsurge in science and mathematics in the Arabic world. In his work on arithmetic, Brahmagupta explained how to find the cube and cube-root of an integer and gave rules facilitating the computation of squares and square roots. He also gave rules for dealing with five types of combinations of fractions. He gave the sum of the squares of the first n natural numbers as n(n + 1)(2n + 1)/6 and the sum of the cubes of the first n natural numbers as (n(n + 1)/2)^2. Brahmagupta's genius, though, came in his treatment of the concept of the number zero. Although often also attributed to the 7th-century Indian mathematician Bhaskara I, his Brahmasphutasiddhanta is probably the earliest known text to treat zero as a number in its own right, rather than as simply a placeholder digit as was done by the Babylonians, or as a symbol for a lack of quantity as was done by the Greeks and Romans. Brahmagupta established the basic mathematical rules for dealing with zero (1 + 0 = 1; 1 - 0 = 1; and 1 × 0 = 0), although his understanding of division by zero was incomplete (he thought that 1 ÷ 0 = 0). Almost 500 years later, in the 12th century, another Indian mathematician, Bhaskara II, argued that the answer should be infinity, not zero (on the grounds that 1 can be divided into an infinite number of pieces of size zero), an answer that was considered correct for centuries. Brahmagupta's view of numbers as abstract entities, rather than just tools for counting and measuring, allowed him to make yet another huge conceptual leap which would have profound consequences for future mathematics.
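The two closed forms quoted above are easy to check by direct summation; the short Python sketch below (function names are ours) does so for the first hundred values of n.

```python
def sum_of_squares(n):
    """Brahmagupta's closed form n(n + 1)(2n + 1) / 6."""
    return n * (n + 1) * (2 * n + 1) // 6

def sum_of_cubes(n):
    """Brahmagupta's closed form (n(n + 1) / 2)^2."""
    return (n * (n + 1) // 2) ** 2

for n in range(1, 101):
    assert sum_of_squares(n) == sum(k * k for k in range(1, n + 1))
    assert sum_of_cubes(n) == sum(k ** 3 for k in range(1, n + 1))
```

Note that the sum of cubes is the square of the familiar triangular number n(n + 1)/2, i.e. the square of the sum of the first n natural numbers.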
Previously, the sum 3 - 4, for example, was considered to be either meaningless or, at best, just zero. Brahmagupta, however, realized that there could be such a thing as a negative number, which he referred to as 'debt' as opposed to 'property' or 'asset'. He expounded on the rules for dealing with negative numbers (e.g. a negative times a negative is a positive, a negative times a positive is a negative, etc.). Furthermore, he pointed out, quadratic equations (of the type x^2 + 2 = 11, for example) could in theory have two possible solutions, one of which could be negative, because 3^2 = 9 and (-3)^2 = 9. In addition to his work on solutions to general linear equations and quadratic equations, Brahmagupta went yet further by considering systems of simultaneous equations (sets of equations containing multiple variables), and solving quadratic equations with two unknowns, something which was not even considered in the West until a thousand years later, when Fermat was considering similar problems in 1657. Brahmagupta even attempted to write down these rather abstract concepts, using the initials of the names of colours to represent unknowns in his equations, one of the earliest intimations of what we now know as algebra. Brahmagupta dedicated a substantial portion of his work to geometry and trigonometry. He established √10 (≈ 3.162277) as a good practical approximation for π (≈ 3.141593), and gave a formula, now known as Brahmagupta's Formula, for the area of a cyclic quadrilateral, as well as a
celebrated theorem on the diagonals of a cyclic quadrilateral, usually referred to as Brahmagupta's Theorem.

Ancient mathematics

CHANDANA C.S, V SEM B.Sc [MECs]

Introduction:

The history of mathematics is a vast topic and one which can never be completely studied, as much of the work of ancient times remains undiscovered or has been lost through time. Nevertheless there is much that is known, and many important discoveries have been made, especially over the last 150 years, which have significantly altered the chronology of the history of mathematics and the conceptions that had been commonly held prior to that. By the turn of the 21st century it was fair to say that there was definite knowledge of where and when a vast majority of the significant developments of mathematics occurred. Ancient mathematics was also about counting, measuring and surveying land. What is an ancient system of mathematics that is being taught in some of the most prestigious institutions in England and Europe and not in India? It is Vedic mathematics, a long-forgotten technique for mathematical calculations! It is amazing how, with the help of 16 sutras and 16 upa-sutras, you will be able to solve and calculate complex mathematical problems mentally! The basic roots of Vedic mathematics lie in the Vedas, just as the basic roots of Hinduism do. Vedic maths forms part of Jyothisha, which is one of the six Vedangas. To many Indians, Vedic and Sanskrit slokas/mantras are relevant only for religious purposes and occasions. But the Vedas (written around 1500-900 BCE) are in fact a treasure house of knowledge and human experience, both secular and spiritual. Here you will get an idea about the power of Vedic mathematics. The later Sulba-sutras represent the 'traditional' material along with further related elaboration of Vedic mathematics. The Sulba-sutras have been dated from around 800-200 BC and, further to the expansion of topics in the Vedangas, contain a number of significant developments.
These include the first 'use' of irrational numbers, quadratic equations of the form ax^2 = c and ax^2 + bx = c, unarguable evidence of the use of Pythagoras' theorem and Pythagorean triples, predating Pythagoras (c. 572-497 BC), and evidence of a number of geometrical proofs. This is of great interest, as proof is a concept often thought to be completely lacking in Indian mathematics.

Conclusion:

Indeed, even in the very latest histories of mathematics, Indian 'sections' are still generally fairly brief. Indian mathematicians made great strides in developing arithmetic (they can generally be credited with perfecting the use of the operators), algebra (before Arab scholars), geometry (independently of the Greeks), and infinite series expansions and calculus (attributed to 17th/18th-century European scholars). Also, Indian works, through a variety of translations, have had significant influence throughout the world, from China, throughout the Arab Empire, and ultimately Europe.

On Fuzzy systems and Mathematics

Darshini, 5th Semester B.Sc

A fuzzy concept is one which lacks clarity (Markusen, 2003) and is difficult to test or operationalize. In logic, fuzzy concepts are often regarded as concepts which in their application are neither completely true nor completely false, or which are partly true and partly false.

Fuzzy concepts and language

Ordinary language, which uses symbolic conventions and associations which are often not logical, inherently contains many fuzzy concepts. "Knowing what you mean" in this case depends on knowing the context, or being familiar with the way in which a term is normally used, or what it is associated with.

Origin of fuzzy concepts

The origin of fuzzy concepts lies partly in the fact that the human brain does not operate like a computer. While computers use strict binary logic gates, the brain does not; it is capable of making all kinds of neural associations, according to all kinds of ordering principles (or fairly chaotically), in associative patterns which are not logical but are nevertheless meaningful. Something can be meaningful although we cannot name it, or we might only be able to name it and nothing else. In part, fuzzy concepts also arise because learning, or the growth of understanding, involves a transition from a vague awareness which cannot orient behavior greatly to a clearer insight which can. Some logicians argue that fuzzy concepts are a necessary consequence of the reality that any kind of distinction we might like to draw has limits of application. At a certain level of generality, a distinction works fine. But if we pursue its application in a very exact and rigorous manner, or overextend its application, it appears that the distinction simply does not apply in some areas or contexts, or that we cannot fully specify how it should be drawn. An analogy might be that zooming a telescope, camera or microscope in and out reveals that a pattern which is sharply focused at one distance disappears at another.

Use of fuzzy concepts

Fuzzy concepts often play a role in the creative process of forming new concepts to understand something. In the most primitive sense, this can be observed in infants who, through practical experience, learn to identify, distinguish and generalise the correct application of a concept and relate it to other concepts. However, fuzzy concepts may also occur in scientific, journalistic, programming and philosophical activity, when a thinker is in the process of clarifying and defining a newly emerging concept which is based on distinctions which, for one reason or another,
cannot (yet) be more exactly specified or validated. Fuzzy concepts are often used to denote complex phenomena, or to describe something which is developing and changing, which might involve shedding some old meanings and acquiring new ones.

Applying truth values

A basic application might characterize subranges of a continuous variable. For instance, a temperature measurement for anti-lock brakes might have several separate membership functions defining particular temperature ranges needed to control the brakes properly. Each function maps the same temperature value to a truth value in the 0 to 1 range. These truth values can then be used to determine how the brakes should be controlled.

Analysis of fuzzy concepts

In mathematical logic and programming, fuzzy concepts can, however, be analyzed and defined more accurately or comprehensively, as follows.

By specifying a range of conditions to which the concept applies.
By classifying or categorizing all or most cases or uses to which the concept applies.
By probing the assumptions on which the concept is based, or which are associated with its use.
By identifying operational rules for the use of the concept which cover all or most cases.
By allocating different applications of the concept to different but related sets, as in a Boolean-type logic.
By examining how probable it is that the concept applies, statistically or intuitively.
By examining the distribution or distributional frequency of (possibly different) uses of the concept.
By some other kind of measure or scale of the degree to which the concept applies.
By specifying a series of logical operators (an inferential system or an algorithm) which captures all or most cases to which the concept applies.
By mapping or graphing the applications of the concept using some basic parameters.
By applying a metalanguage which includes fuzzy concepts in a more inclusive categorical system which is not fuzzy.
By reducing or restating fuzzy concepts in terms which are simpler or similar and which are not fuzzy, or are less fuzzy.
By relating the fuzzy concept to other concepts which are not fuzzy or are less fuzzy, or simply by replacing the fuzzy concept altogether with an alternative concept which is not fuzzy yet "works exactly the same way".
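The anti-lock brake example from the "Applying truth values" section above can be sketched with triangular membership functions. The temperature ranges below are invented for illustration; each function maps one temperature to a truth value between 0 and 1, and a single temperature can belong partially to several ranges at once.

```python
def triangular(a, b, c):
    """Membership function: 0 outside (a, c), rising linearly to 1 at b,
    then falling linearly back to 0 at c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Illustrative (made-up) temperature ranges, in degrees Celsius.
cold = triangular(-10.0, 0.0, 15.0)
warm = triangular(5.0, 20.0, 35.0)
hot  = triangular(25.0, 40.0, 60.0)

def truth_values(t):
    """Map one temperature to a truth value in [0, 1] for each range."""
    return {"cold": cold(t), "warm": warm(t), "hot": hot(t)}
```

A control rule would then combine these graded truth values (for example, taking a weighted average of the actions suggested by each active range) instead of switching abruptly at a single threshold.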

Defining Probability Theory and its Use to Make Business Decisions
Yoshitha, 5th Semester B.Sc

The word "probability" in one form or another is part of our daily vocabulary. In using it we articulate in a casual way our subjective estimates of probability regarding completion of deadlines, accomplishment of daily endeavors, departures and arrivals at work, play or home. "Honey, I'll probably be late for dinner", "There's a good chance I will get a pay raise", "If I get an MBA my chances of succeeding in the job market will increase substantially", and "I will probably be there by noon" are all examples of our daily use of personal probability theory. One method used to compute probability is to use physical or mathematical laws to get the likelihood of an event. This method checks out all possible scenarios by combinatorial formulae. The second method consists of "observing the relative frequency over many, many repetitions of the situation." In the case of a die, one would have to sit and throw it a significant number of times and, based on how it lands, arrive at a conclusion regarding the probability that a specific outcome would indeed occur. Clearly, making such a determination on the basis of only a few throws of the die would lead to a poor conclusion. However, the laws of statistics say that when one observes a large number of trials, the relative-frequency method comes close to the exact mathematical method. In business, probability theory is used in the calculation of long-term gains and losses. This is how a company whose business is based on risk calculates "probability of profitability" within acceptable margins. An example of this is the way in which life insurance companies calculate the cost of life insurance policies, based on how many policy holders are reasonably expected to die within a year versus the revenue generated from other policies extended.
In this scenario it is important to point out that in order for a company to mitigate the risk associated with loss of revenue it must issue a substantial number of policies. Our use of the throw of a die in this example is not itself probabilistic but chosen for a very specific reason: it describes the way in which we have seen business decisions made by companies. We propose that the level of technology offered as a service to customers, and public ownership of the company, substantially influence decisions based on short and shallow analysis of their probability of success. One can see this on the news almost every day or by reading The Wall Street Journal. Clearly, the bottom line in this economy is driven by expense, all the more in publicly owned companies where the allocation of resources to research and investigation is often perceived as frivolous by stockholders. However, we would like to assert that if one maintains ethics and makes calculations according to the laws of statistics and mathematics, then the world of business would definitely benefit through the increased credibility of investment hubs.
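The two ways of computing probability described above, the exact combinatorial method and the relative-frequency method, can be compared for a fair die. This is a minimal sketch; the seed and the number of trials are arbitrary choices.

```python
import random
from fractions import Fraction

# Method 1: exact probability from combinatorial reasoning,
# one favourable face out of six equally likely faces.
exact = Fraction(1, 6)

# Method 2: relative frequency over many, many repetitions.
random.seed(42)  # fixed seed so the experiment is reproducible
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 3)
frequency = hits / trials

print(float(exact), frequency)
```

With a handful of throws the frequency can be far off; with 100,000 throws it agrees with 1/6 to about two decimal places, which is the convergence the article refers to.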

Mathematical Modeling of Lung Mechanics
Divyashree, 5th Semester B.Sc

1. Introduction
The lung is an asymmetric branching system with a complex geometry. During forced expiratory flow, coughing, high frequency ventilation and a variety of other pulmonary maneuvers, the lung experiences large volume excursions accompanied by large changes in the geometry of the conducting airway network. It has been shown that lung heterogeneity plays an important role in respiratory system pathology and influences the results of lung examinations. Several experimental works have examined the MEFV curves during nonuniform maximal emptying of the respiratory system, and several computational models have been offered to probe the mechanics governing the MEFV curve. The shape of the maximum expiratory flow-volume (MEFV) curve registered during forced expiration is determined by flow limitation.

2. Techniques
The maximum expiratory flow-volume (MEFV) curve is a sensitive test of respiratory mechanics. Several mathematical models for forced expiration have been developed, but they suffer from various shortcomings: it is impossible to calculate the parts of the MEFV curve beyond the flow-limiting conditions, and the computational algorithms do not allow a direct calculation of maximal flow. In 1998 Adam G. Polak constructed a complex, nonlinear forward model, including descriptions of the exciting signal and the static recoil pressure-lung volume relationship, with 132 parameters. The model for the driving pressure was

Pd(V) = Pm (1 - e^(-t/τ)) [1 - V/VC] - RT·Q    (1)

where Pm is the maximal expiratory pressure that can be produced by the muscles and elastic forces of the thorax, VC is the vital capacity, τ is the time constant, RT is tissue resistance, and Q is airflow.
Experimental and model studies on the respiratory system demonstrate that heterogeneous constriction of the airways accompanies asthma and can be a crucial determinant of hyper-responsiveness via an increase in lung impedance. In 2003 Adam G. Polak presented a computational model to predict maximal expiration through a morphometry-based asymmetrical bronchial tree. A computational model with Horsfield-like geometry of the airway structure, including wave-speed flow limitation and taking into consideration separate airflows from several independent alveolar compartments, was derived. The airflow values are calculated for quasistatic conditions by solving a system of nonlinear differential equations describing static pressure losses along the airway branches. Calculations done for succeeding lung volumes result in the semi-dynamic maximal expiratory flow-volume (MEFV) curve. This phenomenon has been described by Lambert et al. using the conservation of momentum:

dP/dx = -f(x)/[1 - Sws²(x)] = -f(x)/{1 - (ρq²/A³(x))(∂A/∂Ptm)}    (2)

where dP/dx is the gradient of lateral pressure along the bronchus, Sws = u/c is the local speed index equal to the ratio between flow (u) and wave (c) speed, q is the volume flow in the bronchus, ρ denotes gas density, A is the cross-sectional area, and ∂A/∂Ptm is the elementary compliance of the airway wall, dependent on the transmural pressure Ptm. The elementary dissipative pressure loss f (pressure drop per unit distance) is described by the following empirical formula:

f(x) = (a + b·RN(x))·8πμq/A²(x)    (3)

where a and b are scaling coefficients, RN is the local Reynolds number and μ is the gas viscosity. In 2008 Adam G. Polak again investigated a model-based method for flow limitation analysis in the heterogeneous human lung.

Conclusion: Flow limitation in the airways is a fundamental process constituting the maximal expiratory flow-volume curve. Its location is referred to as the choke point. In this work, expressions enabling the calculation of critical flows in the case of wave-speed, turbulent or viscous limitation were derived. Then a computational model for forced expiration from the heterogeneous lung was used to analyse the regime and degree of flow limitation as well as the movement and arrangement of the choke points. The conclusion is that flow limitation begins at a similar time in every branch of the bronchial tree, developing a parallel arrangement of the choke points. A serial configuration of flow-limiting sites is possible for short time periods in the case of increased airway heterogeneity. The most probable locations of choke points are the regions of airway junctions.

Probability theory

Puneeth Gowda, 5th Semester

Introduction: Probability theory is the branch of mathematics concerned with the analysis of random phenomena. The central objects of probability theory are random variables, stochastic processes and random events: mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion. If an individual coin toss or the roll of a die is considered to be a random event, then when repeated many times the sequence of random events will exhibit certain patterns, which can be studied and predicted. Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.

History: The mathematical theory of probability has its roots in attempts to analyze games of chance by G. Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century. C. Huygens published a book on the subject in 1657. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory.

This culminated in modern probability theory, on foundations laid by A. N. Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory, and presented his axiomatic system for probability theory in 1933. Fairly quickly this became the largely undisputed axiomatic basis for modern probability theory, although alternatives exist.

Motivation: Consider an experiment that can produce a number of outcomes. The collection of all results is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die produces one of six possible results. One collection of possible results is that of the odd numbers. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls. These collections are called events. In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every "event" a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1, 2, 3, 4, 5, 6}) be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that for a collection of mutually exclusive events (events that contain no common results, e.g., the events {1, 6}, {3}, and {2, 4} are all mutually exclusive), the probability that at least one of the events will occur is given by the sum of the probabilities of the individual events. The probability that one of the events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6. This event encompasses the possibility of any number except five being rolled.
The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, i.e. absolute certainty. For convenience's sake, we ignore the possibility that the die, once rolled, will be obliterated before it can hit the table.
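The die-roll calculation above can be reproduced directly; the helper function prob and the exact-fraction representation are our own illustrative choices.

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    # Each outcome of a fair die is equally likely, so the probability
    # of an event is the number of outcomes it contains over six.
    return Fraction(len(event), len(sample_space))

# Mutually exclusive events: their pairwise intersections are empty.
events = [{1, 6}, {3}, {2, 4}]
union = set().union(*events)  # {1, 2, 3, 4, 6}

# Additivity: P(union) equals the sum of the individual probabilities.
print(prob(union), sum(prob(e) for e in events))  # 5/6 5/6
```

Using exact fractions rather than floating-point numbers keeps the additivity requirement an exact equality, just as in the text.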

Numerical Weather Prediction (NWP)
Nagendra, 5th Semester B.Sc, Jain University

Numerical weather prediction uses current weather conditions as input into mathematical models of the atmosphere to predict the weather. Although the first efforts to accomplish this were made in the 1920s, it wasn't until the advent of the computer and computer simulation that it was feasible to do in real time. Manipulating the huge datasets and performing the complex calculations necessary to do this on a resolution fine enough to make the results useful requires the use of some of the most powerful supercomputers in the world. A number of forecast models, both global and regional in scale, are run to help create forecasts for nations

worldwide. Use of model ensemble forecasts helps to define the forecast uncertainty and extend weather forecasting farther into the future than would otherwise be possible. Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of continuous mathematics (as distinguished from discrete mathematics). One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a sexagesimal numerical approximation of √2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in carpentry and construction. Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation to √2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century the life sciences and even the arts have adopted elements of scientific computation. Ordinary differential equations appear in the movement of heavenly bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead. The interpolation algorithms nevertheless may be used as part of the software for solving differential equations.
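The Babylonian approximation to √2 mentioned above can be reproduced with the Babylonian (or Heron) iteration, one of the oldest numerical algorithms: repeatedly replace a guess x by the average of x and 2/x. The function below is a standard textbook sketch, not code from the article.

```python
def babylonian_sqrt(a, iterations=5):
    # Start from a crude guess and repeatedly average x with a/x.
    # Each step roughly doubles the number of correct digits.
    x = a
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x

approx = babylonian_sqrt(2)
print(approx)  # 1.414213562..., matching the diagonal of a unit square
```

Five iterations already reach machine precision, which illustrates the article's point: the goal is an approximate answer with a controllable error, not an exact one.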
Numerical solution of the equations: An example of one momentum equation, for a 1-dimensional wind accelerated by only the pressure gradient force, is

Du/Dt = -(1/ρ) ∂p/∂x

Computers cannot analytically solve even this very simple equation!
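A numerical, rather than analytical, treatment of this momentum equation can be sketched with finite differences: replace the derivatives with differences on a grid and march forward in time. The grid spacing, density and pressure values below are made-up illustrations, not data from any forecast model.

```python
# Forward-Euler sketch for Du/Dt = -(1/rho) * dp/dx on a 1-D grid.
rho = 1.2     # air density in kg/m^3 (illustrative value)
dx = 1000.0   # grid spacing in metres
dt = 1.0      # time step in seconds

# A made-up pressure field (Pa), decreasing from left to right.
p = [101000.0, 100900.0, 100800.0, 100700.0, 100600.0]
u = [0.0] * len(p)  # wind speed at each grid point, initially calm

# One time step: update interior points using a centred difference
# for the pressure gradient.
for i in range(1, len(p) - 1):
    dpdx = (p[i + 1] - p[i - 1]) / (2 * dx)
    u[i] = u[i] - dt * dpdx / rho

print(u)  # interior points accelerate toward the lower pressure
```

Real NWP models do essentially this, but in three dimensions, with many more terms in the equations and millions of grid points, which is why supercomputers are needed.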

Elections, Exit Polls and Mathematics
Pooja, 5th Semester B.Sc

An election procedure takes the voters' ballots, or rankings of the n candidates, and returns a ranking of the candidates (if there is a tie, then there may be several rankings of the candidates). As such, an election procedure can be viewed as a map from the set of all possible ballots to a final ranking. Mathematics is used not only to study a specific map or election procedure, but also classes of procedures, or all election procedures, as Arrow's Impossibility Theorem demonstrates. Further, mathematics can be used to restrict the set of ballots or to decompose the set into pieces that drive particular behavior. In either case, the mathematics of voting theory focuses on the structure of the maps and the structure of the set of ballots. Below are structure-related phenomena and brief explanations of how they arise in voting theory. The material focuses on elections with three candidates, but the ideas extrapolate to any number of candidates.

A reasonable election procedure should not depend on the names of the candidates, just on how the ballots are marked. For example, suppose that the ballots are cast and an election outcome yields A top-ranked, then B in second place, and C ranked last. If everyone switched the positions of A and B on their ballots, then the result should change accordingly. That is, B should be top-ranked, then A in second place, followed by C bottom-ranked. Such a change in the preferences is the same as if candidates A and B had switched their names! A procedure shouldn't depend on how the candidates are named, but on how the votes are distributed. Switching candidates A and B (and leaving C fixed) is an example of a permutation. The set of all possible permutations of the candidates is called the symmetric group.

Exit Polls: An exit poll is a sample survey of people who have just voted at a polling station. Statistical sampling methods are used to determine which voters to interview.
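The permutation idea described in this article can be checked for a simple plurality count: relabelling A and B on every ballot produces the correspondingly relabelled outcome. The ballots and the plurality procedure below are our own illustrative choices.

```python
from collections import Counter

def plurality_ranking(ballots):
    # Rank candidates by number of first-place votes, most first.
    tally = Counter(ballot[0] for ballot in ballots)
    return sorted(tally, key=lambda c: -tally[c])

ballots = [("A", "B", "C")] * 5 + [("B", "A", "C")] * 3 + [("C", "B", "A")] * 2

swap = {"A": "B", "B": "A", "C": "C"}  # the permutation switching A and B
swapped_ballots = [tuple(swap[c] for c in b) for b in ballots]

before = plurality_ranking(ballots)          # ['A', 'B', 'C']
after = plurality_ranking(swapped_ballots)   # ['B', 'A', 'C']
print(before, after)
```

The outcome after the swap is exactly the original ranking with A and B exchanged, which is the neutrality property the symmetric group formalizes.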
The main point of the "interview" is to ask the voter to complete a duplicate ballot paper. Exit polls differ fundamentally from pre-election polls (voting intention polls, or opinion polls) in that only people who actually vote are included in the sample. From a suitably constructed statistical model, the required estimates of changes in party vote shares are obtained for every constituency. These estimated changes are then applied to the known results of the previous election, constituency by constituency, to produce estimated party vote shares at the current election. The important word here is estimated: on the basis of just an exit poll, nothing is known with certainty! In particular, the exit poll does not tell us which party will win any given seat; but it can tell us how likely each party is to win a given seat. Once the win-probabilities for each constituency have been obtained, it is a simple matter to calculate the expected number of seats that a party will win: just add up all of that party's win-probabilities across the different constituencies. Electronic vote counting equipment makes it easy for a few persons to electronically manipulate vote counts and virtually impossible to independently audit vote count accuracy. Without routine independent audits of hand-countable, voter-verified paper ballots, insiders have the freedom to manipulate vote counts with negligible possibility of detection. This new exit poll discrepancy function allows us, for the first time, to know

what patterns of exit poll discrepancy result from various combinations of vote miscounts.

Mathematical and Theoretical Biology
Priya, 5th Semester B.Sc

Mathematical and theoretical biology is an important interdisciplinary scientific research field with a range of applications in biology, medicine and biotechnology. The field may be referred to as mathematical biology or biomathematics to stress the mathematical side, or as theoretical biology to stress the biological side. It includes at least four major subfields: biological mathematical modeling, bioinformatics, biocomputing and biological systems. Mathematical biology aims at the mathematical representation, treatment and modeling of biological processes, using a variety of applied mathematical techniques and tools. It has both theoretical and practical applications in biological, biomedical and biotechnology research. For example, in cell biology, protein interactions are often represented as "cartoon" models, which, although easy to visualize, do not accurately describe the systems studied. In order to do this, precise mathematical models are required. By describing the systems in a quantitative manner, their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. The major mathematical theories used are function theory, probability and statistics, combinatorics, coding theory, linear algebra, geometry and topology.

Importance
Applying mathematics to biology has a long history, but only recently has there been an explosion of interest in the field. Some reasons for this include:

The explosion of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools,
The recent development of mathematical tools, such as chaos theory, to help understand complex, nonlinear mechanisms in biology,
An increase in computing power which enables calculations and simulations to be performed that were not previously possible, and
An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research.

Molecular set theory
Molecular set theory was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine. Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. In a more general sense, MST is the theory of molecular categories, defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and the formulation of clinical biochemistry problems, in mathematical formulations of the pathological, biochemical changes of interest to physiology, clinical biochemistry and medicine. Other areas are population dynamics and relational biology.

CHAOS THEORY
Akshaya.M.S, 5th Sem, B.Sc, PCM

Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions. Small differences in the initial conditions lead to diverging outcomes for chaotic systems; hence long-term prediction of such systems becomes impossible. This happens even in the case of deterministic systems (systems whose future behavior is fully determined by the initial conditions, without involving random elements); they are still not predictable. This behavior of systems is called deterministic chaos, or just chaos. Chaos theory is a field of study in mathematics. It has applications in several fields like physics, economics, biology, philosophy etc. Chaos means a state of disorder. The name comes from the fact that the systems the theory describes are apparently disordered, but chaos theory is really about finding the underlying order in this apparently random data. The first true experimenter in this field was a meteorologist named Edward Lorenz (while he was working on the problem of weather prediction). There is no universally accepted mathematical definition of chaos. The commonly used definition says that for a dynamical system to be classified as chaotic it must satisfy the following properties:

It must be sensitive to initial conditions.
It must be topologically mixing.
Its periodic orbits must be dense.

Sensitivity to initial conditions is popularly known as the butterfly effect. It was called so because of the title of a paper by Edward Lorenz: "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?". The flapping wing of the butterfly represents a small change in the initial condition of the system that causes a chain of events leading to a large-scale phenomenon. Had the butterfly not flapped its wing, the trajectory of the system might have been vastly different. The consequence of sensitivity to initial conditions is that if we start with only a finite amount of information about the system, then after a certain period of time the system will no longer be predictable. This can be observed in the case of weather, which is generally predicted only about a week ahead. Topological mixing (topological transitivity) means that the system evolves over time so that any given region of its state space will eventually overlap with any other region. The mixing of colored dyes or fluids is an example of a chaotic system. Topological mixing is often omitted from accounts that equate chaos with sensitivity to initial conditions, but sensitive dependence on initial conditions alone does not lead to chaos. Consider, for example, the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions at every point. However, it is not an example of topological mixing, and hence is not chaotic. An early proponent of chaos theory was Henri Poincare: while studying the three-body problem, he found that there could be orbits that are non-periodic and yet neither forever increasing nor approaching a fixed point. Much of the earlier theory was developed almost entirely by mathematicians, under the name of ergodic theory. Many of the studies in this theory were directly inspired by physics: the three-body problem, turbulence, astronomical problems, radio engineering, etc.
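The doubling example above can be made concrete: two initial values differing by a tiny amount drift apart exponentially under repeated doubling, even though the map is not topologically mixing (every orbit simply escapes to infinity) and hence not chaotic. The starting values below are arbitrary.

```python
x, y = 1.0, 1.0 + 1e-10  # two initial values differing by 10^-10

for step in range(40):
    x, y = 2 * x, 2 * y   # repeatedly double both values
    # The gap doubles every step: after n steps it is about 2**n * 1e-10.

gap = y - x
print(gap)  # roughly 0.11: the tiny difference has grown nine orders of magnitude
```

This shows sensitive dependence on initial conditions; what is missing for chaos is mixing, since nearby regions of the line never fold back and overlap under x -> 2x.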
Despite this initial insight, chaos theory became formalized only later in the twentieth century, when it first became evident to scientists that linear theory simply could not explain the observed behavior of certain experimental systems. Electronic computers were largely responsible for the development of chaos theory: much of the mathematics in chaos theory involves the iteration of simple mathematical formulae, which cannot practically be done by hand. This repeated calculation was made practical by electronic computers. Chaotic behavior has been observed in the laboratory in a variety of systems such as lasers, oscillating chemical reactions, fluid dynamics, electric circuits, and mechanical and magneto-mechanical devices, as well as in computer models of chaotic processes. Chaotic behavior observed in nature includes changes in the weather, the dynamics of satellites in the solar system, the time evolution of the magnetic fields of celestial bodies, population growth in ecology, the dynamics of action potentials in neurons, and molecular vibrations. Chaos theory has applications outside science as well: computer art has become more realistic through the use of chaos and fractals. Chaos has already had a lasting effect on science, and yet there is much still to be discovered. Aspects of chaos show up everywhere in the world, from ocean currents and the flow of blood through fractal blood vessels to the branches of trees and the effects of turbulence. Chaos has inescapably become part of modern science.
Problem Solving on a Computer
Keshavini, 5th Semester B.Sc

Introduction
There are four basic characteristics which have led to the widespread use of modern digital devices (computers and programmable calculators) for applications of numerical methods to solve various types of computational problems. They are: speed, reliability, flexibility and efficiency. In order to solve a problem on a computer, one has to write a program and test it for correctness. Once a program is written to solve a specific problem, it is easy to use it for solving another problem of a similar type by changing certain data values and/or changing a few lines of the program. Numerical algorithms can be efficiently implemented on a computer. Typical iterative algorithms can be programmed very efficiently using a looping statement along with condition testing by a logical statement.

Steps of computer-aided problem solving
To solve complex problems on a computer, the following sequence of tasks is followed:

Analysis of the problem
Mathematical modeling of the problem
Development of a suitable algorithm
Drawing a flow chart
Coding of the program in a computer language
Compilation of the program
Debugging and testing of the program
Documentation of the program
Analysis of the problem
The analysis of the problem requires that one should understand what it is that is required to be computed and printed by the computer, and what input data is to be made available to the computer. Once the input data and the output variables are identified, one should ascertain whether the input data are sufficient to compute the desired output.

Mathematical modeling of the problem
Given the physical problem to be solved on a computer, one has to design an appropriate mathematical model of the problem. The mathematical model should specify the entities which correspond to the physical parameters of the problem and the relations which these entities satisfy. For certain problems this step becomes unnecessary because the problem itself might already be in a mathematical form. Suitable algorithms are developed with the following points in mind:
(1) The algorithm should be comparatively fast; the best algorithm should have a minimum number of operations.
(2) The algorithm should require a minimum amount of space in the computer storage area.

We conclude this write-up by elaborating on the concept of algorithm. The concept of algorithm has been widely used to refer to any precise description of the steps for solving a problem. An algorithm may be defined as a sequence of unambiguous steps for solving a problem. A complex process can be analysed and subdivided into simpler sub-tasks, and each sub-task can ultimately be written as a sequence of primitive steps. A particular instruction is understandable to the computer if it can be carried out by the computer in the appropriate manner as required by the steps of the algorithm. While writing programs in FORTRAN, we would consider the FORTRAN statements as the primitive instructions for the computer. An algorithm has the following characteristics:
(i) Definiteness, which signifies that each step of the algorithm is to be defined precisely, such that there is no ambiguity or contradiction.
(ii) Finiteness: the algorithm must terminate in a finite number of steps.
(iii) Completeness, in the sense that the algorithm must be able to solve all problems of the particular type for which it is designed.

The efficiency of an algorithm can be measured as a function of three aspects:

Speed of computation, and thus economy of computer operations.
Accuracy of the result, i.e. how accurately the result is computed by the algorithm.
Stability of the solution, i.e. small errors in the input data should not produce large errors in the output.
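As a minimal illustration of an iterative algorithm combining a looping statement with condition testing, and exhibiting the definiteness, finiteness and accuracy requirements above, here is the classical bisection method applied to the root of x² - 2 = 0. The tolerance and iteration cap are illustrative choices.

```python
def bisect(f, lo, hi, tol=1e-10, max_steps=100):
    # Precondition: f(lo) and f(hi) have opposite signs.
    for _ in range(max_steps):        # finiteness: a bounded number of steps
        mid = (lo + hi) / 2
        if hi - lo < tol:             # condition test: required accuracy reached
            return mid
        if f(lo) * f(mid) <= 0:       # root lies in the left half-interval
            hi = mid
        else:                         # root lies in the right half-interval
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: x * x - 2, 1.0, 2.0)
print(root)  # close to 1.4142135623, the positive root sqrt(2)
```

The interval halves at every pass of the loop, so the error bound is known in advance: the method is stable in exactly the sense listed above, since a small change in the input interval produces a correspondingly small change in the answer.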

History of Pythagoras' Theorem
Puneeth Krishna, 5th Sem B.Sc

We are all familiar with this equation and the statement: in any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle). But do we really know the history behind the formulation of this theorem? More than 4000 years ago, the Babylonians and the Chinese already knew that a triangle with sides of 3, 4 and 5 must be a right triangle. They used this knowledge to construct right angles. By dividing a string into twelve equal pieces and then laying it into a triangle so that one side is three, the second side four and the last side five sections long, they could easily construct a right angle. A Greek scholar named Pythagoras, who lived around 500 BC, was also fascinated by triangles with these special side ratios. He studied them a bit closer and found that the two shorter sides of such a triangle, squared and then added together, equal exactly the square of the longest side. And he proved that this doesn't only work for the special triangles, but for any right triangle. Today we would write it something like this: a² + b² = c². In the time of Pythagoras they didn't yet use letters to stand for variables. Instead they wrote everything down in words, like this: if you have a right triangle, the squares of the two sides adjacent to the right angle will always be equal to the square of the longest side. We can't be sure if Pythagoras really was the first person to have found this relationship between the sides of right triangles, since no texts written by him were found. In fact, we can't even prove the man lived. But the theorem a² + b² = c² got his name. Another Greek, Euclid, wrote about the theorem about 200 years later in his book called "Elements". There we also find the first known proof of the theorem. Today there are about 600 different proofs.
There is also some work on the converse of Pythagoras' theorem.
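Both the theorem and the Babylonian 3-4-5 string trick can be checked in a few lines; the function name below is our own.

```python
def is_right_triangle(a, b, c):
    # Sort so that c is the longest side, then test a^2 + b^2 = c^2
    # (the converse of Pythagoras' theorem, for integer side lengths).
    a, b, c = sorted((a, b, c))
    return a * a + b * b == c * c

print(is_right_triangle(3, 4, 5))    # True: the Babylonian string triangle
print(is_right_triangle(5, 12, 13))  # True: another Pythagorean triple
print(is_right_triangle(2, 3, 4))    # False: not a right triangle
```

With integer sides the comparison is exact, so this is a faithful test of the relationship the Babylonians exploited with their knotted string.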

Artificial intelligence

Ranjitha 5th Semester B.Sc

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines." Alan Turing's most famous paper, published in 1936, introduced the idea of a

Universal Computing Machine ten years before the first stored programme digital computer actually ran. This was only one of a string of varied achievements. It is known now that his work on deciphering the German Enigma code at Bletchley Park during the Second World War made a significant contribution to winning that war, though this remained unknown to his closest friends until after his tragic death from taking potassium cyanide in 1954. Turing's wartime work played a significant role in marking out the importance of mechanical computing facilities. Although much of the hack work was done mechanically, an enormous team of human computers was also involved. Structural similarities between brains and computer circuits, both viewed as networks, have been found before. One property that is shared by many networks from brains and ecosystems to computer circuits and social networks is known as small worldness. "A small world network has a high degree of clustering, that is high connectivity between nearest neighbours, but it also has a small average path length between pairs of nodes," says Bullmore. "In a social network, this means that if you and I are friends, then it's very likely that another friend of mine is also a friend of yours that's clustering." Short path length means that a relatively small number of nodes separate any two given nodes. So even though you might not personally know the Queen, you're probably connected to her via a surprisingly short chain of acquaintances. The machines' ability to think logically comes from a mathematical system called temporal logic, whose roots can be traced back to the 10th century Persian philosopher Ibn Sina. Like other systems of logic, it gives a way of representing statements about the world in a formal language and sets out rules of logical inference that can be implemented on a computer. 
For example, if we know that a statement P (e.g. "something is moving towards me") implies a statement Q (e.g. "there will be a collision"), then if we observe that statement P is actually true, we can immediately deduce that statement Q is also true. Temporal logic has the added ability to deal with statements that can change over time: while in more basic systems a statement like "something is moving towards me" is either true or false, temporal logic allows it to change its truth value over time, depending on other factors. This enables a machine to explore sequences of events and the implications of any course of action it decides to take. What makes the new technology particularly easy to use is the fact that it uses natural language programming. "We want engineers to be able to programme these intelligent systems without too much requirement for programming skills," says an AI expert. A new trend is equipping spacecraft and satellites with human-like reasoning capabilities, which will enable them to make important decisions for themselves, using a new control system called 'sysbrain'.
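The inference step just described, modus ponens applied to facts whose truth varies over time, can be sketched in a few lines. This is a toy illustration only, not the actual 'sysbrain' system; the rule and fact names are invented.

```python
# Toy temporal inference: apply "P implies Q" to the facts observed at
# each discrete time step. All names here are illustrative.
rules = [("moving_towards_me", "collision_expected")]

def infer(facts, rules):
    """One round of modus ponens over the facts holding at a single instant."""
    derived = set(facts)
    for p, q in rules:
        if p in derived:
            derived.add(q)
    return derived

# The statement P is false at t=0 but becomes true at t=1.
timeline = [set(), {"moving_towards_me"}]
for t, facts in enumerate(timeline):
    print(t, sorted(infer(facts, rules)))
```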

No X-aggeration

Harshitha.B, Vth Sem B.Sc

How companies can gather information and still preserve privacy

Companies and individuals are often at odds, concerned either with collecting information or with preserving privacy. Online stores and services are always eager to know more about their customers: income, age, tastes. Most of us, on the other hand, are not eager to reveal much.

Math suggests a way out of this bind. A few years ago researchers developed an idea that makes telling the truth less worrisome. The idea works if companies are content with accurate aggregate data and not details about individuals. Here is how it goes: you provide the numerical answer to certain intrusive online questions, but a random number is added to (or subtracted from) it, and only the sum (or difference) is submitted to the company. The statistics needed to recover approximate averages from the submitted numbers is not that difficult, and our privacy is preserved.

Thus, say you are 39 and are asked your age. The number sent to the site might be anywhere in the range of 19 to 59, depending on a random number between -20 and +20 that is generated (by the company if you trust it, by an independent site, or by you). Similar fudge factors would apply to incomes, zip codes, years of schooling, size of family, and so on, with appropriate ranges for the generated random number.
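A quick simulation shows why the aggregate survives the masking: the added noise is symmetric around zero, so it averages out. The sample size and age range below are invented for illustration.

```python
import random

random.seed(0)

# 10,000 respondents with true ages; each submits age + noise in [-20, 20].
true_ages = [random.randint(18, 65) for _ in range(10_000)]
submitted = [age + random.randint(-20, 20) for age in true_ages]

true_mean = sum(true_ages) / len(true_ages)
noisy_mean = sum(submitted) / len(submitted)
# The noisy average recovers the true average to within a fraction of a year.
print(round(true_mean, 2), round(noisy_mean, 2))
```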

Another, older example from probability theory illustrates a variant of the idea. Imagine you are on an organisation's website, and the organisation wishes to find out how many of its subscribers have ever X-ed, with X being something embarrassing or illegal. Not surprisingly, many people will lie, if they answer the question at all. Once again, random masking comes to the rescue. The site asks the question, "Have you ever X-ed? Yes or no", but requests that before answering it, you privately flip a coin. If the coin lands heads, the site requests that you simply answer yes. If the coin lands tails, you are instructed to answer truthfully. Because a yes response might indicate only a coin's landing heads, people presumably would have little reason to lie.

The math needed to recover an approximation of the percentage of respondents who have ever X-ed is easy. To illustrate: if 545 of 1,000 respondents answer yes, we would know that about 500 of these yesses were the result of the coin's landing heads, because roughly half of all coins would land heads. Of the other approximately 500 people, whose coins landed tails, about 45 also answered yes. We conclude that because 45 or so of the approximately 500 who answered truthfully have X-ed, the percentage of X-ers is about 45/500, or 9 percent.
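This recovery procedure (known in the statistics literature as randomized response) is easy to simulate. The true rate and sample size below are invented for illustration; the estimate is computed exactly as in the worked example above.

```python
import random

random.seed(1)
TRUE_RATE = 0.09   # assume 9% of respondents have actually X-ed
n = 100_000

yes = 0
for _ in range(n):
    if random.random() < 0.5:            # heads: forced "yes"
        yes += 1
    elif random.random() < TRUE_RATE:    # tails: truthful answer
        yes += 1

# Subtract the ~n/2 forced yesses, then divide by the ~n/2 truthful answers.
estimate = (yes - n / 2) / (n / 2)
print(round(estimate, 3))   # close to 0.09
```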

In some situations, variants of this low-tech technique, in conjunction with appropriate legislation, would work, or so thinks this 6'9" X-er.

Shing-Tung Yau The Emperor in Math Shanky Singh 5th Sem B.Sc Introduction A Barefoot Boy That Shing-Tung Yau, born in 1949, had such potential was not always obvious. His family fled the mainland and the Communist takeover when he was a baby. As one of eight children of a college professor and a librarian, growing up poor without electricity or running water in a village outside Hong Kong, he was the leader of a street gang and

often skipped school. But talks with his father instilled in him a love of literature and philosophy and, he learned when he started studying math, a taste for abstract thinking. His earlier studies were at the Chinese University of Hong Kong, and he later emerged as a precocious mathematician after doing his graduate studies at the University of California at Berkeley. In 1971, at age 22, Dr. Yau took his new Ph.D. to the Institute for Advanced Study, then to the State University of New York at Stony Brook and Stanford, where he arrived in 1973 in time for a conference on geometry and general relativity, Einstein's theory that ascribes gravity to warped space-time geometry. At the conference, Dr. Yau had a brainstorm, realizing he could disprove a longstanding conjecture by the University of Pennsylvania professor Eugenio Calabi that the dimensions of space could be curled up like the loops in a carpet. Contribution in mathematics Dr. Yau set to work on a paper. But two months later he got a letter from Dr. Calabi and realized there was a gap in his reasoning. After agonizing for two weeks, he concluded that the opposite was true: the Calabi conjecture was right. His proof of that, published in 1976, made him a star. His paper would also lay part of the foundation 10 years later for string theory, showing how most of the 10 dimensions of space-time required by the "theory of everything" could be rolled up out of sight in what are now called Calabi-Yau spaces. Three years later, Dr. Yau proved another important result about Einstein's theory of general relativity: any solution to Einstein's equations must have positive energy. Otherwise, said Dr. Strominger, the Harvard physicist, space-time would be unstable: you could have perpetual motion. Prizes and honors flowed Dr. Yau's way after the Calabi triumph, including the Fields Medal, a MacArthur "genius" grant in 1985 and a National Medal of Science in 1997. He became a United States citizen in 1990. 
(He said he put away the money from the MacArthur grant for his two children's college education.)

Dr. Yau has devoted himself to building up Chinese mathematics and promoting basic research, arranging for Chinese students to come to the United States, donating money and books, and tapping rich friends to found mathematics institutes in Hong Kong,

Beijing and Hangzhou. He even lived in Taiwan in the early 1990s so his children would learn Chinese. In 2004, Dr. Yau was honored at the Great Hall of the People for his contributions to Chinese mathematics. In a speech he said that when he won the Fields Medal, "I held no passport of any country and should certainly be considered Chinese." Conclusion In August, Dr. Perelman was awarded the Fields Medal at a meeting of the International Mathematical Union in Madrid, but he declined to accept it. A week later a drawing in The New Yorker showed Dr. Yau trying to grab the Fields Medal from the neck of Dr. Perelman. "I work on mathematics because of its great beauty," he said. "History will judge this work, not a committee."

TOPOLOGY Lokeshwar Vth Semester BSc Topology, sometimes referred to as the mathematics of continuity, or rubber sheet geometry, or the theory of abstract topological spaces, is all of these, but, above all, it is a language, used by mathematicians in practically all branches of our science. We shall start building the library of examples, some nice and natural, such as manifolds or the Cantor set, others more complicated and even pathological. Those examples often possess other structures in addition to topology, and this provides the key link between topology and other branches of geometry. They will serve as illustrations and the testing ground for many other notions and methods. Topological spaces The notion of topological space is defined by means of rather simple and abstract axioms. It is very useful as an umbrella concept which allows one to use the geometric language and the geometric way of thinking in a broad variety of vastly different situations. The axioms referred to above relate to the fact that points of a geometric object are fundamentally elements of some set. So one defines a universal set upon which a topology is built. For instance, the real line has R as the universal set. Now for various points we have open sets, or intervals, containing the point. Then there is a collection of open sets covering R. The empty set and the universal set are always considered open. The intersection of a finite number of open sets is open, while the union of any number of open sets (including an infinite union) is again open. These definitions nicely explain the transformations of geometric objects like a sphere, the Platonic solids, and other exotic structures like the Klein bottle. Even structures studied by life scientists, like DNA, RNA and other knotted shapes, can be analyzed by topology. The Ingredients: Topologists study continuous maps between spaces that carry some sort of geometry. The invertible continuous maps whose inverses are also continuous are called homeomorphisms. 
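The open-set axioms just listed can be checked mechanically on a finite example. The sketch below uses a made-up three-point space (not from the article) and tests a candidate collection of open sets for closure under intersections and unions.

```python
from itertools import combinations

X = frozenset({1, 2, 3})                                   # universal set
T = {frozenset(), frozenset({1}), frozenset({1, 2}), X}    # candidate topology

def is_topology(T, X):
    """Check the axioms: the empty set and X are open, and the collection
    is closed under pairwise intersections and unions (enough when finite)."""
    if frozenset() not in T or X not in T:
        return False
    return all(a & b in T and a | b in T for a, b in combinations(T, 2))

print(is_topology(T, X))   # True
# {1} and {2} open but their union {1, 2} missing: not a topology.
print(is_topology({frozenset(), frozenset({1}), frozenset({2}), X}, X))   # False
```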
If one puts a relation on all topological objects, saying that an object A is related to an object B if and only if they are homeomorphic, then one can see that it is an equivalence relation. Using this relation one can classify all topological objects. Thus a sphere and an ellipsoid are the same objects for a topologist, since they are homeomorphic. Similarly, for a topologist a coffee-cup and a Medu Vada are again the same kind of object, again thanks to the notion of homeomorphism! More subtle properties that topologists study are orientability, homology, cohomology, separation properties, etc. The Devil's Curve Shruthi 5th Semester [CsMS] The very first thing which made me settle on this topic was its intriguing name. Having a fondness for fictitious stories and fairy tales, I plunged headlong into this devil's lure. In geometry, a Devil's curve is a curve defined in the Cartesian plane by an equation of the form y²(y² - a²) = x²(x² - b²). When marked on the x-y plane this looks exactly like an hourglass placed vertically on it, with its upper half on the positive y-axis and its other half on the negative y-axis. It has a crunode at the origin. A crunode, also known as an ordinary double point, of a plane curve is a point where the curve intersects itself so that two branches of the curve have distinct tangent lines. Gabriel Cramer was the first to investigate the curve, in 1750. Cramer was a Swiss mathematician, born in Geneva, best known for his work on determinants. He showed promise in mathematics from an early age: at 18 he received his doctorate and at 20 he was co-chair of mathematics. Lacroix also studied the curve, in 1810, and there was a publication about the curve in the Nouvelles Annales in 1858. This devil's curve is also known as the devil on two sticks. I quickly learnt that the devil in the name of the curve is from a juggling game called diabolo, which involves two sticks, a string holding them together, and a spinning prop in the likeness of the form of this curve, which is in the shape of an hourglass. 
The confusion is the result of the similarity to the Italian word diavolo, meaning 'devil'. The above also clarifies the meaning of the sticks in 'the devil on two sticks': these are the sticks used to handle the diabolo. For the special case a = 25/24 the curve is called the electric motor curve. The middle of the curve shows the coils of a wire, which rotate by means of the forces exerted by surrounding magnets. This was all about the intriguing and unique curve which has been kindling the interest of the many people who have a thirst for exploring the untapped arenas of mathematics, just like me.
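Since the defining equation is a quadratic in y², the curve's real points at a given x can be computed directly. The parameter values a = 1, b = 2 below are arbitrary, chosen only for illustration.

```python
import math

def devil_curve_y(x, a=1.0, b=2.0):
    """Real y satisfying y^2(y^2 - a^2) = x^2(x^2 - b^2), found by solving
    the quadratic u^2 - a^2 u - x^2(x^2 - b^2) = 0 in u = y^2."""
    rhs = x * x * (x * x - b * b)
    disc = a ** 4 + 4 * rhs
    if disc < 0:
        return []
    ys = set()
    for u in ((a * a + math.sqrt(disc)) / 2, (a * a - math.sqrt(disc)) / 2):
        if u >= 0:
            ys.update((math.sqrt(u), -math.sqrt(u)))
    return sorted(ys)

# The crunode: at x = 0 the curve passes through the origin (and y = ±a).
print(devil_curve_y(0.0))   # [-1.0, 0.0, 1.0]
```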

Space Mathematics and Communication Satellites

Prathik, Vth Sem B.Sc ABSTRACT The purpose of this article is to allow the reader to understand the significance of mathematics in a communications satellite. Mathematics plays a vital role in making all satellites work properly. This project focuses particularly on orbital calculations related to these satellites. For example, in order for the satellite to function accurately, it must be thrust at a velocity that conveys an adequate amount of energy to keep it in a particular orbit without adding any more force. When the orbit is low, the outer atmosphere's resistance will cause the satellite's orbit to decay, that is, to lose orbital speed and reenter the earth's atmosphere. The higher the orbit is above the earth, the longer the satellite will last; however, disturbing forces still have to be considered. Communications satellites are different from other satellites because they provide communication over long distances by reflecting or relaying radio-frequency signals. BACKGROUND Communications satellites allow radio, television, and telephone transmissions to be conveyed everywhere in the world. If we did not have satellites, transmissions would be problematical, and often times unfeasible, at long distances. The signals, which travel in straight lines, could not curve to mold around the earth to reach a target far away. Satellites are in orbit for this purpose. The signals are sent immediately into space and then retransmitted by a satellite, where they are then sent directly to their intended destination. Using transponders, which are electronic relay devices, satellites receive, amplify, and retransmit signals to the Earth. Iridium: A Communications Satellite Many of the original communications satellites were intended to function in passive mode. That is, they mirrored signals that were beamed up to them by transmitting stations on the ground. Signals were reflected in all directions, so they could be picked up by receiving stations around the world, unlike the latest communications satellites of today, which actively transmit radio signals. Satellites have several unique characteristics which make them particularly useful for everyday life. Because communication in today's society has become very technical, it involves precise and specialized equipment. Using orbiting satellites, the telecommunications industry provides telephone, television, and data services between widely separated fixed location points on earth. Communication satellites operate as relay stations, receiving radio signal messages from one position and then conveying them to another. A

communications satellite can transmit numerous television programs and a myriad of telephone calls at the same time. By using mathematics, it is possible to determine the factors that decide the length of a satellite's orbit around Earth, and calculate the transmission time for messages to be relayed. To determine the length of a satellite's orbit around Earth: recognizing that the Earth rotates 360 degrees in 24 hours, or (60 minutes/1 hour) X (24 hours/1 day) = (1440 minutes/1 day). Dividing 360 degrees by 1440 minutes shows that the Earth rotates 0.25 degrees every minute. Here's the math: (360 degrees/1440 minutes) = (0.25 degrees/1 minute). The satellites that we will track travel around the Earth in approximately 102 minutes. That is, if the satellite crossed the equator at 0 degrees longitude on one orbit, it would cross over 25.5 degrees longitude 102 minutes later: (0.25 degrees/1 minute) = (25.5 degrees/102 minutes). The fact that the Earth is extremely large in relation to the very modest thickness of the atmosphere leads to frequent, intentional distortions of scale in map projections. Constructing a true scale physical model of an orbiting satellite's path will lay the groundwork for insights into geographical configurations on a three-dimensional sphere, and the physical characteristics of the satellite's orbit (Gulf of Maine Aquarium, 2000). Communications satellites usually travel in geostationary orbit; this means that at the orbital altitude of 35,800 kilometers, a geostationary satellite takes as much time to orbit the Earth as it takes the Earth to rotate once. From Earth, consequently, the satellite appears to be motionless, hovering above the identical area of the Earth. The area to which it can convey is called the satellite's footprint (Galactics, 1997). Communications satellites also have the ability to travel in highly elliptical orbits. This type of orbit is roughly shaped like an egg, with the Earth being near the pinnacle of the egg. 
The satellite's change in velocity in the extremely elliptical orbit hinges on where the satellite is in its orbital path. Because Earth's gravitational pull is stronger there, the satellite moves faster in orbit when it is closer to Earth. This means that the satellite can be over a stationed area for a majority of its orbit. The only time that it will be out of contact with the specific area is when it rapidly passes close by the Earth (Galactics, 1997). Communication satellites have made it quicker and easier to communicate by lowering the costs of long distance telephone calls.
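The two orbital calculations above, the ground-track drift per orbit and the geostationary altitude, can be reproduced in a few lines. The Kepler's-third-law step and the physical constants are standard textbook values, not taken from the article.

```python
import math

# Earth rotation per minute and ground-track drift per 102-minute orbit.
deg_per_min = 360 / (24 * 60)
drift = deg_per_min * 102
print(deg_per_min, drift)    # 0.25 degrees/min, 25.5 degrees/orbit

# Geostationary altitude from Kepler's third law: r^3 = GM T^2 / (4 pi^2).
GM = 3.986004418e14           # m^3/s^2, Earth's gravitational parameter
T = 86164.0                   # one sidereal day, in seconds
r = (GM * T * T / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (r - 6_378_137) / 1000   # subtract Earth's equatorial radius
print(round(altitude_km))     # close to the ~35,800 km quoted above
```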

Photography and Math

Kaushik.M (5 BSc-PCM) I. Introduction A camera is a light tight box with a lens, or even a pinhole, at one end. At the other end of a digital camera is an electronic image sensor. Film, coated with light sensitive chemicals, is at the other end of a film camera. The lens is made up of one or more curved pieces of glass or plastic. The shape of the lens, and the material it is made of, causes light reflected from an object to bend or refract as it travels through the lens. This refraction causes an image to be formed as indicated in Figure 1.

Just the right amount of light must hit the film. Cameras have a door which opens letting light hit the film or sensor and closes to stop the light from hitting the film or sensor. The amount of light hitting the film or sensor depends on how long that door is open and how big the door is. The door is called the shutter. The size of the door is called the aperture. We will examine the shutter in Section II and the aperture in section III. In section IV we will examine the "focal length" of lenses and see how that affects aperture. II. Shutter Speeds The camera shutter is like a door which opens to let light in. Shutter speeds are the time the door or shutter remains open. Common shutter speeds in seconds are 1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15, 1/8, 1/4, 1/2 and 1. Most also have a B (Bulb) setting where the shutter stays open until you release the button. A common shutter speed is 1/125 second which is a tiny part of one second. In taking pictures of the night sky, however, photographers often use shutter speeds of several seconds, minutes or even hours. Shutter speed dials often use only the denominator instead of the entire fraction. Hence, 1,000 on a shutter speed dial means 1/1000 of a second. The shutter speeds form a geometric sequence. In moving from 1/1000 second towards 1 second each increase in shutter speed increases the time and the amount of light hitting the film by a factor of 2. The common ratio is hence 2. III. Aperture Aperture Defined. The aperture is the diameter of the lens. In other words, the aperture is how big the door is. The larger the diameter, the more light that can pass through the

lens. Cameras can actually change the size of their door or aperture. We will, therefore, first consider telescopes, which do not change the size of their opening and are hence easier to understand. Telescopes need a large diameter to collect the light from distant objects in the sky. The diameter of the telescope lens is often much more important than its power or magnification. The price of telescopes increases as the lens diameter increases. Telescope Types. It is easier to make a large diameter curved mirror than it is to make a lens. Larger diameter telescopes therefore often use a curved mirror instead of a lens. The image is formed as shown in the diagram below. Telescopes that use mirrors are called reflecting telescopes. Telescopes using lenses are called refracting telescopes.

Area = Light Gathering Power. While the aperture is measured by the diameter of the lens or mirror, the light gathering power is determined by the area of the lens or mirror. The area of a circle equals pi (π) times the radius squared, or Area = πr². A 60mm telescope has an area of πr² = π(30mm)² = 2827mm². (Radius = 1/2 diameter.) A 120mm telescope has an area of π(60mm)² = 11,310mm². While the diameter is double, the area and light gathering capacity go up four times, since area is calculated using the square of the radius. (11,310 divided by 2827 = 4) On the test you should be able to determine the area of a lens with a given diameter. You should also be able to compare the area of a lens of one diameter with the area of a lens of another diameter. Since we are comparing two quantities by division, we are finding a ratio. You should also be able to take into account the secondary mirror of a reflecting telescope. The secondary mirror is also round. The incoming light is blocked by the secondary mirror. My 6 inch reflecting telescope has a 1.5 inch secondary mirror. You calculate the effective light gathering area as follows: The area of a six inch diameter mirror is πr² = π(3in)² = 28.27 square inches. Subtract from this the area of a 1.5 inch diameter circle: πr² = π(0.75in)² = 1.77 in². 28.27 - 1.77 = 26.5 in². You can also calculate the percentage loss due to the secondary mirror: 1.77 divided by 28.27 = .063 = 6.3%, a relatively small loss. Designation of Aperture. The diameter of a telescope lens or mirror is usually given on the telescope, the box and/or the owner's manual. Binoculars are basically two small

telescopes joined together. Binoculars have designations such as 7 x 35, 7 x 50, 10 x 50, or 3 x 21 on them. The first number is the magnification or power. The second number is the lens diameter in millimeters. Camera lenses also designate the aperture, but in a more complex way. IV. Focal Length Focal Length Described. Before we discuss the aperture of camera lenses, we must first consider another important characteristic of lenses, the focal length. The focal length can be thought of as how long a lens is. Long lenses, like a photographer uses at a football game, make things appear close up. Short lenses make things look further away but give you a wide angle of view. Now, let's get a little more technical. The focal length of a lens is the distance between the optical center of the lens and the point where a clear image is formed. This is shown in figure 1, repeated below. We measured the focal length of several magnifying glasses, which are one piece, or "simple," lenses. A camera lens is usually composed of several individual lenses and is called a compound lens. The focal length of modern camera lenses and telescopes is usually measured in millimeters.

Focal length is very important in photography. Short focal length lenses give you a wide view and are called wide angle lenses. Longer focal length lenses have a narrow view and make things appear closer. They are called telephoto lenses. In between are normal lenses, which have an angle of view similar to the human eye. 35mm cameras use film that is 35mm wide. Common wide angle focal lengths for 35mm cameras are 24mm, 28mm, and 35mm. Common normal focal lengths for 35mm cameras are 50mm and 55mm. Common telephoto lenses for 35mm cameras are 100mm, 135mm, 200mm, 300mm and 400mm. Zoom lenses are popular today. A zoom lens has a range of focal lengths. The photographer changes the focal length with a button on the camera or a ring on the lens. Examples of common zoom lenses for 35mm cameras are 28mm to 80mm, 70mm to 210mm and 100mm to 300mm.

Angle of View. Looking through the viewfinder of a camera, we identified points that were on the edge of the scene. We treated these as points on two rays extending from the camera, which was the vertex. In this way we could fairly precisely measure, with the large protractors we made, the angle of view of a lens with a particular focal length. Figure 3 is a diagram of the angle of view of lenses with focal lengths 28, 35, 55, 100, 135, 200 and 400. The values are from a "Canon EOS System" brochure, page 12, "EF Lens Specifications." These angles are somewhat wider than yours, since they measure the angle of view across the diagonal of the viewfinder. SLR Cameras. The cameras we were using are called 35mm single lens reflex (SLR) cameras. 35mm refers to the width of the film. The actual area of the image is 24mm x 36mm. Single lens reflex refers to the fact that you see through the lens that is actually creating the image on the film. This is accomplished by a mirror and pentagonal prism. At the instant the picture is taken, however, the mirror pops up, the shutter opens and light passes to the film instead of your eye. The light path, mirror and pentagonal prism are shown in figure 4. SLR Advantages. SLR cameras have two very important advantages over other camera types. First, SLR cameras allow you to see almost exactly what will hit the film or image sensor. This is especially important when taking photographs at a close distance. With a separate viewfinder, what you see in the viewfinder may be different from what the film or image sensor "sees" through the lens. Second, most SLR cameras allow you to change the lens, giving you a wide variety of focal lengths to choose from. V. Adjustable Camera Apertures - f-stops Unlike telescopes, you can adjust the aperture or diameter of many camera lenses as shown in Figure 5, using a ring on the lens or an electronic button. This is one of the two primary ways to get the proper amount of light hitting the film.

Figure 5: Adjustable Aperture Lens Relation Between Aperture and Focal Length. The numbers in figure 5 require considerable explanation. A larger aperture (diameter) results in more light. The amount of light is also related to focal length, however. Let's say you are taking a photograph of a wall 20m by 30m with a lens having a 28mm focal length and an aperture (diameter) of 14mm. Now stand at the same location and switch to a 55mm lens, approximately twice the focal length of the 28mm lens. Assume the 55mm lens has the same aperture or diameter. Now you will only see a portion of the wall 10m by 15m: half the distance in both dimensions. The area you see with the 55mm lens will be almost 4 times less than with the 28mm lens. 20m x 30m = 600m². 10m x 15m = 150m². 600 divided by 150 = 4. With 1/4 of the scene, you receive 1/4 the light, even though both lens openings have an area of πr² = π(7mm)² = 154mm². To get the same amount of light you need a lens opening with 4 times the area: 4 x (154mm²) = 616mm². Use πr² to find the radius needed: 616mm² = πr², so r² = 616/π. Taking the square root of both sides, r equals approximately 14mm, so the diameter equals 2r, or 28mm. Therefore, to get the same amount of light as a 28mm focal length lens with a diameter of 14mm, you need a 55mm lens with about twice the diameter, or 28mm. f-stops. Relax! Photographers really don't calculate all of this to take a photograph.
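The worked example above reduces to a few lines of arithmetic:

```python
import math

def aperture_area(diameter):
    """Aperture area pi * r^2 from the diameter, in mm^2."""
    return math.pi * (diameter / 2) ** 2

a28 = aperture_area(14)              # 28mm lens with a 14mm opening
needed = 4 * a28                     # the 55mm lens must gather 4x the light
r = math.sqrt(needed / math.pi)      # solve pi * r^2 = needed for r
print(round(a28), round(needed), round(2 * r))   # 154, 616, 28
```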

Instead, they use f-stops. The f-stop number is the ratio of the focal length to the aperture diameter:

f-stop = focal length / diameter

We can also solve this for diameter:

diameter = focal length / f-stop

F-stops are designed so that the same f-stop number on any lens results in the same amount of light hitting the film no matter what the focal length. Cameras use the following sequence of f-stop numbers: 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22. This sequence seems pretty mysterious at first. We found out what it means, however, by finding the diameter and area for different focal length lenses. Let's find the diameter and area for various f-stops on the 28mm and 55mm lenses.

As the f-stop moves in the direction f1.4 to f16, the area, and hence the amount of light, is cut in half with each increasing f-stop number. It is hence a geometric sequence with a common ratio of 1/2. Conversely, moving in the direction of f16 to f1.4, the area, and hence the amount of light, doubles (increases by a factor of 2) with each decreasing f-stop number. It is hence a geometric sequence with a common ratio of 2. The f-stops and diameters are also geometric sequences, decreasing or increasing by a factor of the square root of 2, or about 1.4, with each change of f-stop. The common ratio is the square root of 2 because the amount of light depends on area, which changes with the square of the radius. The table also shows that for a given f-stop, the area of the 55mm lens is about 4 times that of the 28mm. As explained above, however, the actual light hitting the film is the same, since the 55mm lens takes in only 1/4 the scene that the 28mm lens takes in.
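The halving pattern is easy to confirm numerically, using diameter = focal length / f-stop. (The marked f-numbers are rounded, so the ratios come out as approximately 0.5 rather than exactly.)

```python
import math

# Aperture area at each f-stop for a 28mm lens; the same halving
# pattern holds for any focal length.
f_stops = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16]
focal = 28  # mm

areas = [math.pi * (focal / f / 2) ** 2 for f in f_stops]
ratios = [b / a for a, b in zip(areas, areas[1:])]
# Each step up the f-stop scale cuts the area (and the light) roughly in half.
print([round(x, 2) for x in ratios])
```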

On the test I will expect you to know how to use the equations f-stop = focal length / diameter and diameter = focal length / f-stop. For a given focal length and f-stop, you must be able to find the diameter. I expect you to know that the same f-stop will deliver the same amount of light for any focal length lens, although the diameters of the lenses will be different.

You must also recognize that f-stops and the corresponding diameters and areas are geometric sequences with a common ratio of the square root of 2 (f-stops and diameters) or 2 (areas). Finally, you must know that as you increase from one f-stop number to the next (e.g. f5.6 to f8), you cut the amount of light in half. Conversely, as you decrease from one f-stop number to the next (e.g. f8 to f5.6), you double the amount of light. What photographers remember is the last two sentences. Designation of Focal Length and Maximum f-stop. Figure 6 shows the front of a lens. The designation 1:1.4/50 means its focal length is 50mm and its maximum f-stop is 1.4. Figure 6 also shows an SLR camera with the focusing ring (red arrow), aperture ring (orange) and shutter speed dial (black), which we will learn about next.

VI. Exposure Correct Exposure with Shutter Speeds and f-stops. To get a photograph that is not too dark or not too light the photographer uses the f-stops and shutter speeds to control the amount of light hitting the film. The amount of light hitting the film is called the exposure. The photographer determines how much light is needed by using a light meter, suggestions on the box of film, or from experience. Since the mid 1960s most single lens reflex cameras have light meters inside them. The photographer can set the shutter speed and then move the aperture ring until a needle or light indicates the exposure is right. Conversely, the photographer can set the f-stop and then move the shutter speed dial until the needle or light indicates the correct exposure. Modern cameras will also generally have a setting that will automatically set a proper shutter speed and f-stop.

Since there are two variables (shutter speeds and f-stops) that control the amount of light hitting the film, several combinations of shutter speeds and f-stops will give the same exposure. Suppose the meter says 1/125, f8 is a proper exposure. I can double the exposure by slowing the shutter speed to 1/60, so the shutter stays open twice as long. I can then halve the exposure by changing the aperture to f11. The new setting of 1/60, f11 gives the same exposure (i.e. allows the same amount of light) as the old setting of 1/125, f8. On the test you will be given a particular shutter speed and f-stop setting. You will then be asked to find several other settings that give an equivalent exposure.

Why Two Controls Are Useful. Why do you need both shutter speed and f-stop settings? First, having both lets you vary the light over a wide range. For example, 1/1000, f16 lets in very little light, while 1 second, f1.4 lets in a large amount. If you had only the shutter speeds or only the f-stops, you would not have such a wide range. Second, varying the shutter speed or aperture affects how the photograph looks. If you want to freeze action you can use a fast shutter speed of 1/1000, 1/500 or 1/250 of a second. Conversely, you might want to blur a fast-moving subject to give the illusion of movement by using a slow shutter speed. You might also use a slow shutter speed with the camera mounted on a tripod to give a misty, feathery appearance to a waterfall. Aperture affects how much of the scene is in focus. If I use an f16 f-stop to photograph a person's head with a bookcase behind it, both the head and the bookcase will be in focus. If I instead use an f2 f-stop, the head will be in focus but the books in the bookcase will not. I might want the books blurred so that viewers focus their attention on the person's face rather than the books. This concept of how much is in focus is called depth of field.
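The equivalent-exposure arithmetic (1/125 at f8 letting in the same light as 1/60 at f11) can be checked with a short Python sketch. The function name `exposure` and the proportionality model are my own illustration: exposure is proportional to shutter-open time divided by N², since light admitted is proportional to aperture area:

```python
import math

# Exposure is proportional to (time the shutter is open) / N^2.
# Doubling the shutter time and multiplying N by sqrt(2) (one full stop
# smaller aperture) therefore leaves the exposure unchanged.

def exposure(shutter_seconds, f_stop):
    return shutter_seconds / f_stop ** 2

metered = exposure(1 / 125, 8)                 # the meter's suggestion
exact = exposure(2 / 125, 8 * math.sqrt(2))    # exactly one stop each way

# In practice the marked settings 1/60 and f11 are roundings of
# 2/125 s and 8*sqrt(2) ≈ 11.3, so the match is close but not exact.
marked = exposure(1 / 60, 11)

print(metered, exact, marked)
```

This also explains why test questions on equivalent exposures always move the two dials by the same number of clicks in opposite directions: each click is one factor-of-two step in light.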
Many single lens reflex cameras allow you to close the aperture to the desired f-stop while looking through the viewfinder to visualize what the depth of field will be. This is called depth of field preview. Many lenses also have a scale on them that helps you determine the depth of field.

VII. ISO Rating

There is one more geometric sequence with a common ratio of two that affects exposure: the film's ISO rating. Film usually has a designation with the letters ISO followed by a number, usually 50, 100, 200, 400, 800, or 1600. This sequence is geometric with a common ratio of 2. Other ratings such as 64, 125, and 1,000 also occur. ISO refers to the International Organization for Standardization. The ISO rating is a measure of how sensitive the film is to light. In the scale above, 50 is least sensitive and 1600 is most sensitive: ISO 50 requires twice the exposure of ISO 100, ISO 100 requires twice the exposure of ISO 200, and so forth. Therefore, if my light meter tells me to expose a photograph at 1/125, f8 with ISO 100 film, I can expose it at 1/250, f8 with ISO 200 film, or 1/500, f8 with ISO 400 film. Photographers say the higher numbers are "faster" films and the lower numbers are "slower" films. While "fast" films are good for low light or fast action, they tend to be more expensive and may have more "grain," or less resolution. On the test I might ask you to give me an equivalent exposure if I change the film ISO rating as in the example above. Also know that if you double the ISO rating, you must cut the exposure in half using either the shutter speed or the f-stop.

When digital cameras came out, they continued to use the ISO rating. Digital cameras allow you to select different ISO settings at any time. I can take one photo at ISO 100; on the next picture I can set the dial to ISO 200 and cut the exposure in half. Setting the dial to ISO 200 essentially increases the sensitivity of the image sensor to light. With film the only way to change the ISO was to change the film. Many modern digital single lens reflex cameras have ISO settings from ISO 100 up to ISO 3200 or even more today. At higher ISO settings, however, the image quality may suffer from significant "noise."

VIII. Modern SLRs and Digital Cameras

The cameras you used were from the 1960s and 1970s. Modern film SLRs work the same way except that they have more electronic controls, more automation, and usually automatic focus in addition to manual focus. Modern film SLR cameras do not necessarily take better pictures, but they are more convenient. When I first wrote this in 2004, digital SLRs had just broken the $1,000 price level with the Canon Digital Rebel 6.3 megapixel camera, introduced in October 2003. Just a few years earlier digital SLRs could cost over $10,000 with much lower resolution. As I write the revised version of this article in 2010, you can buy a digital SLR with 10.2 megapixels for $400. A digital SLR with optical through-the-lens viewing works pretty much the same as a film SLR camera. All of the information in this article applies to both film and digital SLR cameras.

IX. Conclusion

If you are interested in photography but are confused by some of the math, don't let that stop your interest. You don't need to understand all the math here to enjoy photography. If you find the math interesting but don't care for photography, enjoy the math. Photography uses a lot of middle school math concepts.
In any event, read through this paper to prepare for the test. Pay particular attention to the "On the test" clues. Focus on the calculations we did in class. You will get to use this paper during the test. While this is a long paper, we have really just touched the surface of studying photography and math. For example, we have not discussed film processing and printing, or digital image manipulation.

Mathematics is a vast subject and finds application in almost every field. Everything around us involves mathematics; we just fail to realize its importance. In fact, we often think it is mere calculation and crack our heads trying to get the expected answers! One such amazing facet is MATHEMATICS IN POETRY.

POEMS WITH MATHEMATICAL IMAGERY:


Axes beget coordinates,
Dutifully expressing
Functions, graphs
Helpful in justifications
Keeping legendary mathematics
New or peculiarly quite rational
So that understanding's visual
With x, y, z.

2) GRAVITY AND LEVITY

This is the bigger world than it was once
It expands an explosion it can't help it has
Nothing to do with us whether we know or
Not whether our theories can be proved
Whether or not a mathematician
Knew a better class of circles
(He has a name, Taniyama conjecture)
Than was ever known before before
Not circles elliptic curves. Not doughnuts.
Not anything that is nearly, only is, such
A world is hard to imagine, harder living in
Harder still to leave. A little like Love, Dear.



A TRIANGULAR POEM - MORE THAN COUNTING

It's a type of visual poem whose layout and content are related. This stanza shows the sound effect of different line lengths: one can notice how the pace changes as the line length changes, and feel the shape of the poem.

One
Added forever
Joined by zero,
Paired to opposites
These build the integers,
Base for construction of more
New numbers from old: ratios,
Radical roots and transcendental,
Transfinite cardinals, construction bold!!!