
MICROECONOMICS ANALYSIS

MEC-1

MICROECONOMICS ANALYSIS

Selected & Expected Questions


Easy to Remember
Best Guarantee of Success
Prepared with superior experience and high quality


QNO1:- What do you mean by a pure public good?

ANS:- In economics, a public good is a good that is both non-excludable and non-rivalrous in that
individuals cannot be effectively excluded from use and where use by one individual does not reduce
availability to others. Examples of public goods include fresh air, knowledge, lighthouses, national
defense, flood control systems and street lighting. Public goods that are available everywhere are
sometimes referred to as global public goods.

Many public goods may at times be subject to excessive use resulting in negative externalities affecting
all users; for example air pollution and traffic congestion. Public goods problems are often closely related
to the "free-rider" problem or the tragedy of the commons, in which people not paying for the good may
continue to access it. Thus, the good may be under-produced, overused or degraded. Public goods may
also become subject to restrictions on access and may then be considered to be club goods or private
goods; exclusion mechanisms include copyright, patents, congestion pricing, and pay television.

Uncoordinated markets driven by self-interested parties may be unable to provide these goods. There is a
good deal of debate and literature on how to measure the significance of public goods problems in an
economy, and to identify the best remedies.

The economic concept of public goods should not be confused with the expression "the public good",
which is usually an application of a collective ethical notion of "the good" in political decision-making.
Another common confusion is that public goods are goods provided by the public sector. Although it is
often the case that Government is involved in producing public goods, this is not necessarily the case.
Public goods may be naturally available. They may be produced by private individuals and firms, by non-
state collective action, or they may not be produced at all.

The theoretical concept of public goods does not distinguish with regard to the geographical region in
which a good may be produced or consumed. However, some theorists (such as Inge Kaul) use the term
'global public good' for a public good which is non-rival and non-excludable throughout the whole world,
as opposed to a public good which exists in just one national area. Knowledge has been held to be an
example of a global public good, but also as a commons, the knowledge commons.

Graphically, non-rivalry means that if each of several individuals has a demand curve for a public good,
then the individual demand curves are summed vertically to get the aggregate demand curve for the public
good. This is in contrast to the procedure for deriving the aggregate demand for a private good, where
individual demands are summed horizontally.
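The vertical-versus-horizontal summation can be made concrete with a small numeric sketch. The two linear demand curves below are hypothetical, chosen only to illustrate the two aggregation rules:

```python
# Sketch: aggregating demand for a public good vs a private good.
# Two consumers with hypothetical linear demand curves.

def inverse_demand_1(q):          # consumer 1's marginal willingness to pay at q
    return max(0.0, 10 - q)

def inverse_demand_2(q):          # consumer 2's marginal willingness to pay at q
    return max(0.0, 8 - q)

def public_good_demand(q):
    """Public good: everyone consumes the same q, so individual
    willingness-to-pay figures (prices) are summed vertically."""
    return inverse_demand_1(q) + inverse_demand_2(q)

def private_good_demand(p):
    """Private good: everyone faces the same price, so individual
    quantities are summed horizontally."""
    q1 = max(0.0, 10 - p)         # invert p = 10 - q
    q2 = max(0.0, 8 - p)          # invert p = 8 - q
    return q1 + q2

print(public_good_demand(4))      # 6 + 4 = 10: total willingness to pay at q = 4
print(private_good_demand(4))     # 6 + 4 = 10: total quantity demanded at p = 4
```

With these numbers the two aggregates coincide at 4, but they answer different questions: the first is a total price at a common quantity, the second a total quantity at a common price.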

Examples

Common examples of public goods include: defense, public fireworks, lighthouses, clean air and other
environmental goods, and information goods, such as software development, authorship, and invention.
Some goods (such as orphan drugs) require special governmental incentives to be produced, but cannot be
classified as public goods since they do not fulfill the above requirements (non-excludability and non-
rivalry). Law enforcement, streets, libraries, museums, and education are commonly misclassified as
public goods, but they are technically classified in economic terms as quasi-public goods because
excludability is possible, although they do still share some of the characteristics of public goods.

The provision of a lighthouse has often been used as the standard example of a public good, since it is
difficult to exclude ships from using its services. No ship's use detracts from that of others, but since most
of the benefit of a lighthouse accrues to ships using particular ports, lighthouse maintenance fees can
often profitably be bundled with port fees (Ronald Coase, The Lighthouse in Economics 1974). This has
been sufficient to fund actual lighthouses.

Technological progress can create new public goods. The simplest examples are street lights, which
are relatively recent inventions (by historical standards). One person's enjoyment of them does not detract
from other persons' enjoyment, and it currently would be prohibitively expensive to charge individuals
separately for the amount of light they presumably use. On the other hand, a public good's status may
change over time. Technological progress can significantly impact excludability of traditional public
goods: encryption allows broadcasters to sell individual access to their programming. The costs for
electronic road pricing have fallen dramatically, paving the way for detailed billing based on actual use.

There is some question as to whether defense is a public good. Murray Rothbard argues, "'national
defense' is surely not an absolute good with only one unit of supply. It consists of specific resources
committed in certain definite and concrete ways, and these resources are necessarily scarce. A ring of
defense bases around New York, for example, cuts down the amount possibly available around San
Francisco." Jeffrey Rogers Hummel and Don Lavoie note, "Americans in Alaska and Hawaii could very
easily be excluded from the U.S. government's defense perimeter, and doing so might enhance the
military value of at least conventional U.S. forces to Americans in the other forty-eight states. But, in
general, an additional ICBM in the U.S. arsenal can simultaneously protect everyone within the country
without diminishing its services."

QNO2:- How would you differentiate a static game from that of a dynamic game?
ANS: - The difference is not that a static game is represented in normal (strategic) form while a dynamic
game is represented in extensive form (game tree). As a matter of fact, both can be represented in normal
form or extensive form. The main difference between them is what is known by the players when they
make their decisions. In the classic example, the wife knew whether her husband had bought meat or fish
when she needed to choose between red and white wine. An information set for a player is a set of decision
nodes in a game tree such that (1) the player concerned (and no other) is making a decision, and (2) the
player does not know which node has been reached (only that it is one of the nodes in the set). So a player
must have the same choices at all nodes in an information set.

In a dynamic game, a behavioral strategy performs its randomization at each information set. In working
backward through the game tree, we find a best response at each information set, so the end result is an
equilibrium in "behavioral strategies".
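As a sketch of the backward-induction procedure just described, the following works through a toy two-stage game in the spirit of the husband-and-wife example. The payoff numbers are invented purely for illustration:

```python
# Minimal backward-induction sketch for a two-stage game of perfect information
# (the husband moves first; the wife observes his choice). Payoffs are hypothetical.

wife_payoff = {
    ("Meat", "Red"): 2, ("Meat", "White"): 1,
    ("Fish", "Red"): 1, ("Fish", "White"): 2,
}
husband_payoff = {
    ("Meat", "Red"): 2, ("Meat", "White"): 0,
    ("Fish", "Red"): 0, ("Fish", "White"): 1,
}

def solve_backward():
    # Each of the wife's information sets is a single node (she knows the
    # husband's move), so we pick her best response at each one.
    wife_best = {}
    for h in ("Meat", "Fish"):
        wife_best[h] = max(("Red", "White"), key=lambda w: wife_payoff[(h, w)])
    # The husband anticipates her responses and picks his best move.
    h_star = max(("Meat", "Fish"), key=lambda h: husband_payoff[(h, wife_best[h])])
    return h_star, wife_best[h_star]

print(solve_backward())  # ('Meat', 'Red') with these payoffs
```

Because each information set here contains one node, the best responses are pure; in a game where an information set contains several nodes, the same procedure would select a (possibly randomized) behavioral strategy at that set.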
QNO3:- Explain how and in what circumstances markets would lead to a Pareto efficient allocation
of resources?

ANS: - Pareto efficiency, or Pareto optimality, is a concept in economics with applications in
engineering. The term is named after Vilfredo Pareto (1848–1923), an Italian economist who used the
concept in his studies of economic efficiency and income distribution. In a Pareto efficient economic
allocation, no one can be made better off without making at least one individual worse off. Given an
initial allocation of goods among a set of individuals, a change to a different allocation that makes at least
one individual better off without making any other individual worse off is called a Pareto improvement.
An allocation is defined as "Pareto efficient" or "Pareto optimal" when no further Pareto improvements
can be made.
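These definitions translate directly into a small check over toy utility profiles (the numbers below are hypothetical, one utility level per individual under each allocation):

```python
# Sketch: checking Pareto improvements and Pareto efficiency over a toy
# set of allocations. Each allocation maps to a utility profile.

profiles = {
    "A": (3, 3),
    "B": (4, 3),   # Pareto-improves on A: one person better off, no one worse off
    "C": (5, 1),   # neither improves on B nor is improved on by it
}

def pareto_improves(x, y):
    """True if profile x makes someone better off and no one worse off than y."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def pareto_efficient(name):
    """An allocation is efficient if no alternative Pareto-improves on it."""
    return not any(pareto_improves(p, profiles[name])
                   for other, p in profiles.items() if other != name)

print(pareto_improves(profiles["B"], profiles["A"]))   # True
print(pareto_efficient("A"))                           # False: B improves on A
print(pareto_efficient("B"), pareto_efficient("C"))    # True True
```

Note that C is Pareto efficient despite its unequal profile (5, 1), which illustrates the point made below: Pareto efficiency says nothing about equality or overall social desirability.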

Pareto efficiency is a minimal notion of efficiency and does not necessarily result in a socially desirable
distribution of resources: it makes no statement about equality, or the overall well-being of a society. The
notion of Pareto efficiency can also be applied to the selection of alternatives in engineering and similar
fields. Each option is first assessed under multiple criteria and then a subset of options is identified with
the property that no other option can categorically outperform any of its members.

It is commonly accepted that outcomes that are not Pareto efficient are to be avoided, and therefore Pareto
efficiency is an important criterion for evaluating economic systems and public policies. If economic
allocation in any system is not Pareto efficient, there is potential for a Pareto improvement, an increase
in Pareto efficiency: through reallocation, improvements can be made to at least one participant's well-
being without reducing any other participant's well-being. It is important to note, however, that a change
from an inefficient allocation to an efficient one is not necessarily a Pareto improvement. Thus, in
practice, ensuring that nobody is disadvantaged by a change aimed at achieving Pareto efficiency may
require compensation of one or more parties. For instance, if a change in economic policy eliminates a
monopoly and that market subsequently becomes competitive and more efficient, the monopolist will be
made worse off. However, the loss to the monopolist will be more than offset by the gain in efficiency.
This means the monopolist can be compensated for its loss while still leaving a net gain for others in the
economy, a Pareto improvement. In real-world practice, such compensations have unintended
consequences. They can lead to incentive distortions over time as agents anticipate such compensations
and change their actions accordingly. Under certain idealized conditions, it can be shown that a system of
free markets will lead to a Pareto efficient outcome. This is called the first welfare theorem. It was first
demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu. However, the result
only holds under the restrictive assumptions necessary for the proof (markets exist for all possible goods
so there are no externalities, all markets are in full equilibrium, markets are perfectly competitive,
transaction costs are negligible, and market participants have perfect information). In the absence of
perfect information or complete markets, outcomes will generically be Pareto inefficient, per the
Greenwald–Stiglitz theorem.
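Under the idealized assumptions of the first welfare theorem, a competitive equilibrium equates every agent's marginal rate of substitution with the price ratio, which is exactly the condition for Pareto efficiency in an exchange economy. A minimal sketch of this in a toy two-agent, two-good economy follows; the Cobb–Douglas utilities and endowments are assumptions chosen for illustration:

```python
# Sketch: first welfare theorem in a toy 2-agent, 2-good exchange economy.
# Both agents have Cobb-Douglas utility u(x, y) = x**0.5 * y**0.5 (hypothetical);
# endowments are (1, 0) and (0, 1). Good y is the numeraire (py = 1).

def demand(alpha, income, px, py):
    """Cobb-Douglas demands: spend share alpha on x and (1 - alpha) on y."""
    return alpha * income / px, (1 - alpha) * income / py

def excess_demand_x(px, py=1.0):
    x1, _ = demand(0.5, px * 1 + py * 0, px, py)   # agent 1 owns 1 unit of x
    x2, _ = demand(0.5, px * 0 + py * 1, px, py)   # agent 2 owns 1 unit of y
    return x1 + x2 - 1                              # total supply of x is 1

# Bisection for the market-clearing relative price of x.
lo, hi = 0.01, 100.0
for _ in range(100):
    mid = (lo + hi) / 2
    if excess_demand_x(mid) > 0:
        lo = mid        # excess demand: price of x must rise
    else:
        hi = mid
px = (lo + hi) / 2

x1, y1 = demand(0.5, px, px, 1.0)
x2, y2 = demand(0.5, 1.0, px, 1.0)
mrs1, mrs2 = y1 / x1, y2 / x2   # MRS for symmetric Cobb-Douglas is y/x
print(round(px, 4), round(mrs1, 4), round(mrs2, 4))
```

At the clearing price both agents' marginal rates of substitution equal the price ratio, so no mutually beneficial trade remains: the equilibrium allocation is Pareto efficient, as the theorem asserts under these assumptions.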

A "weak Pareto optimum" (WPO) is an allocation for which there are no possible alternative allocations
whose realization would cause every individual to gain. Thus an alternative allocation is considered to be
a Pareto improvement only if the alternative allocation is strictly preferred by all individuals. When
contrasted with weak Pareto efficiency, a standard Pareto optimum as described above may be referred to
as a "strong Pareto optimum" (SPO).

Weak Pareto-optimality is "weaker" than strong Pareto-optimality in the sense that the conditions for
WPO status are "weaker" than those for SPO status: any SPO also qualifies as a WPO, but a WPO
allocation is not necessarily an SPO.

QNO4:- Why would you prefer Pareto's approach to welfare analysis over that of Pigou? Elaborate
your viewpoints.
ANS: - In view of the distinct and seminal contributions of Pareto and Pigou to the economics of welfare,
Pigou's enduring influence in the field of public finance and Pareto's hostility to developments in that
field of study, the lack of a comparative study of their contributions is unfortunate. This study contrasts
the place of ophelimity and utility in these authors' approaches to welfare studies. Attention is also
given to the place of individuals' consciousness of consumption by others in the treatment of economic
welfare and total welfare. It is found that the substantive differences in the welfare studies of these two
scholars have less to do with Pigou's direct and Pareto's less direct materialistic focus of welfare
economics, or the differing ordinal/cardinal dimensions to their analysis, than with Pareto's and Pigou's
diverse views: on the theoretical representation of the economic phenomenon when individual behaviour
is influenced by the consumption by others; and on the character of science. These last two differences are
important because they have direct consequences for the scope of economic and social welfare theories.

Welfare economics is a branch of economics that uses microeconomic techniques to evaluate economic
well-being, especially relative to competitive general equilibrium within an economy as to economic
efficiency and the resulting income distribution associated with it. It analyzes social welfare, however
measured, in terms of economic activities of the individuals that compose the theoretical society
considered. Accordingly, individuals, with associated economic activities, are the basic units for
aggregating to social welfare, whether of a group, a community, or a society, and there is no "social
welfare" apart from the "welfare" associated with its individual units.

Welfare economics typically takes individual preferences as given and stipulates a welfare improvement
in Pareto efficiency terms from social state A to social state B if at least one person prefers B and no one
else opposes it. There is no requirement of a unique quantitative measure of the welfare improvement
implied by this. Another aspect of welfare treats income/goods distribution, including equality, as a
further dimension of welfare.

Social welfare refers to the overall welfare of society. With sufficiently strong assumptions, it can be
specified as the summation of the welfare of all the individuals in the society. Welfare may be measured
either cardinally in terms of "utils" or dollars, or measured ordinally in terms of Pareto efficiency. The
cardinal method in "utils" is seldom used in pure theory today because of aggregation problems that make
the meaning of the method doubtful, except on widely challenged underlying assumptions. In applied
welfare economics, such as in cost-benefit analysis, money-value estimates are often used, particularly
where income-distribution effects are factored into the analysis or seem unlikely to undercut the analysis.

The capabilities approach to welfare argues that freedom - what people are free to do or be - should be
included in welfare assessments, and the approach has been particularly influential in development policy
circles where the emphasis on multi-dimensionality and freedom has shaped the evolution of the Human
Development Index.

Other classifying terms or problems in welfare economics include externalities, equity, justice, inequality,
and altruism.

QNO5:- Write short note on Rawls' theory of justice.

ANS: - A Theory of Justice is a work of political philosophy and ethics by John Rawls. It was originally
published in 1971 and revised in both 1975 (for the translated editions) and 1999. In A Theory of Justice,
Rawls attempts to solve the problem of distributive justice (the socially just distribution of goods in a
society) by utilizing a variant of the familiar device of the social contract. The resultant theory is known
as "Justice as Fairness", from which Rawls derives his two principles of justice: the liberty principle and
the difference principle.

Objective

In A Theory of Justice, Rawls argues for a principled reconciliation of liberty and equality. Central to this
effort is an account of the circumstances of justice, inspired by David Hume, and a fair choice situation
for parties facing such circumstances, similar to some of Immanuel Kant's views. Principles of justice are
sought to guide the conduct of the parties. These parties are recognized to face moderate scarcity, and
they are neither naturally altruistic nor purely egoistic. They have ends which they seek to advance, but
prefer to advance them through cooperation with others on mutually acceptable terms. Rawls offers a
model of a fair choice situation (the original position with its veil of ignorance) within which parties
would hypothetically choose mutually acceptable principles of justice. Under such constraints, Rawls
believes that parties would find his favoured principles of justice to be especially attractive, winning out
over varied alternatives, including utilitarian and libertarian accounts.

According to Rawls, ignorance of these details about oneself will lead to principles that are fair to all. If
an individual does not know how he will end up in his own conceived society, he is likely not going to
privilege any one class of people, but rather to develop a scheme of justice that treats all fairly. In
particular, Rawls claims that those in the Original Position would all adopt a maximin strategy, which
would maximize the prospects of the least well-off.
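The maximin rule can be sketched directly: among candidate social schemes, choose the one whose worst-off position fares best. The schemes and welfare numbers below are invented for illustration only:

```python
# Sketch: the maximin rule Rawls attributes to parties in the Original Position.
# Each hypothetical scheme lists the welfare of each social position.

schemes = {
    "laissez-faire": (10, 6, 1),
    "strict-equality": (4, 4, 4),
    "difference-principle": (8, 6, 5),  # inequality that benefits the worst-off
}

def maximin_choice(options):
    """Pick the scheme whose worst-off position fares best."""
    return max(options, key=lambda name: min(options[name]))

print(maximin_choice(schemes))  # 'difference-principle': its worst-off gets 5
```

Note how the chosen scheme is unequal yet preferred to strict equality, mirroring Rawls' claim that inequalities can be just when they work to the benefit of the least well-off.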

The basic liberties of citizens are the political liberty to vote and run for office, freedom of speech and
assembly, liberty of conscience, freedom of personal property, and freedom from arbitrary arrest.
However, he says:

liberties not on the list, for example, the right to own certain kinds of property (e.g. means of production)
and freedom of contract as understood by the doctrine of laissez-faire are not basic; and so they are not
protected by the priority of the first principle.

The first principle may not be violated, even for the sake of the second principle, above an unspecified but
low level of economic development. However, because various basic liberties may conflict, it may be
necessary to trade them off against each other for the sake of obtaining the largest possible system of
rights. There is thus some uncertainty as to exactly what is mandated by the principle, and it is possible
that a plurality of sets of liberties satisfy its requirements.

Rawls' claim in (a) is that departures from equality of a list of what he calls primary goods ("things
which a rational man wants whatever else he wants" [Rawls, 1971, p. 92]) are justified only to the
extent that they improve the lot of those who are worst-off under that distribution in comparison with the
previous, equal, distribution. His position is at least in some sense egalitarian, with a proviso that equality
is not to be achieved by worsening the position of the least advantaged. An important consequence here,
however, is that inequalities can actually be just on Rawls' view, as long as they are to the benefit of the
least well off. His argument for this position rests heavily on the claim that morally arbitrary factors (for
example, the family one is born into) shouldn't determine one's life chances or opportunities. Rawls is
also keying on an intuition that a person does not morally deserve their inborn talents; thus that one is not
entitled to all the benefits they could possibly receive from them; hence, at least one of the criteria which
could provide an alternative to equality in assessing the justice of distributions is eliminated.

The stipulation in (b) is lexically prior to that in (a). Fair equality of opportunity requires not merely that
offices and positions are distributed on the basis of merit, but that all have reasonable opportunity to
acquire the skills on the basis of which merit is assessed. It may be thought that this stipulation, and even
the first principle of justice, may require greater equality than the difference principle, because large
social and economic inequalities, even when they are to the advantage of the worst-off, will tend seriously
to undermine the value of the political liberties and any measures towards fair equality of opportunity.

QNO6:- Write short note on Public goods.


ANS:- In economics, a public good is a good that is both non-excludable and non-rivalrous in that
individuals cannot be effectively excluded from use and where use by one individual does not reduce
availability to others. Examples of public goods include fresh air, knowledge, lighthouses, national
defence, flood control systems and street lighting. Public goods that are available everywhere are
sometimes referred to as global public goods.

Many public goods may at times be subject to excessive use resulting in negative externalities affecting
all users; for example air pollution and traffic congestion. Public goods problems are often closely related
to the "free-rider" problem or the tragedy of the commons, in which people not paying for the good may
continue to access it. Thus, the good may be under-produced, overused or degraded. Public goods may
also become subject to restrictions on access and may then be considered to be club goods or private
goods; exclusion mechanisms include copyright, patents, congestion pricing, and pay television.

Uncoordinated markets driven by self-interested parties may be unable to provide these goods. There is a
good deal of debate and literature on how to measure the significance of public goods problems in an
economy, and to identify the best remedies.

QNO7:- Why would you say Baumol's model of sales maximization is an alternative theory of the firm?
Which features of his model may be considered to support your viewpoints? Explain your
answer.

ANS: - 1. Baumol's Sales Maximisation Model (Pankaj Kumar)


2. Prof. Baumol, in his article on the theory of oligopoly, presented a managerial theory of the firm
based on sales maximisation. Assumptions: the theory is based on the following assumptions. There
is a single-period time horizon of the firm. The firm aims at maximising its total sales revenue in
the long run, subject to the profit constraint. The firm's minimum profit constraint is set competitively
in terms of the current market value of its shares. The firm is oligopolistic, with U-shaped cost curves
and a downward-sloping demand curve. Its total revenue and cost curves are also of the
conventional type.
3. Baumol's findings on oligopoly firms suggest that business firms are much more concerned about
the growth of their total sales than about profits. He gives a number of arguments to support his point of
view. A firm attaches great importance to the magnitude of sales and is much concerned about
declining sales. If sales are declining, banks, creditors and the capital market are not prepared to
provide finance to it. Its own distributors and dealers might stop taking interest in it. Consumers
might not buy its products because of a lack of popularity. The firm reduces its managerial and other
staff with a fall in sales. If the firm's sales are large, there are economies of scale, so the firm
expands and earns profits. Salaries of workers and management also depend on large sales.
4. By sales maximization Baumol means maximization of total revenue. It does not imply the
sale of large quantities of output, but refers to the increase in money sales. Sales can be
increased up to the point of profit maximization, where marginal cost equals marginal
revenue. If sales are increased beyond this point, money sales may increase at the expense of
profits. But the oligopolist firm wants its money sales to grow even though it earns only minimum
profits. Minimum profits are determined on the basis of the firm's need to maximize sales and also to
sustain the growth of sales; they are required either in the form of retained earnings or as new capital
from the market. The firm also needs minimum profits to finance future sales, to pay
dividends on the share capital and to meet other financial requirements. Thus minimum
profits serve as a constraint on the maximization of a firm's revenue. Maximum revenue will be
obtained only at the output at which the elasticity of demand is unity, i.e. at which MR is equal to
zero. This is the condition which replaces the MC = MR profit-maximization rule.
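The two output rules (MC = MR for profit maximization, MR = 0 for revenue maximization) can be illustrated numerically. The linear demand and cost functions below are hypothetical:

```python
# Sketch of Baumol's rule with a toy linear demand p = 20 - q and cost c(q) = 4q.
# Profit maximisation: MR = MC; sales (revenue) maximisation: MR = 0.

def price(q): return 20 - q
def revenue(q): return price(q) * q          # TR = 20q - q^2, so MR = 20 - 2q
def cost(q): return 4 * q                    # MC = 4
def profit(q): return revenue(q) - cost(q)

q_profit_max = (20 - 4) / 2                  # MR = MC  ->  q = 8
q_sales_max = 20 / 2                         # MR = 0   ->  q = 10 (unit elasticity)

print(q_profit_max, q_sales_max)             # 8.0 10.0
print(profit(q_profit_max), profit(q_sales_max))  # 64.0 60.0
```

As in the model, the sales-maximising output (10) exceeds the profit-maximising output (8) and carries a lower price; with a minimum-profit constraint below 60 the firm can produce the full revenue-maximising output, while a tighter constraint pulls output back toward 8.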
5. [Figure: Baumol's sales-maximisation model, plotting the total revenue (TR), total cost (TC) and total profit (TP) curves against output, with the profit-maximising output OQ and the sales-maximising output OK marked.]

6. In the figure, TC is the total cost curve and MP the minimum profit, or profit constraint, line. The firm
maximizes its profits at the OQ level of output, corresponding to the highest point B on the TP curve.
But the aim of the firm is to maximize sales rather than profits. The sales-maximization output is OK,
where the total revenue KL is at the highest point L of the TR curve. The sales-maximization output OK
is greater than the profit-maximization output OQ. But sales maximization is subject to the
minimum profit constraint.
7. If the minimum profit constraint is represented by the line MP, the output OK will not maximise sales,
because the minimum profits OM are not covered by the total profits KS. For sales maximization the
firm should produce the OD level of output, where minimum profits DC (= OM) are consistent with
DE amount of total revenue at the price DE/OD (total revenue / total output). Baumol's model of
sales maximization points out that the profit-maximization output OQ will be smaller than the
sales-maximization output OK, and its price higher than under sales maximization.
8. Model with Advertising: Baumol has further shown that the profit constraint under sales
maximization is also effective with advertising, which thereby increases the firm's revenue. This is
shown in the diagram below.
9. [Figure: Baumol's model with advertising, plotting TR, TC, the advertising cost line AdC and the total profit (TP) curve against advertising outlay, with the profit-maximising outlay OQ and the sales-maximising outlay OD marked.]
10. In the figure, expenditure on advertising is shown on the horizontal axis. TR is the total revenue
curve. The 45° line AdC is the advertisement cost curve. By adding a fixed amount of other costs
equal to OC to the AdC curve we get the total cost curve TC. Here production costs OC are assumed
independent of advertising costs. TP is the total profit curve, which is the difference between the TR
and TC curves. MP is the minimum profit constraint line. The profit-maximizing firm will spend
OQ on advertising and its total revenue will be OS (= QA).
11. On the other hand, given the profit constraint MP, the sales-maximization firm will spend OD
on advertising and earn OT (= DE) as total revenue. Thus the sales-maximisation firm spends
more on advertising (OD) than the profit-maximizing firm (OQ), OD > OQ, and also earns higher
revenue (DE) than the latter (QA), DE > QA, at the profit constraint MP. Thus it will always pay
the sales maximiser to increase his advertising outlay until he is stopped by the profit constraint.
12. Conclusion: the theory leads to the conclusion that the sales-revenue-maximizing firm will
produce at a higher level; will keep prices low; and will invest, as in
advertising, in such a manner that the demand for its product will increase.

QNO8:- Write short note on VNM utility function.

ANS: - In 1947, John von Neumann and Oskar Morgenstern exhibited four relatively modest axioms of
"rationality" such that any agent satisfying the axioms has a utility function. That is, they proved that an
agent is (VNM-)rational if and only if there exists a real-valued function u defined on possible outcomes
such that every preference of the agent is characterized by maximizing the expected value of u, which can
then be defined as the agent's VNM-utility (it is unique up to adding a constant and multiplying by a
positive scalar). No claim is made that the agent has a "conscious desire" to maximize u, only that u
exists.

The expected utility hypothesis is that rationality can be modeled as maximizing an expected value, which
given the theorem, can be summarized as "rationality is VNM-rationality".

VNM-utility is a decision utility in that it is used to describe decision preferences. It is related but not
equivalent to so-called E-utilities (experience utilities), notions of utility intended to measure happiness
such as that of Bentham's greatest happiness principle.

Setup


In the theorem, an individual agent is faced with options called lotteries. Given some mutually exclusive
outcomes, a lottery on them is a scenario where each outcome will happen with a given probability, all
probabilities summing to one. For example,

L = 0.25A + 0.75B

denotes a scenario where P(A) = 25% and P(B) = 75% (and exactly one of them will occur). More
generally, for a lottery with many possible outcomes Ai, we write

L = p1A1 + p2A2 + ... + pnAn,

with the sum of the pi equalling 1.

The outcomes in a lottery can themselves be lotteries between other outcomes, and the expanded
expression is considered an equivalent lottery: 0.5(0.5A + 0.5B) + 0.5C = 0.25A + 0.25B + 0.50C.
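The reduction of a compound lottery into an equivalent simple one can be sketched as a small mixing routine, reproducing the identity 0.5(0.5A + 0.5B) + 0.5C = 0.25A + 0.25B + 0.50C:

```python
# Sketch: reducing a compound lottery to an equivalent simple lottery.
# A lottery is represented as a dict mapping outcomes to probabilities.
from collections import defaultdict

def mix(p, lottery_1, lottery_2):
    """Return the lottery p*lottery_1 + (1 - p)*lottery_2 as a flat dict."""
    out = defaultdict(float)
    for outcome, prob in lottery_1.items():
        out[outcome] += p * prob
    for outcome, prob in lottery_2.items():
        out[outcome] += (1 - p) * prob
    return dict(out)

inner = mix(0.5, {"A": 1.0}, {"B": 1.0})     # 0.5A + 0.5B
flat = mix(0.5, inner, {"C": 1.0})           # 0.5(0.5A + 0.5B) + 0.5C
print(flat)                                  # {'A': 0.25, 'B': 0.25, 'C': 0.5}
```

The probabilities multiply through the stages, which is precisely why the expanded expression is treated as an equivalent lottery.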

We also declare that L = M if the agent is indifferent between L and M. This is not necessary, however,
and can be handled using a more explicit indifference relation instead; see Kreps (1988).

The axioms

The four axioms of VNM-rationality are then completeness, transitivity, continuity, and independence.

Completeness assumes that an individual has well-defined preferences:

Axiom 1 (Completeness): For any lotteries L, M, exactly one of the following holds: L ≺ M, L ≻ M, or
L ~ M (either M is preferred, L is preferred, or there is no preference).

Transitivity assumes that preference is consistent across any three options:

Axiom 2 (Transitivity): If L ≼ M and M ≼ N, then L ≼ N.

Continuity assumes that there is a "tipping point" between being better than and worse than a given
middle option:

Axiom 3 (Continuity): If L ≼ M ≼ N, then there exists a probability p ∈ [0, 1] such that
pL + (1 − p)N ~ M.

Instead of continuity, an alternative axiom can be assumed that does not involve a precise equality, called
the Archimedean property[3]. It says that any separation in preference can be maintained under a
sufficiently small deviation in probabilities:


Axiom 3′ (Archimedean property): If L ≺ M ≺ N, then there exists a probability ε ∈ (0, 1) such that
(1 − ε)L + εN ≺ M ≺ εL + (1 − ε)N.

Only one of (3) and (3′) need be assumed, and the other will be implied by the theorem.

Independence of irrelevant alternatives assumes that a preference holds independently of the possibility of
another outcome:

Axiom 4 (Independence): If L ≺ M, then for any lottery N and any p ∈ (0, 1],
pL + (1 − p)N ≺ pM + (1 − p)N.

The theorem

For any VNM-rational agent (i.e. satisfying axioms 1–4), there exists a function u assigning to each
outcome A a real number u(A) such that for any two lotteries,

L ≼ M if and only if Eu(L) ≤ Eu(M),

where Eu(L) denotes the expected value of u in L:

Eu(p1A1 + ... + pnAn) = p1u(A1) + ... + pnu(An).

As such, u can be uniquely determined (up to adding a constant and multiplying by a positive scalar) by
preferences between simple lotteries, meaning those of the form pA + (1 − p)B having only two
outcomes. Conversely, any agent acting to maximize the expectation of a function u will obey axioms
1–4. Such a function is called the agent's von Neumann–Morgenstern (VNM) utility.
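The theorem's ranking rule can be sketched in a few lines; the utility values assigned to the outcomes below are hypothetical:

```python
# Sketch: ranking lotteries by expected utility, per the VNM theorem.
# u assigns a (hypothetical) utility to each outcome.

u = {"A": 1.0, "B": 0.0, "C": 0.6}

def expected_utility(lottery):
    """Eu(L) = sum of p_i * u(A_i) over the lottery's outcomes."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

L = {"A": 0.25, "B": 0.75}        # the lottery 0.25A + 0.75B
M = {"C": 1.0}                    # outcome C for certain

# The agent weakly prefers M to L exactly when Eu(L) <= Eu(M).
print(expected_utility(L), expected_utility(M))    # 0.25 0.6
print(expected_utility(L) <= expected_utility(M))  # True
```

Adding a constant to every u value, or multiplying all of them by a positive scalar, leaves every such comparison unchanged, which is the sense in which u is unique only up to a positive affine transformation.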

Reaction

Von Neumann and Morgenstern anticipated surprise at the strength of their conclusion. But according to
them, the reason their utility function works is that it is constructed precisely to fill the role of something
whose expectation is maximized:

"Many economists will feel that we are assuming far too much ... Have we not shown too much? ... As far
as we can see, our postulates [are] plausible ... We have practically defined numerical utility as being that
thing for which the calculus of mathematical expectations is legitimate." (VNM 1953, §3.1.1, p. 16 and
§3.7.1, p. 28)

Thus, the content of the theorem is that the construction of u is possible, and they claim little about its
nature.

Consequences


Automatic consideration of risk aversion

It is often the case that a person, faced with real-world gambles with money, does not act to maximize the
expected value of their savings in dollars. For example, a person who only owns $1,000 may be reluctant
to risk it all for a 20% chance to win $10,000, even though

20% × $10,000 + 80% × $0 = $2,000 > 100% × $1,000.

However, if the person is VNM-rational, such facts are automatically accounted for in their utility
function u. In this example, we could conclude that

20% × u($10,000) + 80% × u($0) < 100% × u($1,000),

where the dollar amounts here really represent outcomes, the three possible situations the individual could
face. In particular, u can exhibit properties like u($1) + u($1) ≠ u($2) without contradicting VNM-
rationality at all. This leads to a quantitative theory of monetary risk aversion.
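A concave utility over wealth reproduces this reluctance. The square-root form below is one common illustrative choice, not something the theorem itself prescribes:

```python
# Sketch: a concave (risk-averse) utility over wealth declines the gamble
# described above. u(w) = sqrt(w) is a hypothetical illustrative choice.
import math

def u(wealth):
    return math.sqrt(wealth)

keep = u(1000)                           # keep the $1,000 for sure
gamble = 0.2 * u(10_000) + 0.8 * u(0)    # 20% chance of $10,000, else $0

print(round(keep, 2), round(gamble, 2))  # 31.62 20.0
print(gamble < keep)                     # True: the sure $1,000 is preferred
```

Even though the gamble has twice the expected dollar value, its expected utility (20.0) falls short of the sure thing's (about 31.62), which is exactly the pattern the paragraph above attributes to a VNM-rational but risk-averse agent.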

Implications for the expected utility hypothesis

In 1738, Daniel Bernoulli published a treatise in which he posits that rational behavior can be described as
maximizing the expectation of a function u, which in particular need not be monetary-valued, thus
accounting for risk aversion. This is the expected utility hypothesis. As stated, the hypothesis may appear
to be a bold claim. The aim of the expected utility theorem is to provide "modest conditions" (i.e. axioms)
describing when the expected utility hypothesis holds, which can be evaluated directly and intuitively:

"The axioms should not be too numerous, their system is to be as simple and transparent as possible, and
each axiom should have an immediate intuitive meaning by which its appropriateness may be judged
directly. In a situation like ours this last requirement is particularly vital, in spite of its vagueness: we
want to make an intuitive concept amenable to mathematical treatment and to see as clearly as possible
what hypotheses this requires." VNM 1953 §3.5.2, p. 25

As such, claims that the expected utility hypothesis does not characterize rationality must reject one of the
VNM axioms. A variety of generalized expected utility theories have arisen, most of which drop or relax
the independence axiom.

Implications for ethics and moral philosophy

Because the theorem assumes nothing about the nature of the possible outcomes of the gambles, they
could be morally significant events, for instance involving the life, death, sickness, or health of others. A
von Neumann–Morgenstern rational agent is capable of acting with great concern for such events,
sacrificing much personal wealth or well-being, and all of these actions will factor into the
construction/definition of the agent's VNM-utility function. In other words, both what is naturally
perceived as "personal gain", and what is naturally perceived as "altruism", are implicitly balanced in the
VNM-utility function of a VNM-rational individual. Therefore, the full range of agent-focussed to agent-
neutral behaviors are possible with various VNM-utility functions.


Distinctness from other notions of utility

Some utilitarian moral theories are concerned with quantities called the "total utility" and "average utility"
of collectives, and characterize morality in terms of favoring the utility or happiness of others with
disregard for one's own. These notions can be related to, but are distinct from, VNM-utility:

1) VNM-utility is a decision utility: it is that according to which one decides, and thus by
definition cannot be something which one disregards.

2) VNM-utility is not canonically additive across multiple individuals (see Limitations), so "total
VNM-utility" and "average VNM-utility" are not immediately meaningful (some sort of
normalization assumption is required).

The term E-utility for "experience utility" has been coined to refer to the types of "hedonistic" utility like
that of Bentham's greatest happiness principle. Since morality affects decisions, a VNM-rational agent's
morals will affect the definition of its own utility function (see above). Thus, the morality of a VNM-
rational agent can be characterized by correlation of the agent's VNM-utility with the VNM-utility, E-
utility, or "happiness" of others, among other means, but not by disregard for the agent's own VNM-
utility, a contradiction in terms.

Limitations

Nested gambling

Since if L and M are lotteries, then pL + (1 − p)M is simply "expanded out" and considered a lottery
itself, the VNM formalism ignores what may be experienced as "nested gambling". This is related to the
Ellsberg problem where people choose to avoid the perception of risks about risks. Von Neumann and
Morgenstern recognized this limitation:

"...concepts like a specific utility of gambling cannot be formulated free of contradiction on this level.
This may seem to be a paradoxical assertion. But anybody who has seriously tried to axiomatize that
elusive concept, will probably concur with it." VNM 1953 §3.7.1, p. 28.

Incomparability between agents

Since for any two VNM-agents X and Y, their VNM-utility functions u_X and u_Y are only determined up to
additive constants and positive multiplicative scalars, the theorem does not provide any canonical way to
compare the two. Hence expressions like u_X(L) + u_Y(L) and u_X(L) − u_Y(L) are not canonically defined,
nor are comparisons like u_X(L) < u_Y(L) canonically true or false. In particular, the aforementioned "total
VNM-utility" and "average VNM-utility" of a population are not canonically meaningful without
normalization assumptions.

Applicability to economics

The expected utility hypothesis, as applied to economics, has limited predictive accuracy, simply because
in practice, humans do not always behave VNM-rationally. This can be interpreted as evidence that

1) humans are not always rational, or

2) VNM-rationality is not an appropriate characterization of rationality, or

3) some combination of both.

QNO9:- Differentiate between Moral hazard and adverse selection.

ANS: - Adverse selection versus moral hazard

Adverse selection and moral hazard are both examples of market failure caused by asymmetric
information between buyers and sellers in a market. This article discusses the similarities and
differences between adverse selection and moral hazard.

Key difference: before versus after the deal

Adverse selection: asymmetry in information prior to the deal

Adverse selection occurs when the seller values the good more highly than the buyer, because the seller
has a better understanding of the value of the good. Due to this asymmetry of information, the seller is
unwilling to part with the good for any price lower than the value the seller knows it has. On the other
hand, the buyer, who is not sure of the value of the good, is unwilling to pay more than the expected value of
the good, which takes into account the possibility of getting a bad piece.

It is this asymmetry of information prior to the transaction that prevents the transaction from occurring. If
both the seller and the buyer were uncertain of the quality, they would be willing to trade the good based
on expected values. Similarly, if both the seller and the buyer were certain of the quality, they would be
willing to trade the good based on its actual value.
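A minimal numeric sketch of this unraveling (all numbers assumed: quality q uniform on [0, 1], sellers value a good at q, buyers at 1.5q) shows that at every positive price the buyer's expected value of the goods actually offered falls short of the price, so no trade occurs:

```python
# Hypothetical "market for lemons" sketch: quality q is uniform on [0, 1],
# sellers value a good at q, buyers at 1.5*q. Buyers cannot observe q.
def buyer_expected_value(price, n=10_000):
    # Only sellers with q <= price are willing to sell, so the average
    # quality actually on offer at this price is about price / 2.
    offered = [i / n for i in range(n + 1) if i / n <= price]
    avg_quality = sum(offered) / len(offered)
    return 1.5 * avg_quality

for price in (0.2, 0.5, 0.9):
    # At every positive price, the buyer's expected value (≈ 0.75 * price)
    # falls short of the price, so the market unravels: no trade occurs.
    assert buyer_expected_value(price) < price
```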

Moral hazard: asymmetry in information/inability to control behavior after the deal

Moral hazard is seen for services such as insurance and warranties. In these cases, after the deal is done,
one of the parties to the deal (in this case, the person purchasing the insurance or warranty) may be more
careless because he/she has the insurance, and thus does not need to pay the full cost of a damage. For
instance, a person possessing insurance against theft may be less careful about closing the windows when
leaving the house. Here, it is not the prior information that either party has, but the inability of the
insurance provider to control and monitor increased risk-taking behavior that creates the potential for
market failure.

Also, while in adverse selection it is usually the seller who possesses more information, in moral hazard it
is usually the buyer (of the insurance service) whose behavior cannot be monitored or controlled.

Examples of situations where adverse selection and moral hazard are related

Health insurance is an example of a service that suffers both from adverse selection and from moral
hazard, and often it is difficult to differentiate the two. Here are some examples:

The insured person may choose to conceal certain unhealthy habits or genetic traits that make the
insurance attractive for the person but unprofitable for the company. This is an example of
adverse selection: The person getting insured has more information about the quality of his or her
health than the insurance company.


After getting insured, the person is more careless about health. For instance, he/she may take
fewer dietary precautions, smoke or drink more, or indulge in physical activities dangerous to the
health. This is an example of moral hazard.

There is some fuzziness between the problem of concealing a habit prior to getting insured, and becoming
more reckless after getting insured.

Examples of situations where adverse selection occurs but moral hazard does not

In most situations that do not involve insurance, warranties, legal liabilities, renting services, or any form
of continued contract and obligation, moral hazard is unlikely to occur. On the other hand, adverse
selection can occur for any experience good, i.e., any good whose value is determined only after buying it
and using it.

For instance, when selling a used car, the seller does not need to worry about how the buyer will treat the
car after the deal is done, because the seller has no continued obligation to the buyer to ensure that the car
remains in good condition. However, the problem of adverse selection may still occur if buyers have no
easy way of evaluating the quality of the car without actually buying it.

Situations where moral hazard occurs also involve a somewhat different form of adverse selection

Any situation involving moral hazard also involves adverse selection to at least some extent. This is
because, as in the case of health insurance, the person who could indulge in potentially risk-taking
behavior may have prior information about his/her excessive risk-taking tendencies and this prior
information may have influenced the decision to purchase insurance. This makes insurance sellers set
overly cautious rates, and thus, the buyers who are actually less risk-prone end up not buying insurance.

However, this adverse selection differs from the more usual adverse selection seen in used-car markets. In
the adverse selection seen in insurance, it is the buyer who has more information, and it is this that makes
the buyer unlikely to purchase an insurance that is based on actuarial estimates made by the seller.

QNO10:-Write a short note on Hotelling's lemma.

ANS: - Hotelling's lemma is a result in microeconomics that relates the supply of a good to the profit of
the good's producer. It was first shown by Harold Hotelling, and is widely used in the theory of the firm.
The lemma is very simple, and can be stated:

Let y(p) be a firm's net supply function in terms of a certain good's price p. Then:

y(p) = ∂π(p)/∂p

for π(p) the profit function of the firm in terms of the good's price, assuming that p > 0 and that the
derivative exists.


The proof of the theorem stems from the fact that for a profit-maximizing firm, maximized profit is
π(p) = max_y (p·y − c(y)). At the optimal output y*(p) the first-order condition p = c′(y*(p)) holds, so
when π is differentiated with respect to p only the direct effect survives:

∂π(p)/∂p = y*(p); QED.

The proof is also a corollary of the envelope theorem.
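A quick numerical check (assumed cost function c(y) = y², so that π(p) = p²/4 and y*(p) = p/2; none of these numbers come from the text) confirms that the derivative of maximized profit with respect to price recovers the supply function:

```python
# Numeric check of Hotelling's lemma for a hypothetical firm with cost
# c(y) = y^2: profit pi(p) = max_y (p*y - y^2) = p^2/4, supply y*(p) = p/2.
def profit(p):
    # Maximize p*y - y^2 by brute-force grid search over output y in steps of 0.001.
    return max(p * y / 1000 - (y / 1000) ** 2 for y in range(0, 20_000))

p = 3.0
supply = p / 2                      # the known maximizer y*(p) = 1.5
h = 1e-3
dpi_dp = (profit(p + h) - profit(p - h)) / (2 * h)  # numerical derivative

assert abs(dpi_dp - supply) < 1e-2  # Hotelling: d(pi)/dp = y*(p)
```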

Shephard's lemma is a major result in microeconomics having applications in the theory of the firm and in
consumer choice.

The lemma states that if indifference curves of the expenditure or cost function are convex, then the cost-
minimizing point of a given good (i) with price p_i is unique. The idea is that a consumer will buy a
unique ideal amount of each item to minimize the price for obtaining a certain level of utility given the
price of goods in the market.

The lemma is named after Ronald Shephard who gave a proof using the distance formula in his book
Theory of Cost and Production Functions (Princeton University Press, 1953).

The equivalent result in the context of consumer theory was first derived by Lionel W. McKenzie in
1957. It states that the partial derivatives of the expenditure function with respect to the prices of goods
equal the Hicksian demand functions for the relevant goods. Similar results had already been derived by
John Hicks (1939) and Paul Samuelson (1947).

In consumer theory, Shephard's lemma states that the demand for a particular good i, for a given level of
utility u and given prices p, equals the derivative of the expenditure function with respect to the price of
the relevant good:

h_i(p, u) = ∂e(p, u)/∂p_i

where h_i(p, u) is the Hicksian demand for good i, e(p, u) is the expenditure function, and both functions
are in terms of prices (a vector p) and utility u.

Likewise, in the theory of the firm, the lemma gives a similar formulation for the conditional factor
demand for each input factor: the derivative of the cost function c(w, y) with respect to the factor price:

x_i(w, y) = ∂c(w, y)/∂w_i

where x_i(w, y) is the conditional factor demand for input i, c(w, y) is the cost function, and both functions
are in terms of factor prices (a vector w) and output y.

Although Shephard's original proof used the distance formula, modern proofs of the Shephard's lemma
use the envelope theorem.

Proof for the Differentiable Case


The proof is stated for the two-good case for ease of notation. The expenditure function e(p_1, p_2, u) is
the minimand of the constrained optimization problem characterized by the following Lagrangian:

L = p_1 x_1 + p_2 x_2 + λ(u − U(x_1, x_2))

By the envelope theorem the derivative of the minimand with respect to the parameter p_1
can be computed as such:

∂e(p_1, p_2, u)/∂p_1 = ∂L/∂p_1 = x_1*(p_1, p_2, u)

where x_1* is the minimizer (i.e. the Hicksian demand function for good 1). This completes the proof.

Application

Shephard's lemma gives a relationship between expenditure (or cost) functions and Hicksian demand. The
lemma can be re-expressed as Roy's identity, which gives a relationship between an indirect utility
function and a corresponding Marshallian demand function.
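As a numerical illustration (assumed utility U(x1, x2) = x1·x2, for which the expenditure-minimization problem has the closed forms e(p1, p2, u) = 2√(p1·p2·u) and h1(p1, p2, u) = √(p2·u/p1); these are worked-example choices, not from the text), the price derivative of expenditure matches the Hicksian demand:

```python
import math

# Numeric check of Shephard's lemma for the hypothetical utility
# U(x1, x2) = x1 * x2 at utility level u.
def expenditure(p1, p2, u):
    return 2 * math.sqrt(p1 * p2 * u)

def hicksian_1(p1, p2, u):
    return math.sqrt(p2 * u / p1)

p1, p2, u = 2.0, 3.0, 6.0
h = 1e-6
de_dp1 = (expenditure(p1 + h, p2, u) - expenditure(p1 - h, p2, u)) / (2 * h)

# Shephard's lemma: the price derivative of expenditure is Hicksian demand.
assert abs(de_dp1 - hicksian_1(p1, p2, u)) < 1e-5
```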

QNO11:- Differentiate between First and third degrees of price discrimination.

ANS: - Price discrimination or price differentiation exists when sales of identical goods or services are
transacted at different prices from the same provider. In a theoretical market with perfect information,
perfect substitutes, and no transaction costs or prohibition on secondary exchange (or re-selling) to
prevent arbitrage, price discrimination can only be a feature of monopolistic and oligopolistic markets,
where market power can be exercised. Otherwise, the moment the seller tries to sell the same good at
different prices, the buyer at the lower price can arbitrage by selling to the consumer buying at the higher
price but with a tiny discount. However, product heterogeneity, market frictions or high fixed costs
(which make marginal-cost pricing unsustainable in the long run) can allow for some degree of
differential pricing to different consumers, even in fully competitive retail or industrial markets. Price
discrimination also occurs when the same price is charged to customers which have different supply costs.

The effects of price discrimination on social efficiency are unclear; typically such behavior leads to lower
prices for some consumers and higher prices for others. Output can be expanded when price
discrimination is very efficient, but output can also decline when discrimination is more effective at
extracting surplus from high-valued users than expanding sales to low valued users. Even if output
remains constant, price discrimination can reduce efficiency by misallocating output among consumers.

Price discrimination requires market segmentation and some means to discourage discount customers
from becoming resellers and, by extension, competitors. This usually entails using one or more means of
preventing any resale, keeping the different price groups separate, making price comparisons difficult, or
restricting pricing information. The boundary set up by the marketer to keep segments separate are
referred to as a rate fence. Price discrimination is thus very common in services where resale is not
possible; an example is student discounts at museums. Price discrimination in intellectual property is also
enforced by law and by technology. In the market for DVDs, DVD players are designed - by law - with
chips to prevent an inexpensive copy of the DVD (for example legally purchased in India) from being
used in a higher price market (like the US). The Digital Millennium Copyright Act has provisions to
outlaw circumventing of such devices to protect the enhanced monopoly profits that copyright holders can
obtain from price discrimination against higher price market segments.

Price discrimination can also be seen where the requirement that goods be identical is relaxed. For
example, so-called "premium products" (including relatively simple products, such as cappuccino
compared to regular coffee) have a price differential that is not explained by the cost of production. Some
economists have argued that this is a form of price discrimination exercised by providing a means for
consumers to reveal their willingness to pay.

Types of price discrimination

First degree price discrimination

This type of price discrimination requires the monopoly seller of a good or service to know the absolute
maximum price (or reservation price) that every consumer is willing to pay. By knowing each consumer's
reservation price, the seller is able to capture the entire consumer surplus and transform it into
revenue. The seller also produces more output than he would under uniform monopoly pricing, which
means that there is no deadweight loss. Examples of where this might be
observed are in markets where consumers bid for tenders, though, in this case, the practice of collusive
tendering could reduce the market efficiency.
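A simple linear-demand example (all numbers assumed: inverse demand P = 10 − Q, constant marginal cost 2) makes the welfare comparison concrete: under uniform monopoly pricing some surplus is lost, while under perfect discrimination output doubles, the deadweight loss vanishes, and the seller captures the entire surplus:

```python
# Illustrative linear-demand example (assumed numbers): inverse demand
# P = a - b*Q with a = 10, b = 1, and constant marginal cost c = 2.
a, b, c = 10.0, 1.0, 2.0

# Uniform-price monopoly: set MR = a - 2bQ equal to c.
q_monopoly = (a - c) / (2 * b)            # 4.0
dwl_monopoly = (a - c) ** 2 / (8 * b)     # deadweight loss = 8.0

# Perfect (first-degree) discrimination: sell every unit down to P = MC.
q_perfect = (a - c) / b                   # 8.0 — the competitive quantity
producer_surplus = (a - c) ** 2 / (2 * b) # 32.0 — the seller takes all surplus
dwl_perfect = 0.0                         # no deadweight loss

assert q_perfect == 2 * q_monopoly
assert dwl_perfect < dwl_monopoly
```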

Second degree price discrimination

In second degree price discrimination, price varies according to quantity demanded. Larger quantities are
available at a lower unit price. This is particularly widespread in sales to industrial customers, where bulk
buyers enjoy higher discounts.

In second degree price discrimination, moreover, sellers are not able to differentiate between different
types of consumers. Thus, the suppliers will provide incentives for the consumers to differentiate
themselves according to preference. As above, quantity "discounts", or non-linear pricing, is a means by
which suppliers use consumer preference to distinguish classes of consumers. This allows the supplier to
set different prices to the different groups and capture a larger portion of the total market surplus.

In reality, different pricing may apply to differences in product quality as well as quantity. For example,
airlines often offer multiple classes of seats on flights, such as first class and economy class. This is a way
to differentiate consumers based on preference, and therefore allows the airline to capture more
consumer surplus.

Third degree price discrimination

In third degree price discrimination, price varies by attributes such as location or by customer segment, or
in the most extreme case, by the individual customer's identity; where the attribute in question is used as a
proxy for ability/willingness to pay.

In third degree price discrimination, moreover, the suppliers in a market where this type of
discrimination is exhibited are capable of differentiating between consumer classes. Examples of this
differentiation are student or senior discounts. For example, a student or a senior consumer will have a
different willingness to pay than an average consumer, where the reservation price is presumably lower
because of budget constraints. Thus, the supplier sets a lower price for that consumer because the student
or senior has more elastic demand (see the discussion of price elasticity of demand as it applies to
revenues under first degree price discrimination, above). The supplier is once again capable of capturing
more market surplus than would be possible without price discrimination.
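The segment pricing rule can be sketched with the inverse-elasticity (Lerner) condition, assuming constant-elasticity demand in each segment and a common marginal cost; the elasticity and cost numbers below are purely illustrative:

```python
# Third-degree pricing via the inverse-elasticity (Lerner) rule, assuming
# constant demand elasticity e_i in each segment and marginal cost mc:
#   P_i * (1 - 1/|e_i|) = mc   =>   P_i = mc / (1 - 1/|e_i|)
def segment_price(mc, elasticity):
    return mc / (1 - 1 / abs(elasticity))

mc = 4.0
p_students = segment_price(mc, elasticity=4.0)  # more elastic demand
p_general = segment_price(mc, elasticity=2.0)   # less elastic demand

# The more price-sensitive segment (students) is charged less.
assert p_students < p_general
```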

Note that it is not always advantageous to the company to price discriminate even if it is possible,
especially for second and third degree discrimination. In some circumstances, the demands of different
classes of consumers will encourage suppliers to ignore one or more classes and target entirely to the rest.
Whether it is profitable to price discriminate is determined by the specifics of a particular market.

Fourth degree price discrimination

In fourth degree price discrimination, prices are the same for different customers, but the costs to the
organization may vary. For example, one may buy a plane ticket but call ahead to order a vegetarian
meal, possibly costing the company more to provide, while the ticket costs the buyer no more. This is
also known as reverse price discrimination, as the effects are reflected on the producer.

QNO12:- Distinguish between pure strategy Nash equilibrium and mixed strategy equilibrium.
When would you use mixed strategy equilibrium?

ANS:- A player's strategy, in game theory, refers to one of the options he can choose in a setting where
the outcome depends not only on his own actions but on the action of others. A player's strategy will
determine the action the player will take at any stage of the game.

The strategy concept is sometimes (wrongly) confused with that of a move. A move is an action taken by
a player at some point during the play of a game (e.g., in chess, moving white's Bishop a2 to b3). A
strategy on the other hand is a complete algorithm for playing the game, telling a player what to do for
every possible situation throughout the game.

A strategy profile (sometimes called a strategy combination) is a set of strategies for each player which
fully specifies all actions in a game. A strategy profile must include one and only one strategy for every
player.

Strategy set

A player's strategy set defines what strategies are available for them to play.

A player has a finite strategy set if they have a number of discrete strategies available to them. For
instance, in a single game of Rock-paper-scissors, each player has the finite strategy set {rock, paper,
scissors}.

A strategy set is infinite otherwise. For instance, an auction with mandated bid increments may have an
infinite number of discrete strategies in the strategy set {$10, $20, $30, ...}. Alternatively, the Cake
cutting game has a bounded continuum of strategies in the strategy set {Cut anywhere between zero
percent and 100 percent of the cake}.

In a dynamic game, the strategy set consists of the possible rules a player could give to a robot or agent on
how to play the game. For instance, in the Ultimatum game, the strategy set for the second player would
consist of every possible rule for which offers to accept and which to reject.


In a Bayesian game, the strategy set is similar to that in a dynamic game. It consists of rules for what
action to take for any possible private information.

Choosing a strategy set

In applied game theory, the definition of the strategy sets is an important part of the art of making a game
simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem to
limit the strategy spaces, and ease the solution.

For instance, strictly speaking in the Ultimatum game a player can have strategies such as: Reject offers
of ($1, $3, $5, ..., $19), accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very
large strategy space and a somewhat difficult problem. A game theorist might instead believe they can
limit the strategy set to: {Reject any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}.

Pure and mixed strategies

A pure strategy provides a complete definition of how a player will play a game. In particular, it
determines the move a player will make for any situation he or she could face. A player's strategy set is
the set of pure strategies available to that player.

A mixed strategy is an assignment of a probability to each pure strategy. This allows for a player to
randomly select a pure strategy. Since probabilities are continuous, there are infinitely many mixed
strategies available to a player, even if their strategy set is finite.

Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular
pure strategy is selected with probability 1 and every other strategy with probability 0.

A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to
every pure strategy. (Totally mixed strategies are important for equilibrium refinement such as trembling
hand perfect equilibrium.)

Mixed strategy

Illustration

          A        B
  A     1, 1     0, 0
  B     0, 0     1, 1

Pure coordination game

Consider the payoff matrix above (known as a coordination game). Here one player chooses
the row and the other chooses a column. The row player receives the first payoff, the column player the
second. If row opts to play A with probability 1 (i.e. play A for sure), then he is said to be playing a pure
strategy. If column opts to flip a coin and play A if the coin lands heads and B if the coin lands tails, then
she is said to be playing a mixed strategy, and not a pure strategy.

Significance

In his famous paper, John Forbes Nash proved that there is an equilibrium for every finite game. One can
divide Nash equilibria into two types. Pure strategy Nash equilibria are Nash equilibria where all players
are playing pure strategies. Mixed strategy Nash equilibria are equilibria where at least one player is
playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have
pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure
strategies, see Matching pennies. However, many games do have pure strategy Nash equilibria (e.g. the
Coordination game, the Prisoner's dilemma, the Stag hunt). Further, games can have both pure strategy
and mixed strategy equilibria.
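Matching pennies, mentioned above, can be checked directly: a brute-force sketch (payoffs assumed in the standard ±1 form) confirms that no pure strategy profile is an equilibrium, while the 50/50 mixed strategy leaves each player indifferent:

```python
# Matching pennies (assumed payoffs): the row player ("matcher") gets +1 when
# both choose the same side, -1 otherwise; the column player gets the negation.
payoff_row = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

# 1) No pure-strategy Nash equilibrium: some player always wants to deviate.
def is_pure_nash(r, c):
    other_r = "T" if r == "H" else "H"
    other_c = "T" if c == "H" else "H"
    row_ok = payoff_row[(r, c)] >= payoff_row[(other_r, c)]
    col_ok = -payoff_row[(r, c)] >= -payoff_row[(r, other_c)]
    return row_ok and col_ok

assert not any(is_pure_nash(r, c) for r in "HT" for c in "HT")

# 2) Mixed equilibrium: if the column player plays H with probability 1/2,
# the row player is indifferent between H and T (and vice versa by symmetry).
q = 0.5
eu_H = q * payoff_row[("H", "H")] + (1 - q) * payoff_row[("H", "T")]
eu_T = q * payoff_row[("T", "H")] + (1 - q) * payoff_row[("T", "T")]
assert eu_H == eu_T
```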

A disputed meaning

During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively
problematic". Randomization, central in mixed strategies, lacks behavioral support. Seldom do people
make their choices following a lottery. This behavioral problem is compounded by the cognitive difficulty
that people are unable to generate random outcomes without the aid of a random or pseudo-random
generator.

In 1991, game theorist Ariel Rubinstein described alternative ways of understanding the concept. The
first, due to Harsanyi (1973), is called purification, and supposes that the mixed strategies interpretation
merely reflects our lack of knowledge of the players' information and decision-making process.
Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous
factors. However, it is unsatisfying to have results that hang on unspecified factors.

A second interpretation imagines the game players standing for a large population of agents. Each of the
agents chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy.
The mixed strategy hence represents the distribution of pure strategies chosen by each population.
However, this does not provide any justification for the case when players are individual agents.

Later, Aumann and Brandenburger (1995), re-interpreted Nash equilibrium as an equilibrium in beliefs,
rather than actions. For instance, in Rock-paper-scissors an equilibrium in beliefs would have each player
believing the other was equally likely to play each strategy. This interpretation weakens the predictive
power of Nash equilibrium, however, since it is possible in such an equilibrium for each player to actually
play a pure strategy of Rock.

Ever since, game theorists' attitude towards mixed-strategy-based results has been ambivalent. Mixed
strategies are still widely used for their capacity to provide Nash equilibria in games where no equilibrium
in pure strategies exists, but the model does not specify why and how players randomize their decisions.

Behavior strategy

While a mixed strategy assigns a probability distribution over pure strategies, a behavior strategy assigns
at each node a probability distribution over the set of possible actions. While the two concepts are very
closely related in the context of normal form games, they have very different implications for extensive
form games. Roughly, a mixed strategy randomly chooses a deterministic path through the game tree,
while a behavior strategy can be seen as a stochastic path.

The relationship between mixed and behavior strategies is the subject of Kuhn's Theorem. The result
establishes that in any finite extensive-form game with perfect recall, for any player and any mixed
strategy, there exists a behavior strategy that, against all profiles of strategies (of other players), induces
the same distribution over terminal nodes as the mixed strategy does. The converse is also true.

A famous example of why perfect recall is required for the equivalence is given by Piccione and
Rubinstein (1997) with their Absent-Minded Driver game.

QNO13:- Write short note on Envelope theorem.

ANS: - The envelope theorem is a theorem about optimization problems (max & min) in
microeconomics. It may be used to prove Hotelling's lemma, Shephard's lemma, and Roy's identity. It
also allows for easier computation of comparative statics in generalized economic models.

The theorem exists in two versions, a regular version (unconstrained optimization) and a generalized
version (constrained optimization). The regular version can be obtained from the general version because
unconstrained optimization is just the special case of constrained optimization with no constraints (or
constraints that are always satisfied, i.e. constraints that are identities).

The theorem gets its name from the fact that it shows that a less constrained maximization (or
minimization) problem (where some parameters are turned into variables) is the upper (or lower for min)
envelope of the original problem. For example, see cost minimization, and compare the long-run (less
constrained) and short-run (more constrained, since some factors of production are fixed) minimization
problems.

For the theorem to hold, the functions being dealt with must have certain well-behaved properties.
Specifically, the correspondence mapping parameter values to optimal choices must be differentiable;
its being single-valued (and hence a function) is a necessary but not sufficient condition for this.

The theorem is described below. Note that bold face represents a vector.

Envelope theorem

A curve in a two-dimensional space is best represented by parametric equations like x(c) and y(c). The
family of curves can be represented in the form g(x, y, c) = 0, where c is the parameter. Generally,
the envelope theorem involves one parameter, but there can be more than one parameter involved as well.

The envelope of a family of curves g(x,y,c) = 0 is a curve such that at each point on the curve there is
some member of the family that touches that particular point tangentially. This forms a curve or surface
that is tangential to every curve in the family of curves forming an envelope.

Consider an arbitrary maximization (or minimization) problem where the objective function f(x, r)
depends on some parameters r:

V(r) = max_x f(x, r)


The function V(r) is the problem's optimal-value function: it gives the maximized (or minimized)
value of the objective function as a function of its parameters r.

Let x*(r) be the (arg max) value of x, expressed in terms of the parameters, that solves the optimization
problem, so that V(r) = f(x*(r), r). The envelope theorem tells us how V(r) changes as a
parameter r_i changes, namely:

∂V(r)/∂r_i = ∂f(x, r)/∂r_i evaluated at x = x*(r)

That is, the derivative of V(r) with respect to r_i is given by the partial derivative of f(x, r) with
respect to r_i, holding x fixed, and then evaluating at the optimal choice x = x*(r).
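A numeric check (assumed objective f(x, r) = −x² + rx, so that x*(r) = r/2 and V(r) = r²/4; the objective is a worked-example choice) shows the total derivative of the value function agreeing with the partial derivative of the objective at the optimum:

```python
# Numeric illustration of the envelope theorem with the assumed objective
# f(x, r) = -x**2 + r*x: the maximizer is x*(r) = r/2, so V(r) = r**2/4 and
# dV/dr = r/2, which equals the partial df/dr = x evaluated at x = x*(r).
def f(x, r):
    return -x * x + r * x

def V(r):
    x_star = r / 2
    return f(x_star, r)

r, h = 3.0, 1e-6
dV_dr = (V(r + h) - V(r - h)) / (2 * h)   # total derivative of the value
df_dr_at_opt = r / 2                      # partial df/dr = x, at x = x*(r)

assert abs(dV_dr - df_dr_at_opt) < 1e-6
```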

General envelope theorem

There also exists a version of the theorem, called the general envelope theorem, used in constrained
optimization, which relates the partial derivatives of the optimal-value function to the partial
derivatives of the Lagrangian function.

We are considering the following optimization problem in formulating the theorem (max may be replaced
by min, and all results still hold):

V(r) = max_x f(x, r)  subject to  g(x, r) = 0

which gives the Lagrangian function:

L(x, λ, r) = f(x, r) + λ · g(x, r)

where λ · g(x, r) is the dot product of the vector of Lagrange multipliers λ with the vector of constraint
functions g(x, r).

Then the general envelope theorem is:

∂V(r)/∂r_i = ∂L(x, λ, r)/∂r_i evaluated at x = x*(r), λ = λ(r)


Note that the Lagrange multipliers are treated as constants during differentiation of the Lagrangian
function, then their values as functions of the parameters are substituted in afterwards.
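The constrained case can be illustrated the same way. Below is a small sketch under an assumed example, maximizing x*y subject to x + y = a, where the solution x = y = a/2 and the multiplier λ = a/2 are known in closed form:

```python
# Sketch of the general envelope theorem for an assumed constrained problem:
# maximize f(x, y) = x*y subject to x + y = a.
# Lagrangian: L = x*y + lam*(a - x - y); the theorem says dV/da = dL/da = lam.

def V(a):
    x = y = a / 2          # optimal choices from the first-order conditions
    return x * y

a, h = 4.0, 1e-6
lam = a / 2                 # Lagrange multiplier at the optimum
dV_da = (V(a + h) - V(a - h)) / (2 * h)

print(dV_da, lam)           # both approximately 2.0
assert abs(dV_da - lam) < 1e-4
```

The multiplier is held constant during the differentiation, exactly as the note above prescribes.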

Envelope theorem in generalized calculus

In the calculus of variations, the envelope theorem relates evolutes to single paths. This was first proved
by Jean Gaston Darboux and Ernst Zermelo (1894) and Adolf Kneser (1898). The theorem can be stated
as follows:

"When a single-parameter family of extremal paths from a fixed point O has an envelope, the integral
from the fixed point to any point A on the envelope equals the integral from the fixed point to any second
point B on the envelope plus the integral along the envelope to the first point, J_OA = J_OB + J_BA."

QNO14:- Why is there a social cost to monopoly power? Suppose a Production Process gives rise to
negative externalities, would your answer on social cost of monopoly change? Explain.
ANS:- A monopoly exists when a specific person or enterprise is the only supplier of a particular
commodity (this contrasts with a monopsony which relates to a single entity's control of a market to
purchase a good or service, and with oligopoly which consists of a few entities dominating an industry).
Monopolies are thus characterized by a lack of economic competition to produce the good or service and
a lack of viable substitute goods. The verb "monopolize" refers to the process by which a company gains
the ability to raise prices or exclude competitors. In economics, a monopoly is a single seller. In law, a
monopoly is a business entity that has significant market power, that is, the power to charge high prices.
Although monopolies may be big businesses, size is not a characteristic of a monopoly. A small business
may still have the power to raise prices in a small industry (or market).

A general equilibrium analysis of monopoly power is proposed as an alternative to the partial equilibrium
analyses of monopolization common to most antitrust texts. This analysis introduces the notion of a cost
minimizing market equilibrium. The empirical implications of this equilibrium concept for antitrust
policy are derived in terms of a family of equilibrium inequalities over market data from observations on a
market economy with competitive factor markets. The social cost of monopoly power is measured using
Debreu's coefficient of resource utilization. That is, we propose Pareto optimality as the ultimate
objective of antitrust policy.

Market structures

In economics, the idea of monopoly is important for the study of market structures, which directly
concerns normative aspects of economic competition, and provides the basis for topics such as industrial
organization and economics of regulation. There are four basic types of market structures by traditional
economic analysis: perfect competition, monopolistic competition, oligopoly and monopoly. A monopoly
is a market structure in which a single supplier produces and sells a given product. If there is a single
seller in a certain industry and there are not any close substitutes for the product, then the market structure
is that of a "pure monopoly". Sometimes, there are many sellers in an industry and/or there exist many
close substitutes for the goods being produced, but nevertheless companies retain some market power.
This is termed monopolistic competition, whereas by oligopoly the companies interact strategically.

In general, the main results from this theory compare price-fixing methods across market structures,
analyze the effect of a certain structure on welfare, and vary technological/demand assumptions in order
to assess the consequences for an abstract model of society. Most economic textbooks follow the practice
of carefully explaining the perfect competition model, mainly because of its usefulness for understanding
"departures" from it (the so-called imperfect competition models).

The boundaries of what constitutes a market and what doesn't are relevant distinctions to make in
economic analysis. In a general equilibrium context, a good is a specific concept entangling geographical
and time-related characteristics (grapes sold during October 2009 in Moscow is a different good from
grapes sold during October 2009 in New York). Most studies of market structure relax their definition of
a good a little, allowing for more flexibility in the identification of substitute goods. Therefore, one
can find an economic analysis of the market for grapes in Russia, for example, which is not a market in the
strict sense of general equilibrium theory.

Characteristics

Profit Maximizer: Maximizes profits.
Price Maker: Decides the price of the good or product to be sold.
High Barriers to Entry: Other sellers are unable to enter the market of the monopoly.
Single seller: In a monopoly, there is one seller of the good that produces all the output.
Therefore, the whole market is being served by a single company, and for practical purposes, the
company is the same as the industry.
Price Discrimination: A monopolist can change the price and quantity of the product. He sells
larger quantities at a lower price in a very elastic market and smaller quantities at a higher
price in a less elastic market.

Sources of monopoly power

Monopolies derive their market power from barriers to entry: circumstances that prevent or greatly
impede a potential competitor's ability to compete in a market. There are three major types of barriers to
entry: economic, legal and deliberate.

Economic barriers: Economic barriers include economies of scale, capital requirements, cost
advantages and technological superiority.

Economies of scale: Monopolies are characterized by decreasing costs for a relatively large range
of production. Decreasing costs coupled with large initial costs give monopolies an advantage
over would-be competitors. Monopolies are often in a position to reduce prices below a new
entrant's operating costs and thereby prevent them from continuing to compete. Furthermore, the
size of the industry relative to the minimum efficient scale may limit the number of companies
that can effectively compete within the industry. If for example the industry is large enough to
support one company of minimum efficient scale then other companies entering the industry will
operate at a size that is less than MES, meaning that these companies cannot produce at an
average cost that is competitive with the dominant company. Finally, if long-term average cost is
constantly decreasing, the least cost method to provide a good or service is by a single company.
Capital requirements: Production processes that require large investments of capital, or large
research and development costs or substantial sunk costs limit the number of companies in an
industry. Large fixed costs also make it difficult for a small company to enter an industry and
expand.
Technological superiority: A monopoly may be better able to acquire, integrate and use the best
possible technology in producing its goods while entrants do not have the size or finances to use
the best available technology. One large company can sometimes produce goods cheaper than
several small companies.
No substitute goods: A monopoly sells a good for which there is no close substitute. The absence
of substitutes makes the demand for the good relatively inelastic, enabling monopolies to extract
positive profits.
Control of natural resources: A prime source of monopoly power is the control of resources that
are critical to the production of a final good.
Network externalities: The use of a product by a person can affect the value of that product to
other people. This is the network effect. There is a direct relationship between the proportion of
people using a product and the demand for that product. In other words the more people who are
using a product the greater the probability of any individual starting to use the product. This
effect accounts for fads and fashion trends. It also can play a crucial role in the development or
acquisition of market power. The most famous current example is the market dominance of the
Microsoft operating system in personal computers.

Legal barriers: Legal rights can provide opportunity to monopolise the market of a good.
Intellectual property rights, including patents and copyrights, give a monopolist exclusive control
of the production and selling of certain goods. Property rights may give a company exclusive
control of the materials necessary to produce a good.
Deliberate actions: A company wanting to monopolise a market may engage in various types of
deliberate action to exclude competitors or eliminate competition. Such actions include collusion,
lobbying governmental authorities, and force (see anti-competitive practices).

In addition to barriers to entry and competition, barriers to exit may be a source of market power. Barriers
to exit are market conditions that make it difficult or expensive for a company to end its involvement with
a market. Great liquidation costs are a primary barrier for exiting. Market exit and shutdown are separate
events. The decision whether to shut down or operate is not affected by exit barriers. A company will shut
down if price falls below minimum average variable costs.

QNO15:- Differentiate between ISO cost and ISO grant functions.

ANS:- In economics, an isocost line shows all combinations of inputs which cost the same total
amount. Although similar to the budget constraint in consumer theory, the use of the isocost line
pertains to cost-minimization in production, as opposed to utility-maximization. For the two production
inputs labour and capital, with fixed unit costs of the inputs, the equation of the isocost line is

C = wL + rK

where w represents the wage rate of labour, r represents the rental rate of capital, K is the amount of
capital used, L is the amount of labour used, and C is the total cost of acquiring those quantities of the two
inputs.

The absolute value of the slope of the isocost line, with capital plotted vertically and labour plotted
horizontally, equals the ratio of unit costs of labour and capital. The slope is dK/dL = −w/r.


The isocost line is combined with the isoquant map to determine the optimal production point at any
given level of output. Specifically, the point of tangency between any isoquant and an isocost line gives
the lowest-cost combination of inputs that can produce the level of output associated with that isoquant.
Equivalently, it gives the maximum level of output that can be produced for a given total cost of inputs. A
line joining tangency points of isoquants and isocosts (with input prices held constant) is called the
expansion path.
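The tangency condition can be illustrated numerically. The sketch below assumes a hypothetical Cobb-Douglas technology Q = L^0.5 * K^0.5 and input prices w and r (all values are my own), and confirms by brute force that the tangency bundle is the cheapest one on the isoquant:

```python
# Cost minimization at the isoquant/isocost tangency (assumed technology).
# At the tangency the isocost slope w/r equals the MRTS, which here is K/L.
w, r, Q = 4.0, 1.0, 10.0

# Closed form from the tangency condition K/L = w/r together with the isoquant:
L_opt = Q * (r / w) ** 0.5
K_opt = Q * (w / r) ** 0.5
cost_opt = w * L_opt + r * K_opt

# Brute-force check: scan input bundles on the isoquant and keep the cheapest.
best = min(
    (w * L + r * (Q**2 / L), L)          # K = Q**2 / L keeps output at Q
    for L in [i / 100 for i in range(100, 2001)]
)

print(round(cost_opt, 2), round(best[0], 2))   # both approximately 40.0
assert abs(cost_opt - best[0]) < 0.01
assert abs(best[1] - L_opt) < 0.05
```

The grid search lands on the same (L, K) as the tangency condition, which is the point the expansion path traces out as the output target varies.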

Incentive stock options (ISOs) are a type of employee stock option that can be granted only to employees
and confer a U.S. tax benefit. ISOs are also sometimes referred to as incentive share options or qualified
stock options by the IRS.

The tax benefit is that on exercise the individual does not have to pay ordinary income tax (nor
employment taxes) on the difference between the exercise price and the fair market value of the shares
issued (however, the holder may have to pay U.S. alternative minimum tax instead). Instead, if the shares
are held for 1 year from the date of exercise and 2 years from the date of grant, then the profit (if any)
made on sale of the shares is taxed as long-term capital gain. Long-term capital gain is taxed in the U.S. at
lower rates than ordinary income.
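The two holding-period clocks described above can be expressed as a small helper. This is a hypothetical illustration of the rule as summarized here, not tax advice; the function name and its naive year arithmetic (it ignores leap-day edge cases) are my own:

```python
# Hypothetical helper: checks the ISO holding-period rule described above --
# shares must be held at least 1 year from exercise AND 2 years from grant
# for the gain to qualify for long-term capital-gain treatment.
from datetime import date

def is_qualifying_disposition(grant: date, exercise: date, sale: date) -> bool:
    # Naive year arithmetic; a Feb 29 grant/exercise date would need handling.
    held_from_exercise = sale >= date(exercise.year + 1, exercise.month, exercise.day)
    held_from_grant = sale >= date(grant.year + 2, grant.month, grant.day)
    return held_from_exercise and held_from_grant

print(is_qualifying_disposition(date(2020, 1, 15), date(2021, 6, 1), date(2022, 7, 1)))   # True
print(is_qualifying_disposition(date(2020, 1, 15), date(2021, 6, 1), date(2022, 1, 20)))  # False: < 1 yr from exercise
```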

Although ISOs have more favorable tax treatment than non-ISOs (aka non-statutory stock option (NSO)
or non-qualified stock option (NQO)), they also require the holder to take on more risk by having to hold
onto the stock for a longer period of time if the holder is to receive optimal tax treatment. However, even
if the holder disposes of the stock within a year, it is possible that there will still be marginal tax deferral
value (as compared to NQOs) if the holding period, though less than a year, straddles the ending of the
taxpayer's taxable reporting period.

Note further that an employer generally does not claim a corporate income tax deduction (which would be
in an amount equal to the amount of income recognized by the employee) upon the exercise of its
employee's ISO, unless the employee does not meet the holding-period requirements. But see Coughlan,
Section 174 R&E Deduction Upon Statutory Stock Option Exercise, 58 Tax Law. 435 (2005). With
NQSOs, on the other hand, the employer is always eligible to claim a deduction upon its employee's
exercise of the NQSO.

Additionally, there are several other restrictions which have to be met (by the employer or employee) in
order to qualify the compensatory stock option as an ISO. For a stock option to qualify as ISO and thus
receive special tax treatment under Section 421(a) of the Internal Revenue Code (the "Code"), it must
meet the requirements of Section 422 of the Code when granted and at all times beginning from the grant
until its exercise. The requirements include:

The option may be granted only to an employee (grants to non-employee directors or independent
contractors are not permitted), who must exercise the option while he/she is an employee or no
later than three (3) months after termination of employment (unless the option holder is disabled,
in which case this three-month period is extended to one year. In case of death the option can be
exercised by the legal heirs of the deceased until the expiration date).

The option must be granted under a written plan document specifying the total number of shares
that may be issued and the employees who are eligible to receive the options. The plan must be
approved by the stockholders within 12 months before or after plan adoption.

Page 26
MICROECONOMICS ANALYSIS

Each option must be granted under an ISO agreement, which must be written and must list the
restrictions placed on exercising the ISO. Each option must set forth an offer to sell the stock at
the option price and the period of time during which the option will remain open.

The option must be granted within 10 years of the earlier of adoption or shareholder approval,
and the option must be exercisable only within 10 years of grant.

The option exercise price must equal or exceed the fair market value of the underlying stock at
the time of grant.

The employee must not, at the time of grant, own stock representing more than 10% of voting
power of all stock outstanding, unless the option exercise price is at least 110% of the fair market
value and the option expires no later than five (5) years from the time of the grant.

The ISO agreement must specifically state that the ISO cannot be transferred by the option holder
other than by will or by the laws of descent and that the option cannot be exercised by anyone
other than the option holder.

The aggregate fair market value (determined as of the grant date) of stock bought by exercising
ISOs that are exercisable for the first time cannot exceed $100,000 in a calendar year. To the
extent it does, Code section 422(d) provides that such options are treated as non-qualified stock
options.

QNO16:- Differentiate between Public goods and merit goods.

ANS:- Public goods are defined as products where, for any given output, consumption by additional
consumers does not reduce the quantity consumed by existing consumers. There are very few absolutely
public goods, but common examples include law, parks, street-lighting, defence etc. As there is no
marginal cost in producing the public goods, it is generally argued that they must be provided free of
charge, because otherwise the people who benefit less than the cost of using the public good, will not use
it. That will lead to a loss of welfare. Also the goods are mostly non-excludable, that means that if once
provided everybody can use them, which when charged will lead to "free-riding". So these goods will not
be provided by free markets as there is no way to charge for the usage, the solution is, that state must
provide these goods and finance them from taxes collected from everybody.

Merit goods on the other hand are products generally not distributed by means of the price system, but
based on merit or need, because people although having perfect knowledge would buy the wrong amount
of them. These goods can be supplied by free market, but not on the right quantity. Merit goods are, for
example, education and to some extent the health-care. They are provided by state as "good for you".

QNO17:- Differentiate between Cobb-Douglas and CES production functions.

ANS:- In economics, the Cobb–Douglas functional form of production functions is widely used to
represent the relationship between output and two inputs. The Cobb–Douglas form was developed and tested
against statistical evidence by Charles Cobb and Paul Douglas during 1900–1947.

Formulation


In its most standard form for production of a single good with two factors, the function is

Y = A L^α K^β
where:

Y = total production (the monetary value of all goods produced in a year)
L = labor input (the total number of person-hours worked in a year)
K = capital input (the monetary worth of all machinery, equipment, and buildings)
A = total factor productivity
α and β are the output elasticities of labor and capital, respectively. These values are constants
determined by available technology.

Output elasticity measures the responsiveness of output to a change in levels of either labor or capital
used in production, ceteris paribus. For example, if α = 0.15, a 1% increase in labor would lead to
approximately a 0.15% increase in output.

Further, if

α + β = 1,

the production function has constant returns to scale: doubling capital K and labor L will also double
output Y. If

α + β < 1,

returns to scale are decreasing, and if

α + β > 1,

returns to scale are increasing. Assuming perfect competition and α + β = 1, α and β can be shown to be
labor and capital's share of output.
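Both the elasticity interpretation and the returns-to-scale property can be verified numerically. The sketch below uses assumed parameter values A = 1, alpha = 0.15, beta = 0.85 (so alpha + beta = 1):

```python
# Cobb-Douglas sketch with assumed parameters: Y = A * L**alpha * K**beta.
A, alpha, beta = 1.0, 0.15, 0.85      # alpha + beta = 1: constant returns to scale

def Y(L, K):
    return A * L**alpha * K**beta

L, K = 100.0, 50.0
base = Y(L, K)

# A 1% increase in labor raises output by approximately alpha = 0.15 percent.
pct_change = (Y(1.01 * L, K) / base - 1) * 100
print(round(pct_change, 3))                    # approximately 0.149

# Doubling both inputs doubles output when alpha + beta = 1.
print(round(Y(2 * L, 2 * K) / base, 6))        # approximately 2.0
```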

Cobb and Douglas were influenced by statistical evidence that appeared to show that labor and capital
shares of total output were constant over time in developed countries; they explained this by statistically
fitting a least-squares regression of their production function. There is now doubt over whether constancy
over time exists.

In economics, constant elasticity of substitution (CES) is a property of some production functions and
utility functions.

More precisely, it refers to a particular type of aggregator function which combines two or more types of
consumption, or two or more types of productive inputs into an aggregate quantity. This aggregator
function exhibits constant elasticity of substitution.

CES production function


The CES production function is a type of production function that displays constant elasticity of
substitution. In other words, the production technology has a constant percentage change in factor (e.g.
labour and capital) proportions due to a percentage change in marginal rate of technical substitution. The
two-factor (capital, labor) CES production function introduced by Solow and later made popular by
Arrow, Chenery, Minhas, and Solow is:

Q = F (a K^ρ + (1 − a) L^ρ)^(1/ρ)

where

Q = output
F = factor productivity
a = share parameter
K, L = primary production factors (capital and labor)
σ = 1/(1 − ρ) = elasticity of substitution.

As its name suggests, the CES production function exhibits constant elasticity of substitution between
capital and labor. Leontief, linear and Cobb–Douglas production functions are special cases of the CES
production function. That is, if ρ = 1 we have a linear (perfect substitutes) function; as ρ approaches
zero, in the limit we get the Cobb–Douglas function; and as ρ approaches negative infinity we get the
Leontief function. The general form of the CES production function is:

Q = F (Σ_i a_i X_i^ρ)^(1/ρ)

where

Q = output
F = factor productivity
a_i = share parameter of input i, with Σ_i a_i = 1
X_i = production factors (i = 1, 2, ..., n)
σ = 1/(1 − ρ) = elasticity of substitution.
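The limiting cases mentioned above can be checked numerically. The sketch below uses assumed values for F, a, K and L; note that convergence to the Leontief limit is slow, and floating-point underflow prevents taking ρ much below −500:

```python
# Numerical sketch of the CES limits with assumed parameters:
# Q = F * (a*K**rho + (1-a)*L**rho)**(1/rho)
F, a, K, L = 1.0, 0.3, 4.0, 9.0

def ces(rho):
    return F * (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

cobb_douglas = F * K**a * L**(1 - a)      # limit as rho -> 0
leontief = F * min(K, L)                  # limit as rho -> -infinity

print(round(ces(1e-6), 4), round(cobb_douglas, 4))   # nearly equal (~7.06)
print(round(ces(-500), 3), leontief)                 # ~4.01 vs 4.0: near the Leontief limit
```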

Extending the CES (Solow) form to accommodate multiple factors of production creates some problems,
however. There is no completely general way to do this. Uzawa showed the only possible n-factor
production functions (n>2) with constant partial elasticities of substitution require either that all
elasticities between pairs of factors be identical, or if any differ, these all must equal each other and all
remaining elasticities must be unity. This is true for any production function. This means the use of the
CES form for more than 2 factors will generally mean that there is not constant elasticity of substitution
among all factors.

Nested CES functions are commonly found in partial/general equilibrium models. Different nests (levels)
allow for the introduction of the appropriate elasticity of substitution.

The CES is a neoclassical production function.

CES utility function

The same functional form arises as a utility function in consumer theory. For example, if there exist n
types of consumption goods c_i, then aggregate consumption C could be defined using the CES
aggregator:

C = (Σ_i a_i^(1/σ) c_i^((σ−1)/σ))^(σ/(σ−1))

Here again, the coefficients a_i are share parameters, and σ is the elasticity of substitution. Therefore the
consumption goods are perfect substitutes when σ approaches infinity and perfect complements when σ
approaches zero. The CES aggregator is also sometimes called the Armington aggregator, which was
discussed by Armington (1969).

A CES utility function is one of the cases considered by Avinash Dixit and Joseph Stiglitz in their study
of optimal product diversity in a context of monopolistic competition.

QNO18:- Present and explain Slutsky's theorem: (a) graphically and (b) mathematically.

ANS:- Evgeny "Eugen" Evgenievich Slutsky (7 April [O.S. 23 February] 1880 – 10 March 1948) was a
Russian/Soviet mathematical statistician, economist and political economist.

Slutsky's work in economics

He is principally known for work in deriving the relationships embodied in the very well known Slutsky
equation which is widely used in microeconomic consumer theory for separating the substitution effect
and the income effect of a price change on the total quantity of a good demanded following a price
change in that good, or in a related good that may have a cross-price effect on the original good quantity.
There are many Slutsky analogs in producer theory.
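To connect this to the question, the Slutsky equation dx/dp = dh/dp − x·dx/dm (substitution effect minus income effect) can be verified numerically. The example below is mine, not from the text: it assumes Cobb-Douglas utility u(x, y) = (x*y)**0.5, whose Marshallian and Hicksian demands are known in closed form:

```python
# Numerical check of the Slutsky decomposition for assumed Cobb-Douglas utility.
#   dx/dpx = dh/dpx - x * dx/dm

def marshallian_x(px, py, m):
    return m / (2 * px)                    # demand from maximizing u s.t. the budget

def hicksian_x(px, py, u):
    return u * (py / px) ** 0.5            # compensated (Hicksian) demand

px, py, m, h = 1.0, 1.0, 2.0, 1e-6
x = marshallian_x(px, py, m)               # 1.0
y = marshallian_x(py, px, m)               # y demand is symmetric
u = (x * y) ** 0.5                         # utility at the optimum

dx_dpx = (marshallian_x(px + h, py, m) - marshallian_x(px - h, py, m)) / (2 * h)
dh_dpx = (hicksian_x(px + h, py, u) - hicksian_x(px - h, py, u)) / (2 * h)
dx_dm = (marshallian_x(px, py, m + h) - marshallian_x(px, py, m - h)) / (2 * h)

print(round(dx_dpx, 4), round(dh_dpx - x * dx_dm, 4))   # both approximately -1.0
assert abs(dx_dpx - (dh_dpx - x * dx_dm)) < 1e-4
```

Here the total price effect (−1.0) splits into a substitution effect of −0.5 and an income effect of −0.5, which is the decomposition the Slutsky equation describes.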

He is less well known by Western economists than some of his contemporaries, due to his own changing
intellectual interests as well as external factors forced upon him after the Bolshevik Revolution in 1917.
His seminal paper in Economics, and some argue his last paper in Economics rather than probability
theory, was published in 1915 (Sulla teoria del bilancio del consumatore). Paul Samuelson noted that until
1936, he had been entirely unaware of Slutsky's 1915 "masterpiece" due to World War I and the paper's
Italian language publication. R. G. D. Allen did the most to propagate Slutsky's work on consumer theory
in published papers in 1936 and 1950.

Vincent Barnett argues:


"A good case can be made for the notion that Slutsky is the most famous of all Russian
economists, even more well-known [than] N. D. Kondratiev, L. V. Kantorovich, or Mikhail
Tugan-Baranovsky. There are eponymous concepts such as the Slutsky equation, the Slutsky
diamond, the Slutsky matrix, and the Slutsky-Yule effect, and a journals-literature search
conducted on his name for the years 1980-1995 yielded seventy-nine articles directly using some
aspect of Slutsky's work... Moreover, many microeconomics textbooks contain prominent
mention of Slutsky's contribution to the theory of consumer behavior, most notably the Slutsky
equation, christened by John Hicks as the 'Fundamental Equation of Value Theory'. Slutsky's
work is thus an integral part of contemporary mainstream economics and econometrics, a claim
that cannot really be made by any other Soviet economist, perhaps even by any other Russian
economist."

In the 1920s Slutsky turned to working on probability theory and stochastic processes, but in 1927 he
published his second famous article on economic theory, 'The Summation of Random Causes as a Source
of Cyclical Processes'. This opened up a new approach to business cycle theory by hypothesising that the
interaction of chance events could generate periodicity when none existed initially.

Mathematical statistics work

Slutsky's later work was principally in probability theory and the theory of stochastic processes. He is
generally credited for the result known as Slutsky's theorem.

QNO19:- Define 'Nash equilibrium' and explain with the help of the game 'Prisoner's Dilemma'.
Provide an example of a game with multiple Nash equilibria.

ANS:- The Nash equilibrium is a solution concept of a non-cooperative game involving two or more
players, in which each player is assumed to know the equilibrium strategies of the other players, and no
player has anything to gain by changing only his own strategy unilaterally. If each player has chosen a
strategy and no player can benefit by changing his or her strategy while the other players keep theirs
unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash
equilibrium.

Stated simply, Amy and Phil are in Nash equilibrium if Amy is making the best decision she can, taking
into account Phil's decision, and Phil is making the best decision he can, taking into account Amy's
decision. Likewise, a group of players are in Nash equilibrium if each one is making the best decision that
he or she can, taking into account the decisions of the others.

The prisoner's dilemma is a canonical example of a game analyzed in game theory that shows why two
individuals might not cooperate, even if it appears that it is in their best interests to do so. It was
originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker
formalized the game with prison sentence payoffs and gave it the name "prisoner's dilemma"(Poundstone,
1992). A classic example of the game is presented as follows:

Two men are arrested, but the police do not have enough information for a conviction. The police
separate the two men, and offer both the same deal: if one testifies against his partner
(defects/betrays), and the other remains silent (cooperates with/assists his partner), the betrayer
goes free and the one that remains silent gets a one-year sentence. If both remain silent, both are
sentenced to only one month in jail on a minor charge. If each 'rats out' the other, each receives a
three-month sentence. Each prisoner must choose either to betray or remain silent; the decision of
each is kept secret from his partner until the sentence is announced. What should they do?


In the classic version of the game, collaboration is dominated by betrayal; if the other prisoner chooses to
stay silent, then betraying them gives a better reward (no sentence instead of 1 month), and if the other
prisoner chooses to betray then betraying them also gives a better reward (3 months instead of 1 year).
Because betrayal always rewards more than cooperation, all purely rational self-interested prisoners
would betray the other, and so the only possible outcome for two purely rational prisoners is for them
both to betray each other. The interesting part of this result is that pursuing individual reward logically
leads the prisoners to both betray, but they would get a better reward if they both cooperated. In reality,
humans display a systematic bias towards cooperative behavior in this and similar games, much more so
than predicted by a theory based only on rational self-interested action.
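The best-response logic described above can be automated. The sketch below brute-forces the pure-strategy Nash equilibria of a two-player game; the prisoner's dilemma payoffs follow the sentences given above (months of jail, as negative numbers), and a simple coordination game is added to show a game with multiple Nash equilibria:

```python
# Brute-force search for pure-strategy Nash equilibria of a 2-player game.
# payoffs maps (row_strategy, col_strategy) -> (row_payoff, col_payoff).
from itertools import product

def nash_equilibria(payoffs, strategies):
    eq = []
    for r, c in product(strategies, repeat=2):
        # (r, c) is an equilibrium if neither player gains by deviating alone.
        best_r = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in strategies)
        best_c = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in strategies)
        if best_r and best_c:
            eq.append((r, c))
    return eq

# Prisoner's dilemma with the sentences described above (1 month, 3 months, 1 year).
pd = {('silent', 'silent'): (-1, -1),   ('silent', 'betray'): (-12, 0),
      ('betray', 'silent'): (0, -12),   ('betray', 'betray'): (-3, -3)}
print(nash_equilibria(pd, ['silent', 'betray']))        # [('betray', 'betray')]

# A coordination game ("which side of the road") has multiple Nash equilibria:
coord = {('left', 'left'): (1, 1), ('left', 'right'): (0, 0),
         ('right', 'left'): (0, 0), ('right', 'right'): (1, 1)}
print(nash_equilibria(coord, ['left', 'right']))        # [('left', 'left'), ('right', 'right')]
```

The search confirms that mutual betrayal is the unique Nash equilibrium of the prisoner's dilemma even though mutual silence gives both players a higher payoff, while the coordination game has two equilibria, neither of which is singled out by the equilibrium concept alone.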

There is also an extended "iterative" version of the game, where the classic game is played over and over,
and consequently, both prisoners continuously have an opportunity to penalize the other for previous
decisions. If the number of times the game will be played is known, the finite aspect of the game means
that (by backward induction) the two prisoners will betray each other repeatedly.

In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the
formal criteria of the classic or iterative games: for instance, those in which two entities could gain
important benefits from cooperating or suffer from the failure to do so, but find it merely difficult or
expensive, not necessarily impossible, to coordinate their activities to achieve cooperation.

QNO20:- How do you explain the prevalence of high wage rate along with unemployment in an
economy, using the efficiency wage model?

ANS:- In labor economics, the efficiency wage hypothesis argues that wages, at least in some markets,
are determined by more than supply and demand. Specifically, it points to the incentive for managers to
pay their employees more than the market-clearing wage in order to increase their productivity or
efficiency. This increased labor productivity pays for the higher wages.

Because workers are paid more than the equilibrium wage, there will be unemployment. Efficiency wages
are therefore a market failure explanation of unemployment in contrast to theories which emphasize
government intervention (such as minimum wages).

The idea of efficiency wages was expressed as early as 1920 by Alfred Marshall. Efficiency wage theory
has reemerged several times and is especially important in new Keynesian economics.

There are several theories (or "microfoundations") of why managers pay efficiency wages (wages above
the market clearing rate):

Avoiding shirking: If it is difficult to measure the quantity or quality of a worker's effort, and
systems of piece rates or commissions are therefore impossible, there may be an incentive for him or her
to "shirk" (do less work than agreed). The manager thus may pay an efficiency wage in order to
create or increase the cost of job loss, which gives a sting to the threat of firing. This threat can be
used to prevent shirking (or "moral hazard").

Minimizing turnover: By paying above-market wages, the worker's motivation to leave the job
and look for a job elsewhere will be reduced. This strategy makes sense because it is often
expensive to train replacement workers.


Adverse selection: If job performance depends on workers' ability and workers differ from each
other in those terms, firms with higher wages will attract more able job-seekers. An efficiency
wage means that the employer can pick and choose among applicants to get the best possible.

Sociological theories: Efficiency wages may result from traditions. Akerlof's theory (in very
simple terms) involves higher wages encouraging high morale, which raises productivity.

Nutritional theories: In developing countries, efficiency wages may allow workers to eat well
enough to avoid illness and to be able to work harder and even more productively.

The model of efficiency wages, largely based on shirking, developed by Carl Shapiro and Joseph E.
Stiglitz has been particularly influential.

Shirking

In the Shapiro-Stiglitz model workers are paid at a level where they do not shirk. This prevents wages
from dropping to market clearing levels. Full employment cannot be achieved because workers would
shirk if they were not threatened with the possibility of unemployment. The curve for the no-shirking
condition (labeled NSC) goes to infinity at full employment.

The shirking model begins with the fact that complete contracts rarely (or never) exist in the real world.
This implies that both parties to the contract have some discretion, but frequently, due to monitoring
problems, it is the employee's side of the bargain which is subject to the most discretion. (Methods such
as piece rates are often impracticable because monitoring is too costly or inaccurate; or they may be based
on measures too imperfectly verifiable by workers, creating a moral hazard problem on the employer's
side.) Thus the payment of a wage in excess of market-clearing may provide employees with cost-effective
incentives to work rather than shirk (Gintis 1968).

In the Shapiro and Stiglitz model, workers either work or shirk, and if they shirk they have a certain
probability of being caught, with the penalty of being fired. Equilibrium then entails unemployment,
because in order to create an opportunity cost to shirking, firms try to raise their wages above the market
average (so that sacked workers face a probabilistic loss). But since all firms do this the market wage
itself is pushed up, and the result is that wages are raised above market-clearing, creating involuntary
unemployment. This creates a low, or no income alternative which makes job loss costly, and serves as a
worker discipline device. Unemployed workers cannot bid for jobs by offering to work at lower wages,
since if hired, it would be in the worker's interest to shirk on the job, and he has no credible way of
promising not to do so. Shapiro and Stiglitz point out that their assumption that workers are identical (e.g.
there is no stigma to having been fired) is a strong one; in practice, reputation can work as an additional
disciplining device.
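The no-shirking condition described above can be sketched numerically. In the textbook form of the model the critical wage is w = e + (e/q)(a + b + r); all parameter values below are illustrative assumptions, not figures from the text:

```python
# Illustrative sketch of the Shapiro-Stiglitz no-shirking wage:
#   w = e + (e/q) * (a + b + r), where
#   e = effort cost, q = probability a shirker is caught per period,
#   b = exogenous separation rate, r = interest rate,
#   a = job acquisition rate, tied to unemployment u by flow balance.

def no_shirking_wage(e, q, b, r, u):
    a = b * (1.0 - u) / u           # assumed flow-balance job acquisition rate
    return e + (e / q) * (a + b + r)

# As unemployment falls, the wage needed to deter shirking rises,
# so the NSC curve rises toward infinity at full employment (u -> 0).
for u in (0.10, 0.05, 0.01):
    print(u, round(no_shirking_wage(e=1.0, q=0.5, b=0.05, r=0.05, u=u), 2))
```

The loop makes the shape of the NSC curve concrete: the critical wage grows without bound as u approaches zero, which is why full employment cannot be an equilibrium in this model.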

The shirking model does not predict (counterfactually) that the bulk of the unemployed at any one time
are those who are fired for shirking, because if the threat associated with being fired is effective, little or
no shirking and sacking will occur. Instead the unemployed will consist of a (rotating) pool of individuals
who have quit for personal reasons, are new entrants to the labour market, or who have been laid off for
other reasons. Pareto optimality, with costly monitoring, will entail some unemployment, since
unemployment plays a socially valuable role in creating work incentives. But the equilibrium
unemployment rate will not be Pareto optimal, since firms do not take into account the social cost of the
unemployment they help to create.

One criticism of this and other flavours of the efficiency wage hypothesis is that more sophisticated
employment contracts can under certain conditions reduce or eliminate involuntary unemployment.
Lazear (1979, 1981) demonstrates the use of seniority wages to solve the incentive problem, where
initially workers are paid less than their marginal productivity, and as they work effectively over time
within the firm, earnings increase until they exceed marginal productivity. The upward tilt in the age-
earnings profile here provides the incentive to avoid shirking, and the present value of wages can fall to
the market-clearing level, eliminating involuntary unemployment. Lazear and Moore (1984) find that the
slope of earnings profiles is significantly affected by incentives.
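Lazear's seniority-wage argument can be illustrated with a small present-value calculation (all numbers are assumed for illustration): a tilted profile pays below marginal product early and above it late, yet has the same present value as a flat profile at marginal product, so no wage premium above market-clearing is needed.

```python
# Illustrative Lazear-style deferred-pay profile (all numbers assumed).
# A worker produces a flat marginal product of 100 per period for 10 periods.
# The tilted profile is centred so its present value equals the flat one;
# the incentive comes from the upward tilt, not from a wage premium.

def present_value(payments, r):
    return sum(p / (1 + r) ** t for t, p in enumerate(payments))

r, T = 0.05, 10
d = [(1 + r) ** -t for t in range(T)]          # discount factors
c = sum(t * d[t] for t in range(T)) / sum(d)   # PV-weighted mean period
slope = 8.0                                     # assumed steepness of tilt

flat = [100.0] * T
tilted = [100.0 + slope * (t - c) for t in range(T)]

print(round(tilted[0], 1), round(tilted[-1], 1))  # below MP early, above MP late
```

Shirking early in the career forfeits the above-marginal-product years at the end, which is the disciplining mechanism.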

However, a significant criticism is that moral hazard would be shifted to employers, since they are
responsible for monitoring the workers effort. Obvious incentives would exist for firms to declare
shirking when it has not taken place. In the Lazear model, firms have obvious incentives to fire older
workers (paid above marginal product) and hire new cheaper workers, creating a credibility problem. The
seriousness of this employer moral hazard depends on the extent to which effort can be monitored by
outside auditors, so that firms cannot cheat, although reputation effects (e.g. Lazear 1981) may be able to
do the same job.

Labor turnover

On the labor turnover flavor of the efficiency wage hypothesis, firms also offer wages in excess of
market-clearing (e.g. Salop 1979, Schlicht 1978, Stiglitz 1974), due to the high cost of replacing workers
(search, recruitment, training costs). If all firms are identical, one possible equilibrium involves all firms
paying a common wage rate above the market-clearing level, with involuntary unemployment serving to
diminish turnover. These models can easily be adapted to explain dual labor markets: if low-skill, labor-
intensive firms have lower turnover costs (as seems likely), there may be a split between a low-wage,
low-effort, high-turnover sector and a high-wage, high effort, low-turnover sector. Again, more
sophisticated employment contracts may solve the problem.
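The turnover logic above can be sketched as a cost-minimisation problem. The quit-rate function and the cost figures below are assumptions chosen purely for illustration:

```python
# Minimal sketch of the turnover flavour (functional forms assumed):
# the firm picks the wage w minimising wage cost plus expected
# replacement cost per worker, where the quit rate falls as w rises
# relative to the market wage.

W_M = 100.0       # assumed market-clearing wage
C_TURN = 900.0    # assumed cost of replacing one worker

def quit_rate(w):
    return W_M / (w + W_M)          # assumed: decreasing in the wage paid

def labour_cost(w):
    return w + C_TURN * quit_rate(w)

# grid search for the cost-minimising wage
best_wage = min(range(80, 301), key=labour_cost)
print(best_wage)   # exceeds the market wage of 100
```

With a high replacement cost the cost-minimising wage lies above the market wage, which is exactly the efficiency-wage result of this branch of the literature.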

Adverse selection

The adverse selection model adds yet another flavour to our broad set of efficiency wage models. These
use the framework that performance on the job depends on ability, that workers are heterogeneous with
respect to ability, and that workers' ability and reservation wages are positively correlated (workers know
their own worth). In addition there are two crucial assumptions: that firms cannot screen applicants either
before or after applying, and that there is costless self-employment available which realises a worker's
marginal product. If there are two kinds of firm (low and high wage), then we effectively have two sets of
lotteries (since firms cannot screen), the difference being that high-ability workers do not enter the
low-wage lotteries as their reservation wage is too high. Thus low-wage firms attract only low-ability lottery
entrants, while high-wage firms attract workers of all abilities (i.e. on average they will select average
workers). Thus high-wage firms are paying an efficiency wage: they pay more and, on average, get
more (see e.g. Malcolmson 1981; Stiglitz 1976; Weiss 1980). However, the assumption that firms are
unable to measure effort and pay piece rates after workers are hired or to fire workers whose output is too
low is quite strong. Firms may also be able to design self-selection or screening devices that induce
workers to reveal their true characteristics.
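The lottery described above can be made concrete with a toy version (all numbers assumed): ability is uniform on 0..100, each worker's reservation wage equals his ability, and a firm's hire is a random draw from the pool its wage attracts.

```python
# Toy adverse-selection lottery (numbers assumed): ability is uniform on
# 0..100 and each worker's reservation wage equals his ability, since
# costless self-employment realises his marginal product.  Firms cannot
# screen, so a hire is a random draw from the pool the wage attracts.

abilities = range(0, 101)

def applicant_pool(wage):
    # only workers whose reservation wage does not exceed the offer apply
    return [a for a in abilities if a <= wage]

def expected_ability(wage):
    pool = applicant_pool(wage)
    return sum(pool) / len(pool)

# the high-wage firm pays more and, on average, gets more
print(expected_ability(50), expected_ability(80))
</pool_note := None
```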

Sociological models

Fairness, norms, and reciprocity

Standard economic models ("neoclassical economics") assume that people pursue only their own self-
interest and do not care about "social" goals ("homo economicus"). Some attention has been paid to the
idea that people may be altruistic (care about the well-being of others), but it is only with the addition of
reciprocity and norms of fairness that the model becomes accurate (e.g. Rabin 1993; Dufwenberg and
Kirchsteiger 2000; Fehr and Schmidt 2000). Thus of crucial importance is the idea of exchange: a person
who is altruistic towards another expects the other to fulfil some kind of fairness norm, be it by reciprocating
in kind, in some other (but, according to some shared standard, equivalent) way, or simply by being
grateful. If the expected reciprocation is not forthcoming, the altruism is unlikely to be repeated or
continued. In addition, similar norms of fairness will typically lead people into negative forms of
reciprocity too, in the form of retaliation for acts perceived as vindictive. This can bind actors into
vicious loops where vindictive acts are met with further vindictive acts.

In practice, despite the neat logic of standard neoclassical models, these kinds of sociological models do
impinge upon very many economic relations, though in different ways and to different degrees. For
example, if an employee has been exceptionally loyal, a manager may feel some obligation to treat that
employee well, even when it is not in his (narrowly defined, economic) self-interest to do so. It would
appear that although broader, longer-term economic benefits may result (e.g. through reputation, or
perhaps through simplified decision-making according to fairness norms), a major factor must be that
there are noneconomic benefits the manager receives, such as not having a guilty conscience (loss of self-
esteem). For real-world, socialised, normal human beings (as opposed to abstracted factors of
production), this is likely to be the case quite often. (As a quantitative estimate of the importance of this,
Weisbrod's 1988 estimate of the total value of voluntary labor in the US, $74 billion annually, will
suffice.) Examples of the negative aspect of fairness include consumers "boycotting" firms they
disapprove of by not buying products they otherwise would (and therefore settling for second-best); and
employees sabotaging firms they feel hard done by.

Rabin (1993) offers three stylised facts as a starting-point on how norms affect behaviour: (a) people are
prepared to sacrifice their own material well-being to help those who are being kind; (b) they are also
prepared to do this to punish those being unkind; (c) both (a) and (b) have a greater effect on behaviour as
the material cost of sacrificing (in relative rather than absolute terms) becomes smaller. Rabin supports
his Fact A by Dawes and Thaler's (1988) survey of the experimental literature, which concludes that, for
most one-shot public good decisions in which the individually optimal contribution is close to 0%, the
contribution rate ranges from 40 to 60% of the socially optimal level. Fact B is demonstrated by the
ultimatum game (e.g. Thaler 1988), where an amount of money is split between two people, one
proposing a division, the other accepting or rejecting (where rejection means both get nothing).
Rationally, the proposer should offer no more than a penny, and the decider should accept any offer of
at least a penny; but in practice, even in one-shot settings, proposers make fair proposals, and deciders
are prepared to punish unfair offers by rejecting them. Fact C is tested and partially confirmed by Gerald
Leventhal and David Anderson (1970), but is also fairly intuitive. In the ultimatum game, a 90% split (regarded as
unfair) is (intuitively) far more likely to be punished if the amount to be split is $1 than if it is $1 million.
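Fact C can be illustrated with a stylised responder rule; the 25% fairness cutoff and the 50-unit pain threshold below are assumptions for illustration, not estimates from the literature:

```python
# Stylised ultimatum-game responder (cutoff and threshold assumed).
# Rabin's Fact C: the same relative split is punished more readily
# when the absolute cost of punishing is small.

def responder_accepts(stake, offer_share, pain_threshold=50.0):
    unfair = offer_share < 0.25            # responder offered under 25%
    forgone = stake * offer_share          # money given up by rejecting
    return not (unfair and forgone < pain_threshold)

print(responder_accepts(1.0, 0.10))          # 90/10 split of $1: rejected
print(responder_accepts(1_000_000.0, 0.10))  # same split of $1m: accepted
```

The same 90/10 split is rejected when the stake is $1 but accepted when it is $1 million, matching the intuition stated in the text.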

A crucial point (as noted in Akerlof 1982) is that notions of fairness depend on the status quo and other
reference points. Experiments (Fehr and Schmidt 2000) and surveys (Kahneman, Knetsch and Thaler
1986) indicate that people have clear notions of fairness based on particular reference points
(disagreements can arise in the choice of reference point). Thus for example firms who raise prices or
lower wages to take advantage of increased demand or increased labour supply are frequently perceived
as acting unfairly, where the same changes are deemed acceptable when the firm makes them due to
increased costs (Kahneman et al.). In other words, in people's intuitive "naive accounting" (Rabin 1993),
a key role is played by the idea of entitlements embodied in reference points (although, as Dufwenberg
and Kirchsteiger 2000 point out, there may be informational problems, e.g. for workers in determining
what the firm's profit actually is, given tax avoidance and stock-price considerations). In particular it is
perceived as unfair for actors to increase their share at the expense of others, although over time such a
change may become entrenched and form a new reference point which (typically) is no longer in itself
deemed unfair.

Sociological efficiency wage models

Solow (1981), drawing on these kinds of concepts, argued that wage rigidity may be at least partly due to
social conventions and principles of appropriate behaviour, which are not entirely individualistic in
origin. Akerlof (1982) provided the first explicitly sociological model leading to the efficiency wage
hypothesis. Using a variety of evidence from sociological studies, Akerlof argues that worker effort
depends on the work norms of the relevant reference group. In Akerlof's partial gift exchange model, the
firm can raise group work norms and average effort by paying workers a gift of wages in excess of the
minimum required, in return for effort above the minimum required. The sociological model can explain
phenomena inexplicable on neoclassical terms, such as why firms do not fire workers who turn out to be
less productive; why piece rates are so little used even where quite feasible; and why firms set work
standards exceeded by most workers. A possible criticism is that workers do not necessarily view high
wages as gifts, but as merely fair (particularly since typically 80% or more of workers consider
themselves to be in the top quarter of productivity), in which case they will not reciprocate with high
effort.

Akerlof and Yellen (1990), responding to these criticisms and building on work from psychology,
sociology, and personnel management, introduce the fair wage-effort hypothesis, which states that
workers form a notion of the fair wage, and if the actual wage is lower, withdraw effort in proportion, so
that, depending on the wage-effort elasticity and the costs to the firm of shirking, the fair wage may form
a key part of the wage bargain. This provides an explanation of persistent evidence of consistent wage
differentials across industries (e.g. Slichter 1950; Dickens and Katz 1986; Krueger and Summers 1988): if
firms must pay high wages to some groups of workers (perhaps because they are in short supply, or for
other efficiency-wage reasons such as shirking), then demands for fairness will lead to a compression of
the pay scale, and wages for other groups within the firm will be higher than in other industries or firms.

The union threat model is one of several explanations for industry wage differentials. This Keynesian
economics model looks at the role of unions in wage determination. The degree to which union wages
exceed non-union wages is known as the union wage premium, and some firms seek to prevent
unionization in the first instance. Varying costs of union avoidance across sectors will lead some firms to
offer supracompetitive wages as pay premiums to workers in exchange for their avoiding unionization.[4]
Under the union threat model (Dickens 1986), the ease with which an industry can defeat a union drive
has a negative relationship with its wage differential. In other words, inter-industry wage variability
should be low where the threat of unionization is low.

Empirical literature

Raff and Summers (1987) conduct a case study on Henry Ford's introduction of the five dollar day in
1914. Their conclusion is that the Ford experience supports efficiency wage interpretations. Ford's
decision to increase wages so dramatically (doubling for most workers) is most plausibly portrayed as the
consequence of efficiency wage considerations, with the structure being consistent, evidence of
substantial queues for Ford jobs, and significant increases in productivity and profits at Ford. Concerns
such as high turnover and poor worker morale appear to have played a significant role in the five-dollar
decision. Ford's new wage put him in the position of rationing jobs, and increased wages did yield
substantial productivity benefits and profits. There is also evidence that other firms emulated Ford's
policy to some extent, with wages in the automobile industry 40% higher than in the rest of
manufacturing (Rae 1965, quoted in Raff and Summers). Given low monitoring costs and skill levels on
the Ford production line, such benefits (and the decision itself) appear particularly significant.

Fehr, Kirchler, Weichbold and Gächter (1998) conduct labour market experiments to separate the effects
of competition and social norms/customs/standards of fairness. They find that in complete contract
markets, firms persistently try to enforce lower wages. By contrast, in gift exchange markets and bilateral
gift exchanges, wages are higher and more stable. It appears that in complete contract situations,
competitive equilibrium exerts a considerable drawing power, whilst in the gift exchange market it does
not.

Fehr et al. stress that "reciprocal effort choices are truly a one-shot phenomenon, without reputation or
other repeated-game effects. It is, therefore, tempting to interpret reciprocal effort behavior as a
preference phenomenon" (p. 344). Two types of preferences can account for this behaviour: (a) workers
may feel an obligation to share the additional income from higher wages at least partly with firms; (b)
workers may have reciprocal motives (reward good behaviour, punish bad). "In the context of this
interpretation, wage setting is inherently associated with the signalling of intentions, and workers
condition their effort responses on the inferred intentions" (p. 344). Charness (1996), quoted in Fehr et al.,
finds that when signalling is removed (wages are set randomly or by the experimenter), workers exhibit a
lower, but still positive, wage-effort relation, suggesting some gain-sharing motive and some reciprocity
(where intentions can be signalled).

Fehr et al. state that "our preferred interpretation of firms' wage-setting behavior is that firms voluntarily
paid job rents to elicit non-minimum effort levels." Although excess supply of labour created enormous
competition among workers, firms did not take advantage. In the long run, instead of being governed by
competitive forces, firms' wage offers were solely governed by reciprocity considerations, because the
payment of non-competitive wages generated higher profits. Thus, both firms and workers can be better
off when they rely on stable reciprocal interactions.

That reciprocal behavior generates efficiency gains has been confirmed by several other papers, e.g. Berg,
Dickhaut and McCabe (1995): even under conditions of double anonymity, where actors know even
the experimenter cannot observe individual behaviour, reciprocal interactions and efficiency gains are
frequent. Fehr, Gächter and Kirchsteiger (1996, 1997) show that reciprocal interactions generate
substantial efficiency gains. However the efficiency-enhancing role of reciprocity is, in general,
associated with serious behavioural deviations from competitive equilibrium predictions. To counter a
possible criticism of such theories, Fehr and Tougareva (1995) showed these reciprocal exchanges
(efficiency-enhancing) are independent of the stakes involved (they compared outcomes with stakes
worth a week's income with stakes worth three months' income, and found no difference).

As one counter to over-enthusiasm for efficiency wage models, Leonard (1987) finds little support for
either shirking or turnover efficiency wage models, by testing their predictions for large and persistent
wage differentials. The shirking version assumes a trade-off between self-supervision and external
supervision, while the turnover version assumes turnover is costly to the firm. Variation across firms in
the cost of monitoring/shirking or turnover then is hypothesized to account for wage variations across
firms for homogeneous workers. But Leonard finds that wages for narrowly defined occupations within
one sector of one state are widely dispersed, suggesting other factors may be at work.

QNO21:- Explain with a diagram the basic Williamson model of managerial discretion and show
that the expenditure on staff is greater under this model as compared to profit maximization.

ANS:- Oliver E. Williamson hypothesized (1964) that profit maximization would not be the objective of
the managers of a joint stock organization. This theory, like other managerial theories of the firm,
assumes that utility maximization is a manager's sole objective. However, it is only in a corporate form of
business organization that a self-interest-seeking manager can maximize his/her own utility, since there
exists a separation of ownership and control. The managers can use their discretion to frame and execute
policies which would maximize their own utilities rather than maximizing the shareholders' utilities. This
is essentially the principal-agent problem. This could, however, threaten their job security if a minimum
level of profit is not attained by the firm to distribute among the shareholders.

The basic assumptions of the model are:

1. Imperfect competition in the markets.
2. Divorce of ownership and management.
3. A minimum profit constraint exists for the firms to be able to pay dividends to their shareholders.

Managerial utility function

The managerial utility function includes variables such as salary, job security, power, status, dominance,
prestige and professional excellence of managers. Of these, salary is the only quantitative variable and
thus measurable. The other variables are non-pecuniary, which are non-quantifiable. The variables
expenditure on staff salary, management slack, discretionary investments can be assigned nominal values.
Thus these will be used as proxy variables to measure the real or unquantifiable concepts like job security,
power, status, dominance, prestige and professional excellence of managers, appearing in the managerial
utility function.

Utility function or "expense preference" of a manager can be given by:

U = f(S, M, ID)

Where U denotes the utility function, S denotes the monetary expenditure on the staff, M stands for
"management slack" and ID stands for the amount of "discretionary investment".

"Monetary expenditure on staff" includes not only the manager's salary and other forms of monetary
compensation received by him from the business firm but also the number of staff under the control of the
manager as there is a close positive relationship between the number of staff and the manager's salary.

Page 38
MICROECONOMICS ANALYSIS

"Management slack" consists of those non-essential management perquisites, such as entertainment
expenses, lavishly furnished offices, luxurious cars, large expense accounts, etc., which are above the
minimum needed to retain the managers in the firm. These perks, even if not provided, would not make the
manager quit his job, but they are incentives which enhance his prestige and status in the organisation,
in turn contributing to the efficiency of the firm's operations. The management slack is also a part of the
cost of production of the firm.

"Discretionary investment" refers to the amount of resources left at a manager's disposal, to be able to
spend at his own discretion. For example spending on latest equipment, furniture, decoration material, etc.
It satisfies their ego and gives them a sense of pride. These give a boost to the manager's esteem and
status in the organisation. Such investments are over and above the amount required for the survival of the
firm (such as periodic replacement of the capital equipment).

Concepts of profit in the model

The various concepts of profit used in the model need to be understood clearly before moving to the
main model. Williamson has put forth four main concepts of profit in his model:

Actual profit (π)

π = R − C − S

Where R is the total revenue, C is the cost of production and S is the staff expenditure.

Reported profit (πr)

πr = π − M

Where π is the actual profit and M is the management slack.

Minimum profit (π0)

It is the amount of profit after tax which should be paid to the shareholders of the firm, in the form of
dividends, to keep them satisfied. If the minimum level of profit cannot be given out to the shareholders,
they might resort to a bulk sale of their shares, which will transfer the ownership to other hands, leaving
the company at risk of a complete takeover. Since the shareholders have the voting rights, they might
also vote for a change of the top level of management. Thus the job security of the manager is also
threatened. Ideally the reported profits must be either equal to or greater than the minimum profits plus
the taxes, as it is only after paying out the minimum profit that the additional profit can be used to
increase the managerial utility further:

πr ≥ π0 + T

Where πr is the reported profit, π0 is the minimum profit and T is the tax.


Discretionary profit (πD)

It is basically the entire amount of profit left after minimum profits and tax, which is used to increase the
manager's utility, that is, to pay out managerial emoluments as well as to allow them to make discretionary
investments:

πD = π − π0 − T

Where πD is the discretionary profit, π is the actual profit, π0 is the minimum profit and T is the tax
amount.

However, what appears in the managerial utility function is discretionary investment (ID) and not
discretionary profit. Thus it is very important to distinguish between the two, as further on in the model
we will have to maximize the managerial utility function given the profit constraint:

ID = πr − π0 − T

Where πr is the reported profit, π0 is the minimum profit and T is the tax amount.

Thus it can be seen that the difference between the discretionary profit and the discretionary investment
arises because of the amount of managerial slack. This can be represented by the equation:

πD = ID + M

Where πD is the discretionary profit, ID is the discretionary investment and M is the management slack.
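The profit identities above can be checked with assumed figures:

```python
# Sanity check of Williamson's profit identities with assumed numbers.
R, C, S = 1000.0, 400.0, 150.0      # revenue, production cost, staff expenditure
M, PI_MIN, T = 30.0, 100.0, 80.0    # slack, minimum profit, tax

actual_profit = R - C - S                                # pi   = R - C - S
reported_profit = actual_profit - M                      # pi_r = pi - M
discretionary_profit = actual_profit - PI_MIN - T        # pi_D = pi - pi_0 - T
discretionary_investment = reported_profit - PI_MIN - T  # I_D  = pi_r - pi_0 - T

# pi_D and I_D differ exactly by the management slack M
print(discretionary_profit, discretionary_investment)
```

Whatever numbers are chosen, discretionary profit exceeds discretionary investment by exactly the slack M, which is the relationship the model relies on.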

Model framework

For a simple representation of the model the managerial slack is considered to be zero. Thus there is no
difference between the actual profit and the reported profit, which implies that the discretionary profit is
equal to the discretionary investment:

πr = π and πD = ID

where πr is the reported profit, π is the actual profit, πD is the discretionary profit and ID is the
discretionary investment.

The utility function of the manager thus becomes:

U = f(S, ID)

where S is the staff expenditure and ID is the discretionary investment.


There is a trade-off between these two variables. An increase in either will give the manager a higher level
of satisfaction. At any point of time the combined amount of both these variables is fixed, therefore an
increase in one would automatically require a decrease in the other. The manager therefore has to make a
choice of the correct combination of these two variables to attain a certain desired level of utility.

Substituting ID = πD in the new managerial utility function, it can be rewritten as:

U = f(S, πD)

The relationship between the two variables in the manager's utility function is determined by the profit
function. The profit of a firm is dependent on the demand and cost conditions. Given the cost conditions,
the demand is dependent on the price, the staff expenditures and the market condition.

Price and market condition are assumed to be given exogenously at equilibrium. Thus the profit of the firm
becomes dependent on the staff expenditure, which can be written as:

π = π(S)

Discretionary profit can be rewritten as:

πD = π(S) − π0 − T

In the model, the managers try to maximise their utility given the profit constraint:

Max U = f(S, πD)

subject to πr ≥ π0 + T

Graphical representation of the model


Fig 1. Utility indifference curves of managers

Fig 1. shows the various levels of utility (U1, U2, U3) derived by the manager by combining different
amounts of discretionary profits and staff expenditure. The higher the indifference curve, the higher the
level of utility derived by the manager. Hence the manager will try to be on the highest indifference
curve possible given the constraints. Staff expenditure is plotted on the x axis and discretionary profits on
the y axis.
The discretionary profit in this simplified model is equal to the discretionary investment. The indifference
curves are downward sloping and convex to the origin. This shows a diminishing marginal rate of
substitution of staff expenditure for discretionary profits. The curves are asymptotic in nature, which
implies that at any point of time and under any given circumstance the manager will choose positive
amounts of both discretionary profits and staff expenditure.

Fig 2. Discretionary profit curve

Assuming that the firm is producing an optimum level of output and the market environment is given, the
discretionary profit curve is generated, shown in Fig 2. It gives the relationship between staff
expenditure and discretionary profits.
It can be seen from the figure that profit will be positive in the region between the points B and C.
Initially, with an increase in staff expenditure, the discretionary profits also increase, but only up
to the point πmax, that is, up to the S level of staff expenditure. Beyond this, if staff expenditure is
increased due to an increase in output, then a fall in the discretionary profits is noticed. Staff expenditure of
less than B and more than C is not feasible, as it wouldn't satisfy the minimum profit constraint and would
in turn threaten the job security of managers.


Fig 3. Equilibrium of a firm in Williamson's Model

To find the equilibrium in the model, Fig 1. is superimposed on Fig 2. The equilibrium point is the point
where the discretionary profit curve is tangent to the highest possible indifference curve of the manager,
which is point E in Fig 3. Staying at the highest profit point would require the manager to be at a lower
indifference curve U2; in this case the highest attainable level of utility is U3. At equilibrium, the level of
profits is lower, but staff expenditure S* is higher than the staff expenditure made at the maximum
profit point. As the indifference curve is downward sloping, the equilibrium point will always lie to the
right of the maximum profit point. Thus the model shows the higher preference of managers for staff
expenditure as compared to the discretionary investments.
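The central claim, that equilibrium staff expenditure S* exceeds the profit-maximising level, can be checked numerically. The Cobb-Douglas utility and the quadratic discretionary-profit curve below are assumed functional forms for illustration, not Williamson's own:

```python
# Numerical sketch of the Williamson equilibrium (functional forms assumed).

def pi_D(S):
    # discretionary profit: positive between S = 0 and S = 200, peak at S = 100
    return 500.0 - 0.05 * (S - 100.0) ** 2

def utility(S):
    # assumed Cobb-Douglas managerial utility over S and discretionary profit
    return (S * pi_D(S)) ** 0.5 if pi_D(S) > 0 else 0.0

S_profit_max = max(range(1, 200), key=pi_D)   # profit-maximising staff spend
S_star = max(range(1, 200), key=utility)      # manager's equilibrium choice

print(S_profit_max, S_star)   # S* lies to the right of the profit peak
```

Because the manager values staff expenditure directly, the tangency point trades away some discretionary profit for extra S, so S* lands to the right of the profit maximum, exactly as the diagram shows.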

Criticism

1. The model fails to describe how businesses take their price and output decisions in a highly
competitive set up.
2. The relationship between better performance of managers and the increasing amounts spent on
managers' utility by the firm is not always true.
3. The model does not apply in a dynamic set up like changing demand and cost conditions during
booms and recessions.

QNO22:- Write short note on CES production function.

ANS: - In economics, Constant elasticity of substitution (CES) is a property of some production functions
and utility functions.

More precisely, it refers to a particular type of aggregator function which combines two or more types of
consumption, or two or more types of productive inputs into an aggregate quantity. This aggregator
function exhibits constant elasticity of substitution.

CES production function

The CES production function is a type of production function that displays constant elasticity of
substitution. In other words, the production technology has a constant percentage change in factor (e.g.
labour and capital) proportions due to a percentage change in marginal rate of technical substitution. The
two factor (Capital, Labor) CES production function introduced by Solow [1] and later made popular by
Arrow, Chenery, Minhas, and Solow is:


Q = F · [a·K^ρ + (1 − a)·L^ρ]^(1/ρ)

where

Q = Output
F = Factor productivity
a = Share parameter
K, L = Primary production factors (capital and labor)
s = 1/(1 − ρ) = Elasticity of substitution.

As its name suggests, the CES production function exhibits constant elasticity of substitution between
capital and labor. Leontief, linear and Cobb-Douglas production functions are special cases of the CES
production function. That is, if ρ = 1 we have a linear function; if ρ approaches zero, in the limit we
get the Cobb-Douglas function; and as ρ approaches negative infinity we get the Leontief function. The
general form of the CES production function is:

Q = F · [a1·X1^ρ + a2·X2^ρ + ... + an·Xn^ρ]^(1/ρ)

where

Q = Output
F = Factor productivity
ai = Share parameter of input i, with a1 + a2 + ... + an = 1
Xi = Production factors (i = 1, 2, ..., n)
s = 1/(1 − ρ) = Elasticity of substitution.

Extending the CES (Solow) form to accommodate multiple factors of production creates some problems,
however. There is no completely general way to do this. Uzawa showed the only possible n-factor
production functions (n>2) with constant partial elasticities of substitution require either that all
elasticities between pairs of factors be identical, or if any differ, these all must equal each other and all
remaining elasticities must be unity. This is true for any production function. This means the use of the
CES form for more than 2 factors will generally mean that there is not constant elasticity of substitution
among all factors.

Nested CES functions are commonly found in partial/general equilibrium models. Different nests (levels)
allow for the introduction of the appropriate elasticity of substitution.

The CES is a neoclassical production function.


CES utility function

The same functional form arises as a utility function in consumer theory. For example, if there exist n
types of consumption goods c_i, then aggregate consumption C could be defined using the CES
aggregator:

C = [a_1*c_1^((s-1)/s) + a_2*c_2^((s-1)/s) + ... + a_n*c_n^((s-1)/s)]^(s/(s-1))

Here again, the coefficients a_i are share parameters, and s is the elasticity of substitution. Therefore the
consumption goods are perfect substitutes when s approaches infinity and perfect complements when s
approaches zero. The CES aggregator is also sometimes called the Armington aggregator, which was
discussed by Armington (1969).
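A short numerical sketch of the two limiting behaviors (Python; the function name, the goods bundle and the share weights are invented for illustration):

```python
def ces_aggregate(c, a, s):
    # CES aggregator: C = (sum_i a_i * c_i^((s-1)/s))^(s/(s-1))
    rho = (s - 1) / s
    return sum(ai * ci**rho for ai, ci in zip(a, c)) ** (1 / rho)

c = [2.0, 8.0]      # quantities of two consumption goods
a = [0.5, 0.5]      # equal share parameters

# Large s: goods behave like perfect substitutes (aggregate near the weighted sum, 5).
print(ces_aggregate(c, a, s=1000))
# Small s: goods behave like perfect complements (aggregate near min(c) = 2).
print(ces_aggregate(c, a, s=0.01))
```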

A CES utility function is one of the cases considered by Avinash Dixit and Joseph Stiglitz in their study
of optimal product diversity in a context of monopolistic competition.

QNO23:- Write a short note on welfare economics.

ANS: - In welfare economics, the compensation principle refers to a decision rule used to select between
pairs of alternative feasible social states. One of these states is the hypothetical point of departure ("the
original state"). According to the compensation principle, if the prospective gainers could compensate
(any) prospective losers and leave no one worse off, the other state is to be selected (Chipman, 1987, p.
524). An example of a compensation principle is the Pareto criterion in which a change in states entails
that such compensation is not merely feasible but required. Two variants are:

The Pareto principle, which requires any change such that all gain.
The (strong) Pareto criterion, which requires any change such that at least one gains and no one
loses from the change.

In non-hypothetical contexts in which the compensation occurs (say in the marketplace), invoking the
compensation principle is unnecessary to effect the change. But its use is more controversial and complex
with some losers (where full compensation is feasible but not made) and in selecting among more than
two feasible social states. In its specifics, it is also more controversial where the range of the decision rule
itself is at issue.

Uses for the compensation principle include:

comparisons between the welfare properties of perfect competition and imperfect competition
the Pareto principle in social choice theory
Cost-benefit analysis.

Definition of Hicks-Kaldor Criterion: The Hicks-Kaldor criterion is used to decide whether a cost-benefit
analysis supports a public project. The Hicks-Kaldor criterion is that the gainers from the project could in
principle compensate the losers; that is, total gains from the project exceed the losses. The Hicks-Kaldor
criterion does not go so far as the Pareto criterion, according to which the gainers would in fact
have to compensate the losers. (Econterms)
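The criterion reduces to a simple comparison of totals, as in this toy check (all figures invented for illustration):

```python
def hicks_kaldor_passes(gains, losses):
    # A project passes if total gains exceed total losses, so the winners
    # could in principle compensate the losers.
    return sum(gains) > sum(losses)

# Two winners gain 60 and 50; one loser bears 70. Total gains 110 > losses 70.
print(hicks_kaldor_passes([60, 50], [70]))   # prints True
```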

QNO24:- Write short note on Independence axiom.

ANS: - An axiom P is independent of a set of other axioms Q if Q neither implies P nor implies the negation of P.

In many cases independence is desired, either to reach the conclusion of a reduced set of axioms, or to be
able to replace an independent axiom to create a more concise system (for example, the parallel postulate
is independent of Euclid's Axioms, and can provide interesting results when a negated or manipulated
form of the postulate is put into its place).

Proving Independence

If the original axioms Q are not consistent, then no new axiom is independent. If they are consistent, then
P can be shown independent of them if adding P to them, or adding the negation of P, both yield
consistent sets of axioms. For example, Euclid's Axioms, with the parallel postulate included, yields
Euclidean geometry, and with the parallel postulate negated, yields non-Euclidean (spherical or
hyperbolic) geometry. Both of these are consistent systems, showing that the parallel postulate is
independent of the other axioms of geometry.

Proving independence is often very difficult. Forcing is one commonly-used technique.
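For finite propositional systems, the model-based argument above can be checked mechanically: exhibit one model satisfying Q together with P and another satisfying Q together with not-P. The toy axioms below are invented for illustration:

```python
from itertools import product

def satisfiable(formulas, n_vars=2):
    # A set of formulas is consistent if some truth assignment satisfies all of them.
    return any(all(f(*v) for f in formulas)
               for v in product([False, True], repeat=n_vars))

Q = lambda x, y: x or y          # background axiom
P = lambda x, y: x               # candidate axiom
notP = lambda x, y: not P(x, y)

# P is independent of Q: both Q+P and Q+(not P) have a model.
independent = satisfiable([Q, P]) and satisfiable([Q, notP])
print(independent)   # prints True
```

This mirrors the geometric example: Euclid's axioms play the role of Q, the parallel postulate the role of P, and the Euclidean and non-Euclidean geometries are the two consistent models.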

QNO25:- Differentiate between Risk pooling and Risk spreading.

ANS: - A risk pool is one of the forms of risk management mostly practiced by insurance companies.
Under this system, insurance companies come together to form a pool, which can provide protection to
insurance companies against catastrophic risks such as floods, earthquakes etc. The term is also used to
describe the pooling of similar risks that underlies the concept of insurance. While risk pooling is
necessary for insurance to work, not all risks can be effectively pooled. In particular, it is difficult to pool
dissimilar risks in a voluntary insurance market, unless there is a subsidy available to encourage
participation.

Risk pooling is an important concept in supply chain management. Risk pooling suggests that demand
variability is reduced if one aggregates demand across locations because as demand is aggregated across
different locations, it becomes more likely that high demand from one customer will be offset by low
demand from another. This reduction in variability allows a decrease in safety stock and therefore reduces
average inventory.

For example: in the centralized distribution system, the warehouse serves all customers, which leads to a
reduction in variability measured by either the standard deviation or the coefficient of variation.

The three critical points to risk pooling are:

1. Centralized inventory saves safety stock and average inventory in the system.
2. When demands from markets are negatively correlated, the higher the coefficient of variation, the
greater the benefit obtained from centralized systems; that is, the greater the benefit from risk
pooling.


3. The benefits from risk pooling depend directly on relative market behavior. This is explained
as follows: if we compare two markets and demand in both tends to be above (or below) the
average at the same time, we say that the demands from the two markets are positively correlated.
The benefits derived from risk pooling decrease as the correlation between demands from
the two markets becomes more positive.
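The three points above can be illustrated with the standard square-root effect for independent demands (Python; all figures are invented, and `z` is an assumed service-level factor):

```python
import math

n = 4                 # number of locations being pooled
sigma = 20.0          # per-location demand standard deviation (independent demands)
z = 2.0               # service-level factor (two standard deviations of cover)

# Decentralized: each location holds its own safety stock.
decentralized_safety_stock = n * z * sigma            # 4 * 2 * 20 = 160
# Centralized: the std of aggregated demand grows only with sqrt(n).
pooled_sigma = math.sqrt(n) * sigma                   # sqrt(4) * 20 = 40
centralized_safety_stock = z * pooled_sigma           # 2 * 40 = 80

print(decentralized_safety_stock, centralized_safety_stock)   # prints 160.0 80.0
```

With positively correlated demands the pooled standard deviation would shrink less than sqrt(n), which is point 3 above.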

In government

Intergovernmental risk pools (IRPs) operate under the same general principle, except that they are made
up of public entities, such as government agencies. Thus, IRPs provide alternative risk financing and
transfer mechanisms to their members, through which particular types of risk are underwritten with
contributions (premiums), with losses and expenses shared in agreed ratios. In other words,
Intergovernmental Risk Pools are a cooperative group of governmental entities joining together to finance
an exposure, liability or risk.

Intergovernmental risk pools may include, but are not limited to, authorities, joint power authorities,
associations, agencies, trusts, and other risk pools.

Spread risk is risk (usually market risk or earnings risk) due to exposure to some spread. It often arises
with a long-short position or with derivatives. A synonym for spread risk is basis risk.

Suppose a bank lends at prime and finances itself at Libor. It faces spread risk due to the possibility that
the prime-Libor spread might narrow. A bond trader might hedge a long position in corporate bonds by
shorting Treasury bonds. The hedge eliminates exposure to changes in Treasury yields, but the trader
remains exposed to changes in the spread between corporate and Treasury yields. He too is taking spread
risk. See the article Interest Rate Risk for more on basis risk in fixed income markets.

If futures are used to hedge a long or short position in an underlier, residual risk will remain due to the
spread between the futures price and the underlier's spot price. That spread is called the futures' basis.

QNO26:- What is adverse selection problem? Point out the insight you gain from its formulation.
Does market signaling alleviate the problem of adverse selection? Give reason in support of your
answer.

ANS: - Adverse selection, anti-selection, or negative selection is a term used in economics, insurance,
risk management, and statistics. It refers to a market process in which undesired results occur when
buyers and sellers have asymmetric information (access to different information); the "bad" products or
services are more likely to be selected. For example, a bank that sets one price for all of its chequing
account customers runs the risk of being adversely selected against by its low-balance, high-activity (and
hence least profitable) customers. Two ways to model adverse selection are to employ signaling games
and screening games.

Insurance

The term adverse selection was originally used in insurance. It describes a situation wherein an
individual's demand for insurance (the propensity to buy insurance and/or the quantity purchased) is
positively correlated with the individual's risk of loss (e.g. higher risks buy more insurance), and the
insurer is unable to allow for this correlation in the price of insurance. This may be because of private
information known only to the individual (information asymmetry), or because of regulations or social
norms which prevent the insurer from using certain categories of known information to set prices (for
example, the insurer may be prohibited from using such information as gender, ethnic origin, genetic test
results, or preexisting medical conditions, the last of which amount to a 100% risk of the losses associated
with the treatment of that condition). The latter scenario is sometimes referred to as "regulatory adverse
selection."

The potentially adverse nature of this phenomenon can be illustrated by the link between smoking status
and mortality. Non-smokers, on average, are more likely to live longer, while smokers, on average, are
more likely to die younger. If insurers do not vary prices for life insurance according to smoking status,
life insurance will be a better buy for smokers than for non-smokers. So smokers may be more likely to
buy insurance, or may tend to buy larger amounts, than non-smokers, thereby raising the average
mortality of the combined policyholder group above that of the general population. From the insurer's
viewpoint, the higher mortality of the group which selects to buy insurance is adverse. The insurer raises
the price of insurance accordingly, and as a consequence, non-smokers may be less likely to buy
insurance (or may buy smaller amounts) than they would buy at a lower price reflective of their lower
risk. The reduction in insurance purchases by non-smokers is also adverse from the insurer's viewpoint,
and perhaps also from a public policy viewpoint.

Furthermore, if there is a range of increasing risk categories in the population, the increase in the
insurance price because of adverse selection may lead the lowest remaining risks to cancel or not renew
their insurance. This promotes a further increase in price, and hence the lowest remaining risks cancel
their insurance, leading to a further price increase, and so on. Eventually this "adverse selection spiral"
might, in theory, lead to the collapse of the insurance market.
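The unraveling described above can be sketched as a tiny simulation (Python; the risk levels and the stay/leave rule are invented for illustration): the insurer charges the average expected loss of the current pool, the lowest remaining risks drop out, the premium rises, and the spiral repeats until the pool stabilizes or collapses.

```python
risks = [10.0, 20.0, 30.0, 40.0, 50.0]   # expected annual loss per customer
pool = list(risks)
tolerance = 12.0   # a customer stays only if premium - own risk <= tolerance

while pool:
    premium = sum(pool) / len(pool)                       # pooled average price
    stayers = [r for r in pool if premium - r <= tolerance]
    if stayers == pool:                                   # pool has stabilized
        break
    pool = stayers                                        # lowest risks exit

print(pool, premium)   # prints [30.0, 40.0, 50.0] 40.0
```

Here the two lowest-risk customers exit in successive rounds before the spiral stops; with a tighter tolerance the pool would unravel completely.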

To counter the effects of adverse selection, insurers (to the extent that laws permit) ask a range of
questions and may request medical or other reports on individuals who apply to buy insurance so that the
price quoted can be varied accordingly, and any unreasonably high or unpredictable risks rejected. This
risk selection process is known as underwriting. In many countries, insurance law incorporates an "utmost
good faith" or uberrima fides doctrine, which requires potential customers to answer any underwriting
questions asked by the insurer fully and honestly; if they fail to do this, the insurer may later refuse to pay
claims.

Whilst adverse selection in theory seems an obvious and inevitable consequence of economic incentives,
empirical evidence is mixed. Several studies investigating correlations between risk and insurance
purchase have failed to show the predicted positive correlation for life insurance, auto insurance, and
health insurance. On the other hand, "positive" test results for adverse selection have been reported in
health insurance, long-term care insurance and annuity markets. These "positive" results tend to be based
on demonstrating more subtle relationships between risk and purchasing behavior (such as between
mortality and whether the customer chooses a life annuity which is fixed or inflation-linked), rather than
simple correlations of risk and quantity purchased.

One reason why adverse selection may be muted in practice may be that insurers' underwriting is largely
effective. Another possible reason is the negative correlation between risk aversion (such as the
willingness to purchase insurance) and risk level (estimated ex ante based on observation of the ex post
occurrence rate of observed claims) in the population: if risk aversion is higher amongst lower risk
customers, such that persons less likely to engage in risk-increasing behavior are more likely to engage in
risk-decreasing behavior (to take affirmative steps to reduce risk), adverse selection can be reduced or
even reversed, leading to "propitious" or "advantageous" selection.

For example, there is evidence that smokers are more willing to do risky jobs than non-smokers, and this
greater willingness to accept risk might reduce insurance purchase by smokers. From a public policy
viewpoint, some adverse selection can also be advantageous because it may lead to a higher fraction of
total losses for the whole population being covered by insurance than if there were no adverse selection.

In studies of health insurance, an individual mandate that requires people to either purchase plans or face
a penalty is cited as a way out of the adverse selection problem by broadening the risk pool. Mandates,
like all insurance, increase moral hazard.

QNO27:- Write a short note on Hotelling's lemma.

ANS: - Hotelling's lemma is a result in microeconomics that relates the supply of a good to the profit of
the good's producer. It was first shown by Harold Hotelling, and is widely used in the theory of the firm.
The lemma is very simple, and can be stated:

Let y(p) be a firm's net supply function in terms of a certain good's price (p). Then:

y(p) = dπ(p)/dp

for the profit function π of the firm in terms of the good's price, assuming that p > 0 and that the
derivative exists.

The proof stems from the fact that, for a profit-maximizing firm producing output y at cost c(y), the
maximum of the firm's profit is π(p) = p·y*(p) − c(y*(p)), attained at the output y*(p) where the first-order
condition p = c'(y*) holds. Differentiating with respect to p gives dπ/dp = y* + (p − c'(y*))·dy*/dp = y*(p);
QED.

The proof is also a corollary of the envelope theorem.
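The lemma can be verified numerically on a toy firm (Python; the quadratic cost function c(y) = y²/2 and the grid search are illustrative assumptions, not from the text). For this cost function the optimal supply is y*(p) = p, and the finite-difference derivative of the maximized profit recovers it:

```python
def profit(p):
    # Brute-force maximization of p*y - c(y) with c(y) = y**2 / 2,
    # over a fine grid of candidate outputs.
    ys = [i / 1000 for i in range(10001)]
    return max(p * y - y**2 / 2 for y in ys)

p, h = 3.0, 1e-3
supply = p                                        # y*(p) = p for this cost function
dprofit_dp = (profit(p + h) - profit(p - h)) / (2 * h)
print(abs(dprofit_dp - supply) < 1e-3)            # prints True: dπ/dp = y*(p)
```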

QNO28:- Explain how Shephard's Lemma can be used to derive the production function from the
cost function.
ANS: - In economics, a production function is a function that specifies the output of a firm, an industry, or an entire
economy for all combinations of inputs. This function is an assumed technological relationship, based on
the current state of engineering knowledge; it does not represent the result of economic choices, but rather
is an externally given entity that influences economic decision-making. Almost all economic theories
presuppose a production function, either on the firm level or the aggregate level. In this sense, the
production function is one of the key concepts of mainstream neoclassical theories. Some non-mainstream
economists, however, reject the very concept of an aggregate production function. An example is
Q = A * L^a * K^b * Kf^(1-a-b), where Kf is foreign investment.

In micro-economics, a production function is a function that specifies the output of a firm for all
combinations of inputs. A meta-production function (sometimes metaproduction function) compares the
practice of the existing entities converting inputs into output to determine the most efficient practice
production function of the existing entities, whether the most efficient feasible practice production or the
most efficient actual practice production. In either case, the maximum output of a technologically-
determined production process is a mathematical function of one or more inputs. Put another way, given
the set of all technically feasible combinations of output and inputs, only the combinations encompassing
a maximum output for a specified set of inputs would constitute the production function. Alternatively, a
production function can be defined as the specification of the minimum input requirements needed to
produce designated quantities of output, given available technology. It is usually presumed that unique
production functions can be constructed for every production technology.

By assuming that the maximum output technologically possible from a given set of inputs is achieved,
economists using a production function in analysis are abstracting from the engineering and managerial
problems inherently associated with a particular production process. The engineering and managerial
problems of technical efficiency are assumed to be solved, so that analysis can focus on the problems of
allocative efficiency. The firm is assumed to be making allocative choices concerning how much of each
input factor to use and how much output to produce, given the cost (purchase price) of each factor, the
selling price of the output, and the technological determinants represented by the production function. A
decision frame in which one or more inputs are held constant may be used; for example, (physical) capital
may be assumed to be fixed (constant) in the short run, and labour and possibly other inputs such as raw
materials variable, while in the long run, the quantities of both capital and the other factors that may be
chosen by the firm are variable. In the long run, the firm may even have a choice of technologies,
represented by various possible production functions.

The relationship of output to inputs is non-monetary; that is, a production function relates physical inputs
to physical outputs, and prices and costs are not reflected in the function. But the production function is
not a full model of the production process: it deliberately abstracts from inherent aspects of physical
production processes that some would argue are essential, including error, entropy or waste. Moreover,
production functions do not ordinarily model the business processes, either, ignoring the role of
management. (For a primer on the fundamental elements of microeconomic production theory, see
production theory basics).

The primary purpose of the production function is to address allocative efficiency in the use of factor
inputs in production and the resulting distribution of income to those factors. Under certain assumptions,
the production function can be used to derive a marginal product for each factor, which implies an ideal
division of the income generated from output into an income due to each input factor of production.

Specifying the production function

A production function can be expressed in a functional form as the right side of

Q = f(X_1, X_2, X_3, ..., X_n)

where:
Q = quantity of output
X_1, X_2, X_3, ..., X_n = quantities of factor inputs (such as capital, labour, land or raw
materials).

If Q is not a matrix (i.e. a scalar, a vector, or even a diagonal matrix), then this form does not encompass
joint production, which is a production process that has multiple co-products. On the other hand, if f maps
from Rn to Rk then it is a joint production function expressing the determination of k different types of
output based on the joint usage of the specified quantities of the n inputs.

One formulation, unlikely to be relevant in practice, is as a linear function:

Q = a + b_1*X_1 + b_2*X_2 + ... + b_n*X_n

where a and b_1, b_2, ..., b_n are parameters that are determined empirically.

Another is as a Cobb-Douglas production function:

Q = a * X_1^b_1 * X_2^b_2 * ... * X_n^b_n

The Leontief production function applies to situations in which inputs must be used in fixed proportions;
starting from those proportions, if usage of one input is increased without another being increased, output
will not change. This production function is given by

Q = min(X_1/a_1, X_2/a_2, ..., X_n/a_n)

Other forms include the constant elasticity of substitution production function (CES), which is a
generalized form of the Cobb-Douglas function, and the quadratic production function. The best form of
the equation to use and the values of the parameters (a, b_1, ..., b_n) vary from company to company and
industry to industry. In a short run production function at least one of the X's (inputs) is fixed. In the
long run all factor inputs are variable at the discretion of management.
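The three functional forms just named can be evaluated side by side (Python; all parameter values are made up for illustration):

```python
def linear(x1, x2, a=10, b1=2, b2=3):
    # Linear form: Q = a + b1*x1 + b2*x2
    return a + b1 * x1 + b2 * x2

def cobb_douglas(x1, x2, A=1.0, b1=0.5, b2=0.5):
    # Cobb-Douglas form: Q = A * x1^b1 * x2^b2
    return A * x1**b1 * x2**b2

def leontief(x1, x2, a1=2.0, a2=3.0):
    # Leontief form: inputs used in the fixed proportions a1 : a2
    return min(x1 / a1, x2 / a2)

print(linear(4, 9))        # prints 45  (10 + 8 + 27)
print(cobb_douglas(4, 9))  # prints 6.0 (sqrt(4*9))
print(leontief(4, 9))      # prints 2.0 (min(4/2, 9/3))
```

Note the Leontief output would not rise if x2 alone were increased, illustrating the fixed-proportions property described above.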

QNO29:- Differentiate between monopoly and monopsony.

ANS: - Ideal market conditions are not existent everywhere and there are situations where the market is
skewed either towards buyers or towards sellers. Monopoly refers to a market condition where there
is only one producer in a particular industry and the consumers really have no option but to buy his
products or service. This is an ideal condition for the player as he can dictate the terms and set the prices
on his whim. The opposite condition is monopsony, where there are many sellers but a single buyer, which
is also an imperfect market condition. It is obvious that neither monopoly nor monopsony is ideal for
consumers. There are some similarities between monopoly and monopsony, but there are differences as well,
which will be talked about in this article.

Both monopoly and monopsony are conditions that are normally not found in an economy. These are
situations that are not desirable for people as they give a free hand to one party which establishes
hegemony in the market. Take for example electricity distribution in a country under the control of
government. As consumers have no option but to use the services provided by the government, this is a
perfect example of a monopoly: the government can fix the prices of electricity on a whim (there is no
competition) and consumers have to put up with the services even if they are of poor quality and not at all
satisfactory.

On the other hand, consider a poor country with many illiterate, unemployed people. If these people are
working as laborers but have only a single buyer of their services, this is considered a monopsony. People
are forced to work at rates decided by the monopsonist and they also have to bear the terms and
conditions set by him. There are industries where there are several suppliers but only a single buyer. One
perfect example is defense equipment, where there are many companies making the equipment but
they eventually have to sell to the government, which is the only buyer.

In brief:


Monopoly vs Monopsony

Monopoly and monopsony are imperfect market conditions that are just the opposite of each other.

While in monopoly there is one manufacturer or service provider controlling the industry, in
monopsony there are several producers but a single buyer.

Both are bad for people, as they allow hegemony of the producer in monopoly and of the buyer in
monopsony.

Monopsony is commonly seen in the labor market, where there are many laborers but only one buyer of
their services.

QNO30:- Differentiate between External economies and diseconomies.

ANS:- The concepts of external economies and diseconomies (externalities) treat the subject of how the
costs and benefits that constrain and motivate a decision maker in a particular activity may deviate from
the costs or benefits that activity creates for a larger organization. Most of the economic literature on
externalities has focused on the operation of an entire economic system, with particular reference to the
effectiveness of prices, markets, competition, and profit motivation as regulators of production and
consumption.

Economic theory suggests that a system characterized by private ownership of resources and sufficient
competition will maximize total income and economic welfare. The system will establish an equilibrium
in which product prices equal their costs on their respective margins of production. Costs include an
opportunity rate of return on invested capital, which is an element of business accounting profit, and the
rewards, or rent, that especially endowed resources may command. Production costs also reflect
technological constraints, and producers employ the least costly method of producing any given output. A
further characteristic of the equilibrium is that similar resources, including capital, obtain equal earnings
or returns in all activities. If earnings were unequal, resources would enter more profitable activities and
leave less lucrative ones until earnings equality comes about. The resulting allocation of resources is also
consistent with consumers' preferences. Finally, consumers' demands, through their influence on market
prices and hence profits, determine the allocation of resources.

The system works in such a way that the wide diffusion of decision making which is necessary if complex
systems are to operate at all is permitted. Each decision maker only needs to have knowledge about the
things he consumes, or produces, or his occupation. That individuals can so narrow their focus permits a
division of labor and, in turn, the resulting gains of specialization. The vital mechanism (and social
institution) that facilitates such specialization is the price system, or market organization. The price
system is an information system that provides producers and consumers with the signals that guide their
behavior. Hence, the economic system is highly interdependent: the combined behavior of individual
decision makers spontaneously determines relative prices and quantities of items produced and consumed,
while relative prices are the signals, constraints, and opportunities to which individual decision makers
respond and adapt.

Such a general equilibrium system has two specific qualities: (1) Production costs of each item, on its
respective margin of production, when viewed in a social cost sense, equal the price of each item. (2) The
price of each end product accurately reflects the incremental satisfaction that consumers attach to it.
These two qualities constitute a social optimum in that national income and economic welfare are
maximized [see, however, WELFARE ECONOMICS]. Note that it is only optimal if the marginal social
costs of each activity equal the social benefits they create. If the social cost of an activity exceeds the
costs relevant to the decision makers in the activity, there is an external diseconomy. If the benefits of an
activity exceed its marginal cost, there is an external economy.

Due to the extreme interdependence within an economy, the behavior of a given industry can increase the
cost of other industries in ways which need not be socially undesirable. Some of these phenomena, too,
have been associated with the subject of external economies and diseconomies. One of the difficulties in
the evaluation of externalities is the problem of determining which are socially desirable or undesirable
and should be promoted or counteracted by public policy measures and which do not warrant government
interference with the private sector.

The subject of external economies and diseconomies thus treats possible mechanical shortcomings of an
economy that cause individual decision makers to operate in a fashion that thwarts the full attainment of
broad social objectives. To some students the possible wide extent of externalities is sufficient basis to
justify extensive government intervention in the private sector of the economy. To other students this
point is debatable. The resolution of these differences has been, and remains, a major unsettled issue in
economics.

QNO31:- What do you mean by a social welfare function? If you assume that such a function exists,
what properties of social optima would be considered by you? Discuss such properties.

ANS: - In economics, a social welfare function is a real-valued function that ranks conceivable social
states (alternative complete descriptions of the society) from lowest to highest. Inputs of the function
include any variables considered to affect the economic welfare of a society (Sen, 1970, p. 33). In using
welfare measures of persons in the society as inputs, the social welfare function is individualistic in form.
One use of a social welfare function is to represent prospective patterns of collective choice as to
alternative social states.

The social welfare function is analogous to an indifference-curve map for an individual, except that the
social welfare function is a mapping of individual preferences or judgments of everyone in the society as
to collective choices, which apply to all, whatever individual preferences are. One point of a social
welfare function is to determine how close the analogy is to an ordinal utility function for an individual
with at least minimal restrictions suggested by welfare economics. Kenneth Arrow proved a more basic
point for a set of seemingly reasonable conditions.

In a 1938 article Abram Bergson introduced the social welfare function. The object was "to state in
precise form the value judgments required for the derivation of the conditions of maximum economic
welfare" set out by earlier writers, including Marshall and Pigou, Pareto and Barone, and Lerner. The
function was real-valued and differentiable. It was specified to describe the society as a whole.
Arguments of the function included the quantities of different commodities produced and consumed and
of resources used in producing different commodities, including labor.

Necessary general conditions are that at the maximum value of the function:

The marginal "dollar's worth" of welfare is equal for each individual and for each commodity
The marginal "diswelfare" of each "dollar's worth" of labor is equal for each commodity
produced of each labor supplier
The marginal "dollar" cost of each unit of resources is equal to the marginal value productivity
for each commodity.


Bergson showed how welfare economics could describe a standard of economic efficiency despite
dispensing with interpersonally-comparable cardinal utility, the hypothesization of which may merely
conceal value judgments, and purely subjective ones at that.

Kenneth Arrow (1963) generalizes the analysis. Along earlier lines, his version of a social welfare
function, also called a 'constitution', maps a set of individual orderings (ordinal utility functions) for
everyone in the society to a social ordering, a rule for ranking alternative social states (say passing an
enforceable law or not, ceteris paribus). Arrow finds that nothing of behavioral significance is lost by
dropping the requirement of social orderings that are real-valued (and thus cardinal) in favor of orderings,
which are merely complete and transitive, such as a standard indifference-curve map. The earlier analysis
mapped any set of individual orderings to one social ordering, whatever it was. This social ordering
selected the top-ranked feasible alternative from the economic environment as to resource constraints.
Arrow proposed to examine mapping different sets of individual orderings to possibly different social
orderings. Here the social ordering would depend on the set of individual orderings, rather than being
imposed (invariant to them). Stunningly (relative to a course of theory from Adam Smith and Jeremy
Bentham on), Arrow proved the General Possibility Theorem that it is impossible to have a social welfare
function that satisfies a certain set of "apparently reasonable" conditions.

Cardinal social welfare functions

In the above contexts, a social welfare function provides a kind of social preference based on only
individual utility functions, whereas in others it includes cardinal measures of social welfare not
aggregated from individual utility functions. Examples of such measures are life expectancy and per
capita income for the society. The rest of this article adopts the latter definition.

The form of the social welfare function is intended to express a statement of the objectives of a society. For
example, take this simple additive social welfare function:

W = Y1 + Y2 + ... + Yn

where W is social welfare and Yi is the income of individual i among the n individuals in the society. In this
case, maximising the social welfare function means maximising the total income of the people in the society,
without regard to how incomes are distributed in society. Alternatively, consider the Max-Min utility
function (based on the philosophical work of John Rawls):

W = min(Y1, Y2, ..., Yn)

Here, the social welfare of society is taken to be related to the income of the poorest person in the society,
and maximizing welfare would mean maximizing the income of the poorest person without regard for the
incomes of the others.

These two social welfare functions express very different views about how a society would need to be
organized in order to maximize welfare, with the first emphasizing total incomes and the second
emphasizing the needs of the poorest. The max-min welfare function can be seen as reflecting an extreme
form of uncertainty aversion on the part of society as a whole, since it is concerned only with the worst
conditions that a member of society could face.
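As a rough sketch (the income profiles below are invented purely for illustration), the two objectives can be compared directly in a few lines of Python:

```python
# Two candidate social welfare functions applied to income profiles.
# W_util = total income; W_rawls = income of the poorest member.

def utilitarian_welfare(incomes):
    """W = sum of incomes: total income, distribution ignored."""
    return sum(incomes)

def maximin_welfare(incomes):
    """W = min of incomes: welfare of the worst-off member (Rawls)."""
    return min(incomes)

equal   = [30, 30, 30]   # equal distribution
unequal = [5, 10, 120]   # larger total, very unequal

# The utilitarian SWF prefers the unequal profile (larger total) ...
assert utilitarian_welfare(unequal) > utilitarian_welfare(equal)
# ... while the max-min SWF prefers the equal one (richer poorest member).
assert maximin_welfare(equal) > maximin_welfare(unequal)

print(utilitarian_welfare(equal), maximin_welfare(equal))      # 90 30
print(utilitarian_welfare(unequal), maximin_welfare(unequal))  # 135 5
```

The two functions rank the same pair of distributions in opposite orders, which is the point of the contrast drawn above.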

Amartya Sen proposed a welfare function in 1973:

W = Ybar * (1 - G)


The average per capita income Ybar of a measured group (e.g. a nation) is multiplied by (1 - G), where G
is the Gini index, a relative inequality measure. James E. Foster (1996) proposed to use one of Atkinson's
indexes, which is an entropy measure. Due to the relation between Atkinson's entropy measure and the
Theil index, Foster's welfare function also can be computed directly using the Theil-L index.

The value yielded by this function has a concrete meaning. Consider the several possible incomes that
could be earned by a person randomly selected from a population with an unequal distribution of
incomes. This welfare function marks the income that such a randomly selected person is most likely to
have. Like the median, this income will be smaller than the average per capita income.

Where the Theil-T index is applied instead, the inverse value yielded by the function also has a concrete
meaning. Consider the several possible incomes to which a Euro, randomly picked from the sum of all
unequally distributed incomes, may belong. This welfare function marks the income to which a randomly
selected Euro most likely belongs. The inverse value of that function will be larger than the average per
capita income.

The literature on the Theil index provides further information about how this index is used in order to
compute welfare functions.
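Sen's measure, average income discounted by inequality (W = Ybar * (1 - G)), can be sketched numerically. The pairwise-difference formula for the Gini index is standard; the income figures are invented for illustration:

```python
def gini(incomes):
    """Gini index: mean absolute difference over all ordered pairs,
    divided by twice the mean income."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
    return diff_sum / (2 * n * n * mean)

def sen_welfare(incomes):
    """Sen (1973): W = average income * (1 - Gini index)."""
    mean = sum(incomes) / len(incomes)
    return mean * (1 - gini(incomes))

equal   = [40, 40, 40]
unequal = [10, 30, 80]

assert gini(equal) == 0                          # perfect equality: G = 0
assert sen_welfare(equal) == 40                  # W equals mean income
assert sen_welfare(unequal) < sum(unequal) / 3   # inequality discounts W
```

With perfect equality the welfare measure coincides with per capita income; any inequality pulls it below the mean, as the text describes.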

QNO32:- What is the theory of Second Best?

ANS: - In welfare economics, the theory of the second best concerns what happens when one or more
optimality conditions cannot be satisfied. Canadian economist Richard Lipsey and Australian economist
Kelvin Lancaster showed in a 1956 paper that if one optimality condition in an economic model cannot be
satisfied, it is possible that the next-best solution involves changing other variables away from the ones
that are usually assumed to be optimal.

This means that in an economy with some uncorrectable market failure in one sector, actions to correct
market failures in another related sector with the intent of increasing overall economic efficiency may
actually decrease it. In theory, at least, it may be better to let two market imperfections cancel each other
out rather than making an effort to fix either one. Thus, it may be optimal for the government to intervene
in a way that is contrary to usual policy. This suggests that economists need to study the details of the
situation before jumping to the theory-based conclusion that an improvement in market perfection in one
area implies a global improvement in efficiency.

Even though the theory of the second best was developed for the Walrasian general equilibrium system, it
also applies to partial equilibrium cases. For example, consider a mining monopoly that's also a polluter:
mining leads to tailings being dumped in the river and deadly dust in the workers' lungs. Suppose in
addition that there is nothing at all that can be done about the pollution. However, the government is able
to break up the monopoly.


The problem here is that increasing competition in this market is likely to increase production (since
competitors have such a hard time restricting production compared to a monopoly). Because pollution is
highly associated with production, pollution will most likely increase. Thus, it is not clear that eliminating
the monopoly increases efficiency. Gains from trade in coal will have been eliminated, but externalities
from pollution will have increased.
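The monopoly-plus-pollution story can be made concrete with a hedged partial-equilibrium sketch; the linear demand curve and all numbers below are invented for illustration, not taken from the text:

```python
# Second-best sketch: with an uncorrectable pollution externality,
# breaking up the monopoly can lower welfare.
# Inverse demand P = A - Q, private marginal cost C, damage D per unit.

A, C, D = 100.0, 20.0, 50.0

def welfare(q):
    """Consumer + producer surplus minus pollution damage at output q."""
    gross_surplus = A * q - q * q / 2      # area under the demand curve
    return gross_surplus - C * q - D * q

q_monopoly    = (A - C) / 2      # MR = MC:  A - 2Q = C
q_competitive = A - C            # P  = MC:  A -  Q = C
q_optimum     = A - C - D        # P  = social MC:  A - Q = C + D

# Here the monopoly output (40) is closer to the social optimum (30)
# than the competitive output (80), so "fixing" the monopoly hurts.
assert welfare(q_monopoly) > welfare(q_competitive)
print(q_monopoly, q_competitive, q_optimum)          # 40.0 80.0 30.0
print(welfare(q_monopoly), welfare(q_competitive))   # 400.0 -800.0
```

With damage this large, the monopolist's output restriction accidentally approximates the social optimum, and eliminating the monopoly reduces welfare, exactly the possibility the theory of the second best warns about.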

Gradual international economic integration may be considered a second-best solution, since it provides
increasing degrees of trade liberalization according to the stages of economic integration (the gradual
abolition of customs tariffs and of non-tariff barriers such as registration rights, owing to the coherence
policy of economic unions). The first-best option (free trade) is achieved, in terms of gains from trade,
only when economic integration reaches the stage of political union (as in the EU in 2009).

QNO33:- Write short note on Production possibility frontier.

ANS:- In economics, a production-possibility frontier (PPF), sometimes called a production-possibility
curve, production-possibility boundary or product transformation curve, is a graph that compares the
production rates of two commodities that use the same fixed total of the factors of production.
Graphically bounding the production set, the PPF curve shows the maximum specified production level of
one commodity that results given the production level of the other. By doing so, it defines productive
efficiency in the context of that production set. A period of time is specified as well as the production
technologies. The commodity compared can either be a good or a service.

PPFs are normally drawn as bulging upwards ("concave") from the origin but can also be represented as
bulging downward or linear (straight), depending on a number of factors. A PPF can be used to represent
a number of economic concepts, such as scarcity of resources (i.e., the fundamental economic problem all
societies face), opportunity cost (or marginal rate of transformation), productive efficiency, allocative
efficiency, and economies of scale. In addition, an outward shift of the PPF results from growth of the
availability of inputs such as physical capital or labour, or technological progress in our knowledge of
how to transform inputs into outputs. Such a shift allows economic growth of an economy already
operating at its full productivity (on the PPF), which means that more of both outputs can be produced
during the specified period of time without sacrificing the output of either good. Conversely, the PPF will
shift inward if the labor force shrinks, the supply of raw materials is depleted, or a natural disaster
decreases the stock of physical capital. However, most economic contractions reflect not that less can be
produced, but that the economy has started operating below the frontiertypically both labor and
physical capital are underemployed. The combination represented by the point on the PPF where an
economy operates shows the priorities or choices of the economy, such as the choice between producing
more capital goods and fewer consumer goods, or vice versa.
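A minimal sketch of a concave PPF, assuming (purely for illustration) a quarter-circle frontier, shows the increasing opportunity cost that the bowed-out shape implies:

```python
import math

# A concave (bowed-out) PPF sketched as the quarter circle x^2 + y^2 = 100.
# The functional form and numbers are illustrative assumptions only.

def ppf(x):
    """Maximum output of good y given output x of the other good."""
    return math.sqrt(100.0 - x * x)

def opportunity_cost(x):
    """Units of y forgone per extra unit of x on the frontier: -dy/dx = x / y."""
    return x / ppf(x)

# Moving along the frontier, each extra unit of x costs more y:
costs = [opportunity_cost(x) for x in (2.0, 5.0, 8.0)]
assert costs[0] < costs[1] < costs[2]     # increasing opportunity cost

# An interior point lies below the frontier: both goods could rise.
assert ppf(5.0) > 6.0      # at x = 5 the frontier allows y of about 8.66
```

The rising ratio x/y is the marginal rate of transformation mentioned above; a linear PPF would instead give a constant opportunity cost at every point.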

QNO34:- Differentiate between Basing point prices and limit price.

ANS: - Base point pricing is the system of firms setting prices of their goods based on a base cost plus
transportation costs to a given market. Although some consider this a form of collusion between the
selling firms (it lowers the ability of buying firms to gain a competitive advantage by location or private
transportation), it is common practice in the steel and automotive industries. It allows firms to collude by
simply agreeing on a base price.

Types

1. Point Pricing (-5 to +5 range)


2. Rebate Pricing (-5 to +5 range)


3. Bond Pricing (+95 to +105 range)

A pricing approach that involves designating a particular geographic location as a basing point and then
charging customers a freight cost from that location to the location of the customer. Alternatively, a pricing
method in which customers are charged the freight cost from a base point; the base point may be chosen
arbitrarily, but the location of one of the company's manufacturing plants is commonly used.

A limit price is the price set by a monopolist to discourage entry into a market, and is illegal in many
countries. The limit price is the price that a potential entrant would face upon entering as long as the
incumbent firm did not decrease output. The limit price is often lower than the average cost of production
or just low enough to make entering not profitable. Such a pricing strategy is called limit pricing.

The quantity produced by the incumbent firm to act as a deterrent to entry is usually larger than would be
optimal for a monopolist, but might still produce higher economic profits than would be earned under
perfect competition.

The problem with limit pricing as strategic behavior is that once the entrant has entered the market, the
quantity used as a threat to deter entry is no longer the incumbent firm's best response. This means that
for limit pricing to be an effective deterrent to entry, the threat must in some way be made credible. A
way to achieve this is for the incumbent firm to constrain itself to produce a certain quantity whether
entry occurs or not. An example of this would be if the firm signed a union contract to employ a certain
(high) level of labor for a long period of time. Another example is to build excess production capacity as a
commitment device.

Simple Example

In a simple case, suppose industry demand for good X at market price P is given by a downward-sloping
demand function Q = D(P).

Suppose there are two potential producers of good X, Firm A and Firm B. Firm A has no fixed costs and
constant marginal cost equal to c_A. Firm B also has no fixed costs, and has constant marginal cost
equal to c_B, where c_B > c_A (so that Firm B's marginal cost is greater than Firm A's).

Suppose Firm A acts as a monopolist. The profit-maximizing monopoly price charged by Firm A is then P_M.

Since Firm B will never sell below its marginal cost, as long as c_B >= P_M, Firm B will not enter the
market when Firm A charges P_M. That is, the market for good X is an effective monopoly if:

c_B >= P_M

Suppose, on the contrary, that:

c_B < P_M


In this case, if Firm A charges P_M, Firm B has an incentive to enter the market, since it can sell a
positive quantity of good X at a price above its marginal cost, and therefore make positive profits. In
order to prevent Firm B from having an incentive to enter the market, Firm A must set its price no greater
than c_B. To maximize its profits subject to this constraint, Firm A sets price P = c_B (the limit price).
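The example above leaves the demand function general; assuming (hypothetically) linear demand Q = 100 - P and invented costs, the limit-pricing logic can be checked numerically:

```python
# Limit-pricing sketch under the assumed demand Q = 100 - P.
# Costs c_A, c_B are invented; firm B is the higher-cost entrant.

def monopoly_price(c):
    """Profit-maximizing price for MC = c with demand Q = 100 - P.
    max (P - c)(100 - P)  =>  P = (100 + c) / 2."""
    return (100 + c) / 2

c_A, c_B = 20.0, 40.0
p_m = monopoly_price(c_A)      # 60.0

# c_B < p_m, so charging the monopoly price would invite entry;
# firm A instead charges the limit price P = c_B.
assert c_B < p_m
limit_price = c_B

profit_at = lambda p, c: (p - c) * (100 - p)
assert profit_at(p_m, c_A) > profit_at(limit_price, c_A)  # deterrence is costly
print(p_m, limit_price, profit_at(limit_price, c_A))      # 60.0 40.0 1200.0
```

Note that the limit price still leaves firm A with positive profit (1200 instead of the unconstrained monopoly profit of 1600): deterring entry is costly but may beat sharing the market.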

QNO35:- Describe the role of reaction functions in Cournot's model of duopoly.


ANS: - One model of duopoly is the strategic game in which

the players are the firms


the actions of each firm are the set of possible outputs (any nonnegative amount)
the payoff of each firm is its profit.

(The name of Cournot, who wrote in the early 19th century, is associated with this model, though his
analysis is a little different from the modern one.)

This game models a situation in which each firm chooses its output independently, and the market
determines the price at which it is sold. Specifically, if firm 1 produces the output y1 and firm 2 produces
the output y2 then the price at which each unit of output is sold is P(y1 + y2), where P is the inverse
demand function.

Denote firm 1's total cost function by TC1(y) and firm 2's by TC2(y). Then firm 1's total revenue when the
pair of outputs chosen by the firms is (y1, y2) is P(y1 + y2)y1, so that its profit is

P(y1 + y2)y1 - TC1(y1);

firm 2's revenue is P(y1 + y2)y2, and hence its profit is

P(y1 + y2)y2 - TC2(y2).

Notice an essential difference between these specifications of the firms' revenues and those for a
competitive firm or for a monopolist. The revenue of both a competitive firm and of a monopolist
depends only on the firm's own output: for a competitive firm we assume that the firm's output does not
affect the price, and for a monopolist there are no other firms in the market. For a duopolist, however,
revenue depends on both its own output and the other firm's output.

The solution we apply to this game is that of Nash equilibrium. To think about the Nash equilibria, first
consider the nature of the firms' best response functions.

The firms' best response functions


Firm 1's best response function gives, for each possible output of firm 2, the profit-maximizing output of
firm 1. Firm 1's profit-maximizing output when firm 2's output is y2 is the output y1 that maximizes
firm 1's profit; that is, the value of y1 that maximizes

P(y1 + y2)y1 - TC1(y1).


Differentiating with respect to y1 (treating y2 as a constant), we conclude that the profit-maximizing


output y1 satisfies

P'(y1 + y2)y1 + P(y1 + y2) - MC1(y1) = 0.

We'd like to know the shape of firm 1's best response function---i.e. we'd like to know how the value of y1
that satisfies this condition depends on y2.

Consider a case in which firm 1's average cost function takes the "typical" U shape. First suppose that
y2 = 0. Then firm 1's problem is the same as that of a monopolist. Its best output satisfies the condition
MR = MC1, as illustrated in the left panel of the following figure. The corresponding point on firm 1's
best response function is shown in the right panel: when y2 = 0, firm 1's best output is b1(0).

Now increase y2. Firm 2 now absorbs some of the demand, and less is left over for firm 1: the demand
curve firm 1 faces is shifted to the left by the amount y2, as in the left panel of the following figure.
Firm 1's best output satisfies the condition that its marginal revenue, given the part of the demand
function that it faces, is equal to its marginal cost. This optimal output is indicated as b1(y2) in the left
panel of the figure; the corresponding point on firm 1's best response function is shown in the right panel.


As firm 2's output increases, there comes a point where there is no positive output at which firm 1 can
make a profit. The critical point is shown in the left panel of the following figure. In this case, the most
profit firm 1 can earn by producing a positive output is 0: the AR curve it faces is tangent to its AC curve.
The corresponding point on firm 1's best response function is shown in the right panel.

For larger outputs, firm 1's optimal output is zero, as shown in the following figure.


Firm 1's whole best response function is shown in the following figure. The way to read this figure is to
take a point on the vertical axis---a value of y2---and go across to the graph, then down to the horizontal
axis; the value of y1 on this axis is firm 1's optimal output given y2.

If firm 2's cost function is the same as firm 1's, then its best response function is symmetric with firm 1's,
as shown in the following figure.


Whenever a firm's average cost functions is U-shaped, its best response function has a "jump" in it, for the
same reason that a competitive firm's supply function has a "jump" in it: the firm either wants to produce
outputs close to its efficient scale of production or it wants to produce an output of zero, but it does not
want to produce intermediate outputs (for which the average cost is high).

A firm's best output does not necessarily decrease as its rival's output increases. Such a relationship seems
likely, though it is possible that for some increases in its rival's output, a firm wants to produce more
output, not less.

Nash equilibrium
To find a Nash equilibrium, we need to put together the two best response functions. Any pair (y1, y2) of
outputs at which they intersect has the property that

y1 = b1(y2) and y2 = b2(y1)

and hence is a Nash equilibrium.

The best response functions are superimposed in the following figure.


We see that for this pair of best response functions there is a unique Nash equilibrium, indicated by the
small purple disk. (In general, there may be more than one Nash equilibrium.)
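As a numerical sketch of the equilibrium just described, assume (for illustration only) linear inverse demand P(Q) = a - b*Q and constant marginal costs; iterating the two best response functions then converges to the unique Nash equilibrium:

```python
# Best-response iteration for a Cournot duopoly with linear inverse
# demand P(Q) = a - b*Q and constant marginal costs (numbers invented).

a, b = 120.0, 1.0
c1, c2 = 30.0, 30.0

def br(rival_output, own_cost):
    """Profit-maximizing output given the rival's output.
    FOC: a - 2b*y - b*rival - c = 0  =>  y = (a - c - b*rival) / (2b)."""
    return max(0.0, (a - own_cost - b * rival_output) / (2 * b))

# Iterate the two best-response maps to their joint fixed point.
y1 = y2 = 0.0
for _ in range(200):
    y1, y2 = br(y2, c1), br(y1, c2)

# Closed form for the symmetric case: y_i = (a - c) / (3b) = 30.
assert abs(y1 - 30.0) < 1e-6 and abs(y2 - 30.0) < 1e-6

price = a - b * (y1 + y2)        # 60: price stays above MC = 30
competitive_q = (a - c1) / b     # 90: Nash total output (60) falls short of it
assert price > c1 and y1 + y2 < competitive_q
```

The final two assertions preview the comparison below: in the Cournot equilibrium price exceeds marginal cost, and total output is smaller than the competitive output.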


Comparison with competitive equilibrium


In a Nash equilibrium, each firm's output maximizes its profit given the output of the other firm. As we
saw above, this implies that for a Nash equilibrium (y1*, y2*), firm 1's output y1* satisfies

P'(y1* + y2*)y1* + P(y1* + y2*) = MC1(y1*),

and firm 2's output y2* satisfies

P'(y1* + y2*)y2* + P(y1* + y2*) = MC2(y2*).

In particular, unless P'(y1* + y2*) = 0 (the demand curve is horizontal) the price P(y1* + y2*) is not equal
to either firm's marginal cost at the output the firm produces.

We conclude that the firms' outputs and the price are different in a Nash equilibrium than they are in a
competitive equilibrium. If P'(y1* + y2*) < 0, as we should expect (the demand curve slopes down), price
exceeds marginal cost, so that, as for a monopoly, the total output produced by the firms is less than the
competitive output.

An implication is that, as for a monopoly, the Nash equilibrium outcome in a Cournot duopoly is not
Pareto efficient.

Comparison with monopoly equilibrium


Let (y1*, y2*) be a Nash equilibrium, and consider the pairs (y1, y2) of outputs that yield firm 1 the same
profit as it obtains in the equilibrium. The set of such pairs is known as an isoprofit curve of firm 1.

In the equilibrium, firm 1's profit is maximal, given firm 2's output y2*. Further, for smaller outputs of
firm 2, firm 1's maximal profit is higher (when firm 2 produces less, more of the market is left over for
firm 1). In fact, for any given output y2 < y2* of firm 2, there is a range of outputs close to y1* for which


firm 1's profit exceeds its equilibrium profit. Thus firm 1's isoprofit curve corresponding to the profit it
makes in an equilibrium has the shape of the red curve in the following figure.

The pink shaded area in this figure is the set of pairs (y1, y2) of outputs that yield firm 1 more profit than
does the equilibrium (y1*, y2*). (Firm 1 is better off, given output y1, the lower is firm 2's output---since
as firm 2's output decreases, the price increases.)

Now consider the analogous isoprofit curve for firm 2: the set of all pairs (y1, y2) of outputs that yield
firm 2 the same profit as it obtains in the equilibrium. This curve is shown in the following figure.

If we put the two curves in the same figure we obtain the following figure.


The lens-shaped area shaded brown is the set of pairs (y1, y2) of outputs for which both firms' profits are
higher than they are in equilibrium. So long as the isoprofit curves are smooth, this area always exists.
That is:

The pair of Nash equilibrium outputs for the firms in Cournot's model does not maximize the firms' total
profit. In particular, the total output of the firms in a Nash equilibrium is different from the monopoly
output.

QNO36:- How does the Stone-Geary utility function help in bringing together consumer demand and
the expenditure relation to analyze the optimal behaviour of the consumer?
ANS:-

The Stone-Geary utility function takes the form

U = prod_i (q_i - gamma_i)^(beta_i)

where U is utility, q_i is consumption of good i, and the beta_i and gamma_i are parameters.

For gamma_i = 0, the Stone-Geary function reduces to the generalised Cobb-Douglas function.

The Stone-Geary utility function gives rise to the Linear Expenditure System, in which the demand
function for good i equals

q_i = gamma_i + (beta_i / p_i) * (Y - sum_j gamma_j * p_j)

where Y is total expenditure and p_i is the price of good i.


The Stone-Geary utility function was first derived by Roy C. Geary in a comment on earlier work by
Lawrence Klein and Herman Rubin. Richard Stone was the first to estimate the Linear Expenditure
System.
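The Linear Expenditure System can be sketched directly; the prices, subsistence quantities gamma_i, and marginal budget shares beta_i below are invented for illustration (the beta_i are normalized to sum to one):

```python
# Linear Expenditure System implied by Stone-Geary utility
# U = prod_i (q_i - gamma_i)^beta_i. All parameter values are invented.

def les_demand(prices, income, gamma, beta):
    """q_i = gamma_i + (beta_i / p_i) * supernumerary income."""
    committed = sum(p * g for p, g in zip(prices, gamma))
    supernumerary = income - committed          # income left after subsistence
    return [g + b * supernumerary / p
            for p, g, b in zip(prices, gamma, beta)]

prices = [2.0, 4.0]
gamma  = [3.0, 1.0]         # subsistence quantities
beta   = [0.6, 0.4]         # marginal budget shares (sum to 1)
income = 50.0

q = les_demand(prices, income, gamma, beta)

# Demands exceed subsistence and exactly exhaust the budget.
assert all(qi > gi for qi, gi in zip(q, gamma))
assert abs(sum(p * qi for p, qi in zip(prices, q)) - income) < 1e-9
print(q)   # [15.0, 5.0]
```

Each consumer first buys the subsistence bundle gamma and then splits the remaining "supernumerary" income across goods in the fixed proportions beta, which is what makes expenditure linear in income and prices.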

QNO37:- Write short note on envelope theorem.

ANS: - The envelope theorem is a theorem about optimization problems (max & min) in
microeconomics. It may be used to prove Hotelling's lemma, Shephard's lemma, and Roy's identity. It
also allows for easier computation of comparative statics in generalized economic models.

The theorem exists in two versions, a regular version (unconstrained optimization) and a generalized
version (constrained optimization). The regular version can be obtained from the general version because
unconstrained optimization is just the special case of constrained optimization with no constraints (or
with constraints that are always satisfied, i.e. constraints that are identities, such as x = x).

The theorem gets its name from the fact that it shows that a less constrained maximization (or
minimization) problem (where some parameters are turned into variables) is the upper (or lower for min)
envelope of the original problem. For example, see cost minimization, and compare the long-run (less
constrained) and short-run (more constrained some factors of production are fixed) minimization
problems.

For the theorem to hold, the functions being dealt with must have certain well-behaved properties.
Specifically, the correspondence mapping parameter values to optimal choices must be differentiable,
with it being single-valued (and hence a function) a necessary but not sufficient condition.

The theorem is described below. Note that bold face represents a vector.

Envelope theorem

A curve in a two-dimensional space is best represented by parametric equations like x(c) and y(c). The
family of curves can be represented in the form g(x, y, c) = 0, where c is the parameter. Generally,
the envelope theorem involves one parameter, but there can be more than one parameter involved as well.

The envelope of a family of curves g(x,y,c) = 0 is a curve such that at each point on the curve there is
some member of the family that touches that particular point tangentially. This forms a curve or surface
that is tangential to every curve in the family of curves forming an envelope.

Consider an arbitrary maximization (or minimization) problem where the objective function f(x, a)
depends on some parameters a:

V(a) = max_x f(x, a)

The function V(a) is the problem's optimal-value function: it gives the maximized (or minimized)
value of the objective function as a function of its parameters a.


Let x*(a) be the (arg max) value of x, expressed in terms of the parameters, that solves the optimization
problem, so that V(a) = f(x*(a), a). The envelope theorem tells us how V(a) changes as a
parameter a_k changes, namely:

dV(a)/da_k = ∂f(x, a)/∂a_k evaluated at x = x*(a)

That is, the derivative of V with respect to a_k is given by the partial derivative of f(x, a) with
respect to a_k, holding x fixed, and then evaluating at the optimal choice x = x*(a).
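The regular version can be verified numerically for a simple invented objective, f(x, a) = a*x - x^2, chosen only because its maximizer is easy to compute in closed form:

```python
# Numeric check of the envelope theorem for f(x, a) = a*x - x^2.

def f(x, a):
    return a * x - x * x

def x_star(a):
    """arg max_x f(x, a): FOC a - 2x = 0, so x* = a / 2."""
    return a / 2.0

def V(a):
    """Optimal-value function V(a) = f(x*(a), a) = a^2 / 4."""
    return f(x_star(a), a)

a, h = 3.0, 1e-6
dV_da = (V(a + h) - V(a - h)) / (2 * h)   # total derivative of V (central diff)
df_da = x_star(a)                          # partial of f w.r.t. a, x held at x*

# Envelope theorem: dV/da equals the partial derivative at the optimum.
assert abs(dV_da - df_da) < 1e-6
print(dV_da, df_da)   # both approx 1.5
```

The indirect effect through the optimal choice x*(a) vanishes because the first-order condition holds there, which is exactly why only the direct partial derivative survives.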

General envelope theorem

There also exists a version of the theorem, called the general envelope theorem, used in constrained
optimization problems which relates the partial derivatives of the optimal-value function to the partial
derivatives of the Lagrangian function.

We are considering the following optimization problem in formulating the theorem (max may be replaced
by min, and all results still hold):

V(a) = max_x f(x, a)  subject to  g(x, a) = 0

which gives the Lagrangian function:

L(x, λ, a) = f(x, a) + λ · g(x, a)

where λ · g(x, a) is the dot product of the vector of Lagrange multipliers λ with the vector of
constraints g(x, a).

Then the general envelope theorem is:

dV(a)/da_k = ∂L(x, λ, a)/∂a_k evaluated at (x, λ) = (x*(a), λ*(a))

Note that the Lagrange multipliers are treated as constants during differentiation of the Lagrangian
function, then their values as functions of the parameters are substituted in afterwards.

Envelope theorem in generalized calculus


In the calculus of variations, the envelope theorem relates evolutes to single paths. This was first proved
by Jean Gaston Darboux and Ernst Zermelo (1894) and Adolf Kneser (1898). The theorem can be stated
as follows:

"When a single-parameter family of extremal paths from a fixed point O has an envelope, the integral
from the fixed point to any point A on the envelope equals the integral from the fixed point to any second
point B on the envelope plus the integral along the envelope to the first point on the envelope, JOA = JOB +
JBA."

QNO38:- What is Walras's law? How do you determine multimarket equilibrium with its help?

ANS:- Walras' Law is a principle in general equilibrium theory asserting that budget constraints imply
that the values of excess market demands (or, conversely, excess market supplies) must sum to zero. That
is:

p1·z1(p) + p2·z2(p) + ... + pn·zn(p) = 0

where pi is the price of good i and zi(p) is the excess demand for good i at the price vector p.

Walras' Law is named for the economist Léon Walras, who taught at the University of Lausanne,
although the concept was expressed earlier, in a less mathematically rigorous fashion, by John Stuart
Mill in his Essays on Some Unsettled Questions of Political Economy (1844). Walras noted the
mathematically equivalent proposition that when considering any particular market, if all other markets in
an economy are in equilibrium, then that specific market must also be in equilibrium. The term "Walras'
Law" was coined by Oskar Lange to distinguish it from Say's Law. Some economic theorists also use the
term to refer to the weaker proposition that the total value of excess demand cannot exceed the total
value of excess supply.

Definitions

A market for a particular commodity is in equilibrium if, at the current prices of all commodities,
the quantity of the commodity demanded by potential buyers equals the quantity supplied by
potential sellers. For example, suppose the current market price of cherries is $1 per pound. If all
cherry farmers summed together are willing to sell a total of 500 pounds of cherries per week at
$1 per pound, and if all potential customers summed together are willing to buy 500 pounds of
cherries in total per week when faced with a price of $1 per pound, then the market for cherries is
in equilibrium because neither shortages nor surpluses of cherries exist.

An economy is in general equilibrium if every market in the economy is in equilibrium. Not only
must the market for cherries clear, but so too must all markets for all commodities (apples,
automobiles, etc.) and for all resources (labor and economic capital) and for all financial assets,
including stocks, bonds, and money.

'Excess demand' refers to a situation in which a market is not in equilibrium at a specific price
because the number of units of an item demanded exceeds the quantity of that item supplied at
that specific price. Excess demand yields an economic shortage. A negative excess demand is
synonymous with an excess supply, in which case there will be an economic surplus of the good
or resource. 'Excess demand' may be used more generally to refer to the algebraic value of
quantity demanded minus quantity supplied, whether positive or negative.


Walras' Law

Walras' Law implies that the sum of the values of excess demands across all markets must equal zero,
whether or not the economy is in a general equilibrium. This implies that if positive excess demand exists
in one market, negative excess demand must exist in some other market. Thus, if all markets but one are
in equilibrium, then that last market must also be in equilibrium.

This last implication is often applied in formal general equilibrium models. In particular, to characterize
general equilibrium in a model with m agents and n commodities, a modeler may impose market clearing
for n - 1 commodities and "drop the n-th market-clearing condition." In this case, the modeler should
include the budget constraints of all m agents (with equality). Imposing the budget constraints for all m
agents ensures that Walras' Law holds, rendering the n-th market-clearing condition redundant.

In the farmer example, suppose that the only commodities in the economy are cherries and apples, and
that no other markets exist. If excess demand for cherries is zero, then by Walras' Law, excess demand for
apples is also zero. If there is excess demand for cherries, then there will be a surplus (excess supply, or
negative excess demand) for apples; and the market value of the excess demand for cherries will equal the
market value of the excess supply of apples.

Walras' Law is ensured if every agent's budget constraint holds with equality. An agent's budget
constraint is an equation stating that the total market value of the agent's planned expenditures, including
saving for future consumption, must be less than or equal to the total market value of the agent's expected
revenue, including sales of financial assets such as bonds or money. When an agent's budget constraint
holds with equality, the agent neither plans to acquire goods for free (e.g., by stealing), nor does the agent
plan to give away any goods for free. If every agent's budget constraint holds with equality, then the total
market value of all agents' planned outlays for all commodities (including saving, which represents future
purchases) must equal the total market value of all agents' planned sales of all commodities and assets. It
follows that the total market value of excess demand in the economy must be zero, which is
the statement of Walras' Law. Walras' Law implies that if there are n markets and n-1 of these are in
equilibrium then the last market must also be in equilibrium, a property which is essential in the proof of
the existence of equilibrium.
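The claim that the values of excess demands sum to zero at any prices, not just at equilibrium prices, can be checked in a small exchange-economy sketch; the Cobb-Douglas preferences and endowments below are invented for illustration:

```python
# Walras' Law in a two-good exchange economy with Cobb-Douglas consumers.
# Each consumer spends share alpha of wealth on good 1 (budget holds
# with equality, which is what drives the result).

consumers = [
    {"alpha": 0.3, "endowment": (4.0, 2.0)},
    {"alpha": 0.7, "endowment": (1.0, 6.0)},
]

def excess_demand(p1, p2):
    """z_i = total demand for good i minus total endowment of good i."""
    z1 = z2 = 0.0
    for c in consumers:
        e1, e2 = c["endowment"]
        wealth = p1 * e1 + p2 * e2
        z1 += c["alpha"] * wealth / p1 - e1
        z2 += (1 - c["alpha"]) * wealth / p2 - e2
    return z1, z2

# At ANY positive prices the value of excess demands sums to zero,
# even far from equilibrium.
for p1, p2 in [(1.0, 1.0), (2.0, 5.0), (7.0, 0.5)]:
    z1, z2 = excess_demand(p1, p2)
    assert abs(p1 * z1 + p2 * z2) < 1e-9
    # If one market has excess demand, the other has excess supply.
    if abs(z1) > 1e-12:
        assert z1 * z2 < 0
```

Because every budget constraint holds with equality, p1*z1 + p2*z2 vanishes identically; equilibrium in one of the two markets therefore forces equilibrium in the other, which is the "drop one market-clearing condition" device described above.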

Implications

Labor market

Neoclassical macroeconomic reasoning concludes that because of Walras' Law, if all markets for goods
are in equilibrium, the market for labor must also be in equilibrium. Thus, by neoclassical reasoning,
Walras' Law contradicts the Keynesian conclusion that negative excess demand and consequently,
involuntary unemployment, may exist in the labor market, even when all markets for goods are in
equilibrium. The Keynesian rebuttal is that this neoclassical perspective ignores financial markets, which
may experience excess demand (such as a Keynesian liquidity trap) that permits an excess supply of labor
and consequently, temporary involuntary unemployment, even if markets for goods are in equilibrium.

QNO39:- What are the conditions of Pareto optimality? Derive these. Point out the main weakness
of Paretian analysis.

ANS: - Pareto efficiency, or Pareto optimality, is a concept in economics with applications in engineering.
The term is named after Vilfredo Pareto (1848-1923), an Italian economist who used the concept in his


studies of economic efficiency and income distribution. In a Pareto efficient economic allocation, no one
can be made better off without making at least one individual worse off. Given an initial allocation of
goods among a set of individuals, a change to a different allocation that makes at least one individual
better off without making any other individual worse off is called a Pareto improvement. An allocation is
defined as "Pareto efficient" or "Pareto optimal" when no further Pareto improvements can be made.

Pareto efficiency is a minimal notion of efficiency and does not necessarily result in a socially desirable
distribution of resources: it makes no statement about equality, or the overall well-being of a society. The
notion of Pareto efficiency can also be applied to the selection of alternatives in engineering and similar
fields. Each option is first assessed under multiple criteria and then a subset of options is identified with
the property that no other option can categorically outperform any of its members.

Pareto efficiency in short

This example of a Production-possibility frontier provides a simple example for illustrating Pareto
efficiency. Suppose that there are two agents in an economy, one that only values guns and one that only
values butter. Point A is not Pareto efficient because it is possible to produce more of either one or both
goods (Butter and Guns) without producing less of the other. Thus, moving from A to D enables you to
make one person better off without making anyone else worse off (Pareto improvement). Moving to point
B from point A, however, is not a Pareto improvement, as fewer guns are produced. Likewise, moving to
point C from point A is not a Pareto improvement, as less butter is produced. Any point on the frontier
curve is Pareto efficient.

It is commonly accepted that outcomes that are not Pareto efficient are to be avoided, and therefore Pareto
efficiency is an important criterion for evaluating economic systems and public policies. If economic
allocation in any system is not Pareto efficient, there is potential for a Pareto improvement, an increase
in Pareto efficiency: through reallocation, improvements can be made to at least one participant's well-
being without reducing any other participant's well-being. It is important to note, however, that a change
from an inefficient allocation to an efficient one is not necessarily a Pareto improvement. Thus, in
practice, ensuring that nobody is disadvantaged by a change aimed at achieving Pareto efficiency may
require compensation of one or more parties. For instance, if a change in economic policy eliminates a
monopoly and that market subsequently becomes competitive and more efficient, the monopolist will be
made worse off. However, the loss to the monopolist will be more than offset by the gain in efficiency.
This means the monopolist can be compensated for its loss while still leaving a net gain for others in the
economy, a Pareto improvement. In real-world practice, such compensations can have unintended
consequences. They can lead to incentive distortions over time as agents anticipate such compensations
and change their actions accordingly. Under certain idealized conditions, it can be shown that a system of
free markets will lead to a Pareto efficient outcome. This is called the first welfare theorem. It was first
demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu. However, the result
only holds under the restrictive assumptions necessary for the proof (markets exist for all possible goods
so there are no externalities, all markets are in full equilibrium, markets are perfectly competitive,
transaction costs are negligible, and market participants have perfect information). In the absence of
perfect information or complete markets, outcomes will generically be Pareto inefficient, per the
Greenwald–Stiglitz theorem.

Weak Pareto efficiency

A "weak Pareto optimum" (WPO) is an allocation for which there are no possible alternative allocations
whose realization would cause every individual to gain. Thus an alternative allocation is considered to be
a Pareto improvement only if the alternative allocation is strictly preferred by all individuals. When
contrasted with weak Pareto efficiency, a standard Pareto optimum as described above may be referred to
as a "strong Pareto optimum" (SPO).

Weak Pareto-optimality is "weaker" than strong Pareto-optimality in the sense that the conditions for
WPO status are "weaker" than those for SPO status: any SPO also qualifies as a WPO, but a WPO
allocation is not necessarily an SPO.
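The contrast between the two notions can be checked mechanically. The sketch below uses hypothetical utility profiles (not taken from the text) and tests a candidate allocation against both definitions:

```python
# Distinguish weak and strong Pareto optimality by brute-force enumeration.
# Each allocation is summarized by a tuple of utilities, one per individual.
# (The utility numbers are hypothetical, chosen only for illustration.)

allocations = {
    "x": (1, 0),  # candidate allocation
    "y": (1, 1),  # makes person 2 better off, person 1 no worse off
    "z": (0, 2),  # makes person 2 better off but person 1 worse off
}

def is_weak_pareto_optimal(name, allocs):
    """No alternative makes EVERY individual strictly better off."""
    u = allocs[name]
    return not any(all(v > w for v, w in zip(alt, u))
                   for other, alt in allocs.items() if other != name)

def is_strong_pareto_optimal(name, allocs):
    """No alternative makes someone better off and no one worse off."""
    u = allocs[name]
    return not any(all(v >= w for v, w in zip(alt, u)) and alt != u
                   for other, alt in allocs.items() if other != name)

# "x" is a WPO (no alternative helps both people) but not an SPO,
# because "y" is a Pareto improvement over it.
print(is_weak_pareto_optimal("x", allocations))    # True
print(is_strong_pareto_optimal("x", allocations))  # False
print(is_strong_pareto_optimal("y", allocations))  # True
```

As the definitions require, the SPO allocation "y" also passes the weaker test, while "x" passes only the weak one.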

Use in engineering

Example of a Pareto frontier: the boxed points represent feasible choices, and smaller values are
preferred to larger ones. Point C is not on the Pareto frontier because it is dominated by both point A and
point B. Points A and B are not strictly dominated by any other point, and hence do lie on the frontier.

The notion of Pareto efficiency is also useful in engineering. Given a set of choices and a way of valuing
them, the Pareto frontier or Pareto set or Pareto front is the set of choices that are Pareto efficient. By
restricting attention to the set of choices that are Pareto-efficient, a designer can make tradeoffs within
this set, rather than considering the full range of every parameter. The Pareto frontier is defined formally
as follows. Consider a design space with n real parameters (corresponding to the allocation of goods in
the economics interpretation), and for each design space point there are m different criteria by which to
judge that point (corresponding to the utility of the different agents in the economics interpretation). Let
f : R^n → R^m be the function which assigns, to each design space point x, a criteria space point f(x).
This represents the way of valuing the designs. Now, it may be that some designs are infeasible; so let X
be a set of feasible designs in R^n, which must be a compact set. Then the set which represents the
feasible criterion points is f(X), the image of the set X under the action of f. Call this image Y. Now
construct the Pareto frontier as a subset of Y, the feasible criterion points. It is often assumed in
engineering that the preferable values of each criterion parameter are the lesser ones (e.g. lower emissions
or lower cost), so minimizing each dimension of the criterion vector is desired. Then compare criterion
vectors as follows: one criterion vector y strictly dominates (or "is preferred to") a vector y* if each
parameter of y is not strictly greater than the corresponding parameter of y* and at least one parameter is
strictly less: that is, y_i ≤ y*_i for each i and y_i < y*_i for some i. This is written as y ≺ y* to
mean that y strictly dominates y*. Then the Pareto frontier is the set of points of Y that are not strictly
dominated by any other point in Y. Formally, this defines a partial order on Y, namely the product order
on R^m (more precisely, the induced order on Y as a subset of R^m), and the Pareto frontier is the set of
maximal elements with respect to this order. Algorithms for computing the Pareto frontier of a finite set
of alternatives have been studied in computer science, sometimes referred to as the maximum vector
problem or the skyline query (Kung, H. T.; Luccio, F.; Preparata, F. P. (1975), "On finding the maxima of a
set of vectors", Journal of the ACM 22(4): 469–476, doi:10.1145/321906.321910; Godfrey, Parke;
Shipley, Ryan; Gryz, Jarek (2006), "Algorithms and Analyses for Maximal Vector Computation", VLDB
Journal 16: 5–28, doi:10.1007/s00778-006-0029-7).
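The maximum vector computation described above can be sketched in a few lines. Assuming, as in the engineering convention just stated, that smaller values are preferred in every criterion, a point lies on the frontier exactly when no other point strictly dominates it. The design data are hypothetical:

```python
# Pareto frontier of a finite set of criterion vectors (minimization in
# every dimension): keep the points not strictly dominated by any other.

def dominates(y, z):
    """y strictly dominates z: y_i <= z_i for all i and y_i < z_i for some i."""
    return all(a <= b for a, b in zip(y, z)) and any(a < b for a, b in zip(y, z))

def pareto_frontier(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (emissions, cost) design points; lower is better in both.
designs = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_frontier(designs))  # [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dropped because (2, 3) dominates it, and (5, 5) because (1, 5) does; this quadratic-time scan is the naive baseline that the cited maximum vector algorithms improve on.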

QNO40:- How do you formulate an indirect utility function? Discuss the information it gives in
contrast to utility function.

ANS: - In economics, a consumer's indirect utility function v(p, w) gives the consumer's maximal
utility when faced with a price vector p and an amount of income w. It represents the consumer's
preferences over market conditions.

This function is called indirect because consumers usually think about their preferences in terms of what
they consume rather than prices. A consumer's indirect utility v(p, w) can be computed from its utility
function u(x) by first computing the most preferred bundle x(p, w), by solving the utility maximization
problem, and second, computing the utility u(x(p, w)) that the consumer derives from that bundle. The
indirect utility function for consumers is analogous to the profit function for firms.

Formally, the indirect utility function v(p, w) is:

continuous on R^n_++ × R_+;
decreasing in prices;
strictly increasing in income;
homogeneous of degree zero in prices and income: if prices and income are all multiplied by a
given constant, the same bundle of consumption represents a maximum, so optimal utility does
not change;
quasi-convex in (p, w).
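These properties can be illustrated with a Cobb-Douglas consumer, a standard textbook functional form used here only as an example; the parameter a and the price-income values are assumptions, not figures from the text:

```python
# Indirect utility for a Cobb-Douglas consumer (an illustrative textbook
# form): u(x1, x2) = x1**a * x2**(1 - a), with 0 < a < 1.
a = 0.3

def v(p1, p2, w):
    # Maximal utility at prices (p1, p2) and income w; the demands are
    # x1 = a*w/p1 and x2 = (1-a)*w/p2, substituted back into u.
    return (a * w / p1) ** a * ((1 - a) * w / p2) ** (1 - a)

base = v(2.0, 5.0, 100.0)
# Homogeneity of degree zero: scaling all prices and income by t = 3
# leaves the maximal utility unchanged.
scaled = v(2.0 * 3, 5.0 * 3, 100.0 * 3)
print(abs(base - scaled) < 1e-9)  # True
# Strictly increasing in income; decreasing in prices.
print(v(2.0, 5.0, 110.0) > base)  # True
print(v(2.5, 5.0, 100.0) < base)  # True
```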

Moreover,

Roy's identity: if v(p, w) is differentiable at (p0, w0) and ∂v(p0, w0)/∂w ≠ 0, then

x_i(p0, w0) = −(∂v(p0, w0)/∂p_i) / (∂v(p0, w0)/∂w),  i = 1, …, n.
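Roy's identity can be verified numerically. The sketch below assumes a Cobb-Douglas indirect utility function (an illustrative functional form, not one given above) and approximates the partial derivatives by central finite differences:

```python
# Numerical check of Roy's identity, x_i(p, w) = -(dv/dp_i) / (dv/dw),
# for an assumed Cobb-Douglas indirect utility function.
a = 0.3

def v(p1, p2, w):
    # v(p, w) = (a*w/p1)**a * ((1-a)*w/p2)**(1-a)
    return (a * w / p1) ** a * ((1 - a) * w / p2) ** (1 - a)

def deriv(f, x, h=1e-6):
    # Central finite-difference derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

p1, p2, w = 2.0, 5.0, 100.0
dv_dp1 = deriv(lambda p: v(p, p2, w), p1)
dv_dw = deriv(lambda m: v(p1, p2, m), w)
x1_roy = -dv_dp1 / dv_dw
x1_direct = a * w / p1  # the known Cobb-Douglas Marshallian demand
print(abs(x1_roy - x1_direct) < 1e-4)  # True
```

The ratio of derivatives recovers the demand x1 = a·w/p1 = 15 units, as the identity requires.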

QNO41:- Write a short note on Consumer's surplus.

ANS:- Consumer surplus is the difference between the maximum price a consumer is willing to pay and
the actual price they do pay. If a consumer would be willing to pay more than the current asking price,
then they are getting more benefit from the purchased product than they spent to buy it. An example of a
good with generally high consumer surplus is drinking water. People would pay very high prices for
drinking water, as they need it to survive. The difference in the price that they would pay, if they had to,
and the amount that they pay now is their consumer surplus. Note that the utility of the first few liters of
drinking water is very high (as it prevents death), so the first few liters would likely have more consumer
surplus than subsequent liters.

The maximum amount a consumer would be willing to pay for a given quantity of a good is the sum of
the maximum price he would be willing to pay for the first unit, the (lower) maximum price he would be
willing to pay for the second unit, etc. Typically these prices are decreasing; they are given by the
individual demand curve. For a given price the consumer buys the amount for which the consumer
surplus is highest, where consumer surplus is the sum, over all units, of the excess of the maximum
willingness to pay over the equilibrium (market) price. The consumer's surplus is highest at the largest
number of units for which, even for the last unit, the maximum willingness to pay is not below the market
price.

The aggregate consumers' surplus is the sum of the consumer's surplus for all individual consumers. It
can be represented graphically as the area under the market demand curve and above the market price in a
diagram of the market demand and supply curves.

Calculation from supply and demand

The consumer surplus (individual or aggregated) is the area under the (individual or aggregated) demand
curve and above a horizontal line at the actual price (in the aggregated case: the equilibrium price). If the
demand curve is a straight line, the consumer surplus is the area of a triangle:

CS = (1/2) × Qmkt × (Pmax − Pmkt),

where Pmkt is the equilibrium price (where supply equals demand), Qmkt is the total quantity purchased at
the equilibrium price and Pmax is the price at which the quantity purchased would fall to 0 (that is, where
the demand curve intercepts the price axis). For more general demand and supply functions, these areas
are not triangles but can still be found using integral calculus. Consumer surplus is thus the definite
integral of the demand function with respect to price, from the market price to the maximum reservation
price (i.e. the price-intercept of the demand function):

CS = ∫ (from Pmkt to Pmax) D(P) dP,

where D(P) is the demand function, giving quantity demanded as a function of price.

It follows that a rise in the equilibrium price accompanied by a fall in the equilibrium quantity reduces
consumer surplus.
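The triangle formula and the integral can be checked against each other on a hypothetical linear demand curve; the demand parameters below are assumptions chosen for illustration:

```python
# Consumer surplus for a linear demand curve, computed two ways:
# as the triangle area and as the integral of demand over price.
# Hypothetical demand: Q = D(P) = 100 - 2P, so Pmax = 50 (where Q falls to 0).

def D(P):
    return 100.0 - 2.0 * P

P_mkt = 20.0
Q_mkt = D(P_mkt)          # 60 units purchased at the market price
P_max = 50.0              # price-intercept of the demand curve

cs_triangle = 0.5 * Q_mkt * (P_max - P_mkt)   # (1/2) * base * height

# Numerical integral of D(P) dP from P_mkt to P_max (midpoint rule,
# which is exact for a linear integrand up to rounding).
n = 100000
h = (P_max - P_mkt) / n
cs_integral = sum(D(P_mkt + (i + 0.5) * h) for i in range(n)) * h

print(cs_triangle)                            # 900.0
print(abs(cs_integral - cs_triangle) < 1e-6)  # True
```

Both routes give a surplus of 900, confirming that the triangle is just the special case of the integral for straight-line demand.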

QNO42:- Differentiate between Short-run and Long-run cost functions.

ANS:- Long run and short run cost functions

In the long run, the firm can vary all its inputs. In the short run, some of these inputs are fixed. Since the
firm is constrained in the short run, and not constrained in the long run, the long run cost TC(y) of
producing any given output y is no greater than the short run cost STC(y) of producing that output:

TC(y) ≤ STC(y) for all y.

Now consider the case in which in the short run exactly one of the firm's inputs is fixed. For concreteness,
suppose that the firm uses two inputs, and the amount of input 2 is fixed at k. For many (but not all)
production functions, there is some level of output, say y0, such that the firm would choose to use k units
of input 2 to produce y0, even if it were free to choose any amount it wanted. In such a case, for this level
of output the short run total cost when the firm is constrained to use k units of input 2 is equal to the long
run total cost: STCk(y0) = TC(y0). We generally assume that for any level at which input 2 is fixed, there
is some level of output for which that amount of input 2 is appropriate, so that for any value of k,

TC(y) = STCk(y) for some y.

(There are production functions for which this relation is not true, however: see the example of a
production function in which the inputs are perfect substitutes.)
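The envelope relation TC(y) ≤ STCk(y), with equality at the output y0 for which the fixed amount k is the long-run choice, can be illustrated with a simple square-root production function; the technology and input prices below are assumptions, not data from the text:

```python
# Long-run vs short-run total cost for the technology y = sqrt(x1 * x2)
# with both input prices equal to 1 (an assumed parameterization).
# Long run: cost-minimizing inputs are x1 = x2 = y, so TC(y) = 2*y.
# Short run with input 2 fixed at k: x1 = y**2 / k, so STC_k(y) = y**2/k + k.

def TC(y):
    return 2.0 * y

def STC(y, k):
    return y ** 2 / k + k

k = 4.0
ys = [1.0, 2.0, 4.0, 6.0, 8.0]
# TC(y) <= STC_k(y) everywhere, with equality exactly at y0 = k, the
# output level for which k units of input 2 are the long-run choice.
print(all(TC(y) <= STC(y, k) + 1e-12 for y in ys))  # True
print(STC(4.0, k) == TC(4.0))                       # True
```

Since STC_k(y) − TC(y) = (y − k)²/k ≥ 0 here, the short-run curve touches the long-run curve only at y = k, which is the tangency the figures described below depict.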

For a total cost function with the typical shape, the following figure shows the relations between STC and
TC.

Examples of long run and short run cost functions

Long run and short run average cost functions


Given the relation between the short and long run total costs, the short and long run average and marginal
cost functions have the forms shown in the following figure.

Note:

The SMC (short-run marginal cost) curve passes through the minimum of the SAC (short-run average
cost) curve, and the LMC (long-run marginal cost) curve passes through the minimum of the LAC
(long-run average cost) curve.
When SAC = LAC we must have SMC = LMC (since the slopes of the total cost functions are the same
there).

In the case that the production function has constant returns to scale (CRTS), the LAC is horizontal, as in
the following figure.
