
Are You Managing Database Complexity …

Or Is It Managing You?

A Look At the Warning Signals of Complexity


And Ways to Break the Complexity Cycle

By Esley Gustafson
Technical Sales
dbDoctor, Inc.

dbDoctor, Inc.
10333 East Dry Creek Road, Suite 110
Englewood, CO 80112
Tel: 303.754.3200
Fax: 303.662.0425
www.dbDoctor.net

Managing Database Complexity

The last thing that a DBA or IT Manager needs to hear explained one more time is that Oracle
databases are complex. That complexity is inherent to the task at hand, and “comes with the territory”.

The operative question, though, is how can you tell whether you are effectively managing that
complexity, or whether it is managing you? In this white paper, we will provide both some warning
signals that complexity may be getting the upper hand, as well as some tips for getting out in front of
the complexity curve.

Top 10 Clues Your Database Issues are Creating Business Problems

Database problems can be very far-reaching and impact many areas of your business. Database issues
can be intertwined with other infrastructure or operating issues and can certainly manifest themselves
in many ways – often making them difficult to isolate and diagnose. Here are a few clues to help you
determine if database complexity may be impacting your business.

1. Do you hear about problems with your core applications after the fact?
Many times the end users of applications are the only alerting mechanism your IT staff has. An
application goes down or is suffering from slow performance, and the application user calls the system
or database administrator. The problem is that it is already too late to avoid the cost associated with
the application unavailability. Now, all eyes are on you and your administrators to find and fix the
problem, as the meter is running.
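
One way to make sure the first alert does not come from an end user is a scheduled availability probe. The sketch below is a minimal illustration in Python, assuming the python-oracledb driver; the connect string, credentials, threshold, and alert routine are placeholders to adapt to your own environment.

```python
# availability_probe.py -- minimal sketch of a scheduled database heartbeat.
# Assumes the python-oracledb driver; the connect string, credentials, threshold,
# and alert routine are placeholders to adapt to your environment.
import time
import oracledb

DSN = "dbhost/ORCLPDB1"          # hypothetical connect string
SLOW_RESPONSE_SECONDS = 2.0      # illustrative threshold, not a recommendation

def alert(message: str) -> None:
    # Placeholder: page the on-call DBA, post to a chat channel, open a ticket, etc.
    print("ALERT:", message)

def probe(user: str, password: str) -> None:
    start = time.monotonic()
    try:
        with oracledb.connect(user=user, password=password, dsn=DSN) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT 1 FROM dual")
                cur.fetchone()
    except oracledb.DatabaseError as exc:
        alert(f"Database unreachable: {exc}")
        return
    elapsed = time.monotonic() - start
    if elapsed > SLOW_RESPONSE_SECONDS:
        alert(f"Database responding slowly: {elapsed:.1f}s round trip")

if __name__ == "__main__":
    probe("monitor_user", "monitor_password")
```

Run every few minutes from a scheduler, a probe like this turns the first report of an outage into an automated page instead of a phone call from a frustrated user.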

2. Are you or your staff often paged in the middle of the night?
This can be one of the primary indicators that you and your staff are caught in a reactive management cycle. Breaking that cycle requires that you and your staff be able to anticipate problems before a threshold is crossed at 2:00 a.m.
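
As one illustration of anticipating a threshold rather than reacting to it, the following sketch (Python, again assuming python-oracledb and read access to the DBA_TABLESPACE_USAGE_METRICS view) reports tablespaces approaching full during business hours, well before a critical alert would fire; the 80% warning margin is an arbitrary example.

```python
# space_headroom.py -- sketch of a proactive tablespace-capacity check.
# Assumes python-oracledb and SELECT access to DBA_TABLESPACE_USAGE_METRICS;
# the 80% warning margin is an illustrative choice, not a recommendation.
import oracledb

WARN_PERCENT = 80.0

def tablespaces_near_full(user: str, password: str, dsn: str) -> list[tuple[str, float]]:
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT tablespace_name, used_percent
                  FROM dba_tablespace_usage_metrics
                 ORDER BY used_percent DESC
            """)
            return [(name, pct) for name, pct in cur if pct >= WARN_PERCENT]

if __name__ == "__main__":
    for name, pct in tablespaces_near_full("monitor_user", "monitor_password",
                                           "dbhost/ORCLPDB1"):
        print(f"WARNING: {name} is {pct:.1f}% used -- plan space now, not at 2:00 a.m.")
```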

3. Are your DBAs caught in a fire-drill cycle?


Many organizations suffer from a very predictable reactive cycle: not enough time leads to a lack of proactive planning and maintenance, which leads to fire drills, which in turn pull staff even further away from proactive work and other value-added projects. That spells lower performance and higher cost. When it leads to DBA
burnout or turnover, complexity is definitely getting the upper hand!

4. How often are you faced with inter/intra-departmental finger pointing?


The application group may blame the database, the database group blames the application, and the
software vendor blames you. Meanwhile your business is being affected. This finger pointing means
that your operation doesn’t have the necessary information to cut through the complex, interdependent system relationships in a crisp, timely manner.

5. Are your development projects slipping behind schedule?


Reactive management of systems often leads to DBAs being taken away from strategic projects to
tend to the day’s emergency. Since DBAs are vital to many IT projects, such as application or
database upgrades and implementations, project slippage is a key indicator that your operation is tied
up in a cycle of complexity and reactiveness.

6. Have you had problems with database upgrades or with change management?
Let’s face it – change only adds to the complexity of the database environment. Many times a database
will perform well in a test or development context, but once in production performance may
deteriorate over time. “Engineering in place” and other sorts of undocumented change further aggravate the situation. Sorting this out with traditional approaches is challenging at best.

7. Are you confident about your ability to recover after a database failure?
How do you know that you can recover? How fast? When is the last time that you had an architecture
audit? Is your recovery process complete and accurate? Knowing this before the fact can significantly
reduce the cost and complexity of a database failure.
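
Part of "knowing before the fact" can be automated. The sketch below, assuming python-oracledb and read access to V$RMAN_BACKUP_JOB_DETAILS, simply confirms that a successful backup has completed recently; the two-day freshness window is an illustrative assumption, and the check is no substitute for periodic restore tests or an architecture audit.

```python
# backup_freshness.py -- sketch: verify that a recent RMAN backup completed successfully.
# Assumes python-oracledb and SELECT access to V$RMAN_BACKUP_JOB_DETAILS;
# the two-day freshness window is an illustrative assumption.
import datetime
import oracledb

CHECK_SQL = """
    SELECT MAX(end_time)
      FROM v$rman_backup_job_details
     WHERE status = 'COMPLETED'
"""

def last_good_backup(user: str, password: str, dsn: str):
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(CHECK_SQL)
            (end_time,) = cur.fetchone()
            return end_time  # None if no completed backup is recorded

if __name__ == "__main__":
    end_time = last_good_backup("monitor_user", "monitor_password", "dbhost/ORCLPDB1")
    if end_time is None or datetime.datetime.now() - end_time > datetime.timedelta(days=2):
        print("ALERT: no successful backup recorded in the last two days")
    else:
        print(f"Last successful backup finished {end_time}")
```

A check like this catches the most common surprise, backups that quietly stopped running, long before a failure does.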

8. Are you presented with adequate information when considering hardware requests?


Purchasing excess hardware is one way that many organizations compensate for performance and
availability problems. This can be the result of having a limited range of input and suggestions when
developing requirements or, simply, the result of an administrator’s individual bias or desires.

9. Do you feel like you make the same mistakes over and over?
This is a common occurrence. After all, the database is complex. Further, it is common that many
people touch the database … and those people can have highly variable backgrounds or specific knowledge of the issue at hand. Finally, given the session-level nature of traditional tools, historical
knowledge retention tends to be limited. The self-fulfilling prophecy here is a painful and costly one.

10. Do you understand current monitoring practices in your operation?


Monitoring is critical to keeping the impact of complexity in check. If you don’t know – for a fact –
what monitoring is actually being conducted, then you are very possibly in the same boat as far too many companies today: sporadic, after-the-fact monitoring. Given the expense, complexity, and system impact of many traditional monitoring tools, many companies have elected to wait for accidents to happen and then hope for a swift recovery. But this amounts to giving in to complexity, which can have severe cost and reliability consequences!

So … What is a Manager or DBA to Do?

Getting the upper hand on complexity requires that you gather information that is concise, consistent,
and actionable. This allows you to build a knowledge base across systems and across time. When it
comes to databases, there are three areas that an IT manager should be concerned with: enforcing a
best-practices orientation, planning for growth, and monitoring longer-term performance trends.

Employing Best Practices


Database best practices are crucial, especially as they relate to the layout and design of your database.
Sometimes, layout issues manifest themselves as performance problems. If so, there will be a natural
tendency to address those visible performance problems quickly. However, recoverability and security
can also be significantly affected by layout and design. These risks are frequently less obvious on a
day-to-day basis and are usually only noticed after a database failure or breach has occurred. The layout and design of a database should be analyzed and discussed not only when the database is created,
but also periodically audited to determine if any changes have been made and are appropriate.
Changes to a database should be tracked as part of a formal change management plan, and a history of changes made to the database should be kept. This allows managers to observe and measure the impact of those changes on performance. It also allows you to restore the database to a previous state in the event that a change degrades performance.
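
A lightweight way to start such a history is to snapshot when each object was last changed and compare snapshots over time. The sketch below assumes python-oracledb and read access to DBA_OBJECTS; the schema name and snapshot file are hypothetical placeholders.

```python
# ddl_snapshot.py -- sketch: record object change times so undocumented DDL is visible.
# Assumes python-oracledb and SELECT access to DBA_OBJECTS; the schema name
# and snapshot file are placeholders for your own environment.
import json
import oracledb

APP_SCHEMA = "APPOWNER"              # hypothetical application schema
SNAPSHOT_FILE = "ddl_snapshot.json"

def current_state(user: str, password: str, dsn: str) -> dict[str, str]:
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT object_type || ':' || object_name,
                       TO_CHAR(last_ddl_time, 'YYYY-MM-DD HH24:MI:SS')
                  FROM dba_objects
                 WHERE owner = :owner
            """, owner=APP_SCHEMA)
            return dict(cur.fetchall())

def compare_and_save(state: dict[str, str]) -> None:
    try:
        with open(SNAPSHOT_FILE) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}
    for obj, changed in state.items():
        if previous.get(obj) != changed:
            print(f"CHANGED or NEW: {obj} (last DDL {changed})")
    for obj in previous.keys() - state.keys():
        print(f"DROPPED: {obj}")
    with open(SNAPSHOT_FILE, "w") as f:
        json.dump(state, f, indent=2)

if __name__ == "__main__":
    compare_and_save(current_state("monitor_user", "monitor_password", "dbhost/ORCLPDB1"))
```

A snapshot diff is not a substitute for formal change management, but it gives the periodic audit described above something concrete to review.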

Planning for Growth


Long-term capacity planning typically involves ensuring there is sufficient physical capacity for a database to grow, usually in terms of storage, memory, and CPUs. Long-term capacity planning
should be based on documented information about past growth, historical performance and future
requirements. The days of being able to purchase hardware ad hoc have ended, so now many IT
managers must present empirical proof for new hardware purchases. IT managers must also ensure
that hardware isn’t being requested to conceal or solve a performance problem.
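
With even a short history of size samples, a simple linear projection provides the kind of empirical backing described above. The sketch below is pure Python (3.10 or later for statistics.linear_regression); the figures are invented for illustration and would normally come from your own collected history.

```python
# growth_projection.py -- sketch: project when allocated storage runs out,
# based on a linear fit of historical size samples. The figures below are
# invented for illustration; in practice they would come from a collected
# history (for example, periodic totals from DBA_SEGMENTS).
from statistics import linear_regression

# (month index, total database size in GB), oldest first -- illustrative data
history = [(0, 410.0), (1, 432.0), (2, 455.0), (3, 470.0), (4, 496.0)]
allocated_gb = 750.0                                   # hypothetical allocated storage

months = [m for m, _ in history]
sizes = [gb for _, gb in history]
slope, _intercept = linear_regression(months, sizes)   # GB of growth per month

current_gb = sizes[-1]
if slope <= 0:
    print("No measurable growth trend in this sample.")
else:
    months_left = (allocated_gb - current_gb) / slope
    print(f"Growing ~{slope:.1f} GB/month; roughly {months_left:.0f} months of headroom "
          f"before {allocated_gb:.0f} GB is exhausted.")
```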

Monitoring Long Term Performance Trends


As you effectively employ best practices and plan for growth, you will be caught up in fewer and fewer
fire drills. However, it is still important that you monitor your database to ensure it is serving data in
an acceptable manner as the environment evolves and changes. The traditional model for monitoring
is based on short-term thresholds and alerts, fostering a reactive management cycle. Administrators have to spend time trudging through the endless minutiae of real-time data. And everyone is tied up in the
fire drill spiral.

For monitoring to truly help reduce complexity, the focus must be on a long-term, systemic view of
performance. By consistently collecting and assessing performance trend information, you can begin
to build a knowledge base across systems and over time. This allows you to identify a historical
performance signature of your system. This signature will provide a reference to evaluate current
performance and make predictions about future performance. But collecting, storing and analyzing
trended data requires time and effort. We highly recommend that this process be automated to the
maximum extent possible – so that administrators and managers can focus their efforts on thoughtful
conclusions and actions … and ultimately on other more strategic initiatives as the complexity cycle is
brought under control.
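
As a concrete illustration of building such a performance signature, the sketch below assumes python-oracledb and read access to V$SYSMETRIC; SQLite stands in for whatever repository you prefer, and the chosen metrics and two-standard-deviation test are arbitrary examples, not recommendations.

```python
# perf_signature.py -- sketch: accumulate a metric history and compare new samples to it.
# Assumes python-oracledb and SELECT access to V$SYSMETRIC; the metric names, the
# SQLite repository, and the two-standard-deviation test are illustrative choices.
import sqlite3
import statistics
import oracledb

METRICS = ("Average Active Sessions", "Executions Per Sec")   # illustrative subset

def sample_metrics(user: str, password: str, dsn: str) -> dict[str, float]:
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT metric_name, value FROM v$sysmetric")
            rows = dict(cur.fetchall())   # if a metric appears twice, the last row wins
    return {name: rows[name] for name in METRICS if name in rows}

def record_and_check(samples: dict[str, float]) -> None:
    db = sqlite3.connect("perf_history.db")
    db.execute("CREATE TABLE IF NOT EXISTS history "
               "(sampled_at TEXT DEFAULT CURRENT_TIMESTAMP, metric TEXT, value REAL)")
    for metric, value in samples.items():
        past = [v for (v,) in db.execute(
            "SELECT value FROM history WHERE metric = ?", (metric,))]
        if len(past) >= 30:               # wait for some history before judging
            mean, stdev = statistics.mean(past), statistics.pstdev(past)
            if stdev > 0 and abs(value - mean) > 2 * stdev:
                print(f"UNUSUAL: {metric} = {value:.1f} "
                      f"(historical mean {mean:.1f}, stdev {stdev:.1f})")
        db.execute("INSERT INTO history (metric, value) VALUES (?, ?)", (metric, value))
    db.commit()
    db.close()

if __name__ == "__main__":
    record_and_check(sample_metrics("monitor_user", "monitor_password", "dbhost/ORCLPDB1"))
```

Scheduled daily, a collector along these lines accumulates the trend data automatically, leaving administrators and managers free to focus on conclusions and actions rather than collection.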

