
Log Shipping


1. How do you configure Log Shipping?


A:
Permissions
To set up log shipping you must have sysadmin rights on the server.
Minimum Requirements
SQL Server 2005 or later
Standard, Workgroup or Enterprise edition must be installed on all server instances involved in log shipping.
The servers involved in log shipping should have the same case-sensitivity settings.
The database must use the full or bulk-logged recovery model.
A shared folder for copying the T-Log backup files.
The SQL Server Agent service must be configured properly.
In addition, you should use the same version of SQL Server on both ends. It is possible to log ship from SQL 2005 to SQL 2008, but you cannot do it the opposite way. Also, since log shipping will primarily be used for failover, if you have the same version on each end and there is a need to fail over, you at least know you are running the same version of SQL Server.
Steps to Configure Log Shipping:
1. Make sure your database is in the full or bulk-logged recovery model. You can change the database recovery model using the query below, and you can check the current recovery model by querying sys.databases.
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'jugal'
USE [master]
GO
ALTER DATABASE [jugal] SET RECOVERY FULL WITH NO_WAIT
GO
2. On the primary server, right click on the database in SSMS and select Properties. Then
select the Transaction Log Shipping Page. Check the "Enable this as primary
database in a log shipping configuration" check box.

3. The next step is to configure and schedule a transaction log backup. Click on Backup
Settings to do this.

If you are creating backups on a network share, enter the network path; for the local machine you can specify a local folder path. The backup compression feature was introduced in SQL Server 2008 Enterprise Edition. While configuring log shipping, we can control the backup compression behavior of log backups by specifying the compression option. When this step is completed it will create the backup job on the primary server.
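For illustration only, this is roughly the kind of compressed log backup statement the backup job ends up issuing; the share path and file name are assumptions, and the COMPRESSION option needs SQL Server 2008 or later as noted above.
BACKUP LOG [jugal]
TO DISK = N'\\BackupServer\LogShipShare\jugal_tlog.trn'
WITH COMPRESSION, INIT;
GO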

4. In this step we will configure the secondary instance and database. Click on the Add button to configure the Secondary Server instance and database. You can add multiple servers if you want to set up one-to-many log shipping.

When you click the Add button it will take you to the below screen where you have to
configure the Secondary Server and database. Click on the Connect button to connect to
the secondary server. Once you connect to the secondary server you can access the three
tabs as shown below.
Initialize Secondary Database tab
In this step you can specify how to create the data on the secondary server. You have three options: create a backup and restore it, use an existing backup and restore it, or do nothing because you have manually restored the database and have put it into the correct state to receive additional backups.

Copy Files Tab


In this tab you have to specify the path of the Destination Shared Folder where the Log
Shipping Copy job will copy the T-Log backup files. This step will create the Copy job on the
secondary server.

Restore Transaction Log Tab


Here you have to specify the database restoring state information and restore schedule.
This will create the restore job on the secondary server.

5. In this step we will configure Log Shipping Monitoring which will notify us in case of any
failure. Please note Log Shipping monitoring configuration is optional.

Click on the Settings button, which will take you to the Log Shipping Monitor Settings screen. Click on the Connect button to set up a monitor server. Monitoring can be done from the source server, the target server, or a separate SQL Server instance. We can configure alerts on the source / destination server if the respective jobs fail. Lastly, we can also configure how long job history records are retained in the msdb database. Please note that you cannot add a monitor instance once log shipping is configured.

6. Click on the OK button to finish the Log Shipping configuration and it will show you the
below screen.

2. How to Reverse Log Shipping Roles


Reversing log shipping is an often overlooked practice. When DBAs need to fail over to a
secondary log shipping server, they tend to worry about getting log shipping back up later.
This is especially true in the case of very large databases. If you're using log shipping as
your primary disaster recovery solution and you need to fail over to the secondary log
shipping server, you should get log shipping running as quickly as possible. With no disaster
recovery failover in place, you might be running exposed.
Reversing log shipping is simple. It doesn't require reinitializing the database with a full backup if performed carefully. However, it's crucial that you remember the following:

You need to preserve the log sequence number (LSN) chain.

You need to perform the final log backup using the NORECOVERY option. Backing up the log with this option puts the database in a state that allows log backups to be restored and ensures that the database's LSN chain doesn't deviate.

The primary log shipping server must still be accessible to use this technique.
To fail over to a secondary log shipping server, follow this 10-step process:

1. Disable all backup jobs that might back up the database on both log shipping partners.

2. Disable the log shipping jobs.


3. Run each log shipping job in order (i.e., backup, copy, and restore).
4. Drop log shipping.
5. Manually back up the log of the primary database using the NORECOVERY option. Use the
command
BACKUP LOG [DatabaseName]
TO DISK = 'BackupFilePathname'
WITH NORECOVERY;
where DatabaseName is the name of the database whose log you want to back up and BackupFilePathname is the backup file's pathname (e.g., Z:\SQLServerBackups\TLog.bck).
6. Restore the log backup on the secondary database using the RECOVERY option, and bring the secondary database online (a sketch of this restore appears after these steps). The primary and secondary databases have now switched positions.
7. Back up the log of the new primary database (optional).
8. Restore the log on the new secondary database using the NORECOVERY option (optional).
9. Reconfigure log shipping.
10. Re-enable any backup jobs that were disabled.
Note that step 7 and step 8 are listed as optional because they're not required for establishing log shipping. However, I recommend performing these steps to ensure that the log shipping configuration will proceed without any problems.
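For step 6 above, here is a hedged sketch of the restore; it assumes the final log backup taken in step 5 has been copied to a path the secondary server can read.
RESTORE LOG [DatabaseName]
FROM DISK = 'Z:\SQLServerBackups\TLog.bck'
WITH RECOVERY;
GO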

With a few minor adjustments, this 10-step process works with multiple secondary log
shipping databases. You perform the same basic steps, keeping in mind that the other
secondary databases will still be secondary databases after the failover. After you back up
the log on the new primary database, you should use the NORECOVERY option to restore
that backup on all the planned secondary databases. You can then add them as secondary
databases to the new primary database.
3. Is it possible to configure log shipping without a domain?
Yes. You could, for example, move the log backups using FTP; then you don't need a domain account for the copy job, just FTP access.

4. What is TUF?
A:
The TUF file is known as the transaction undo file.
This file is created when log shipping is configured in SQL Server with the secondary database in Standby mode.
It consists of the list of transactions that were uncommitted while the backup was being taken on the primary server.
If this file is deleted you have to reconfigure log shipping for the secondary server.
This file is located in the path where the transaction log files are saved.
The .TUF file is the transaction undo file, which is created when performing log shipping to a server in Standby mode.

When the database is in Standby mode the database recovery is done when the log is restored; this mode also creates a file on the destination server with the .TUF extension, which is the transaction undo file.
This file contains information on all the modifications that were in progress at the time the backup was taken.
The file plays an important role in Standby mode, for a fairly obvious reason: while restoring the log backup, all uncommitted transactions are recorded in the undo file, with only committed transactions written to disk, which enables users to read the database. So when we restore the next transaction log backup, SQL Server fetches all the uncommitted transactions from the undo file and checks against the new transaction log backup whether they have committed or not.
If they are found to be committed, the transactions are written to disk; otherwise they remain in the undo file until they are committed or rolled back.
5. What is the TUF file?

The transaction undo file contains modifications that were not committed on the source database but were in progress when the transaction log was backed up, and where, when the log was restored to another database, you left the database in a state that allows additional transaction log backups to be restored to it at some point in the future. When another transaction log is restored, SQL Server uses data from the undo file and the transaction log to continue restoring the incomplete transactions (assuming they were completed in the next transaction log file). Following the restore, the undo file is rewritten with any transactions that, at that point, are still incomplete.

TUF file: the transaction undo file. It is generated only when you have configured log shipping with the Standby option. Since in standby log shipping the secondary database is available to users, the TUF file keeps the pending transactions from the log file that came from the primary, so that when the next log backup arrives from the primary they can be synchronized at the secondary.
.WRK: this extension is given to a file that is being copied from the primary backup location to the secondary; once the copy process has completed, these files are renamed with the .trn extension.
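For context, the .tuf file is produced by restoring with the STANDBY option. A hedged sketch of the statement the restore job effectively runs; the database name, backup path, and undo file path are assumptions.
RESTORE LOG [jugal]
FROM DISK = N'D:\LogShipCopy\jugal_tlog.trn'
WITH STANDBY = N'D:\LogShipCopy\jugal.tuf';
GO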
6. Today I faced an issue where one of the secondary server boxes is no longer available due to some circumstances, and now I have to delete this secondary server name and database entry from the primary server. If we go through the log shipping wizard from the Database Properties page and try to remove the secondary server, it asks to connect to the secondary server, but in my case the secondary server is no longer available. To resolve this, here is a script to delete the secondary server entry from the primary server (in this case there is no need to connect to the secondary server):
EXEC master.dbo.sp_delete_log_shipping_primary_secondary
    @primary_database = N'VirendraTest',
    @secondary_server = N'VIRENDRA_PC',
    @secondary_database = N'LSVirendraTest';
GO

INTERVIEW Questions on LOG SHIPPING...


LOG SHIPPING RELATED:
Question: Is it possible to log ship a database between SQL 2000 and SQL 2005?
Answer: No, that's not possible. In SQL 2005 the transaction log architecture changed compared to SQL 2000, and hence you won't be able to restore log backups from SQL 2000 to SQL 2005 or vice versa.

Question: How do you fail over in SQL 2005 Log Shipping?


Answer: Check out the article Failover in SQL 2005 Log Shipping; Deepak has written this up clearly.

Question: I'm getting the below error message in the restoration job on the secondary server. WHY?
[Microsoft SQL-DMO (ODBC SQLState: 42000)]
Error 4305: [Microsoft][ODBC SQL Server Driver][SQL Server]The log in this backup set
begins at LSN 7000000026200001, which is too late to apply to the database. An earlier
log backup that includes LSN 6000000015100001 can be restored.
[Microsoft][ODBC SQL Server Driver][SQL Server]RESTORE LOG is terminating abnormally.
Answer: Was your SQL Server or Agent restarted yesterday on either the source or the destination? The error states there is a mismatch in LSN: a particular transaction log was not applied on the destination server, hence the subsequent transaction logs cannot be applied as a result.
You can check the log shipping monitor / log shipping tables to see which transaction log was last applied to the secondary db. If the next consecutive transaction logs are available in the secondary server's share folder, manually RESTORE the logs with the NORECOVERY option; once you have restored all the logs, the job will work fine automatically from the next cycle.
In case you are not able to find the next transaction log in the secondary server's shared folder, you need to reconfigure log shipping. Try the below tasks to re-establish log shipping again.

Disable all the log shipping jobs on the source and destination servers
Take a full backup on the source and restore it on the secondary server using the With Standby option
Enable all the jobs you disabled previously in step 1

Question: Is it possible to load balance in log shipping?


Answer: Yes, of course it's possible in log shipping. While configuring log shipping you have the option to choose standby or no-recovery mode; select the STANDBY option to make the secondary database read-only. For SQL 2005 log shipping configuration check out the link 10 Steps to configure Log Shipping.

Question: Can I take a full backup of the log shipped database on the primary server?
Answer: In SQL Server 2000 you won't be able to take a full backup of a log shipped database, because this will break the LSN chain and it directly affects log shipping.
In SQL Server 2005, yes, it's possible. You can take a full backup of the log shipped database and this won't affect log shipping.

Question: Can I shrink the log shipped database log file?

Answer: Yes, of course you can shrink the log file, but you shouldn't use the TRUNCATE_ONLY option. If you use this option, log shipping will obviously be disturbed.

Question: Can I take a full backup of the log shipped database on the secondary server?
Answer: No chance; you won't be able to execute the BACKUP command against a log shipped database on the secondary server.

Question: I've configured log shipping successfully in standby mode, but in the restoration job I'm getting the below error. What do I do to avoid this in the future?
Message
2006-07-31 09:40:54.33 *** Error: Could not apply log backup file C:\Program Files\Microsoft
SQL Server\MSSQL.1\MSSQL\Backup\LogShip\TEST_20060731131501.trn to secondary
database TEST.(Microsoft.SqlServer.Management.LogShipping) ***
2006-07-31 09:40:54.33 *** Error: Exclusive access could not be obtained because the
database is in use.
RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider) ***
Answer: To restore transaction logs to the secondary db, SQL Server needs exclusive access to the database. When you configure it in standby mode, users are able to access the database and run queries against the secondary db. Hence, if the scheduled restore job runs at that time, the db will have a lock and it won't allow SQL Server to restore the t-logs. To avoid this you need to check the "Disconnect users in the database when restoring backups" option in the log shipping configuration wizard. Check the link 10 Steps to configure Log Shipping.

Question: Can you tell me the prerequisites for configuring log shipping?
Answer: Check out the link Pre-requisites for Log Shipping.

Question: Suddenly I'm getting the error below. How can I rectify this?
[Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 4323: [Microsoft][ODBC SQL Server
Driver][SQL Server]The database is marked suspect. Transaction logs cannot be restored.
Use RESTORE DATABASE to recover the database.
[Microsoft][ODBC SQL Server Driver][SQL Server]RESTORE LOG is terminating abnormally
Answer: We had the same issue some time ago; it was related to a new file being created in a filegroup on the source. Don't know if this applies to your case, but restoring a backup of this new file on the secondary server solved the problem.

Question: Is it possible to log ship a database from SQL Server 2005 to SQL Server 2008 and vice versa?
Answer: Yes, you can log ship a database from SQL Server 2005 to SQL Server 2008; this will work. However, log shipping from SQL Server 2008 to SQL Server 2005 is not possible, because you won't be able to restore a SQL Server 2008 backup to SQL Server 2005 (downgrading the version).
Q. Can we hot add CPU to sql server?
Ans:
Yes! Adding CPUs can occur physically by adding new hardware, logically by online hardware partitioning, or
virtually through a virtualization layer. Starting with SQL Server 2008, SQL Server supports hot add CPU.

Requires hardware that supports hot add CPU.


Requires the 64-bit edition of Windows Server 2008 Datacenter or the Windows Server 2008 Enterprise
Edition for Itanium-Based Systems operating system.
Requires SQL Server Enterprise.
SQL Server cannot be configured to use soft NUMA

Once the CPU is added, just run RECONFIGURE; SQL Server then recognizes and starts using the newly added CPU.
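A minimal sketch of that last step, run from any connection once the operating system has detected the new CPUs:
RECONFIGURE;
GO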
Q: How can we check whether the port number is connecting or not on a Server?
Ans:
TELNET <HOSTNAME> <PORTNUMBER>
TELNET PAXT3DEVSQL24 1433
TELNET PAXT3DEVSQL24 1434
Common Ports:
MS SQL Server: TCP 1433
HTTP: TCP 80
HTTPS: TCP 443
Q: What is the port numbers used for SQL Server services?
Ans:

The default SQL Server port is 1433, but only if it's a default install. Named instances get a random port number.

The Browser service runs on UDP port 1434.

Reporting Services is a web service, so it's port 80, or 443 if SSL is enabled.

Analysis Services is on 2383, but only if it's a default install; named instances get a random port number (the browser redirector for Analysis Services named instances uses 2382).
Q: Start SQL Server in different modes?
Ans:
Single User Mode: start the instance with the -m startup parameter, then connect: sqlcmd -d master -S PAXT3DEVSQL11 -U sa -P *******
DAC (-A): sqlcmd -A -d master -S PAXT3DEVSQL11 -U sa -P *******
Emergency: ALTER DATABASE test_db SET EMERGENCY
Q: How to recover a database that is in suspect stage?
Ans:
ALTER DATABASE test_db SET EMERGENCY
After you execute this statement SQL Server will shutdown the database and restart it without recovering it. This
will allow you to view / query database objects, but the database will be in read-only mode. Any attempt to
modify data will result in an error similar to the following:
Msg 3908, Level 16, State 1, Line 1: Could not run BEGIN TRANSACTION in database 'test' ... etc.
ALTER DATABASE test SET SINGLE_USER
GO
DBCC CHECKDB (test, REPAIR_ALLOW_DATA_LOSS)
GO
If the DBCC CHECKDB statement above succeeds, the database is brought back online (but you'll have to place it in multi-user mode before your users can connect to it). Before you turn the database over to your users you should run other statements to ensure its transactional consistency. If DBCC CHECKDB fails, then there is no way to repair the database; you must restore it from a backup.
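A hedged sketch of returning the database to normal use once the repair has completed cleanly (the database name follows the example above):
ALTER DATABASE test SET MULTI_USER
GO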
Q. Can we uninstall/rollback a service pack from SQL Server 2005?
Ans:
No, it is not possible for SQL Server 2005. To roll back a SP you have to uninstall the entire product and reinstall it.
For SQL Server 2008 you can uninstall a SP from Add/Remove Programs.
Some people say that you can do it by backing up and replacing the resource db, but I am not sure about that.
Q. What is a deadlock and what is a live lock? How will you go about resolving deadlocks?
Ans:
A deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process.
A livelock is one where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.
Q. SQL Server is not responding. What is action plan?
Ans:
Connect using the DAC via CMD or SSMS
Connect via CMD:
SQLCMD -A -U myadminlogin -P mypassword -S MyServer -d master
Once you connect to the master database, run the diagnostic queries to find the problem
Correct the issue and restart the server

Find the errors from the SQL error log using:

SQLCMD -A -S MyServer -q "EXEC xp_readerrorlog" -o C:\logout.txt
A long running query blocking all processes and not allowing new connections
Write a query and put the script file on hard disk Ex: D:\Scripts\BlockingQuery.sql
use master;
select p.spid, t.text
from sysprocesses p
CROSS APPLY sys.dm_exec_sql_text (sql_handle) t
where p.blocked = 0
and p.spid in
( select p1.blocked
from sysprocesses p1
where p1.blocked > 0
and p1.waittime > 50 )
From the command prompt, run the script against SQL Server and send the result to a text file:
SQLCMD -A -S MyServer -i C:\SQLScripts\GetBlockers.sql -o C:\SQLScripts\blockers.txt
Recently added some data files to temp db and after that SQL Server is not responding
This can occur when you specify new files in a directory to which the SQL Server service account does not have
access.
Start SQL Server in minimal configuration mode using the startup parameter -f. When we specify -f, SQL Server creates new tempdb files at the default file locations and ignores the current tempdb data file configuration.
Take care when using -f, as it keeps the server in single-user mode.
Once the server is started, change the tempdb configuration settings and restart the server in full mode by removing the -f flag.
A database stays in a SUSPECT or RECOVERY_PENDING State
Try to resolve this using CHECKDB and any other DBCC commands if you can.
The last and final option is to put the db in emergency mode and run CHECKDB with REPAIR_ALLOW_DATA_LOSS.
(Note: try to avoid this unless you don't have any other option, as you may lose large amounts of data.)
Q. What is your experience with third party applications and why would you use them?
Ans:
I have used some of these 3rd-party tools:

SQL Check (Idera): monitoring server activities and memory levels

SQL Doc 2 (Red Gate): documenting the databases

SQL Backup 5 (Red Gate): automating the backup process

SQL Prompt (Red Gate): provides IntelliSense for SQL Server 2005/2000

LiteSpeed 5.0 (Quest Software): backup and restore
Benefits using Third Party Tools:

Faster backups and restores

Flexible backup and recovery options

Secure backups with encryption

Enterprise view of your backup and recovery environment

Easily identify optimal backup settings

Visibility into the transaction log and transaction log backups

Timeline view of backup history and schedules

Recover individual database objects

Encapsulate a complete database restore into a single file to speed up restore time

When we need to improve upon the functionality that SQL Server offers natively

Save time, better information or notification


Q. Why sql server is better than other databases?
Ans:
I am not going to say one is better than the other; it depends on the requirements. We have a number of products in the market, but if I had the chance to choose one of them I would choose SQL Server because:

According to the 2005 Survey of Wintercorp, the largest SQL Server DW database is 19.5 terabytes; it is a database of a European bank.
High security: it offers a high level of security.
Speed and concurrency: a SQL Server 2005 system is able to handle 5,000 transactions per second and 100,000 queries a day, and can scale up to 8 million new rows of data per day.
Finally, more technical people are available for SQL Server when compared to any other database.
So we can say SQL Server is more than enough for any type of application.

Q. Differences between SQL SERVER 2000 AND 2005?


Ans:
Security

2000: Owner = Schema, hard to remove old users at times.

2005: Schema is separate. Better granularity in easily controlling security. Logins can be authenticated by certificates.
Encryption

2000: No options built in; expensive third-party options with proprietary skills required to implement properly.

2005: Encryption and key management built in.


High Availability

2000: Clustering or log shipping requires Enterprise Edition and expensive hardware.

2005: Clustering, database mirroring or log shipping available in Standard Edition. Database mirroring can use cheap hardware.
Scalability

2000: Limited to 2 GB RAM and 4 CPUs in Standard Edition. Limited 64-bit support.

2005: 4 CPUs, no RAM limit in Standard Edition. More 64-bit options offer chances for consolidation.
Q. What are the Hotfixes and Patches?
Ans:
Hotfixes are software patches that are applied to live, i.e. still running, systems. A hotfix is a single, cumulative package that includes one or more files that are used to address a problem in a software product (i.e. a software bug).
In a Microsoft SQL Server context, hotfixes are small patches designed to address specific issues, most commonly freshly discovered security holes.
Ex: If a select query returns duplicate rows with aggregations, the result may be wrong.
Q. Why Shrink file/ Shrink DB/ Auto Shrink is really bad?
Ans:
In the SHRINKFILE command, SQL Server isn't especially careful about where it puts the pages being moved from the end of the file to open pages towards the beginning of the file.

The data becomes fragmented, potentially up to 100% fragmentation; this is a performance killer for your database.

The operation is slow: all pointers to / from the pages / rows being moved have to be fixed up, and the SHRINKFILE operation is single-threaded, so it can be really slow (the single-threaded nature of SHRINKFILE is not going to change any time soon).
Recommendations:

Shrink the file by using the TRUNCATEONLY option: it releases the free space at the end of the file to the operating system without moving any pages.

Rebuild / reorganize the indexes once the shrink is done so the fragmentation level is decreased.
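A hedged sketch of those two recommendations; the database name, logical file name, and table name are assumptions.
USE MyDB;
GO
DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);   -- release unused space at the end of the file without moving pages
GO
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;    -- repair fragmentation if a page-moving shrink was ever run
GO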
Q. Which key provides the strongest encryption?
Ans:
AES (256 bit)

The longer the key, the better the encryption, so choose longer keys for more encryption. However there is a
larger performance penalty for longer keys. DES is a relatively old and weaker algorithm than AES.
AES: Advanced Encryption Standard
DES: Data Encryption Standard
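Illustrative only: creating an AES-256 symmetric key protected by a certificate. All names and the password here are made-up assumptions, not values from any real setup.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd#2024';
GO
CREATE CERTIFICATE DemoCert WITH SUBJECT = 'Demo encryption certificate';
GO
CREATE SYMMETRIC KEY DemoKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE DemoCert;
GO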
Q. What is the difference between memory and disk storage?
Ans:
Memory and disk storage both refer to internal storage space in a computer. The term memory usually means
RAM (Random Access Memory). The terms disk space and storage usually refer to hard drive storage.
Q. What port do you need to open on your server firewall to enable named pipes connections?
Ans:
Port 445. Named pipes communicate across TCP port 445.
Q. What are the different log files and how do you access them?
Ans:

SQL Server Error Log: The error log, the most important log file, is used to troubleshoot system problems. SQL Server retains backups of the previous six logs, naming each archived log file sequentially. The current error log file is named ERRORLOG. To view the error log, which is located in the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory, open SSMS, expand a server node, expand Management, and click SQL Server Logs.

SQL Server Agent Log: SQL Server's job scheduling subsystem, SQL Server Agent, maintains a set of log files with warning and error messages about the jobs it has run, written to the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory. SQL Server will maintain up to nine SQL Server Agent error log files. The current log file is named SQLAGENT.OUT, whereas archived files are numbered sequentially. You can view SQL Server Agent logs by using SQL Server Management Studio (SSMS): expand a server node, expand Management, click SQL Server Logs, and select the check box for SQL Server Agent.

Windows Event Log: An important source of information for troubleshooting SQL Server errors, the Windows Event Log contains three useful logs. The application log records events from SQL Server and SQL Server Agent and can be used by SQL Server Integration Services (SSIS) packages. The security log records authentication information, and the system log records service startup and shutdown information. To view the Windows Event Log, go to Administrative Tools, Event Viewer.

SQL Server Setup Log: You might already be familiar with the SQL Server Setup log, which is located at %ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\LOG\Summary.txt. If the Summary.txt log file shows a component failure, you can investigate the root cause by looking at the component's log, which you'll find in the %ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\LOG\Files directory.

SQL Server Profiler Log: SQL Server Profiler, the primary application-tracing tool in SQL Server, captures the system's current database activity and writes it to a file for later analysis. You can find the Profiler logs in the .trc files in the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory.
Q. Explain XP_READERRORLOG or SP_READERRORLOG
Ans:
Xp_readerrorlog or sp_readerrorlog has 7 parameters.
Xp_readerrorlog <Log_FileNo>,<Log_Type>,<Keyword-1>,<Keyword-2>,<Date1>,<Date2>,<Asc/Desc>
Log_FileNo: -1: all logs; 0: current log file; 1: archived log file #1, and so on


Log_Type: 1: SQL Server
2: SQL Agent
KeyWord-1: Search for the keyword
KeyWord-2: Search for combination of Keyword 1 and Keyword 2
Date1 and Date2: Retrieves data between these two dates
Asc/Desc: Order the data
Examples:
EXEC xp_readerrorlog 0 -- current SQL Server log
EXEC xp_readerrorlog 0, 1 -- current SQL Server log
EXEC xp_readerrorlog 0, 2 -- current SQL Agent log
EXEC xp_readerrorlog -1 -- entire log file
EXEC xp_readerrorlog 0, 1, 'dbcc' -- current SQL Server log with 'dbcc' in the string
EXEC xp_readerrorlog 1, 1, 'dbcc', 'error' -- archived log file 1 with both 'dbcc' and 'error' in the string
EXEC xp_readerrorlog -1, 1, 'dbcc', 'error', '2012-02-21', '2012-02-22', 'desc'
-- searches the entire SQL Server log for the strings 'dbcc' and 'error' within the given dates and returns rows in descending order
Note: Also, to increase the number of log files, add a new registry key NumErrorLogs (REG_DWORD) under below
location.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQL.X\MSSQLServer\
By default, this key is absent. Modify the value to the number of logs that you want to maintain.
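One common way to create that key without opening regedit is the undocumented xp_instance_regwrite procedure; the sketch below is hedged, the value 12 is an arbitrary example, and the instance-relative registry path is an assumption you should verify for your build.
USE master;
GO
EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     12;   -- number of error log files to retain (example value)
GO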
Q. Can we track no of transactions / inserts / updates / deletes a Day (Without using profiler)? If yes how?
Ans:
You could use Change Data Capture (CDC) or Change Tracking:
http://msdn.microsoft.com/en-us/library/cc280519.aspx
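A hedged sketch of enabling CDC for one table; the database name and table name are assumptions, CDC is available from SQL Server 2008 (Enterprise-class editions in the older releases), and SQL Server Agent must be running for the capture job.
USE SalesDB;
GO
EXEC sys.sp_cdc_enable_db;            -- enable Change Data Capture at the database level
GO
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;            -- no gating role for this example
GO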
Q. We have 300 SSIS packages that need to be deployed to production; how can we make it easier / quicker to deploy all SSIS packages at once?
Ans:
I would store these as XML-based files and not in the MSDB database. With the configuration files, you can point the packages from prod to dev (and vice versa) in just a few seconds. The packages and config files are just stored in a directory of your choice. Resources permitting, create a standalone SSIS server away from the primary SQL Server.
Q. We have a table which is 1.2 GB in size, and we need to write a SP which should work with the data as of a particular point in time (like a snapshot). (We should not use snapshot isolation, as it would take that much space again.)
Ans:
You may want to add insert timestamps and update timestamps for each record. Every time a new record is inserted, stamp it with the datetime, and also stamp it with the datetime when it is updated. Also possibly use partitioning to reduce index rebuilds.
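A hedged sketch of those audit columns; the table, column, and constraint names are assumptions for illustration only.
ALTER TABLE dbo.BigTable ADD
    InsertedAt datetime2 NOT NULL
        CONSTRAINT DF_BigTable_InsertedAt DEFAULT (SYSUTCDATETIME()),  -- stamped on insert
    UpdatedAt  datetime2 NULL;                                          -- set by the application or a trigger on update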
Q. What is RAID levels? Which one we have to choose for SQL Server user databases?
Ans:
Check out the charts in the document below; they show how the disks are set up. The choice will depend on what the customer wants to spend and the level of reliability needed. RAID 5 is common, but see the topic "RAID 10 versus RAID 5 in Relational Databases" in the document below; it's a good discussion. RAID 10 (pronounced "RAID one-zero") is supposed to be the best in terms of performance and reliability, but the cost is higher.
http://en.wikipedia.org/wiki/RAID
Q. How many data files can I put in tempdb? What is the effect of adding multiple data files?
Ans:
By far the most effective configuration is to set tempdb on its own separate fast drive, away from the user databases. I would set the number of files based on the number of CPUs divided by 2; so, if you have 8 CPUs, then set 4 tempdb files. Size tempdb large enough, with 10% data growth; I would start at a general size of 10 GB for each file. I also would not create more than 4 data files even if there were more than 8 CPUs; you can always add more later.
http://msdn.microsoft.com/en-us/library/ms175527.aspx
http://msdn.microsoft.com/en-us/library/ms190768.aspx
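For reference, a hedged sketch of adding one more tempdb data file; the drive letter, file name, and sizes are assumptions, not recommendations for any specific server.
ALTER DATABASE tempdb
ADD FILE
(
    NAME = N'tempdev2',
    FILENAME = N'T:\TempDB\tempdev2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 10%
);
GO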

Q. Let's say a user is performing a transaction on a clustered server and a failover occurs. What will happen to the transaction?
Ans:
If it is active/passive, there is a good chance the transaction died, but active/passive is considered by some to be the better option as it is not as difficult to administer. I believe that is what we have. Still, active/active may be best depending on what the requirements are for the system.
Q. How do you know which node is active and which is passive? What are the criteria for deciding the active node?
Ans:
Open Cluster Administrator and check the SQL Server group, where you can see the current owner. The current owner is the active node and the other nodes are passive.
Q. What is the common trace flags used with SQL Server?
Ans:
Deadlock information: 1204, 1205, 1222
Network database files: 1807
Log record for connections: 4013
Skip startup stored procedures: 4022
Disable locking hints: 8755
Force uniform extent allocations instead of mixed-page allocations: 1118 (SQL 2005 and 2008), to reduce tempdb contention.
Q. What is a Trace flag? Types of Trace Flags? How to enable/disable it? How to monitor a trace flag?
Ans:
http://blogs.technet.com/b/lobapps/archive/2012/08/28/how-do-i-work-with-trace-flags.aspx
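Since the link covers the details, here is just a minimal hedged sketch of enabling, checking, and disabling a trace flag, using 1222 as an example; trace flags can also be set at every startup with the -T startup parameter.
DBCC TRACEON (1222, -1);   -- enable globally (-1); omit -1 for the current session only
GO
DBCC TRACESTATUS (-1);     -- list all trace flags currently enabled globally
GO
DBCC TRACEOFF (1222, -1);  -- disable it again
GO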
Q. What are the limitations for RAM and CPU for SQL SERVER 2008 R2?
Ans:
Feature                                    Standard     Enterprise    Datacenter
Max memory                                 64 GB        2 TB          Maximum supported by the Windows version
Max CPUs (licensed per socket, not core)   4 sockets    8 sockets     Maximum supported by the Windows version

Q. Do you know about Resource Database?


Ans:
All system objects are physically stored in the resource database and are logically available in every database.
The resource database makes applying service packs or upgrades faster.
Q. Does the resource database really make upgrades faster? Can you justify that?
Ans:
Yes. In earlier versions an upgrade required dropping and recreating system objects; now an upgrade just requires a copy of the resource file.
We are also able to roll back the process, because it only needs to overwrite the existing file with the older version of the resource copy.
Q. On my PROD SQL Server all the system dbs are located on the E drive and I need my resource db on the H drive. How can I move it?
Ans:
The resource db cannot be moved on its own; the resource db location always depends on the master database location. If you want to move the resource db you must also move the master db.
Q. Can we take a backup of the resource db?
Ans:
Not through SQL Server. The only way to get a backup is to use a Windows (file-level) backup of the resource mdf and ldf files.
Q. Any idea what the resource db mdf and ldf file names are?
Ans:
mssqlsystemresource.mdf and mssqlsystemresource.ldf
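A small, hedged check you can run to see the resource database version and the last time it was updated:
SELECT SERVERPROPERTY('ResourceVersion')            AS ResourceDbVersion,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceDbLastUpdated;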
Q. Can you elaborate on the requirement specifications for SQL Server 2008?
Ans:

Q. What do you do if a column of data type INT is running out of range?


Ans:
I alter the column to BIGINT.
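A hedged sketch of that change; the table and column names are assumptions, and it presumes no primary key, index, or foreign key depends on the column (those would have to be dropped and recreated around the change).
ALTER TABLE dbo.Orders
ALTER COLUMN OrderID BIGINT NOT NULL;   -- widen the INT column before it overflows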
Q. Are you sure the data type BIGINT can never go out of range?
Ans:
Yes, I am sure for any practical purpose.
Let's take a few examples and see how many years it would take for BIGINT to reach its upper limit in a table:
(A) Considering only positive numbers, the max limit of BIGINT = 9,223,372,036,854,775,807
(B) Number of seconds in a year = 31,536,000
Assume there are 50,000 records inserted per second into the table. Then the number of years it would take to reach the BIGINT max limit is:
9,223,372,036,854,775,807 / 31,536,000 / 50,000 = 5,849,424 years
Similarly,
If we inserted 100,000 (1 lakh) records per second into the table it would take 2,924,712 years
If we inserted 1 million (1,000,000) records per second into the table it would take 292,471 years
If we inserted 10 million records per second into the table it would take 29,247 years
If we inserted 100 million records per second into the table it would take 2,925 years
If we inserted 1,000 million records per second into the table it would take 292 years
From this we can see that it would take an extremely long time to reach the max limit of BIGINT.

Log Shipping Interview Questions


Question: What is Log Shipping?
Essentially, log shipping is the process of automating the backup of the database and transaction log files on a production SQL Server, and then restoring them onto a standby server. But this is not all. The key feature of log shipping is that it will automatically back up transaction logs throughout the day (at whatever interval you specify) and automatically restore them on the standby server. This in effect keeps the two SQL Servers in "sync". Should the production server fail, all you have to do is point the users to the new server, and you are all set. Well, it's not really that easy, but it comes close if you put enough effort into your log shipping setup.
Question: Benefits of Log Shipping
While I have already talked about some of the benefits of log shipping, let's take a more comprehensive look:
Log shipping doesn't require expensive hardware or software. While it is great if your standby server is similar in capacity to your production server, it is not a requirement. In addition, you can use the standby server for other tasks, helping to justify its cost. Just keep in mind that if you do need to fail over, this server will have to handle not one, but two loads. I like to make my standby server a development server. This way, I keep my developers off the production server, but don't put too much workload on the standby server.

Once log shipping has been implemented, it is relatively easy to maintain.


Assuming you have implemented log shipping correctly, it is very reliable.
The manual failover process is generally very short, typically 15 minutes or less.
Depending on how you have designed your log shipping process, very little, if any, data is lost should you have to fail over. The amount of data loss, if any, also depends on why your production server failed.
Implementing log shipping is not technically difficult. Almost any DBA with several months or more of SQL Server 7 experience can successfully implement it.
Question: Problems with Log Shipping
Let's face it, log shipping is a compromise. It is not the ideal solution, but it is often a practical solution given real-world budget constraints. Some
of the problems with log shipping include:
Log shipping failover is not automatic. The DBA must still manually fail over the server, which means the DBA must be present when the failover occurs.
The users will experience some downtime. How long depends on how well you implemented log shipping, the nature of
the production server failure, your network, the standby server, and the application or applications to be failed over.
Some data can be lost, although not always. How much data is lost depends on how often you schedule log shipping and
whether or not the transaction log on the failed production server is recoverable.
The database or databases that are being failed over to the standby server cannot be used for anything else. But databases
on the standby server not being used for failover can still be used normally.
When it comes time for the actual failover, you must do one of two things to make your applications work: either rename the standby server to the same name as the failed production server (and change the IP address), or re-point your users' applications to the new standby server. In some cases, neither of these options is practical.
Question: Log Shipping Overview
Before we get into the details of how to implement log shipping, let's take a look at the big picture. Essentially, here's what you need to do in
order to implement log shipping:
Ensure you have the necessary hardware and software properly prepared to implement log shipping.
Synchronize the SQL Server login IDs between the production and standby servers.
Create two backup devices. One will be used for your database backups and the other will be used for your transaction log
backups.
On the production server, create a linked server to your standby server.
On the standby server, create two stored procedures. One stored procedure will be used to restore the database. The other stored procedure will be used to restore transaction logs.
On the production server, create two SQL Server jobs that will be used to perform the database and transaction log
backups. Each job will include multiple steps with scripts that will perform the backups, copy the files from the production server to the standby
server, and fire the remote stored procedures used to restore the database and log files.
Start and test the log shipping process.
Devise and test the failover process.
Monitor the log shipping process.
