SQL Server Flashcards
What connections does Microsoft SQL Server support?
Windows Authentication (via Active Directory) and SQL Server Authentication (via SQL Server usernames and passwords)
What are the system databases in SQL Server 2005?
- Master - Stores system level information such as user accounts, configuration settings, and info on all other databases.
- Model - database is used as a template for all other databases that are created
- Msdb - Used by the SQL Server Agent for configuring alerts and scheduled jobs etc
- Tempdb - Holds all temporary tables, temporary stored procedures, and any other temporary storage requirements generated by SQL Server.
What is the difference between TRUNCATE and DELETE commands?
- TRUNCATE is a DDL command whereas DELETE is a DML command.
- TRUNCATE is much faster than DELETE.
Reason: DELETE logs every row it removes (the old data is written to the transaction log first, then the delete is performed), which is why you can ROLLBACK a DELETE and get the data back; all of that logging takes time. TRUNCATE removes the data by deallocating whole pages without logging individual rows, which is why it is faster and why the data cannot be recovered once the truncate is committed.
- You cannot roll back a TRUNCATE (outside an explicit transaction), but a DELETE can be rolled back; TRUNCATE removes the records permanently.
- TRUNCATE does not fire triggers, whereas DML commands like DELETE do fire triggers.
- You cannot use conditions (a WHERE clause) with TRUNCATE, but with DELETE you can filter rows using a WHERE clause (see the example below).
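A quick illustration of the syntax difference (the table and column names are made up for this example):
-- DELETE is logged row by row, fires DELETE triggers and accepts a WHERE clause
DELETE FROM dbo.Orders WHERE OrderDate < '2000-01-01';
-- TRUNCATE deallocates whole data pages and cannot take a WHERE clause
TRUNCATE TABLE dbo.Orders;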
What is denormalization and when would you go for it?
The process of adding redundant data to get rid of complex joins, in order to optimize database performance. This is done to speed up database access by moving from a higher to a lower form of normalization.
In other words, we can define de-normalization as:
De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is used to introduce redundancy into a table in order to incorporate data from a related table; the related table can then be eliminated. De-normalization can improve efficiency and performance by reducing complexity in a data warehouse schema.
De-normalization is also applied as a tool in the SQL Server report model. There are three methods of de-normalization:
• Entity inheritance
• Role expansion
• Lookup entities.
Entity Inheritance
This method of de-normalization should be implemented when one entity is to be treated as another entity. It is done with the help of inheritance, i.e. a parent-child relationship between entities, established through the foreign key and candidate key. Note that creating the model also creates a relationship, and if you select inheritance that relationship property is deleted automatically.
Role Expansion
This type of de-normalization should be used when it is certain that one entity has a relationship to another entity, or is part of another entity; the separate storage is then removed. It is applied with the help of the Expand Inline option, and the shared schema is used in the form of a table.
Lookup Entities
This type of de-normalization is used when an entity depends on a lookup table. It works with the help of the IsLookup property, which is applied to the entity. Together these three methods let the user create a clean and compelling report model that gives the customer a better navigation experience.
The Reason for Denormalization
Only one valid reason exists for denormalizing a relational design - to enhance performance. However, there are several indicators which will help to identify systems and tables which are potential denormalization candidates.
These are:
- Many critical queries and reports exist which rely upon data from more than one table. Oftentimes these requests need to be processed in an online environment.
- Repeating groups exist which need to be processed in a group instead of individually.
- Many calculations need to be applied to one or many columns before queries can be successfully answered.
- Tables need to be accessed in different ways by different users during the same timeframe.
- Many large primary keys exist which are clumsy to query and consume a large amount of disk space when carried as foreign key columns in related tables.
- Certain columns are queried a large percentage of the time causing very complex or inefficient SQL to be used.
Be aware that each new RDBMS release usually brings enhanced performance and improved access options that may reduce the need for denormalization. However, most of the popular RDBMS products on occasion will require denormalized data structures. There are many different types of denormalized tables which can resolve the performance problems caused when accessing fully normalized data. The following topics will detail the different types and give advice on when to implement each of the denormalization types.
Types of Denormalization
- **Pre-Joined Tables** - used when the cost of joining is prohibitive
- **Report Tables** - used when specialized critical reports are needed
- **Mirror Tables** - used when tables are required concurrently by two different types of environments
- **Split Tables** - used when distinct groups use different parts of a table
- **Combined Tables** - used when one-to-one relationships exist
- **Redundant Data** - used to reduce the number of table joins required
- **Repeating Groups** - used to reduce I/O and (possibly) storage usage
- **Derivable Data** - used to eliminate calculations and algorithms
- **Speed Tables** - used to support hierarchies
How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?
One-to-One relationships can be implemented as a single table, and rarely as two tables with primary and foreign key relationships. One-to-Many relationships are implemented by splitting the data into two tables with a primary key and foreign key relationship. Many-to-Many relationships are implemented using a junction table with the keys from both tables forming the composite primary key of the junction table. It is a good idea to read up on a database design fundamentals textbook.
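As an illustration of the many-to-many case (table and column names are hypothetical), a junction table might look like this:
CREATE TABLE Student (StudentID int PRIMARY KEY, Name varchar(50));
CREATE TABLE Course (CourseID int PRIMARY KEY, Title varchar(50));
-- Junction table: the composite primary key is formed from both foreign keys
CREATE TABLE StudentCourse (
    StudentID int NOT NULL REFERENCES Student(StudentID),
    CourseID int NOT NULL REFERENCES Course(CourseID),
    PRIMARY KEY (StudentID, CourseID)
);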
What are user defined datatypes and when you should go for them?
User-defined datatypes let you extend the base SQL Server datatypes by providing a descriptive name and format to the database.
Take for example, in your database, there is a column called Flight_Num which appears in many tables. In all these tables it should be varchar(8). In this case you could create a user defined datatype called Flight_num_type of varchar(8) and use it across all your tables.
See sp_addtype, sp_droptype in books online.
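For example, a minimal sketch of the Flight_Num scenario above (the Flights table is hypothetical):
EXEC sp_addtype 'Flight_num_type', 'varchar(8)', 'NOT NULL'
GO
-- On SQL Server 2005 and later, CREATE TYPE Flight_num_type FROM varchar(8) NOT NULL is the equivalent
CREATE TABLE Flights (Flight_Num Flight_num_type, Origin varchar(3))
GO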
What is bit datatype and what’s the information that can be stored inside a bit column?
Bit datatype is used to store boolean information like 1 or 0 (true or false). Until SQL Server 6.5, the bit datatype could hold either a 1 or 0 and there was no support for NULL. But from SQL Server 7.0 onwards, the bit datatype can represent a third state, which is NULL.
What type of index will get created after executing the following statement?
CREATE INDEX myIndex ON myTable(myColumn)
Non-clustered index. Important thing to note: By default a clustered index gets created on the primary key, unless specified otherwise.
What is lock escalation? What is its purpose?
Lock escalation: In SQL Server, if a lock is acquired at a higher level, it can lock more resources than are actually needed. This kind of locking has lower overhead but reduces concurrency. E.g.: if we select all the rows of a table and acquire a lock on the table, we do not need to lock the rows themselves, but the table lock will block any concurrent update transactions. Based on estimates during query compilation, SQL Server recommends an appropriate locking granularity, and during query execution, based on the concurrent workload, the appropriate locking granularity is applied. While the locking granularity is chosen at the beginning of query execution, during execution SQL Server may choose to escalate the lock to a higher level of granularity depending on the number of locks acquired and the availability of memory at runtime. SQL Server supports escalating locks to the table level only, i.e. locks can only be escalated from rows to the table level; locks are never escalated from rows to the parent page.
Lock escalation is when the system combines multiple locks into a higher level one. This is done to recover resources taken by the other finer granular locks. The system automatically does this. The threshold for this escalation is determined dynamically by the server.
Purpose:
- To reduce system overhead by recovering locks
- Maximize the efficiency of queries
- Helps to minimize the required memory to keep track of locks.
What are the steps you will take to improve performance of a poor performing query?
This is a very open-ended question and there could be a lot of reasons behind the poor performance of a query. But some general issues that you could talk about would be: no indexes, table scans, missing or out-of-date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written queries with unnecessarily complicated joins, too much normalization, excess usage of cursors and temporary tables. Some of the tools/ways that help you in troubleshooting performance problems are:
SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler, Windows NT /2000 Performance monitor, Graphical execution plan in Query Analyzer. Download the white paper on performance tuning SQL Server from Microsoft web site. Don’t forget to check out sql-server-performance.com
What is a deadlock and what is a live lock? How will you go about resolving deadlocks?
Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other’s piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated.
SQL Server detects deadlocks and terminates one user’s process.
A livelock is one, where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.
Check out SET DEADLOCK_PRIORITY and “Minimizing Deadlocks” in SQL Server books online.
Also check out the article Q169960 from Microsoft knowledge base.
What is blocking and how would you troubleshoot it?
Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock type. This forces the second connection to wait, blocked on the first. Read up the following topics in SQL Server books online: Understanding and avoiding blocking, Coding efficient transactions.
What are statistics, under what circumstances they go out of date, how do you update them?
Statistics determine the selectivity of the indexes. If an indexed column has unique values then the selectivity of that index is more, as opposed to an index with non-unique values. Query optimizer uses these indexes in determining whether to choose an index or not while executing a query.
Some situations under which you should update statistics:
- If there is significant change in the key values in the index
- If a large amount of data in an indexed column has been added, changed, or removed (that is, if the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated
- Database is upgraded from a previous version
Look up SQL Server books online for the following commands: UPDATE STATISTICS, STATS_DATE, DBCC SHOW_STATISTICS, CREATE STATISTICS, DROP STATISTICS, sp_autostats, sp_createstats, sp_updatestats
What are the different ways of moving data/databases between servers and databases in SQL Server?
There are lots of options available, you have to choose your option depending upon your requirements.
Some of the options you have are:
- BACKUP/RESTORE
- Detaching and attaching databases
- Replication
- DTS
- BCP
- Log shipping
- INSERT…SELECT, SELECT…INTO
- Creating INSERT scripts to generate data.
How to determine the service pack currently installed on SQL Server?
The global variable @@VERSION returns the version and build information of sqlservr.exe, which is used to determine the service pack installed.
To know more about this process visit SQL Server service packs and versions.
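For example:
SELECT @@VERSION; -- full version and build string
SELECT SERVERPROPERTY('ProductLevel'); -- e.g. 'RTM' or 'SP2'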
What is a join and explain different types of joins.
Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table.
Types of joins:
- INNER JOIN
- OUTER JOIN
- CROSS JOIN
OUTER JOINs are further classified as LEFT OUTER JOIN, RIGHT OUTER JOIN and FULL OUTER JOIN.
For more information see pages from books online titled: “Join Fundamentals” and “Using Joins”.
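A short sketch using hypothetical Customer and Orders tables:
-- INNER JOIN: only customers that have at least one order
SELECT c.Name, o.OrderID
FROM Customer c
INNER JOIN Orders o ON o.CustomerID = c.CustomerID;
-- LEFT OUTER JOIN: all customers, with NULL order columns where no order exists
SELECT c.Name, o.OrderID
FROM Customer c
LEFT OUTER JOIN Orders o ON o.CustomerID = c.CustomerID;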
Can you have a nested transaction?
Yes, very much. Check out BEGIN TRAN, COMMIT, ROLLBACK, SAVE TRAN and @@TRANCOUNT
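A minimal sketch of how @@TRANCOUNT behaves with nesting:
BEGIN TRAN -- @@TRANCOUNT = 1
BEGIN TRAN -- @@TRANCOUNT = 2
SELECT @@TRANCOUNT
COMMIT -- inner COMMIT only decrements @@TRANCOUNT
COMMIT -- outermost COMMIT makes the work permanent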
What is the system function to get the current user’s user id?
USER_ID(). Other functions that return information about the current user or session include:
- USER_NAME()
- SYSTEM_USER
- SESSION_USER
- CURRENT_USER
- USER
- SUSER_SID()
- HOST_NAME()
What is the difference between lock, block and deadlock?
Lock: the DB engine locks rows, pages or tables to control access to the data being worked on by a query.
Block: blocking happens when one process holds locks on resources that another process needs, forcing the second process to wait.
Blocking can be identified by using:
- SELECT * FROM sys.dm_exec_requests WHERE blocking_session_id <> 0
- SELECT * FROM master..sysprocesses WHERE blocked <> 0
Deadlock: two processes each hold a lock that the other needs, so neither can proceed. SQL Server detects the deadlock, terminates one of the processes, and reports error 1205.
Explain different isolation levels
An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level is Read Committed. Here are the other isolation levels (in the ascending order of isolation):
- Read Uncommitted
- Read Committed
- Repeatable Read
- Serializable.
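The level is set per session; for example (the table name is hypothetical):
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
-- reads inside this transaction keep their shared locks until the transaction ends
SELECT * FROM dbo.Orders
COMMIT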
What is lock escalation?
Lock escalation is the process of converting many low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure, so too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates the many fine-grain locks to fewer coarse-grain locks. The lock escalation threshold was definable in SQL Server 6.5, but from SQL Server 7.0 onwards it is dynamically managed by SQL Server.
What are constraints? Explain different types of constraints.
Constraints enable the RDBMS to enforce the integrity of the database automatically, without needing you to create triggers, rules or defaults.
Types of constraints:
- NOT NULL
- CHECK
- UNIQUE
- PRIMARY KEY
- FOREIGN KEY
For an explanation of these constraints see books online for the pages titled: “Constraints” and “CREATE TABLE”, “ALTER TABLE”
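A small sketch showing the five constraint types on hypothetical tables:
CREATE TABLE Department (
    DeptID int PRIMARY KEY,
    DeptName varchar(50) NOT NULL UNIQUE
);
CREATE TABLE Employee (
    EmpID int PRIMARY KEY,
    Salary money CHECK (Salary > 0),
    DeptID int FOREIGN KEY REFERENCES Department(DeptID)
);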
What is an index? What are the types of indexes? How many clustered indexes can be created on a table? If I create a separate index on each column of a table, what are the advantages and disadvantages of this approach?
Indexes in SQL Server are similar to the indexes in books. They help SQL Server retrieve the data quicker.
Indexes are of two types: clustered indexes and non-clustered indexes. When you create a clustered index on a table, all the rows in the table are stored in the order of the clustered index key, so there can be only one clustered index per table. Non-clustered indexes have their own storage, separate from the table's data storage. Non-clustered indexes are stored as B-tree structures (as are clustered indexes), with the leaf-level nodes holding the index key and its row locator. The row locator could be the RID or the clustered index key, depending on the absence or presence of a clustered index on the table.
If you create an index on each column of a table, it improves query performance, as the query optimizer can choose from all the existing indexes to come up with an efficient execution plan. At the same time, data modification operations (such as INSERT, UPDATE, DELETE) will become slow, as every time data changes in the table, all the indexes need to be updated.
Another disadvantage is that indexes need disk space; the more indexes you have, the more disk space is used.
What are the steps you will take, if you are tasked with securing an SQL Server?
Again, this is another open-ended question. Here are some things you could talk about: preferring NT authentication, using server, database and application roles to control access to the data, securing the physical database files using NTFS permissions, using an unguessable SA password, restricting physical access to the SQL Server, renaming the Administrator account on the SQL Server computer, disabling the Guest account, enabling auditing, using multiprotocol encryption, setting up SSL, setting up firewalls, isolating SQL Server from the web server, etc.
Read the white paper on SQL Server security from Microsoft website. Also check out My SQL Server security best practices
Explain CREATE DATABASE syntax
Many of us are used to creating databases from the Enterprise Manager or by just issuing the command: CREATE DATABASE MyDB. But what if you have to create a database with two filegroups, one on drive C and the other on drive D, with the log on drive E, with an initial size of 600 MB and a growth factor of 15%? That's why, being a DBA, you should be familiar with the CREATE DATABASE syntax.
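A sketch of the scenario just described (file names and folder paths are assumptions):
CREATE DATABASE MyDB
ON PRIMARY
    (NAME = MyDB_Data1, FILENAME = 'C:\SQLData\MyDB_Data1.mdf', SIZE = 600MB, FILEGROWTH = 15%),
FILEGROUP FG2
    (NAME = MyDB_Data2, FILENAME = 'D:\SQLData\MyDB_Data2.ndf', SIZE = 600MB, FILEGROWTH = 15%)
LOG ON
    (NAME = MyDB_Log, FILENAME = 'E:\SQLLogs\MyDB_Log.ldf', SIZE = 600MB, FILEGROWTH = 15%)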
As a part of your job, what are the DBCC commands that you commonly use for database maintenance?
- DBCC CHECKDB
- DBCC CHECKTABLE
- DBCC CHECKCATALOG
- DBCC CHECKALLOC
- DBCC SHOWCONTIG
- DBCC SHRINKDATABASE
- DBCC SHRINKFILE etc.
But there are a whole load of DBCC commands which are very useful for DBAs. Check out SQL Server books online for more information.
What is database replication? What are the different types of replication you can set up in SQL Server?
Replication is the process of copying/moving data between databases on the same or different servers. SQL Server supports the following types of replication scenarios:
- Snapshot replication
- Transactional replication (with immediate updating subscribers, with queued updating subscribers)
- Merge replication
See SQL Server books online for in-depth coverage on replication. Be prepared to explain how the different replication agents function, what the main system tables used in replication are, etc.
What are cursors? Explain different types of cursors. What are the disadvantages of cursors? How can you avoid cursors?
Cursors allow row-by-row processing of result sets.
Types of cursors: Static, Dynamic, Forward-only, Keyset-driven. See books online for more information.
Disadvantages of cursors: each time you fetch a row from the cursor, it results in a network roundtrip, whereas a normal SELECT query makes only one roundtrip, however large the result set is. Cursors are also costly because they require more resources and temporary storage (resulting in more IO operations). Further, there are restrictions on the SELECT statements that can be used with some types of cursors.
Most of the times, set based operations can be used instead of cursors. Here is an example:
If you have to give a flat hike to your employees using the following criteria:
Salary between 30000 and 40000 – 5000 hike
Salary between 40000 and 55000 – 7000 hike
Salary between 55000 and 65000 – 9000 hike
In this situation many developers tend to use a cursor, determine each employee’s salary and update his salary according to the above formula. But the same can be achieved by multiple update statements or can be combined in a single UPDATE statement as shown below:
UPDATE tbl_emp SET salary =
CASE WHEN salary BETWEEN 30000 AND 40000 THEN salary + 5000
WHEN salary BETWEEN 40000 AND 55000 THEN salary + 7000
WHEN salary BETWEEN 55000 AND 65000 THEN salary + 9000
ELSE salary -- leave other salaries unchanged so they are not set to NULL
END
Another situation in which developers tend to use cursors: You need to call a stored procedure when a column in a particular row meets certain condition. You don’t have to use cursors for this. This can be achieved using WHILE loop, as long as there is a unique key to identify each row. For examples of using WHILE loop for row by row processing, check out the ‘My code library’ section of my site or search for WHILE.
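A rough sketch of such a WHILE loop, reusing the tbl_emp example (the EmpID key column and the usp_ProcessEmployee procedure are hypothetical):
DECLARE @EmpID int
SELECT @EmpID = MIN(EmpID) FROM tbl_emp -- assumes EmpID is a unique key
WHILE @EmpID IS NOT NULL
BEGIN
    IF EXISTS (SELECT 1 FROM tbl_emp WHERE EmpID = @EmpID AND salary > 65000)
        EXEC usp_ProcessEmployee @EmpID -- hypothetical stored procedure
    SELECT @EmpID = MIN(EmpID) FROM tbl_emp WHERE EmpID > @EmpID -- move to the next row
END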
Write down the general syntax for a SELECT statements covering all the options.
Here’s the basic syntax: (Also checkout SELECT in books online for advanced syntax).
SELECT select_list
[INTO new_table]
FROM table_source
[WHERE search_condition]
[GROUP BY group_by_expression]
[HAVING search_condition]
[ORDER BY order_expression [ASC | DESC] ]
(Pre 2005) What is an extended stored procedure? Can you instantiate a COM object by using T-SQL?
An extended stored procedure is a function within a DLL (written in a programming language like C, C++ using Open Data Services (ODS) API) that can be called from T-SQL, just the way we call normal stored procedures using the EXEC statement. See books online to learn how to create extended stored procedures and how to add them to SQL Server.
Yes, you can instantiate a COM (written in languages like VB, VC++) object from T-SQL by using sp_OACreate stored procedure. Also see books online for sp_OAMethod, sp_OAGetProperty, sp_OASetProperty, sp_OADestroy. For an example of creating a COM object in VB and calling it from T-SQL, see ‘My code library’ section of this site.
Note: from SQL Server 2005 onwards, you can use CLR integration instead of extended stored procedures.
Can you explain your skill set?
- Employers look for the following:
  - DBA (Maintenance, Security, Upgrades, Performance Tuning, etc.)
  - Database developer (T-SQL, SSIS, Analysis Services, Reporting Services, Crystal Reports, Service Broker, etc.)
  - Communication skills (oral and written)
- DBA's opportunity:
  - This is your 30-second elevator pitch outlining your technical expertise and how you can benefit the organization
Can you explain the environments you have worked in related to the following items:
- SQL Server versions
- SQL Server technologies
- Relational engine, Reporting Services, Analysis Services, Integration Services
- Number of SQL Servers
- Number of instances
- Number of databases
- Range of size of databases
- Number of DBAs
- Number of Developers
- Hardware specs (CPUs, memory, 64-bit, SANs)
What are the tasks that you perform on a daily basis and how have you automated them?
- For example, daily checks could include:
  - Check for failed processes
  - Research errors
  - Validate disk space is not low
  - Validate none of the databases are offline or corrupt
  - Perform database maintenance as available to do so
- For example, automation could include:
  - Setup custom scripts to query for particular issues and email the team
  - Write error messages centrally in the application and review that data
  - Setup Operators and Alerts on SQL Server Agent Jobs for automated job notification
How do you re-architect a process?
- Review the current process to understand what is occurring
- Backup the current code for rollback purposes
- Determine what the business and technical problems are with the process
- Document the requirements for the new process
- Research options to address the overall business and technology needs. For example, these could include:
  - Views
  - Synonyms
  - Service Broker
  - SSIS
  - Migrate to a new platform
  - Upgrade in place
- Design and develop a new solution
- Conduct testing (functional, load, regression, unit, etc.)
- Run the systems in parallel
- Sunset the existing system
- Promote the new system
- Additional information - Checklist to Re-Architect a SQL Server Database
What is your experience with third party applications and why would you use them?
- Experience
  - Backup tools
  - Performance tools
  - Code or data synchronization
  - Disaster recovery/high availability
- Why
  - Need to improve upon the functionality that SQL Server offers natively
  - Save time, save money, better information or notification
How do you identify and correct a SQL Server performance issue?
- Identification - Use native tools like Profiler, Perfmon, system stored procedures, dynamic management views, custom stored procedures or third party tools
- Analysis - Analyze the data to determine the core problems
- Testing - Test the various options to ensure they perform better and do not cause worse performance in other portions of the application
- Knowledge sharing - Share your experience with the team to ensure they understand the problem and solution, so the issue does not occur again
- Additional information - MSSQLTips.com Category: Performance Tuning and Query Optimization
What are the dynamic management views and what value do they offer?
- The DMVs are a set of system views, new to SQL Server 2005 and beyond, used to gain insights into particular portions of the engine
- Here are some of the DMVs and the associated value:
- sys.dm_exec_query_stats and sys.dm_exec_sql_text - Buffered code in SQL Server
- Additional Information: Identifying the input buffer in SQL Server 2000 vs SQL Server 2005
- sys.dm_os_buffer_descriptors - Buffer pool contents
- Additional Information: Buffer Pool Space in SQL Server 2005
- sys.dm_tran_locks - Locking and blocking
- Additional Information: Locking and Blocking Scripts in SQL Server 2000 vs SQL Server 2005
- sys.dm_os_wait_stats - Wait stats
- Additional Information: Waitstats performance metrics in SQL Server 2000 vs SQL Server 2005
- sys.dm_exec_requests and sys.dm_exec_sessions - Percentage complete for a process
- Additional Information: Finding a SQL Server process percentage complete with dynamic management views
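For instance, a simple blocking check with sys.dm_exec_requests might look like this:
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0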
What is the process to upgrade from DTS to SSIS packages?
- You can follow the steps of the migration wizard, but you may need to manually upgrade portions of the package that were not upgraded by the wizard
  - Additional Information: Upgrade SQL Server DTS Packages to Integration Services Packages
- For script related tasks, these should be upgraded to new native components or VB.NET code
What are some of the features of SQL Server 2012 that you are looking into and why are they of interest?
- AlwaysOn
- Contained Databases
- User Defined Server Roles
- New date and time functions
- New FORMAT and CONCAT functions
- New IIF and CHOOSE functions
- New paging features with OFFSET and FETCH (see the sketch after this list)
- NOTE - Many more new features do exist; this is an abbreviated list.
  - Additional Information: SQL Server 2012 Denali
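For example, the new paging syntax (table and column names are hypothetical):
SELECT EmpID, Name
FROM tbl_emp
ORDER BY EmpID
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY -- skip the first 20 rows, return the next 10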
What are the system databases and what are their functions?
System databases are used to store system information. There are five system databases, each having its own functionality.
- Master
- MSDB
- Model
- Resource
- Tempdb
Master Database: stores all the system-level information for an instance of SQL Server, including metadata about every database created on the instance.
MSDB Database: stores information about, and the activities of, SQL Server Agent (jobs, alerts, etc.).
Model Database: the template used to create every new database on the SQL Server instance. If you create objects in it, they will appear in every database created afterwards, until you remove those objects from the model database.
Resource Database: contains all the system objects (views and procedures).
Tempdb Database: used to store temporary objects created during query execution. SQL Server creates a fresh copy of tempdb whenever the server starts. Backup operations are not allowed on tempdb.
What are the files in SQL Server?
SQL Server stores the data in data files on hard disk.
What are different types of database files in SQL Server?
SQL Server has three types of database files:
Primary file: stores the start-up information for the database, data, and details of the other files. A database can have only one primary data file; its file extension is .mdf.
Secondary files: store user data. A database can have multiple secondary files, which lets user data be spread across different hard disks. The file extension is .ndf.
Transaction log files: used to store the transaction log. The file extension is .ldf.
What is Dirty Read?
A dirty read occurs when one transaction reads data that another transaction has modified but not yet committed. Suppose A changed a row but did not commit the changes; B reads the uncommitted data, but since A may roll back, B's view of the data can be wrong. That is a dirty read.
Why can’t I use Outer Join in an Indexed View?
Rows can logically disappear from an indexed view based on OUTER JOIN when you insert data into a base table. This makes incrementally updating OUTER JOIN views relatively complex to implement, and the performance of the implementation would be slower than for views based on standard (INNER) JOIN.(Read More Here)
What is the Correct Order of the Logical Query Processing Phases?
The correct order of the Logical Query Processing Phases is as follows:
- FROM
- ON
- OUTER
- WHERE
- GROUP BY
- CUBE | ROLLUP
- HAVING
- SELECT
- DISTINCT
- ORDER BY
- TOP
What are Different Types of Locks?
Shared Locks: Used for operations that do not change or update data (read-only operations), such as a SELECT statement.
Update Locks: Used on resources that can be updated. It prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and potentially updating resources later.
Exclusive Locks: Used for data-modification operations, such as INSERT, UPDATE, or DELETE. It ensures that multiple updates cannot be made to the same resource at the same time.
Intent Locks: Used to establish a lock hierarchy. The types of intent locks are as follows: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).
Schema Locks: Used when an operation dependent on the schema of a table is executing. The types of schema locks are schema modification (Sch-M) and schema stability (Sch-S).
Bulk Update Locks: Used when bulk-copying data into a table and the TABLOCK hint is specified.
What are Pessimistic Lock and Optimistic Lock?
Optimistic Locking is a strategy where you read a record, take note of a version number and check that the version hasn’t changed before you write the record back. If the record is dirty (i.e. different version to yours), then you abort the transaction and the user can re-start it.
Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid Deadlocks.
When is the UPDATE_STATISTICS command used?
This command is basically used when a large amount of data has been processed. If a large number of deletions, modifications or bulk copies into the tables has occurred, the index statistics need to be updated to take these changes into account. UPDATE STATISTICS updates the statistics on the indexes of these tables accordingly.
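For example (the table name is hypothetical):
UPDATE STATISTICS tbl_emp -- refresh all statistics on the table
UPDATE STATISTICS tbl_emp WITH FULLSCAN -- scan every row rather than sampling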
What is Connection Pooling and why it is Used?
To minimize the cost of opening and closing connections, ADO.NET uses an optimization technique called connection pooling.
The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
Types of Sub-query
- Single-row sub-query, where the sub-query returns only one row.
- Multiple-row sub-query, where the sub-query returns multiple rows, and
- Multiple column sub-query, where the sub-query returns multiple columns
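Sketches of the first two (tbl_emp and the Department lookup table are hypothetical):
-- Single-row sub-query
SELECT Name FROM tbl_emp WHERE salary = (SELECT MAX(salary) FROM tbl_emp)
-- Multiple-row sub-query
SELECT Name FROM tbl_emp WHERE DeptID IN (SELECT DeptID FROM Department WHERE Location = 'NY')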
Which Command using Query Analyzer will give you the Version of SQL Server and Operating System?
SELECT SERVERPROPERTY('Edition') AS Edition,
SERVERPROPERTY('ProductLevel') AS ProductLevel,
SERVERPROPERTY('ProductVersion') AS ProductVersion
GO
Can a Stored Procedure call itself or a Recursive Stored Procedure? How many levels of SP nesting is possible?
Yes. As T-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps.
Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures up to 32 levels.
Any reference to managed code from a Transact-SQL stored procedure counts as one level against the 32-level nesting limit. Methods invoked from within managed code do not count against this limit. (Read more here) (Courtesy: Vinod Kumar)
What is Log Shipping?
Log shipping is the process of automating the backup of database and transaction log files on a production SQL Server and then restoring them onto a standby server. All editions (except Express Edition) support log shipping. In log shipping, the transaction log file from one server is automatically applied to the backup database on the other server. If one server fails, the other server will have the same database and can be used as the disaster recovery plan. The key feature of log shipping is that it will automatically back up transaction logs throughout the day and automatically restore them on the standby server at defined intervals. (Courtesy: Rhys)
Name 3 ways to get an Accurate Count of the Number of Records in a Table?
- SELECT * FROM table1
- SELECT COUNT(*) FROM table1
- SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table1') AND indid < 2
What does it mean to have QUOTED_IDENTIFIER ON? What are the Implications of having it OFF?
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks.
When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all T-SQL rules for identifiers. (Read more here)
What is the STUFF Function and How Does it Differ from the REPLACE Function?
STUFF function is used to overwrite existing characters using this syntax: STUFF (string_expression, start, length, replacement_characters), where string_expression is the string that will have characters substituted, start is the starting position, length is the number of characters in the string that are substituted, and replacement_characters are the new characters interjected into the string.
REPLACE function is used to replace all occurrences of existing characters. Using the syntax REPLACE (string_expression, search_string, replacement_string), every occurrence of search_string found in string_expression will be replaced with replacement_string.
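For example:
SELECT STUFF('abcdef', 2, 3, 'XYZ') -- returns 'aXYZef' (deletes 'bcd', inserts 'XYZ')
SELECT REPLACE('abcabc', 'b', 'Z') -- returns 'aZcaZc'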
What is B-Tree?
The database server uses a B-tree structure to organize index information. B-Tree generally has following types of index pages or nodes:
- Root node: A root node contains node pointers to only one branch node.
- Branch nodes: A branch node contains pointers to leaf nodes or other branch nodes, which can be two or more.
- Leaf nodes: A leaf node contains index items and horizontal pointers to other leaf nodes, which can be many.
How to get @@ERROR and @@ROWCOUNT at the Same Time?
If @@ROWCOUNT is checked after an error-checking statement, it will have 0 as its value, as it will have been reset. And if @@ROWCOUNT is checked before the error-checking statement, @@ERROR will get reset. To get @@ERROR and @@ROWCOUNT at the same time, include both in the same statement and store them in local variables.
SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
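A minimal sketch wrapping that statement in declarations (the UPDATE shown is a made-up statement to check):
DECLARE @RC int, @ER int
UPDATE tbl_emp SET salary = salary WHERE EmpID = 1 -- hypothetical statement being checked
SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
SELECT @RC AS RowsAffected, @ER AS ErrorNumber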
What are the Advantages of Using Stored Procedures?
- Stored procedures can reduce network traffic and latency, boosting application performance.
- Stored procedure execution plans can be reused; they stay cached in SQL Server's memory, reducing server overhead.
- Stored procedures help promote code reuse.
- Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.
- Stored procedures provide better security to your data.
What is a Table Called, if it has neither Cluster nor Non-cluster Index? What is it Used for?
An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap.
A heap is a table that does not have a clustered index and, therefore, the pages are not linked by pointers. The IAM pages are the only structures that link the pages in a table together.
Unindexed tables are good for fast storing of data. Many times, it is better to drop all the indexes from the table, do the bulk INSERTs, and then restore those indexes afterwards.
What Command do we Use to Rename a db, a Table and a Column?
To Rename db
sp_renamedb 'oldname', 'newname'
If someone is using the db, it will not accept sp_renamedb. In that case, first bring the db to single-user mode using sp_dboption, use sp_renamedb to rename the database, then use sp_dboption to bring the database back to multi-user mode.
e.g.
USE MASTER;
GO
EXEC sp_dboption AdventureWorks, 'Single User', True
GO
EXEC sp_renamedb 'AdventureWorks', 'AdventureWorks_New'
GO
EXEC sp_dboption AdventureWorks, 'Single User', False
GO
To Rename Table
We can change the table name using sp_rename as follows:
sp_rename 'oldTableName', 'newTableName'
e.g.
sp_RENAME 'Table_First', 'Table_Last'
GO
To rename Column
The script for renaming any column is as follows:
sp_rename 'TableName.[OldcolumnName]', 'NewColumnName', 'Column'
e.g.
sp_RENAME 'Table_First.Name', 'NameChange', 'COLUMN'
GO
What are sp_configure Commands and SET Commands?
Use sp_configure to display or change server-level settings. To change the database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
e.g.
sp_CONFIGURE 'show advanced', 0
GO
RECONFIGURE
GO
sp_CONFIGURE
GO
You can run the following command and check the advanced global configuration settings.
sp_CONFIGURE 'show advanced', 1
GO
RECONFIGURE
GO
sp_CONFIGURE
GO