RDBMS Flashcards

1
Q

Who is the primary developer of MySQL?

A

Oracle

2
Q

What type of licensing does MySQL use?

A

Open source

3
Q

What is the MS SQL Server implementation language?

A

C++

4
Q

What is the MySQL implementation language?

A

C & C++

5
Q

Which RDBMS, MS SQL Server or MySQL, supports XML?

A

Both

6
Q

What are three of the main programming languages supported by MS SQL Server?

A

C#, PHP, Python, Ruby, R, Visual Basic, Java, etc.

7
Q

What are three of the main programming languages supported by MySQL?

A

C++, C#, Java, PHP, Perl, Python, Ruby, Tcl, Delphi, D, etc.

8
Q

What is a DBMS or RDBMS?

A

Database Management System
Relational Database Management System

A DBMS is a set of software programs used to manage and interact with databases; an RDBMS is a DBMS for relational databases, where data is organized into tables.
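
As a minimal illustration of that interaction (an illustrative sketch, not part of the original card; the table and column names are made up), the same standard SQL runs against most RDBMSs, including MS SQL Server and MySQL:

```sql
-- Illustrative only: a tiny table the RDBMS creates, populates, and queries on our behalf.
CREATE TABLE flashcards (
    id       INT PRIMARY KEY,
    question VARCHAR(200) NOT NULL,
    answer   VARCHAR(200) NOT NULL
);

INSERT INTO flashcards (id, question, answer)
VALUES (1, 'Who is the primary developer of MySQL?', 'Oracle');

SELECT question, answer
FROM flashcards
WHERE id = 1;
```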

9
Q

What considerations should be given to Capacity Planning?

A

Capacity planning is about more than just knowing how much space you need for database files. You have to understand the workload and what it requires in terms of CPU, memory, and disk resources.

Storage Capacity
Planning for the amount of storage needed is fairly simple; it just requires some analysis up front. It's imperative to trend this data over time. You're looking to see how the files change over months, possibly for up to one to two years, because usage patterns for an application can change.
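
One way to capture that trend (a sketch, assuming SQL Server and an illustrative history table name; not part of the original card) is to snapshot file sizes from sys.master_files on a schedule and compare over time:

```sql
-- Sketch: snapshot current data/log file sizes (MB) so growth can be trended over months.
-- dbo.file_size_history is an illustrative name; schedule this via an Agent job or similar.
IF OBJECT_ID('dbo.file_size_history') IS NULL
    CREATE TABLE dbo.file_size_history (
        captured_at   DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
        database_name SYSNAME   NOT NULL,
        file_name     SYSNAME   NOT NULL,
        size_mb       BIGINT    NOT NULL
    );

INSERT INTO dbo.file_size_history (database_name, file_name, size_mb)
SELECT DB_NAME(database_id), name, CAST(size AS BIGINT) * 8 / 1024  -- size is in 8 KB pages
FROM sys.master_files;
```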

Storage Performance
When we talk about performance requirements for storage, you often hear people talk about IOPS (input/output operations per second). Reads and writes per second are the input/output operations; this data is available through PerfMon at the instance level, and it is what you use to define the IOPS requirements for an instance.

However, if you know reads and writes and user connections, then you can do some math and figure out IOPS per user. This is useful if you're planning to grow the solution and add more users. You want to make sure the solution will scale, and one option you have is to take your calculated IOPS per user, based on X number of users, and then estimate instance IOPS for Y number of users. Now, we make a lot of assumptions with this calculation. We assume that the way new connections use the system is the same as it is today; that may or may not be the case in the end, and you won't know until the system is in place. When you understand this value ((reads + writes) / user connections = average IOPS per user), you know how to estimate IOPS for a solution based on anticipated user connections.
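
A hedged sketch of that math against SQL Server's own counters (not part of the original card; it uses cumulative values from sys.dm_io_virtual_file_stats, so it assumes the instance has been running long enough to represent a typical workload, and it divides by whoever happens to be connected right now):

```sql
-- Sketch: average IOPS per user = (reads + writes) / uptime seconds / user connections.
DECLARE @uptime_sec BIGINT =
    DATEDIFF(SECOND, (SELECT sqlserver_start_time FROM sys.dm_os_sys_info), GETDATE());

SELECT
    SUM(vfs.num_of_reads + vfs.num_of_writes) * 1.0 / @uptime_sec      AS avg_instance_iops,
    (SELECT COUNT(*) FROM sys.dm_exec_sessions
     WHERE is_user_process = 1)                                        AS user_connections,
    SUM(vfs.num_of_reads + vfs.num_of_writes) * 1.0 / @uptime_sec
        / NULLIF((SELECT COUNT(*) FROM sys.dm_exec_sessions
                  WHERE is_user_process = 1), 0)                       AS avg_iops_per_user
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
```

Multiply avg_iops_per_user by the anticipated number of connections to get the projected instance IOPS for Y users.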

You then take this information to your storage person to discuss the potential configurations available. You can calculate the maximum IOPS for a disk configuration, provided you have information about the disks (e.g. the number of disks, the speed, the size, and the RAID configuration). You can test IO throughput for a drive using CrystalDiskMark, although this may not be possible if the storage hasn’t been decided. Once it is in place, however, you should go through this testing to ensure that the IOPS for a given drive can meet the expected workload.
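
As a rough worked example using common rule-of-thumb figures (not from the original card): eight 15K RPM disks at roughly 175 IOPS each provide about 1,400 raw IOPS. In RAID 10, where each write costs two physical writes, a 50/50 read/write workload yields roughly 1,400 / (0.5 + 0.5 × 2) ≈ 930 effective IOPS; in RAID 5, with a write penalty of 4, the same workload drops to roughly 1,400 / (0.5 + 0.5 × 4) ≈ 560.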

IOPS are just one way to look at storage performance. This data tells you how much IO is occurring, and ideally, if you know the IOPS requirements and have storage that can meet them, latency should be minimal. But latency is what actually affects performance. To determine what latency exists, you'll need to use a tool like DiskSpd to benchmark the storage.
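
DiskSpd tells you what the storage can deliver; as a hedged companion check (not part of the original card), the same virtual file stats DMV exposes cumulative stall times, which give the average read and write latency SQL Server has actually observed per file since startup:

```sql
-- Sketch: average observed I/O latency (ms) per database file since the instance started.
SELECT
    DB_NAME(vfs.database_id)                              AS database_name,
    mf.physical_name,
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```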

CPU
Optimizing your CPU performance is not just about the number of CPUs that you have; you also have to consider the model and the workload (e.g. a data warehouse with large parallel queries vs. OLTP with serial queries). Don't forget to consider licensing costs and limitations based on your edition of SQL Server!

Memory
Memory is relatively inexpensive, and our recommendation is to always purchase the maximum amount of memory that a server can hold. Reading data from memory is significantly faster than reading it from disk, so the more data that fits into memory, the better. Note that the entire database does not have to fit in memory; you just need the working set of data to fit. Consider a 2TB database. It's unlikely that, in an OLTP scenario, all 2TB is accessed every day. Typically only recent data is accessed, perhaps just the last 30 or 60 days, and that is the data that needs to fit in memory. Of course, rarely do we see a pure OLTP environment; often it's a mixed environment, because users like to run reports over large sets of data and there's no data warehouse or reporting copy of the database, so they have to run the reports against production. This complicates the memory requirement. Sometimes you need that older data in memory, and sometimes you don't. It's important to understand the workload: what types of queries are executing against the database?
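
To see how much of each database's working set is actually sitting in memory right now, one commonly used check (a sketch, not part of the original card) counts buffer pool pages per database:

```sql
-- Sketch: current buffer pool usage per database (pages are 8 KB each).
SELECT
    CASE WHEN database_id = 32767 THEN 'ResourceDb'
         ELSE DB_NAME(database_id) END   AS database_name,
    COUNT(*) * 8 / 1024                  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```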

If you are using Standard Edition, verify that you have more memory in the server than the maximum memory supported by that edition. For example, with SQL Server 2014 and higher, in Standard Edition the maximum amount of memory that you can allocate to the buffer pool (via the max server memory setting) is 128GB. Therefore, you want to have more memory in the server (e.g. 160GB) so you can set max server memory at the highest possible value of 128GB and still have memory available for the OS and other SQL Server processes. Further, with SQL Server 2016 SP1 Standard Edition you can use In-Memory OLTP, with a limit of 32GB per database. This is above the max server memory value, so if you plan to use this feature, purchase memory accordingly.
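
Using the 160GB example above, max server memory could be capped at 128GB like this (a sketch; 131072 MB is simply 128GB expressed in MB, and the right value for your server depends on what else runs on it):

```sql
-- Sketch: cap the buffer pool at 128 GB (131072 MB), the Standard Edition limit,
-- leaving the rest of the server's RAM for the OS and other SQL Server components.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 131072;
RECONFIGURE;
```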
