Microsoft SQL Server database design optimization

Consider the specific memory requirements for your OS, other applications, the SQL Server thread stack, and other multipage allocators. These considerations also apply to the memory requirements for SQL Server to run in a virtual machine.

Because SQL Server is designed to cache data in the buffer pool and will typically use as much memory as it is allowed, it can be difficult to determine the ideal amount of RAM needed. Once you understand the environment baseline, you can reduce the max server memory by 1 GB, then see how that impacts your performance counters after any initial cache flushing subsides. If the metrics remain acceptable, reduce by another 1 GB, then monitor again, repeating as desired until you determine an ideal configuration.
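As a sketch of the iterative approach above, the max server memory setting can be inspected and lowered with sp_configure; the 27648 MB value is a placeholder, not a recommendation:

```sql
-- Check the current configuration (values are in MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Reduce max server memory by 1 GB (example value only),
-- then monitor performance counters before reducing further.
EXEC sp_configure 'max server memory (MB)', 27648;
RECONFIGURE;
```

The change takes effect without a restart, which is what makes the reduce-and-observe loop practical.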

For more information, see Server memory configuration options.

The size and physical placement of the tempdb database can affect the performance of Operations Manager. For example, if the size that is defined for tempdb is too small, part of the system-processing load may be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server. To achieve optimal tempdb performance, we recommend pre-sizing tempdb and splitting it into multiple equally sized data files in a production environment.

To configure tempdb, you can run an ALTER DATABASE query or modify its properties in SQL Server Management Studio. Under heavy allocation contention in tempdb, SQL Server may also appear unresponsive for short periods.
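A minimal sketch of such a configuration follows; the sizes, growth increments, and the T:\ file path are assumptions to adapt to your environment (tempdev and templog are the default logical file names):

```sql
-- Pre-size the tempdb data and log files with fixed growth increments.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 2048MB, FILEGROWTH = 512MB);

-- Add equally sized data files (commonly one per logical processor,
-- up to 8) to reduce allocation contention.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\tempdb\tempdev2.ndf',
              SIZE = 8192MB, FILEGROWTH = 1024MB);
```

Keeping all data files the same size matters because SQL Server's proportional-fill algorithm only spreads allocations evenly across files of equal size.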

Another approach is to examine the Dynamic Management Views (DMVs) that expose waiting requests and tasks. The results will show that these requests or tasks are waiting on tempdb resources, with values similar to those highlighted earlier. If the previous recommendations do not significantly reduce the allocation contention and the contention is on SGAM pages, add the appropriate trace flag as a -T startup parameter for SQL Server so that the trace flag remains in effect even after SQL Server is restarted.
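As an illustrative sketch of the DMV approach, the following query surfaces tasks waiting on page latches in tempdb (database id 2); the PAGE%LATCH% pattern and the 2: prefix filter are the conventional way to spot tempdb allocation contention:

```sql
-- Tasks waiting on tempdb pages; resource_description is dbid:fileid:pageid.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH%'
  AND resource_description LIKE '2:%';
```

Waits whose page ids resolve to PFS, GAM, or SGAM pages indicate allocation contention rather than ordinary data-page contention.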

Note that this trace flag affects every database on the instance of SQL Server. The default configuration of SQL Server for small to medium size deployments of Operations Manager is adequate for most needs.

The max degree of parallelism option determines the computing and thread resources that are used for the query plan operators that perform the work in parallel.

Depending on whether SQL Server is set up on a symmetric multiprocessing (SMP) computer, a non-uniform memory access (NUMA) computer, or hyperthreading-enabled processors, you have to configure the max degree of parallelism option appropriately. When SQL Server runs on a computer with more than one microprocessor or CPU, it detects the best degree of parallelism, that is, the number of processors employed to run a single statement, for each parallel plan execution.

By default, the value of this option is 0, which allows SQL Server to determine the maximum degree of parallelism. The stored procedures and queries predefined in Operations Manager for the operational, data warehouse, and audit databases do not include the MAXDOP query hint: there is no way during installation to dynamically query how many processors are presented to the operating system, and hardcoding the value could have negative consequences when the query is executed.
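The instance-wide setting can be sketched as follows; the value 8 is an assumed example (a common starting point is the number of cores in one NUMA node), not a recommendation from this article:

```sql
-- View and set max degree of parallelism; 0 (the default) lets
-- SQL Server choose at run time.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```

Because the Operations Manager queries carry no MAXDOP hint, this instance-level value is what governs their parallelism.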

The max degree of parallelism configuration option does not limit the number of processors that SQL Server uses. To configure the number of processors that SQL Server uses, use the affinity mask configuration option. You can monitor the number of parallel workers by querying the task-related DMVs. After performing the optimizations suggested in this section, performance improved; however, a query parallelism bottleneck still persisted.

Estimating the future growth of the Operations Manager databases, specifically the operational and data warehouse databases, within the first several months after deployment is not a simple exercise. While the Operations Manager Sizing Helper is reasonable in estimating potential growth based on the formula derived by the product group from their testing in the lab, it does not take into account several factors that can influence growth in the near term versus the long term.

The initial database size, as suggested by the Sizing Helper, should be allocated up front to the predicted size to reduce fragmentation and the corresponding overhead; the size can be specified at setup time for the operational and data warehouse databases.

If not enough storage space is available during setup, the databases can be expanded later by using SQL Server Management Studio and then reindexed to defragment and optimize them. This recommendation also applies to the ACS database. Monitor the growth of the operational and data warehouse databases proactively on a daily or weekly cycle. This is necessary to identify unexpected and significant growth spurts and to begin troubleshooting whether they are caused by a bug in a management pack workflow (that is, a discovery rule, a performance or event collection rule, or a monitor or alert rule) or by some other problem with a management pack that was not identified during the testing and quality assurance phases of the release management process.
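For the periodic growth check described above, a simple sketch is to record file sizes over time from sys.master_files; the database names below are the Operations Manager defaults and are assumptions here:

```sql
-- Current file sizes; size is reported in 8 KB pages, converted to MB.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_file,
       size * 8 / 1024      AS size_mb
FROM sys.master_files
WHERE DB_NAME(database_id) IN ('OperationsManager', 'OperationsManagerDW');
```

Logging this result daily or weekly gives a trend line against which an unexpected growth spurt stands out.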

When the file space that has been reserved on disk for a database becomes full, SQL Server can automatically increase the size, either by a percentage or by a fixed amount.

Moreover, a maximum database size can be configured to prevent filling up all the available disk space. Rely on autogrow only as a contingency for unexpected growth. Autogrow introduces a performance penalty that should be considered when dealing with a highly transactional database: transactions that need the new space must wait while the file grows, and repeated small growths fragment the file on disk. If you combine the autogrow and autoshrink options, you might create unnecessary overhead.

Make sure that the thresholds that trigger the grow and shrink operations will not cause frequent up-and-down size changes. For example, you may run a transaction that causes the transaction log to grow by a given amount by the time it commits.

Some time after that, autoshrink starts and shrinks the transaction log by that same amount. Then you run the same transaction again, and it causes the transaction log to grow again. In that example, you are creating unnecessary overhead and potentially fragmenting the log file, either of which can negatively affect performance.

It is recommended to configure these two settings carefully; the right configuration depends on your environment. In general, increase the database size by a fixed amount in order to reduce disk fragmentation, for example, by configuring the database to grow by a fixed number of megabytes each time autogrow is required. This guidance is also aimed at application developers who are not familiar with designing, implementing, or maintaining databases, yet need to create a database for use with their applications.
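A fixed-increment growth policy with a size cap can be sketched as follows; the logical file name MOM_DATA, the 1024 MB increment, and the 500 GB cap are all assumed example values:

```sql
-- Grow by a fixed amount rather than a percentage, and cap the
-- maximum size so the disk cannot be filled completely.
ALTER DATABASE OperationsManager
    MODIFY FILE (NAME = MOM_DATA, FILEGROWTH = 1024MB, MAXSIZE = 500GB);
```

A fixed increment keeps growth events predictable, whereas percentage growth produces ever larger, ever slower growth operations as the file gets bigger.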

The links below are external and provide guidance on how to design, create, and maintain databases using SQL Server, as well as how to avoid common mistakes during database design.


A well-thought-out database design reduces data storage requirements by normalizing data, increases performance through the organization and indexing of data, and generally helps you avoid revisiting and redesigning your database multiple times during application development. Ten Common Database Design Mistakes provides a list of ten of the more common design mistakes, as well as their consequences. When tuning a slow query, obtain its execution plan, evaluate the plan, and identify any good indexes that the optimizer could use.

Also, identify the part of your query that takes the longest to run and might be optimized further. You might also see a detected missing index in the execution plan. Conclusion: the guidelines discussed here are for basic performance tuning; following these steps may yield a good improvement in performance.
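Beyond a single plan, SQL Server accumulates missing-index suggestions in DMVs; a sketch of surfacing them, ranked by a rough impact score, is shown below. Treat the results as candidates to evaluate, not indexes to create blindly:

```sql
-- Indexes the optimizer has flagged as missing, highest estimated impact first.
SELECT TOP (10)
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```

Note that these statistics are reset when the instance restarts, so collect them after a representative period of workload.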

To do advanced SQL Server performance tuning, you would need to dig much deeper into each of the steps covered here.



