Microsoft SQL Server
Some time back I was involved in a project to collect information about the servers we support – building an inventory of our servers. Logging into each server and collecting the information manually is tedious when you have hundreds of servers in your environment, so I spent some time and created a script that does this work…
This post should give you some insight into the risk your databases are exposed to when you switch to the bulk-logged recovery model. So, what do you need to do to avoid this risk? Make sure that you run a log backup immediately after the transactions you are running under the bulk-logged recovery model complete.
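The pattern described above can be sketched roughly as follows. The database, table, and file paths here are made up for illustration; the point is that the window between switching back to full recovery and the log backup is where point-in-time restore is at risk:

```sql
-- Hypothetical database (SalesDB) and load table for illustration.
-- Switch to bulk-logged only for the duration of the bulk operation.
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

BULK INSERT dbo.StagingOrders
    FROM 'D:\loads\orders.dat'
    WITH (TABLOCK);           -- eligible for minimal logging under bulk-logged

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- This log backup closes the window in which point-in-time
-- restore is unavailable for the minimally logged operations.
BACKUP LOG SalesDB TO DISK = 'D:\backup\SalesDB_log.trn';
```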
I describe AlwaysOn Availability Groups as a “database mirroring configuration sitting on top of a Windows Failover Cluster infrastructure.” Why do I say this? It’s because I want SQL Server DBAs to leverage what they already know on features like database mirroring and failover clustering and apply them when dealing with AlwaysOn Availability Groups.
In this post, I will explain in a little more depth the physical structure of a SQL Server data file. Unfortunately, I cannot cover every assumption, behaviour and detail of this topic, but you will get a good overview of the file structure and a good starting point for understanding, and reading more about, index internals, file fragmentation, transaction log internals and so on… Before we start, let's review some vital concepts.
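One hands-on way to explore the page-level structure discussed here is the undocumented (but widely used) DBCC PAGE command. A minimal sketch, assuming a copy of the AdventureWorks sample database and an arbitrary page number:

```sql
-- DBCC PAGE is undocumented: syntax is
--   DBCC PAGE (<database>, <file_id>, <page_id>, <print_option>)
-- Print option 3 dumps the page header plus row-by-row detail.
DBCC TRACEON (3604);                       -- send DBCC output to the client
DBCC PAGE ('AdventureWorks', 1, 168, 3);   -- page 168 in file 1 (example values)
```

Because it is undocumented, use it on test systems only; the page number above is just an example, not a meaningful page.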
A client of ours is just getting started with Amazon Relational Database Service (RDS) and I wonder as time marches on how popular this cloud solution is going to play out for them and Amazon as a valid/useable service offering.
Recently we had a situation wherein a customer asked us to move the master database from a local drive to a SAN drive. I have outlined the steps for the task:
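As a rough outline of the standard procedure (the SAN paths below are hypothetical): master cannot be relocated with ALTER DATABASE alone, because its location is read from the SQL Server startup parameters.

```sql
-- master's file locations come from the service startup parameters,
-- edited in SQL Server Configuration Manager
-- (SQL Server service -> Properties -> Startup Parameters):
--
--   -dS:\SQLData\master.mdf     (data file  - hypothetical SAN path)
--   -lS:\SQLLog\mastlog.ldf     (log file   - hypothetical SAN path)
--
-- After updating the parameters: stop the SQL Server service,
-- copy master.mdf and mastlog.ldf to the new SAN locations,
-- then start the service and verify with:
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('master');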
Continuing from my last post, I felt like cleaning up some more data and using more SQL Server 2012 features. I happened upon DATEFROMPARTS on MSDN and felt like playing with it.
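For anyone who hasn't seen it, a quick sketch of what DATEFROMPARTS does (the column names in the second query are invented for the example):

```sql
-- DATEFROMPARTS(year, month, day) builds a DATE from integer parts,
-- and raises an error if the parts don't form a valid date.
SELECT DATEFROMPARTS(2012, 6, 18) AS SomeDate;

-- Handy for rebuilding dates stored as separate integer columns:
SELECT DATEFROMPARTS(d.Yr, d.Mo, 1) AS MonthStart
FROM (VALUES (2012, 6), (2011, 12)) AS d(Yr, Mo);
```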
One new feature in SQL Server 2012 that I am very excited about is TRY_CONVERT, which makes data conversion much easier. When you pair TRY_CONVERT with some CASE expressions, you can easily pipe the inconvertible data to other columns, which can then be reviewed and (hopefully) fixed prior to cutting over your data. I've mocked up a quick demo so you can see how powerful this new feature is.
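A minimal sketch of the TRY_CONVERT-plus-CASE pattern, using made-up sample values rather than the post's actual demo:

```sql
-- TRY_CONVERT returns NULL instead of raising an error when a
-- conversion fails, so CASE can route bad values to a review column.
WITH RawData (Val) AS (
    SELECT v.Val
    FROM (VALUES ('123'), ('45.6'), ('oops'), ('7')) AS v(Val)
)
SELECT
    Val,
    TRY_CONVERT(int, Val)                                AS CleanValue,
    CASE WHEN TRY_CONVERT(int, Val) IS NULL
         THEN Val END                                    AS NeedsReview
FROM RawData;
```

Rows that convert cleanly land in CleanValue; anything TRY_CONVERT can't handle surfaces in NeedsReview for manual fix-up.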
This post discusses the Microsoft Assessment and Planning (MAP) Toolkit, which is a nice little tool with a few extremely useful features.
For the business world, a good RDBMS is one that can provide high-availability (HA) options with minimal downtime and the cheapest licensing. Vendors usually, if not always, provide a set of HA features to satisfy different needs. Some of these features are reserved for the higher-end editions (Enterprise, Datacenter, clustered, or whatever they are named); of course, those editions are more expensive than their "inferior" counterparts. The most important factors in choosing an edition are: