Posts Categorized: Technical Blog
I just got an Exadata Customer Advisory e-mail from Oracle, identifying “an important issue that needs your immediate attention” and pointing to MOS note 1265396.1. The only fix is to upgrade to the newly released version 18.104.22.168.1 of the storage server software.
Welcome to Log Buffer, the weekly news blog about blogs in the datasphere.
I’m here at the PASS Community Summit 2010 in Seattle, WA. This happens to be the largest SQL Server conference in the world in terms of content, attendees, and Microsoft participation. Several announcements were made in today’s keynote, some of which insiders have been waiting to blog about since earlier this year. The first is the release of SQL Server 2008 R2 Parallel Data Warehouse.
I was searching oracle.com for some Exadata-related wait events and noticed hits popping up from formerly closely held Exadata documentation. On closer inspection, I found the full Exadata V1 user’s guide. I hope the V2 documentation will be posted soon.
For those of you who weren’t able to attend my webinar last week, “Implementing Exadata: the results are in,” recordings are now available online.
One of the key features of Exadata V2 is the flash cache. Although commonly thought of as an OLTP-specific feature, it has also been marketed as a data warehouse accelerator. According to this frequently used presentation slide, a full Exadata rack provides 21 GB/sec of disk throughput and 50 GB/sec of flash throughput. I was testing throughput using a simple query, making use of both smart scans and parallel execution. Here’s what the objects look like. The tests ran on a quarter-rack system with a stated capacity of 4.5 GB/sec disk and 11 GB/sec flash.
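A quick back-of-envelope, using only the quoted quarter-rack figures above (the 1 TB segment size is a hypothetical example, not from the actual tests):

```python
# Back-of-envelope scan-time estimate from the quoted Exadata quarter-rack
# throughput figures. The throughput numbers are the marketing figures cited
# above; the 1 TB segment is a made-up example size.

DISK_GBPS = 4.5    # quarter-rack disk scan throughput (GB/s)
FLASH_GBPS = 11.0  # quarter-rack flash cache scan throughput (GB/s)

def scan_seconds(segment_gb, gbps):
    """Idealized time to scan a segment of the given size at the given rate."""
    return segment_gb / gbps

segment_gb = 1024  # hypothetical 1 TB table
print(f"disk only : {scan_seconds(segment_gb, DISK_GBPS):6.1f} s")
print(f"flash only: {scan_seconds(segment_gb, FLASH_GBPS):6.1f} s")
print(f"combined  : {scan_seconds(segment_gb, DISK_GBPS + FLASH_GBPS):6.1f} s")
```

Real smart-scan throughput depends on compression, predicate offload, and how much of the segment is actually resident in flash, so treat these as ceilings rather than predictions.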
Welcome to Log Buffer, the weekly roundup of happenings in the database world. Lots to cover this week, so let’s get on with Log Buffer #204. Enjoy!
One of my favorite customers had a problem. They had to load around 20 GB of data into a table in a MySQL database. The data loaded fine, but when they tried to build a few indexes on the table, they got a mysterious error: ERROR 1114 (HY000): The table ‘really_big_table’ is full. The error was mysterious because we had around 1.5 TB of free space on the disk. Also, if the customer created the indexes before loading the data, there was no error. This gave them a workaround, but one that was slow and annoying. Later, I found out that we were not the first to run into this mystery.
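The two load orderings described above can be sketched with a stand-in database (sqlite3 here, purely for illustration; MySQL’s ERROR 1114 involves server-side temporary space and does not occur in SQLite, and the `payload` column is made up):

```python
# Illustration of the workaround ordering described above, using sqlite3 as a
# stand-in for MySQL. Table name is from the error message; the column and
# row contents are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE really_big_table (id INTEGER, payload TEXT)")

# Workaround ordering: build the index first, then load the data, so rows
# are indexed incrementally instead of in one large post-load index build.
cur.execute("CREATE INDEX idx_payload ON really_big_table (payload)")
cur.executemany(
    "INSERT INTO really_big_table VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(1000)],
)
con.commit()

count = cur.execute("SELECT COUNT(*) FROM really_big_table").fetchone()[0]
print(count)  # 1000
```

Indexing row-by-row during the load is exactly why the workaround was slow: a bulk index build after the load is normally much faster, when it fits.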
Following up on my earlier webinar Implementing Oracle Exadata – Strategies for Success, I’ll be giving another webinar to present the results of the Exadata implementation at LinkShare. I’ll be talking about actual performance results, our zero-downtime go-live, compression experiences, and performance tuning in an Exadata environment.
FourSquare, the location-based social network, suffered an extended outage yesterday. They explained the causes in a blog post, which sparked much discussion around the web. Here’s the gist of the analysis: FourSquare are using MongoDB, which is a sharded database. Data is split between nodes based on a shard key, usually the user ID or something similar. One of the shards became overly loaded. After failing to resolve the issue in other ways, FourSquare decided to add another shard to share the load. This caused the entire cluster to fail.
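The shard-key mechanics above can be sketched minimally. This uses naive hash-mod placement purely for illustration (MongoDB actually splits shard-key ranges into chunks and migrates them with a balancer), but it shows why adding a shard to a loaded cluster is not free: a large fraction of keys must move to new homes.

```python
# Minimal sketch of shard-key placement and what adding a shard implies.
# Naive hash-mod placement, for illustration only; MongoDB uses range-based
# chunks of the shard key rather than hash-mod of the whole keyspace.
import hashlib

def shard_for(user_id, num_shards):
    """Map a shard key (here, a user ID) to a shard with a stable hash."""
    h = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16)
    return h % num_shards

user_ids = range(10_000)

before = {uid: shard_for(uid, 2) for uid in user_ids}  # two-shard cluster
after = {uid: shard_for(uid, 3) for uid in user_ids}   # after adding a shard

moved = sum(1 for uid in user_ids if before[uid] != after[uid])
print(f"{moved / len(user_ids):.0%} of keys change shards going from 2 to 3")
```

With hash-mod placement roughly two-thirds of keys relocate; consistent hashing and MongoDB’s chunk migration both exist to shrink that movement, and the outage shows what happens when rebalancing collides with an already-overloaded shard.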