
How to reduce load on your Storage Tier


Use XtremCache to reduce load on your storage tier and save money

Bitly URL: http://bit.ly/1gNyOxA

 

Tweet this document:

How to reduce load on your Storage Tier

http://bit.ly/1gNyOxA

 

Follow us on Twitter:

@EMCOracle

I had a eureka moment some time ago that I would like to share with you. It happened while we were doing a performance review of an Oracle data warehouse. The client wanted to know whether the system was FAST VP friendly, whether he could safely deploy it in his FAST VP pool, and if so, which policy he would need to select in order not to lose any performance on the array side.

 

We analyzed the storage usage of the system and found, strangely enough, that almost 80% of all read I/Os went to the same device. This started me thinking: if processes reuse a lot of the same blocks, server-side caching makes sense for better performance. But it would also achieve another goal for this customer: roughly 80% of his reads would no longer touch his storage array at all. His service level and chargeback model was based on the number of IOPS (I/Os per second) his storage needed to deliver, so instead of having to choose a high-performance tier he could easily choose a less performant tier (fewer IOPS), because the storage simply needed to process fewer I/Os.
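To make the effect on the chargeback model concrete, here is a small back-of-the-envelope calculation in Python. The workload numbers are hypothetical, not the client's actual figures; only the 80% ratios come from the story above.

```python
# Hypothetical sizing example: how offloading reads shrinks the IOPS
# the array must deliver (and the tier you have to pay for).
total_iops = 20_000      # assumed total I/O load before caching
read_ratio = 0.80        # reads as a fraction of all I/Os
cache_hit_ratio = 0.80   # fraction of reads served from server-side cache

read_iops = total_iops * read_ratio       # 16,000 read IOPS
offloaded = read_iops * cache_hit_ratio   # 12,800 IOPS never reach the array
remaining = total_iops - offloaded        # 7,200 IOPS left for the array

print(f"Array load drops from {total_iops:,} to {remaining:,.0f} IOPS "
      f"({offloaded / total_iops:.0%} of total I/O offloaded)")
```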


How does this work? We have software called XtremCache. It is intelligent caching software that leverages server-based flash and a write-through cache to accelerate performance: getting your data directly from the server side is far faster than having to go to the array to fetch your blocks. We also have server-side flash cards, XtremSF, to make this work, but the beauty of the software is that it works with any flash cards or SSD drives you have in the server. It even supports Oracle RAC configurations, where you need to make sure the cache is coherent across instances (see this article for more detail: Supporting EMC XtremCache for Oracle Real Application Clusters).


You install this software and it basically sits on top of your HBA. When a block read request comes in, it first looks in the server-side cache. If the block is there, it serves it immediately; otherwise it fetches the block from the array and then stores it in the cache. Here is an example of a cache hit and a cache miss:

 

 

[Figures: a cache hit (R1.png) and a cache miss (R2.png)]

 

 

When a write comes in, though, it is handed straight to the storage array. This way your data is always protected and you do not run the risk of data loss:


[Figure: writes passed through to the storage array (r3.png)]


Does this mean the cache is useless when you do a lot of writes? Not really: when a write is done it is also sent to the cache immediately, so if anyone wants to read that block afterwards, it is already in cache.
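To make the read and write paths concrete, here is a minimal sketch of the read-through / write-through behavior described above. This is my own illustration in Python, not XtremCache code; the class and its backing-store interface are hypothetical.

```python
class WriteThroughCache:
    """Illustration of the caching behavior described above: reads are
    served from (and populate) the server-side cache, while writes go
    straight through to the array so no data is lost if the server fails."""

    def __init__(self, array):
        self.array = array   # backing store (the storage array), e.g. a dict
        self.cache = {}      # server-side flash cache

    def read(self, block_id):
        if block_id in self.cache:        # cache hit: served from the server
            return self.cache[block_id]
        data = self.array[block_id]       # cache miss: fetch from the array...
        self.cache[block_id] = data       # ...and keep a copy for next time
        return data

    def write(self, block_id, data):
        self.array[block_id] = data       # write-through: array updated first
        self.cache[block_id] = data       # cached too, so later reads hit
```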

 

This works great for datafiles and temp tablespaces in Oracle, but do not use it for the redo logs, since those are write-only.

 

XtremCache ensures that a large number of your read requests never touch your array anymore, which gives you the following advantages:

 

  1. It accelerates the performance of your Oracle environment by dramatically reducing read latency (hence the name).
  2. It offloads a large number of the read I/Os from the storage array, so you can utilize your array more efficiently.
  3. It lets you place your data on a lower storage tier in terms of IOPS while maintaining the same performance as before.

 

Attached are some white papers with more details on this technology. As you might expect, I advised the client to take a good look at it to see if it would help him lower his costs.


The Future DBA


What will the DBA be doing in the future?

Bitly URL: http://bit.ly/OG53Vd

 

Tweet this document:

The Future DBA - What will the DBA be doing in the future? http://bit.ly/OG53Vd

 

 

Follow us on Twitter:

@EMCOracle

Your job as a DBA will change dramatically in the coming years, and you will need to adapt to make sure you remain relevant to the business. You will need to extend your skills beyond Oracle relational databases, because a large part of the new workloads may not run on a relational database anymore.

 

When I look at the market today I see a lot of disruptive forces at play. When I started out in IT in the early 90s (yes, some grey hair is starting to show around my temples) it was the beginning of the client/server era: we were moving away from single-server workloads and letting the client handle some of the work. In those days we still talked about 3GL and 4GL languages when programming, and it was the age of the rise of the relational database. Most enterprise applications created in that time and after landed on a relational database (there was some experimenting with object databases, but that never really took off). It was also the time the role of DBA became popular: the number of databases in companies started to grow, and special knowledge was needed to manage them, with Oracle of course being one of the more popular choices.

 

Now, 20 years later, this one-size-fits-all model really does not fit the needs anymore, and many more choices have entered or are entering the arena of enterprise applications. The market and business users are driving developers towards different solutions. New internet companies have driven some really interesting database technologies like NoSQL databases and Hadoop, and more and more companies are looking at in-memory databases to fulfill their near-real-time needs. None of these technologies is a Swiss army knife that does it all; each fits a certain use case. The picture below illustrates this: the workload profiles that we in IT need to fulfill range from high performance to high capacity, and from high service level to low service level.

 

[Figure: workload profiles ranging from high performance to high capacity and from high to low service level]

 

So I, as an admin, would start brushing up on how to manage Hadoop clusters (by the way, not an easy job, so lots of work for smart people), find out where NoSQL will add value for my business, and get familiar with all these kinds of disruptive data-store technologies.

 

The other thing I would be doing as a database administrator is looking at ways to facilitate cloud-like self-service models around databases, middleware, Hadoop and the other technologies developers like to use. Here, for instance, is an interesting link on how you can very easily create a Hadoop-as-a-Service (HaaS) environment; I have also attached the white paper about this infrastructure.

 

If businesses and IT do not embrace these new methods, we can be certain developers will shift their environments to something like Amazon Web Services (AWS), or anything else they can get with a credit card. I heard yesterday about a company that researched how many accounts had been created on AWS with their company credit cards - it turned out they had 200 accounts at AWS!

 

So make sure you stay relevant as a DBA, now and in the future: get up to speed with all this great new stuff and enable your 'clients' (the developers) to spin up environments quickly and easily, so they can be more agile towards the business.


Performance management without Blame Storms


Oracle Performance

Bitly URL: http://bit.ly/1gkb4Qo

 

Tweet this document: Performance management without Blame Storms

http://bit.ly/1gkb4Qo

 

 

Follow us on Twitter:

@EMCOracle

I often speak with storage managers who are held accountable by their organization for bad application performance. This is a common phenomenon: an application is not running fast enough according to the end users, and the first response is to run to the storage admin and ask why the storage is slow.

 

Now, I'm not saying that there are never problems in storage subsystems or that storage arrays always perform perfectly, but the opposite is not true either. When a performance issue arises in an environment, there are plenty of other components in the application chain that may be causing it, such as the database or sub-optimal code.


 

I was thinking about this because we had a session on performance optimization for Oracle databases in our offices last week. There are many things you can do to prevent performance problems. But why do storage administrators always have such a hard time when performance issues occur?

 

There are a number of reasons for this, in my opinion. First, performance is often measured subjectively and not quantified. End users notice that something is slower, but most of the time they cannot quantify it: it's just slow, fix it.

 

Second, storage is last in the chain of application, database, server, network and storage. Water always runs to the lowest point, and when everybody else has looked at their environment and seen nothing out of the ordinary, there is just one station left: the storage. That leaves the storage team, most of the time, with the job of proving the issue is not at their end.


 

Third, all management groups use different KPIs to manage their environment. There is no way to compare % CPU usage to a number of IOPS, or to a buffer hit ratio in Oracle.


 

The last reason this happens is that in many organizations (historical) performance data is either not available or covers the application chain poorly. A performance problem is always a change in the status quo; if you can detect that change, you are halfway to solving your issue.


 

The result is that a huge amount of DBAs' and storage admins' time is devoted to firefighting performance issues, and almost zero time is spent on proactive performance management. Research by the Independent Oracle Users Group (IOUG) shows that diagnosing performance issues is among the top three activities DBAs spend time on.



[Figure: IOUG survey results on the activities DBAs spend the most time on (IOUGperf.png)]


Now, how do you ensure that you spend less time on performance issues? In my opinion you should at least address the following things:


 

Ensure that end-user performance is quantified measurably and that everybody speaks the same language: time (in milliseconds) and number of transactions.


All units involved in the application chain should speak the same language. When talking about performance, there are only two metrics you can use for this: the number of transactions and the time required to perform them. Also make sure you use the same sampling period for these metrics, otherwise you get stuck with inconsistencies: if one party takes a measurement every half hour and another every 5 minutes, there is no way to compare them. An example of a tool that can help you with this is the OEM 12 plug-in for EMC VNX and VMAX. This is a free plug-in from EMC that gives DBAs and storage administrators a unified view of Oracle and the attached storage.
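As a minimal illustration of why the sampling period matters, here is a sketch (my own example with made-up numbers, not part of the OEM plug-in) that averages two metric streams captured at different intervals onto one common period so they become comparable:

```python
from collections import defaultdict

def resample(samples, period_s):
    """Average (timestamp_s, value) samples into buckets of period_s seconds."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts // period_s * period_s].append(value)
    return {start: sum(vals) / len(vals) for start, vals in buckets.items()}

# Hypothetical data: DB latency sampled every 5 minutes,
# array latency sampled every 30 minutes.
db_latency_ms = [(0, 4.1), (300, 4.3), (600, 9.8), (900, 10.2), (1200, 4.0), (1500, 4.2)]
array_latency_ms = [(0, 5.0)]

# Only after resampling onto the same 1800 s period can the two be compared.
print(resample(db_latency_ms, 1800))     # {0: 6.1}
print(resample(array_latency_ms, 1800))  # {0: 5.0}
```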

 

Be careful: measuring only end-user performance is not a really good solution either, because you need to make sure you can drill down into the separate components when an issue arises.


Make sure you have historical data for your environment.


 

In addition to the correct data, it is also important to have sufficient historical data available. As I said earlier, a performance issue is most of the time a change in behavior, and having historical data helps in quickly identifying these changes. With historical data you can also start to do capacity planning and proactive activity planning. There are tools in the market that help you with this. EMC's SRM Suite is an example of a solution for storage platforms that can hold years of historical data and helps you diagnose issues quickly. It supports a wide range of platforms (not just EMC) and a wide range of metrics, both to pinpoint issues and to do capacity management.
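To show how historical data turns "it feels slow" into a detectable change, here is a minimal sketch (my own illustration with made-up numbers, not SRM Suite functionality) that flags a metric when it deviates from its historical baseline:

```python
from statistics import mean, stdev

def is_anomalous(history, current, n_sigma=3.0):
    """Flag `current` when it deviates from the historical baseline by
    more than n_sigma standard deviations -- a change in the status quo."""
    baseline, spread = mean(history), stdev(history)
    return abs(current - baseline) > n_sigma * spread

# Hypothetical history: average read latency (ms) in past weeks at this hour.
history_ms = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]

print(is_anomalous(history_ms, 5.2))   # False: within normal variation
print(is_anomalous(history_ms, 12.4))  # True: something changed -- investigate
```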


 

So if you want to avoid being the centre of the blame storm, make sure all parties measure over the same intervals and use the same KPIs, and make sure you have historical data so you can see trends developing. This can help you avoid getting grey hair early.
