
Understanding how Informatica and Oracle interact: an Oracle perspective.

December 14, 2011

Monitoring Informatica from Oracle is a reasonably straightforward exercise if you know where to look and what to look for. I’ll walk through a typical scenario that I encounter on a regular basis at client sites.

During the course of a daytime ETL run (this was during performance testing) I was called to have a look at a mapping that was “running slow”. In the main, that’s usually the extent of the information available.

The first thing I did was log in to Informatica’s Workflow Monitor to have a look at the performance statistics. Sure enough, there was a mapping which was processing 100 rows per second.

The source row count pretty much matched the target row count, so in this scenario there’s no evidence of a bottleneck such as a joiner or aggregator holding rows back. If the target has processed around the same number of rows as the source, then rows are definitely making it through the mapping. However, this doesn’t rule out transformation-related bottlenecks.

SOURCE QUALIFIER

Next, let’s have a look at the source in Oracle. I’m using the view v$session_wait to see what the session is waiting on at the moment I run the query.

DW_STATS@MIDDWHP>select event, wait_time, seconds_in_wait, state
  2  from v$session_wait
  3  where sid = 963
  4  /

EVENT                           WAIT_TIME  SECONDS_IN_WAIT  STATE
------------------------------  ---------  ---------------  -------
SQL*Net message from client             0                0  WAITING

Elapsed: 00:00:00.04

DW_STATS@MIDDWHP>/

EVENT                           WAIT_TIME  SECONDS_IN_WAIT  STATE
------------------------------  ---------  ---------------  -------
SQL*Net message from client             0                0  WAITING

Elapsed: 00:00:00.00

DW_STATS@MIDDWHP>/

EVENT                           WAIT_TIME  SECONDS_IN_WAIT  STATE
------------------------------  ---------  ---------------  -------
SQL*Net message from client             0                0  WAITING

To identify the SID that related to the source qualifier, in this case I simply viewed the SQL each session on the database was executing until I found the relevant one. Of course, this isn’t always possible, especially if the session you are looking for is a cache build which hasn’t started building its cache yet! But for this example, it sufficed.
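If eyeballing each session sounds tedious, a query along the following lines can speed things up. This is only a sketch: the username filter is an assumption (use whatever account your Informatica connections log in as), and the join via sql_id requires 10g or above.

select s.sid, s.serial#, s.program, q.sql_text
from v$session s, v$sql q
where q.sql_id = s.sql_id
and q.child_number = s.sql_child_number
and s.username = 'DW_PROD'  -- assumption: the account Informatica connects as
order by s.sid;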

So we can see from the above that, at the time I executed the query against v$session_wait, the session was waiting on SQL*Net message from client. In other words, Oracle is idle, waiting for the client to tell it to do something; in this case the client is Informatica. To get a bigger picture of what is happening to the session we can go to another view, v$session_event. This view provides wait time information for the life of the session, aggregated by event. So using this view we can see what the session has spent its time on during its entire life, not just in real time.

DW_STATS@MIDDWHP>select event, total_waits, time_waited
  2  from v$session_event
  3  where sid = 963
  4* order by 3 desc
DW_STATS@MIDDWHP>/

EVENT                           TOTAL_WAITS  TIME_WAITED
------------------------------  -----------  -----------
SQL*Net message from client            7497      8094790
db file scattered read                 4268        11449
db file sequential read               11703         2685
SQL*Net more data to client            5013           53

Elapsed: 00:00:00.00

DW_STATS@MIDDWHP>/

EVENT                           TOTAL_WAITS  TIME_WAITED
------------------------------  -----------  -----------
SQL*Net message from client            7784      8095780
db file scattered read                 4564        12864
db file sequential read               11978         2691
SQL*Net more data to client            5078           91

So we can see that over the course of the life of this session, it has been waiting mainly on Informatica to send it something. However, we can also see that the counts for db file scattered read and db file sequential read are going up, so Oracle is actually doing work, not JUST waiting on Informatica. The event db file sequential read is a single-block access, usually indicating an index access path. The event db file scattered read is a multi-block read, usually indicating a full table scan or a fast full index scan. So just from looking at the wait events, we can start to see that the source qualifier isn’t the bottleneck. Why?

If the Source Qualifier was the bottleneck we would see much more work going on in Oracle and much less time spent on the event SQL*Net message from client. One word of caution before we move on: there are situations where this sort of profile from v$session_event could still signify a problem, so I’m not setting out a rule of thumb here. This is just one typical example; when applying this to your own environment be careful to take all other factors into consideration.

LOOKUP

The next logical step would be to have a look at the cache build session. However, in this case we see that rows are actually reaching the target, suggesting that the cache build has completed and therefore couldn’t be the bottleneck in this particular scenario. However, I thought I would show you the profile of the cache build anyway, just for interest. When I ran the v$session_event SQL as above, I got the following:

EVENT                           TOTAL_WAITS  TIME_WAITED
------------------------------  -----------  -----------
SQL*Net message from client           87374      4878878
db file scattered read                 3834         2634
db file sequential read                 994          690
SQL*Net more data to client            8637           36

As can be seen, the cache build session actually waited a much greater percentage of its time on SQL*Net message from client. The reason is the way Informatica handles its connections to Oracle. When an Informatica session initializes, it creates all of the connections to the source, lookups and target that it will require during the life of the mapping. In this example, that meant it created three Oracle sessions: one for the source qualifier, one for the cache build and one for the target. The first statement executed against the database is the source qualifier. Only when the source qualifier returns rows will the cache build SQL be fired off against the database. When the cache build finishes and the rows from the source qualifier begin their journey through the transformations on their way to the target, the cache build Oracle session is not ended; it is kept open. Therefore, it is not unusual to see a cache build session wait on SQL*Net message from client for the majority of its life.
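Incidentally, you can watch these three sessions appear at initialization time. A minimal sketch, assuming the DTM shows up in v$session with a program name beginning pmdtm (the executable name is an assumption, so check what your platform reports):

select sid, status, logon_time, program
from v$session
where program like 'pmdtm%'  -- assumption: the PowerCenter DTM executable
order by logon_time;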
Let’s move on to the target then.

TARGET

Let’s execute the same SQL against the target and see what we get:

DW_STATS@MIDDWHP>select event, wait_time, seconds_in_wait, state
  2  from v$session_wait
  3  where sid = 854
  4  /

EVENT                           WAIT_TIME  SECONDS_IN_WAIT  STATE
------------------------------  ---------  ---------------  -------
db file sequential read                 0                0  WAITING

Elapsed: 00:00:00.03

DW_STATS@MIDDWHP>/

EVENT                           WAIT_TIME  SECONDS_IN_WAIT  STATE
------------------------------  ---------  ---------------  -------
db file sequential read                 0                0  WAITING

Elapsed: 00:00:00.01

DW_STATS@MIDDWHP>/

EVENT                           WAIT_TIME  SECONDS_IN_WAIT  STATE
------------------------------  ---------  ---------------  -------
db file sequential read                 0                0  WAITING

Elapsed: 00:00:00.01

It looks like during our sample time Oracle is spending its time on db file sequential read. As we have already stated, this wait event is a single-block read. But this is a target, so why are we seeing reads? In this case, the SQL being executed was straight INSERT statements. In order to modify a block, Oracle has to read that block from disk if it isn’t already in the buffer cache. Perhaps this is what we are seeing? Is this just an INSERT in action? Let’s have a look at the overall life of the target session and see if that produces any more information:

DW_STATS@MIDDWHP>select event, total_waits, time_waited
  2  from v$session_event
  3  where sid = 854
  4  order by 3 desc
  5  /

EVENT                           TOTAL_WAITS  TIME_WAITED
------------------------------  -----------  -----------
db file sequential read             2043339       947769
SQL*Net message from client            7374         2878
log file switch completion               45          102
library cache pin                         1            3
latch: cache buffers chains             169            2

So we can see that the target is spending the majority of its time reading blocks. The SQL*Net message from client time is, again, more than likely the result of Informatica’s connection handling, i.e. this was the time the target was idle from the initialization of the mapping to the time when rows arrived at the target. We can check this assumption by looking at another of Oracle’s performance views: v$active_session_history (10g and above). This view holds data sampled from v$session_wait, thus providing a way to see a breakdown of the wait events during a particular time period. So if our assumption about the reason for SQL*Net message from client is correct, this event should NOT appear in the last 10 minutes of this session’s history (the target has been active for 2 hours at this stage). Let’s see:

DW_STATS@MIDDWHP>select event, sum(time_waited)
  2  from v$active_session_history
  3  where session_id = 854
  4  and sample_time between sysdate-1/24/6 and sysdate
  5  group by event
  6  order by 2 desc
  7  /

EVENT                           SUM(TIME_WAITED)
------------------------------  ----------------
db file sequential read                  6647312
                                               0

Elapsed: 00:00:00.20

So we can see that in the last ten minutes 100% of this session’s time was spent waiting on single-block reads, so our assumption is holding up.
So from the data we have collected so far we can produce information like this:

[Figure: response time breakdown by Informatica thread, with the target accounting for almost all of the elapsed time]

Not too difficult to pick out the bottleneck now. It’s obviously the target.

But why is the target so slow? Can we break down that time spent in Oracle even more to see exactly what it’s doing? One way we can do this is to have a look and see what objects it’s actually reading. Using the row_wait_obj# column of v$session we can get the object_id of the current object being operated on:

DW_STATS@MIDDWHP>select owner, object_name, object_type
  2  from dba_objects
  3  where object_id in ( select row_wait_obj#
  4                       from v$session
  5                       where sid = 933 )
  6  /

OWNER       OBJECT_NAME           OBJECT_TYPE
----------  --------------------  -----------
DW_PROD     PK_CUST_ACCOUNT_IDX   INDEX

Elapsed: 00:00:01.24

DW_STATS@MIDDWHP>/

OWNER       OBJECT_NAME           OBJECT_TYPE
----------  --------------------  -----------
DW_PROD     AK_CUST_ACCOUNT_IDX   INDEX

Elapsed: 00:00:01.24

DW_STATS@MIDDWHP>/

OWNER       OBJECT_NAME           OBJECT_TYPE
----------  --------------------  -----------
DW_PROD     AK_CUST_ACCOUNT_IDX   INDEX

Elapsed: 00:00:01.24

From the above we can see that the time isn’t actually being spent on the table itself; it’s being spent maintaining indexes. Just to check that assumption we’ll go back to v$active_session_history:

DW_STATS@MIDDWHP>l
  1  select current_obj#, sum(time_waited)
  2  from v$active_session_history
  3  where session_id = 844
  4  and sample_time between sysdate-1/24/6 and sysdate
  5  group by current_obj#
  6* order by 2 desc
DW_STATS@MIDDWHP>/

CURRENT_OBJ#  SUM(TIME_WAITED)
------------  ----------------
     2340991           2409213
     2341005            146570
These object_ids correspond to the same indexes we identified through v$session. So we can now say categorically that the mapping is going only as fast as the target; therefore, if we make the target faster, we make the mapping faster.
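For the record, mapping those object_ids back to names is a simple lookup against dba_objects; the ids below are the two returned in the output above.

select object_id, owner, object_name, object_type
from dba_objects
where object_id in (2340991, 2341005);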

The issue in this case turned out to be very simple: large concatenated indexes on the target table. When an insert is performed Oracle needs to maintain those indexes, and if those indexes are large, then the time taken to perform that maintenance can be substantial. In this case the indexes were in fact the same size as the table itself! Marking the indexes unusable (see the sketch at the end of this section) provided the desired result and produced a response profile that looked like this:

[Figure: response time breakdown by Informatica thread after the index fix, with no single dominant component]

So from the above graphs, we can now see that the target is no longer the bottleneck in the system. Neither is the source. We’re now in the position where the “bottleneck” is Informatica. However, the above profile could be from a mapping that is absolutely fine; there are no obvious bottlenecks here. In order to identify further bottlenecks, we need to look inside Informatica at the performance metrics it produces. I’ll cover this in a later paper.

One thing to note is the rise in the Cache Build active time. The actual time taken to build the cache didn’t change, but because the overall time of the mapping went down dramatically, the cache build now accounted for 10% of the overall time of the mapping, which in this case was reasonable.
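For completeness, the index fix itself amounts to disabling index maintenance around the load and rebuilding afterwards. A minimal sketch using one of the index names identified earlier; the nologging clause is illustrative, and note that 10g skips unusable indexes by default (skip_unusable_indexes = true):

alter index dw_prod.ak_cust_account_idx unusable;
-- note: an index enforcing a primary/unique key still blocks DML when
-- unusable, so the constraint behind PK_CUST_ACCOUNT_IDX would need to
-- be disabled first

-- ...run the Informatica load...

alter index dw_prod.ak_cust_account_idx rebuild nologging;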

Conclusion

We have seen from this simple example that it’s possible to gather information about Informatica’s performance directly from Oracle. By understanding how Informatica handles its connections we can start to draw conclusions about the data we extract from Oracle. This essentially allows a developer or DBA to identify where the current bottlenecks are and where expensive development or support time should be spent in the future in order to gain the maximum benefit.

When this data is readily available to developers, testers and DBAs it can have a dramatic effect on the productivity and quality of the work produced, saving development and testing time and potentially negating the need to buy ever bigger boxes!

A Whitepaper brought to you by www.SeeTheHIPPO.com

For further information please contact stephen.barr@assertive-software.com.
Copyright Assertive Software Ltd, 2011.

The Twelve Days of Christmas from HIPPO: No. 7 – Something for Everyone!

December 14, 2011

HIPPO’s Notification Centre is a feature that everyone in the Informatica community can use to receive alerts on warning signs in their Informatica environment and applications.

There’s a range of contexts where HIPPO will generate an alert. The first is a straightforward one: you define an elapsed time threshold for an Informatica Workflow or Session, and when it is exceeded an Alert is issued by email and to your HIPPO screen.

HIPPO raises the second category of Notifications when a Resource threshold is exceeded by a Workflow or Session. HIPPO allows you to define a CPU threshold in seconds or a Memory threshold in Megabytes. If a session, or the aggregated resource consumption by a workflow, exceeds this threshold then you receive an instant alert by email or to HIPPO running on your laptop, iPhone or iPad.

Similarly HIPPO will raise alerts when a named session or workflow spills cache to disk, or when the statistics for a session indicate that the execution profile is out of balance.

HIPPO will also monitor your Integration and Repository Services and alert you immediately should an outage occur.

These are just a few examples of the use of the Notification Centre. The good news is that extending these alerts is easy for us. Remember that HIPPO’s user interface displays only one-third of the information that HIPPO captures and stores within its repository, so there is lots of scope to extend the warning signs that HIPPO will alert you to, both within Informatica and in the infrastructure that Informatica interacts with.

So take a look at what HIPPO can do for you – there really is something for everyone!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No. 6 – Cheery Capacity Planners!

December 13, 2011

An enterprise-level Informatica environment is a serious investment that requires careful analysis to define initial capacity requirements and to plan capacity over time as new applications are implemented and existing ones grow, or shrink, in terms of data volumes. The question is: how can a Capacity Planner gather the information needed to right-size an Informatica environment for today while keeping one eye on future needs? Is there a tool that can help?

Well there are many great enterprise-monitoring tools out there that can identify an Informatica process and the resource it consumes at the highest level. The trouble is that they see all of your workflows as simply a set of pmcmd processes running on a host. That might be enough if a broad trend is all you need because, by simple aggregation, these tools give a broad picture of resource consumption by generic Informatica processes.

But what if you need more than a broad-brush approach? That’s where HIPPO comes in. HIPPO enables Capacity Planners to plot trends in terms of CPU, Memory usage and Data Movement by Node, as well as by any logical group of processes, a grouping that we call a ‘Project’ in HIPPO.

Projects can be defined at a high level: a domain, an integration service or even a Repository. Or at an Application level: for example a Data Warehouse application made up of Informatica (and even non-Informatica) processes or a completely distinct entity, such as a web-server process. HIPPO analyses, profiles and stores the CPU, Memory and Data Movement statistics for your Projects and over time the capacity planning information stored in HIPPO becomes an ever-richer information resource to predict future capacity needs.

The strength of HIPPO comes from its ability to take the metadata within Informatica and look outwards to link this to the hardware and operating system metrics. This enables HIPPO to give you an Informatica-specific view of the trends in workload and resource consumption that are needed to plan for the challenges ahead, to free capacity where possible and to make wise investment decisions when necessary.

So make a Capacity Planner Cheery this Christmas! Tell them about HIPPO – the unique Capacity Planning tool for Informatica.

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No. 5 – Sunny Support Staff!

December 11, 2011

Application Support Analysts in an Informatica Grid environment have a tough task: they often need to monitor multiple Integration services at the same time. The amount of execution history they can access is limited which means they find it hard to put overrunning jobs into context and they have limited time and access to enable them to diagnose issues and failures.

That’s why we have added some specially-created features for Support staff to HIPPO, which enable Analysts to monitor their entire Informatica estate on a single screen and drill down from there to get all the details they need. The first of these is HIPPO’s Activity Monitor: a visualisation of the current status of every task running across every Integration Service in the Informatica environment. The Activity Monitor Live screen automatically refreshes, colour-coding every task: red for failure, amber for sessions with rejected rows and green for success. Every task stays on the screen for ten minutes after it ends and, because HIPPO automatically extracts everything that is important from the Log File, you can drill down to the detailed level and examine, for example, how many rows were written to each of the targets in the session, how many rows were rejected, and detailed diagnostic information for every failure so the fault can be routed to the relevant authority.

HIPPO’s Activity Monitor also provides an Historic View, which means that you can put tonight’s overrun or failure into context: has this incident ever occurred before? (HIPPO stores all history back to when it was first installed.) Has the session ever run for this length of time before? What about last week’s, last month’s or last year’s execution? How about those rejected rows: why is this happening? You can also drill down from every task in the Historic View to every important metric reported for that execution, from the high-level statistics all the way down to the % busy/idle for the Reader, Writer and Transformation threads and the task’s trends in data movement and resource consumption.

So, without leaving HIPPO, Support Analysts can enjoy a 360-degree view of the activity in their environment and can access all of the information they need to add value to the support they give to their Informatica stakeholders.

That’s why HIPPO is making Support Staff Sunny this Christmas!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No. 4 – Administrators Aglow!

December 8, 2011

Spare a thought for your Informatica Administrator: they need to combine serious technical ability with the kinds of deal-making skills that would get them fast-tracked in the Diplomatic Service!

A big problem for Administrators is how to ensure that the resources of a centralized Informatica Grid are shared fairly among their customers: a group of under-pressure Program and Project Managers with SLAs and delivery deadlines to meet.

To illustrate the problem let’s turn the clock back to when the plans were first made to on-board these applications. Meetings were held and capacity requirements mapped out, often using a best guess for what would be needed plus a bit more for contingency. After all, who wants to risk going live and being unable to meet processing demand? And a recharge structure was probably agreed: someone, somewhere would pay for the additional resource required on the Grid to handle this increased workload. Perhaps the project would pay a monthly cost, or perhaps they would pay upfront for additional capacity to be added to the Grid; in both cases using estimates made well before go-live to allow for purchasing and commissioning work to take place.

What happens next? Well, everyone wants to feel that they are getting a fair deal, right? Program Managers are no different. But how does an Administrator calculate the aggregate Informatica resource usage for an Application and, by extension, substantiate the monthly fees paid by their internal customers? Harder still, what about the initial upfront investment that was made: has it been justified by post-go-live use?

Now you know why an Administrator needs to be a Diplomat as well as a Techie!

HIPPO’s gift to the Administrator is to take the heat out of the recharge process. HIPPO aggregates CPU, Memory and Data Movement metrics by Project over time and precisely calculates the resource cost of each project per month. HIPPO can even support Peak and Off-Peak charging tariffs. And what about those over-, or under-provisioned projects? With HIPPO you know exactly how much resource you need and when you need it, which makes for smarter provisioning decisions.

That’s why HIPPO is making Administrators Aglow this Christmas!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No.3 – the Delighted Developer!

December 7, 2011

There’s something for everyone in the latest release of HIPPO and for HIPPO’s third gift this Christmas let’s open a bumper present for every Informatica Developer out there. Remember all those hours spent Log Trawling to extract the information that you need to understand why your session ran slowly, or spilt to disk, or how much Memory and CPU was used by your Session and by each Transformation? Or if your partitioning strategy did what you expected? Well that’s history now!

The new release of HIPPO has a smarter way to get to the information that you need to understand the reasons for poor performance and the opportunities that you have to make significant improvements. For instance, HIPPO will tell you how much Memory and CPU was actually available to Informatica when your job ran. What effect your session partitioning strategy has had. What else was running on the Node, Integration Service or entire Grid when your job ran and how do the resource profiles of these processes compare. What was the actual Memory used to run your Session according to the Host, not Informatica. What the resource usage and elapsed time trends are for your process execution over the past week, month or year. What’s happening in your Workflow and overall Project – what is their aggregated performance profile and where is that trend headed.

The hours that you spend trawling logs, making notes and calculations, getting frustrated by having only two weeks of history in your Informatica Repository and being unable to access and correlate Operating System metrics with Informatica are over. HIPPO gives you all of the information that you need at your fingertips, sourced from across your infrastructure for the Informatica and non-Informatica processes that make up your Application. HIPPO presents the performance profile of your non-Informatica processes together with your Informatica Transformations, Sessions, Workflows, Projects, Nodes and Grids and makes navigation between these levels easy: allowing you to move effortlessly from the profile of an individual transformation all the way up to the bird’s-eye view of your entire Application and Environment.

HIPPO will make Developers Delighted this Christmas by taking the legwork – and the guesswork – out of Informatica Performance profiling. Get HIPPO and turn optimizing Informatica from an Art into a Science!

Check back tomorrow for number 4 in the series.

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favorite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No.2 – All Hail the Performance Czar!

December 7, 2011

For HIPPO’s second gift this Christmas, let’s look at a feature that has been created with your organization’s Performance Czar in mind but is equally useful for Informatica Developers, Testers and Administrators.

The new release of HIPPO has a unique feature that enables you to Search and Report by a wide variety of Performance metrics. So if you want to know which Session executions consumed more than 50 CPU seconds, or 250 MB of Memory, or Sessions whose Cache spilt to Disk, or scored highest in Time to First Row, then these and many more Performance statistics are available within HIPPO. You can set the Performance thresholds that make sense for your organization and then narrow your Search by Date & Time range, by Project, Node, Integration Service, Grid, Repository and Domain and you can even include design features in your Search such as the use of SQL Overrides!

Of course, we are not claiming that a high score in any of these categories is proof in itself of poor performance, but what is certain is that these are the resource-intensive processes that should be top of your list for an optimization review. So if you are a Developer, a Tester or an Administrator then you can use HIPPO to rank Sessions by performance metrics and then drill down to see why they are so resource-intensive and what your options are to make them more efficient. Just make sure that you either make improvements or use HIPPO to have your explanation ready when the Performance Czar stops by to discuss the Performance Threshold report that they just ran using HIPPO!

So forget the usual tributes that you pay to your Performance Czar at this time of year and give them something different – something both they and you will find really useful – the HIPPO Performance report.

Check back tomorrow for number 3 in the series!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I’m going to choose twelve of my favorite new benefits. It is a tradition in many parts of the world to celebrate the twelve days of Christmas and we have a Christmas carol here in the UK that associates each of the twelve days with a gift. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No.1 – A Happy DBA!

December 7, 2011

The latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I’m going to choose twelve of my favorite new benefits.

So, for the first day of Christmas, let’s start with something special for the DBA in your life! Version 3 of HIPPO has a unique feature which enables a DBA to trace an individual execution of a SQL statement, in seconds, all the way back from the database to the Session and Workflow that is responsible for it. And why will your DBA rate this their best Christmas gift ever? Well DBAs see Informatica from the database end. It isn’t straightforward to find the Session owners of long-running SQL statements initiated by Informatica processes, or worse still, orphan SQL executions spawned by long-cancelled Sessions. So they have a tough call to make: made all the harder when they cannot identify the responsible Session, Project or Developer. And what about tuning advice? Your DBA wants to be pro-active; they can see how the SQL can be improved but whom should they call? Now they can simply open HIPPO, copy and paste the SQL from their Management console straight into HIPPO’s Search screen and the responsible workflow and session are returned. Armed with this information from HIPPO, a call is made and a decision taken about the SQL process. The result – one happy DBA!

Check back tomorrow for number 2 in the series!

Footnote: it is a tradition in many parts of the world to celebrate the twelve days of Christmas and we have a Christmas carol here in the UK that associates each of the twelve days with a gift. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

Everything That You Wanted To Know About HIPPO But Were Too Polite To Ask…..

November 2, 2011

It’s great when you get asked a challenging question which actually really helps you to explain what’s unique about your product. Shailesh Chaudhri did this yesterday in a related Informatica Group. Shailesh asked “Mark, I believe HIPPO is a great product but then does it just not fetch all this information from the Informatica Repository? Why invest so much when Informatica Reporting services, connected to the repository gives nearly similar results. A few tweaks here and there and you get Dashboards created which give you the necessary information.”

In my reply I agreed with Shailesh that Informatica has some great tools and let him know that this question arises quite regularly during WebEx demonstrations and conversations with prospective customers. I think that one of our existing customers from a major global bank put it best when he said that the Informatica solution and HIPPO are like two sides of a coin: the Informatica tools focus on the Informatica repository, while HIPPO looks outward from the Repository to what is happening in the infrastructure around Informatica.

And you know what? I think he hit the nail on the head. Only 25% of the information that HIPPO provides comes from Informatica; the remaining 75% comes from the Host CPUs, Memory, I/O, Storage and the databases that your PowerCenter processes interact with. The unique thing about HIPPO is that it puts this information into the context of Informatica and of your own projects. Let’s take an example: if you have an application called Finance Data Warehouse which is made up of various Informatica processes, stored procedures and scripts, then HIPPO allows you to create a logical grouping of these processes, Informatica and non-Informatica alike, and then produce a fine-grained analysis of the performance, cost and efficiency of this project and the trends in all of these key metrics. This isn’t available in the Informatica Reporting Services tool because its focus, good though it is, is on the Informatica repository.

All of the information that HIPPO stores is held in an open data model in a database of your choice, so if you would like to use Infa Reporting Services to build your own reports rather than use our browser-based reports then that’s great. We are completely open about our data model, so anyone who uses Reporting Services on HIPPO’s Repository gets our full support!

So, thanks Shailesh, yours was a really perceptive question that gets to the heart of what’s different about HIPPO!