

Check out the latest news on HIPPO.


HIPPO Profiler – Log Files made Easy!

August 16, 2013 | By | No Comments

Trawling log files is no one’s idea of fun, right? Especially in the middle of a production problem, when time is limited and you need to get the show back on the road. There’s simply too much information to absorb and too many ways to interpret it!

Have you ever wished that you could quickly turn that PowerCenter session log file into a powerful problem-solving tool that gets you quickly to the answers you need?

The HIPPO Profiler does just that; it turns the data locked up in PowerCenter log files into actionable information displayed in an easy to understand graphical format.

HIPPO Profiler parses the log to produce a series of color-coded charts that take you straight to the heart of your session execution, identifying the primary cause of failure so you know exactly where action is needed to resolve the issue and complete the session execution.

But what if your problem is poor session performance? HIPPO Profiler can help there too by unlocking the information that you need to target the underlying cause of poor performance. Color-coded indicators guide you to where performance is being impacted by caches that spill to disk or rows being rejected. HIPPO Profiler also graphically illustrates how execution time is split across your reader, writer and transformation threads so that you know where to spend your time to improve session performance.

At the transformation level, HIPPO Profiler shows you which transformations are consuming the most execution time and allows you to correlate them against CPU and memory resource usage by graphically displaying the CPU and memory profile for the session during the execution time.

Let’s consider lookup caches, for example; cache behavior is a common cause of poor performance. When a wrongly configured cache spills to disk, HIPPO Profiler highlights this in red, and if a SQL override is involved it is displayed in yellow. You’ll find actual and advised cache data and index memory sizes so that you can make the adjustments needed to right-size those cache files. You’ll also find the actual SQL executed to build the cache, the cache build time, row count and size, so that you can identify poorly performing SQL and give your DBA the information needed to make improvements.

That’s just one of the features that you’ll find in the HIPPO Profiler. Try the tool for free and discover why spending hours analyzing log files has been consigned to history. Use HIPPO Profiler to turn your log files into actionable information to help you to resolve failures and improve performance.

Sign up for your free subscription to use our HIPPO Profiler service by sending an email to


Intelligent HIPPOs – whatever next?

June 25, 2013 | By | No Comments

When monitoring a large-scale production system, what is it that your best support analysts do that adds value? For us, the answer has always been that they leverage their knowledge of the system: its patterns, its nuances, its schedules. They do this by watching the system over time, building up a knowledge base of information about the system and then using that historical knowledge to identify and resolve issues – hopefully proactively!

Now HIPPO can do the same and we’re calling it Auto-Sense.

As part of HIPPO v5 we’re implementing Intelligent Heuristic Algorithms and Machine Learning techniques into HIPPO, leveraging HIPPO’s vast knowledge base about your environment: its patterns, its nuances, its schedules. HIPPO knows more about your system and how it operates than even your best Support Analysts, and now HIPPO can proactively warn you about anomalies and problems with your system without your ever having to set up and manage a complex set of rules.

· Automatically learns and monitors your scheduling patterns and warns on deviations or overruns
· Learns where to focus its gaze by monitoring and analyzing patterns in failures and problem areas
· Recommends corrective action when anomalies are detected
· Identifies areas of opportunity to reduce your batch window
· ICC awareness means HIPPO can suggest architectural improvements e.g. node configuration, workload distribution, etc.
· When a problem occurs, HIPPO automatically collates and presents all relevant information on a single “Problem Dashboard” to help you resolve the issue quickly and easily

Auto-Sense is our vision of a truly autonomous monitoring system for Informatica.

HIPPO is evolving and we would love to hear your feedback about our vision for enterprise monitoring.

Get Ready for HIPPO V4!

September 27, 2012 | By | No Comments

Announcing Version 4.0 of HIPPO

Version 4 of HIPPO is available from the start of October 2012 and has some great new features.
Over the course of summer 2012 we have met with developers, administrators and support staff from twenty of the world’s largest Informatica customers to find out what we could add to HIPPO to make their lives easier.

They told us that they needed a tool which told them where they needed to focus, which jobs had failed, which had overrun, which had contended for resources with other resource-hungry jobs. They needed visibility; they were tired of going to multiple tools to understand their Informatica environment, of spending hours analyzing log files, of trying to guess when they should locate a new workflow in their schedule. Above all they needed something that could cut through the complexity of their Informatica environments to give them the answers about what is going on now as well as what has happened historically.

The result is HIPPO V4; built by us to meet the specific needs of Informatica Administrators, Developers and Support staff. HIPPO V4 is designed to cut through the complexity and get answers fast using HIPPO’s real-time analysis combined with our unique operational data warehouse containing an entire history of activity in your Informatica environment from the moment that HIPPO was installed.

Version 4 of HIPPO is available in October 2012. Read how HIPPO will make it easier for you to manage, support and improve the performance of your Informatica PowerCenter platform.

HIPPO – what you see is what you get!
Get Real Time Alerts & Notifications from HIPPO.
You have hundreds or even thousands of Informatica sessions running every day but you need to focus on which jobs are not behaving normally; which have overrun, which have failed or dropped rows or are using more CPU or Memory than usual. That’s where Exceptions, SLA and Deviation alerting from HIPPO will make your life easier.

HIPPO ensures that you are the first to know about over thirty different types of issues and failures that you define, ranging from missed SLAs, workflow, session or service failure to changes in performance levels or data volumes processed. Now HIPPO even enables you to correlate multiple failures for rapid root cause analysis and because HIPPO monitors all of the processes on your environment, including non-Informatica processes, then you can set notifications on Oracle or Teradata resource usage within your environment.

Alerts & Notifications have always been a part of HIPPO, in release 4 the feature has been hugely enhanced into a special module we call HANC – Hippo’s Active Notification Centre – with a new, easier interface to create and edit rules.

Focus on the Activity that matters in your Informatica Environment.
Focus on what you’re interested in! Activity Monitor now has the ability to focus only on failures or rejected row executions across your environment, both in real time and historically. HIPPO visually links these events to notifications so you can see at a glance which failures are most significant and must be dealt with first. Of course, Activity Monitor also gives you the big picture view of all activity in your environment so you get a full 360 degree view whenever you need it!

Be the first to know when Repository, Integration or Domain Services fail.
You need to know immediately when a Repository, Integration Service or Domain goes down. That’s why in HIPPO version 4 the Vital Signs function has been enhanced to give you the most up-to-date status possible on even the largest implementations, ensuring that you can respond instantly to minimise downtime.

HIPPO links Version Control History to Session and Workflow failures.
Everyone knows that things often go wrong when changes to mappings, sessions and workflows go live. Unfortunately, support staff rarely know that a change has been implemented. That’s why in HIPPO v4 it’s now easy to see when the code version has changed and what the impact of those changes has been. HIPPO is the only tool on the market to correlate session behaviour to version history over time, helping you identify the root cause of failure faster.

There are ten sessions running, which one is using all the memory?
HIPPO’s Timeline feature has rightly been described as a breakthrough in the ability of Informatica staff to visualise and interpret activity across their entire Informatica environment. Timeline is even better in Version 4; as well as providing you with an overview of everything that is running, or has run in your chosen time frame, you can now also visualize an individual session’s contribution to the overall system resource usage allowing you to pinpoint contention and resource constraints easily in a busy shared environment.

What’s coming next in Version 4.1 in November!

Version 4.1 is planned for November 2012 and we are already building some exciting new features in this release. Here’s what is planned in version 4.1:

Source and Target Analysis
In Version 4.1 HIPPO will identify at a glance when individual tables are being impacted by multiple sessions as sources or targets to enable you to quickly identify when database contention is causing your sessions and workflows to run more slowly than normal.
Governance Reporting
In version 4.1 HIPPO will provide a range of governance reports covering everything from naming standards to session configuration settings to ensure that you can catch coding errors before they cause sessions and workflows to fail in production.
Get advance warning of SLA failure by Real-time Batch Deviation Analysis
Watch your batch execute in real time and see expected end times for each job. If a job runs for longer than its predicted elapsed time, it is highlighted in red so you can immediately see the health of your processes based on historical analysis.
HIPPO offers sophisticated Filesystem Monitoring
Version 4 of HIPPO includes the ability to relate individual cache logs on disk to the session that generated them. This allows you to visualise not only filesystem usage (e.g. cache directories) but also to see which projects and which sessions are utilising this space in near-real time. Which means that you can take preventative action to stop service failure due to directory space issues.

Consolidated Domain, Node & Catalina Logs analysis from within HIPPO
Get all of your logs in one place across multiple domains and when you are notified of errors you will now receive the actual log excerpt as part of the notification from HIPPO’s Active Notification Centre.

For more information contact us at or call us in the US at +1 866 634 1033 or in the UK at +44 (0) 20 7043 1787.

HIPPO short listed for the 2012 Informatica Innovation Awards!

May 5, 2012 | By | No Comments

Some great news from Informatica. HIPPO is one of three finalists in the Enhancing the Marketplace category of the 2012 Informatica Innovation Awards. To find out more please click on this link:

SLA and KPI Reporting in HIPPO 3.3 beta release

April 24, 2012 | By | No Comments

We are very proud to let you know that HIPPO’s KPI and SLA reporting modules are now in their beta test phase. You can now define SLAs in HIPPO and they will be actively monitored against the performance of your data integration processes. Choose from a range of options including number of rows processed, start time, run time duration, workflow and session failure and Rejected rows among others.

HIPPO will actively monitor your SLAs and when they are breached will inform all stakeholders by email. And we have gone even further! HIPPO is now mashable with your in-house SLA and KPI reporting applications! We have added new data visualisations so that you can see at a glance if you are meeting your targets

This is the latest feature that we have added to HIPPO’s constantly growing range of features and follows hard on the heels of our innovative Timeline feature which overlays resource consumption with the sessions and workflows active in that timeframe.

Taking the pulse of the Informatica Marketplace

April 17, 2012 | By | No Comments

In this weekend’s London Financial Times Sarah Gordon wrote about how Tech superiority is fleeting without an innovative edge. She cited David Mitchell’s novel Cloud Atlas, once a favourite of mine too, where consumers don’t drive cars, instead they Suzuki to their destination and when they want to call a friend they sony them rather than make a call on their cell phone. The book was only published back in 2004 but the brand names already make it seem dated. In 2012 you wouldn’t sony your friends, you’d probably apple them instead.

But as Sarah goes on to write, this week brought several non-fictional reminders of how quickly market leaders in the Tech industry can find their position eroded.

Lack of innovation caused by poor support within a company’s divisions or an overly centralised structure disables the life-support systems that enable an innovative idea to be first fostered, then road tested and then aligned with the organisation’s overall objectives. This gradual ‘opening up to the light’ of an innovation is the key; stress-test the idea too early and it will fail – it needs patience and support to foster and a gradual exposure to the ultimate testing ground, the market, in order that it can adapt to customers’ needs.

It struck me that this is what the Informatica Marketplace is all about; it is a nursery for ideas. They can emerge into a supportive environment and attract attention by the ‘donut-effect’ of bringing vendors and innovators together into the same virtual neighbourhood. Some ideas will still fall by the wayside, many will be re-cycled into other ‘shapes’ that better meet customer demand and the best ideas will either take off in their own right or be considered so good by Informatica that they decide to incorporate a part or whole of that idea into their product suite. Without the Marketplace there is simply no easy route to market for vendors; they become isolated and their innovations wither. The Informatica community also loses the diversity needed to keep an exciting, dynamic and innovative culture alive.

However the jury is still out; the Informatica Marketplace is young; Informatica customers are not accustomed to buying Informatica-centric products from anyone other than Informatica, the economy is tough and it takes time to assess just how many valuable ideas will emerge from the Marketplace eco-system. But what is certain is that the Informatica World conference in May is a great chance to take the pulse of the Marketplace; how many vendors from the Marketplace attend or contribute to the conference, how much publicity does Informatica give to the Marketplace at the conference itself and most of all, how receptive are the Informatica customers who attend the conference to the stalls, talks and pitches of the Marketplace vendors who also attend.

Time will tell but so far the signs are looking good. The first Marketplace Council meeting will take place just before the conference begins and a number of vendors are exhibiting or presenting at the conference. So if you are an Informatica customer who is planning to attend Informatica World 2012 then take some time out to visit the Marketplace vendor’s stands – they’re relying on your feedback to help them develop and grow the innovations that will hopefully bring benefits to the Informatica community now and in the years to come.

The Bowels of the Cache Beast: Informatica Cache Build IO Profile

April 4, 2012 | By | No Comments

This is a short technical piece showing the IO profile of an Informatica PowerCenter session building a large lookup cache which has spilled to disk. Although you should always strive to avoid caches that spill to disk, if you’re in a position where you simply can’t avoid it then you may find this useful.
Click here to read more

Staging, Statistics & Common Sense

March 1, 2012 | By | No Comments

Oracle Statistics Maintenance Strategy in an ETL environment


Statistics maintenance strategies are a dry subject, but an important one for guaranteeing SLAs and consistent performance in an Oracle environment. This paper presents a particular strategy for dealing with statistics in a staging area where joins are used across the staging tables, which forces the need for a good level of statistics to be in place.
Most ETL applications use a staging area to stage source system data before loading it into the warehouse or marts. When implemented within an Oracle environment, a partitioning strategy is usually employed so that data that is no longer required can be removed from the tables with the minimum amount of effort.
However, what sort of statistics maintenance strategy have your DBAs or ETL Architects implemented for the staging area? Have they left everything defaulted and used Oracle’s GATHER STALE option? In a large proportion of the sites I visit, this is the strategy most widely deployed. To be fair, in the majority of cases for the majority of the time this can work perfectly well – but there are always those cases where people get called in the middle of the night because the ETL process is “running slow”. On the client sites I work on, I usually recommend a static approach to statistics in the staging area where possible. That is, you baseline your statistics up front, and then leave them alone.
Note that this is not the only solution available. Others include partitioning strategies which directly match your loading strategy, or dynamic sampling (although this can also cause issues). It would be great to hear how other people handle statistics maintenance in their staging environment (if at all).
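As a rough sketch, the static baseline approach described above can be implemented with Oracle’s DBMS_STATS package; the schema and table names here are taken from the examples later in this piece, but treat the exact call as illustrative rather than the supplied package’s actual implementation:

```sql
-- Baseline: gather representative statistics once, after a typical load
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DW_STAGE',
    tabname          => 'STIN_FINANCE_TRANSACTIONS',
    granularity      => 'ALL',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);

  -- Lock the statistics so the default nightly GATHER STALE job
  -- leaves the baseline alone
  DBMS_STATS.LOCK_TABLE_STATS(
    ownname => 'DW_STAGE',
    tabname => 'STIN_FINANCE_TRANSACTIONS');
END;
/
```

Locking the statistics is what makes the approach “static”: any subsequent automatic gather skips the table unless the lock is explicitly overridden.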
There are usually three distinct types of tables within the staging schema:
· Partitioned, time series tables – these make up the majority of tables in a typical staging area. They are all partitioned on a date range and will all be interrogated by a date range predicate.
· Non-partitioned, time series tables – usually a very small minority. These usually fall into the category of “unable to partition” for some data-specific reason.
· Non-partitioned, non-time series tables – mainly reference-type tables used as reference data. Not usually queried by a predicate, but taken in totality for each load if required.
The staging tables usually get populated by some outside source, either pulling or pushing the data from the source systems. This is usually an insert-only process and therefore does not rely on statistics for its successful execution.
With a monthly partitioning strategy in the staging schema and a daily batch load, partition-level statistics will always be used, as the query will never pull out more than a single day’s worth of data.
The biggest question for the staging area is: how do we keep the statistics up to date so that the statistics for a particular daily load are always available and reasonably accurate? This is more difficult than it sounds. If we were to use the generic GATHER STALE option, the partitions would be analyzed every night only in the first quarter of the month, then every other night, and eventually only once a week, because of the 10% staleness threshold. This obviously leaves us with a problem. Also, when should the statistics be gathered? In order to have statistics available for the latest day loaded, they would have to be gathered after the staging tables have been loaded but before the ETL process starts.
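If you did want to gather fresh statistics in that window between the staging load and the ETL start, the minimal form would be a per-partition gather along these lines; the partition name is hypothetical, chosen to match a monthly partitioning scheme:

```sql
-- Gather statistics for just the current month's partition,
-- after the staging load completes but before the ETL starts
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'DW_STAGE',
    tabname     => 'STIN_FINANCE_TRANSACTIONS',
    partname    => 'P_200503',      -- hypothetical monthly partition name
    granularity => 'PARTITION');
END;
/
```

Even so, this adds a gather step to the critical path of every nightly batch, which is precisely the overhead the static approach avoids.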
For example, on day 24 of the month, with the staging load just completed and the statistics just collected:

explain plan for
select * from stin_finance_transactions
where last_modified_dt >= to_date('24-Mar-2005 00:00:00','DD-MON-YYYY HH24:MI:SS')
and last_modified_dt

Plan hash value: 2829342442

| Id | Operation        | Name | Rows  |
|  0 | SELECT STATEMENT |      | 1205K |

Predicate Information (identified by operation id):

3 - access("LAST_MODIFIED_DT">=TO_DATE('2005-03-24 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))

16 rows selected.

Effectively this means there are no maintenance requirements for the DW_STAGE statistics in the short to medium term. The only maintenance required will be when new partitions are added to the staging tables – which will simply require the application of the current baselines (handled by the supplied package). The possible negative implication of this approach is that the DW_STAGE schema will not support ad hoc end-user queries.
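The piece mentions that applying the baselines to new partitions is handled by a supplied package; in outline, Oracle’s DBMS_STATS.COPY_TABLE_STATS can seed a newly added partition from an existing representative one. The partition names below are hypothetical:

```sql
-- Seed a newly added partition with the baseline statistics
-- of an existing, representative partition
BEGIN
  DBMS_STATS.COPY_TABLE_STATS(
    ownname     => 'DW_STAGE',
    tabname     => 'STIN_FINANCE_TRANSACTIONS',
    srcpartname => 'P_200502',   -- hypothetical baseline partition
    dstpartname => 'P_200504');  -- hypothetical newly added partition
END;
/
```

Copying adjusts the partition key high/low values for the destination partition, so the optimizer sees plausible statistics for the new date range without a fresh gather.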
However, I personally believe this could be viewed as almost a positive – yet another reason not to allow end users access to complicated “partly relational” datasets, thereby reducing the chance of people basing business decisions on poor-quality intelligence.

© Copyright 2012 Assertive Software Ltd. All rights reserved.

HIPPO 3.1 is here!

February 8, 2012 | By | No Comments

HIPPO version 3.1 shipped on Monday! It’s easy to install, ships with its own database and web server, and you’ll be up and running in 30 minutes!

HIPPO has some great new features, as you will see on the Media page, but for now there’s one thing that I want to draw your attention to: HIPPO’s Timeline. On the same chart we overlay resource usage with all of the workflows or sessions – you choose which view you want – active at that point in time. You can scroll through time and zoom in on particular time periods to see everything that is going on and exactly what percentage of CPU, memory, data movement, network, etc. is being used by each workflow or session.

It’s a great feature that makes drilling down into overnight batch problems or finding space in your Informatica schedule an absolute cinch! HIPPO’s Timeline – 21st century data visualization for Informatica!

The Twelve Days of Christmas from HIPPO: No.11 – Turbo-charged Informatica Administration

December 23, 2011 | By | No Comments

As we wind down for the holidays I am wrapping up this series with a bumper bundle of three gifts from HIPPO to get 2012 off to a great start. Are you ready for your next gift from HIPPO?

Gift No. 11 is a HIPPO Hub Manager Trial license. This version of HIPPO is designed for Informatica Administrators and Managers. It provides you with the tools you need to understand everything that is significant in your Informatica environment.

This means we include HIPPO’s Capacity Planning feature to trend and plan resource usage; HIPPO’s Vital Signs feature, which provides a real-time monitor of the status of all Repositories, Domains and Integration Services across your Informatica environments; and HIPPO’s File System, CPU and Memory monitoring features, which let you see how much resource is available and how much is being consumed.

HIPPO Hub Manager also includes the Activity Monitor and allows you to set alerts via the Notification Centre on a wide variety of activity and resource thresholds. And lastly, it gives you the ability to understand the use of your environment by project, so that you can trend demand and recharge the use of your shared environment.

HIPPO’s penultimate gift this Christmas gives every Informatica Administrator the chance to enjoy a free trial of HIPPO Hub Manager in 2012: visit our website and sign up to enjoy an unrivalled ability to control and administer your Informatica environment.