
Mark Connelly, Author at HIPPO | Vertoscope

Check out the latest news on HIPPO.

Get Ready for HIPPO V4!

March 27, 2018

Announcing Version 4.0 of HIPPO

Version 4 of HIPPO is available and has some great new features.
Over the course of the summer we have met with developers, administrators and support staff from twenty of the world’s largest Informatica customers to find out what we could add to HIPPO to make their lives easier.

They told us that they needed a tool which told them where they needed to focus, which jobs had failed, which had overrun, which had contended for resources with other resource-hungry jobs. They needed visibility; they were tired of going to multiple tools to understand their Informatica environment, of spending hours analyzing log files, of trying to guess when they should locate a new workflow in their schedule. Above all they needed something that could cut through the complexity of their Informatica environments to give them the answers about what is going on now as well as what has happened historically.

The result is HIPPO V4, built by us to meet the specific needs of Informatica administrators, developers and support staff. HIPPO V4 is designed to cut through the complexity and get you answers fast, using HIPPO’s real-time analysis combined with our unique operational data warehouse containing an entire history of activity in your Informatica environment from the moment HIPPO was installed.

Version 4 of HIPPO is available now. Read how HIPPO will make it easier for you to manage, support and improve the performance of your Informatica PowerCenter platform.

HIPPO – what you see is what you get!
Get Real Time Alerts & Notifications from HIPPO.
You have hundreds or even thousands of Informatica sessions running every day, but you need to focus on the jobs that are not behaving normally: which have overrun, which have failed or dropped rows, or which are using more CPU or memory than usual. That’s where Exceptions, SLA and Deviation alerting from HIPPO will make your life easier.

HIPPO ensures that you are the first to know about over thirty different types of issues and failures that you define, ranging from missed SLAs and workflow, session or service failures to changes in performance levels or data volumes processed. HIPPO now even enables you to correlate multiple failures for rapid root cause analysis, and because HIPPO monitors all of the processes in your environment, including non-Informatica processes, you can set notifications on Oracle or Teradata resource usage as well.

Alerts & Notifications have always been part of HIPPO; in release 4 the feature has been hugely enhanced into a dedicated module we call HANC – HIPPO’s Active Notification Centre – with a new, easier interface for creating and editing rules.
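HIPPO’s internal rule engine is not documented here, but the general shape of deviation alerting is easy to sketch. The Python fragment below is illustrative only – the function name, input shapes and three-sigma threshold are assumptions, not HIPPO’s actual implementation:

```python
from statistics import mean, stdev

def deviation_alerts(history, current, threshold=3.0):
    """Flag sessions whose latest run time deviates from their own
    history by more than `threshold` standard deviations.

    history: {session_name: [past run durations, seconds]}
    current: {session_name: latest run duration, seconds}
    Returns a list of (session, duration, historical_mean) tuples.
    """
    alerts = []
    for session, duration in current.items():
        runs = history.get(session, [])
        if len(runs) < 2:
            continue  # not enough history to judge what is 'normal'
        mu, sigma = mean(runs), stdev(runs)
        if sigma > 0 and abs(duration - mu) > threshold * sigma:
            alerts.append((session, duration, mu))
    return alerts
```

The same pattern extends naturally to CPU, memory or rows processed – anything with a per-run history to deviate from.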

Focus on the Activity that matters in your Informatica Environment.
Focus on what you’re interested in! Activity Monitor can now focus only on failures or executions with rejected rows across your environment, both in real time and historically. HIPPO visually links these events to notifications so you can see at a glance which failures are most significant and must be dealt with first. Of course, Activity Monitor also gives you the big-picture view of all activity in your environment, so you get a full 360-degree view whenever you need it!

Be the first to know when Repository, Integration or Domain Services fail.
You need to know immediately when a Repository Service, Integration Service or Domain goes down. That’s why in HIPPO version 4 the Vital Signs function has been enhanced to give you the most up-to-date status possible on even the largest implementations, ensuring that you can respond instantly to minimise downtime.

HIPPO links Version Control History to Session and Workflow failures.
Everyone knows that things often go wrong when changes to mappings, sessions and workflows go live. Unfortunately, support staff rarely know that a change has been implemented. That’s why in HIPPO v4 it’s now easy to see when the code version has changed and what the impact of those changes has been. HIPPO is the only tool on the market to correlate session behaviour with version history over time, helping you identify the root cause of failure faster.

There are ten sessions running, which one is using all the memory?
HIPPO’s Timeline feature has rightly been described as a breakthrough in the ability of Informatica staff to visualise and interpret activity across their entire Informatica environment. Timeline is even better in Version 4; as well as providing you with an overview of everything that is running, or has run in your chosen time frame, you can now also visualize an individual session’s contribution to the overall system resource usage allowing you to pinpoint contention and resource constraints easily in a busy shared environment.

What’s coming next in Version 4.1

Version 4.1 is planned and we are already building some exciting new features for this release. Here’s what is coming in version 4.1:

Source and Target Analysis
In Version 4.1 HIPPO will identify at a glance when individual tables are being impacted by multiple sessions as sources or targets to enable you to quickly identify when database contention is causing your sessions and workflows to run more slowly than normal.
Governance Reporting
In version 4.1 HIPPO will provide a range of governance reports covering everything from naming standards to session configuration settings to ensure that you can catch coding errors before they cause sessions and workflows to fail in production.
Get advance warning of SLA failure by Real-time Batch Deviation Analysis
Watch your batch execute in real time and see expected end-times for each job. If a job runs longer than its predicted elapsed time it is highlighted in red, so you can see at a glance the health of your processes based on historical analysis.
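As a rough illustration of the idea (not HIPPO’s actual algorithm), an expected end-time can be derived from each job’s historical elapsed times; the function and field names below are hypothetical:

```python
from statistics import median

def batch_health(history, running, now):
    """Colour running jobs against their historical elapsed times.

    history: {job: [past elapsed times, seconds]}
    running: {job: start time, seconds since some epoch}
    Returns {job: (expected_end, status)}; status is 'red' once the
    job has run past its predicted elapsed time, otherwise 'green'.
    """
    out = {}
    for job, started in running.items():
        past = history.get(job)
        if not past:
            continue  # no history, no prediction
        predicted = median(past)  # median is robust to the odd outlier run
        expected_end = started + predicted
        out[job] = (expected_end, 'red' if now > expected_end else 'green')
    return out
```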
HIPPO offers sophisticated Filesystem Monitoring
Version 4 of HIPPO includes the ability to relate individual cache files on disk to the session that generated them. This allows you to visualise not only filesystem usage (e.g. cache directories) but also which projects and which sessions are using this space in near-real time, which means you can take preventative action to stop service failures due to directory space issues.

Consolidated Domain, Node & Catalina Logs analysis from within HIPPO
Get all of your logs in one place across multiple domains, and when you are notified of errors you will now receive the actual log excerpt as part of the notification from HIPPO’s Active Notification Centre.

For more information contact us at hippo@vertoscope.com or call us in the US at +1 866 634 1033 or in the UK at +44 (0) 20 7043 1787.

SLA and KPI Reporting in HIPPO 3.3 beta release

June 24, 2017

We are very proud to let you know that HIPPO’s KPI and SLA reporting modules are now in their beta test phase. You can now define SLAs in HIPPO and they will be actively monitored against the performance of your data integration processes. Choose from a range of options including number of rows processed, start time, run-time duration, workflow and session failure, and rejected rows, among others.

HIPPO will actively monitor your SLAs and, when they are breached, will inform all stakeholders by email. And we have gone even further! HIPPO is now mashable with your in-house SLA and KPI reporting applications! We have added new data visualisations so that you can see at a glance whether you are meeting your targets.

This is the latest feature that we have added to HIPPO’s constantly growing range of features and follows hard on the heels of our innovative Timeline feature which overlays resource consumption with the sessions and workflows active in that timeframe.

Taking the pulse of the Informatica Marketplace

April 17, 2017

In this weekend’s London Financial Times Sarah Gordon wrote about how Tech superiority is fleeting without an innovative edge. She cited David Mitchell’s novel Cloud Atlas, once a favourite of mine too, where consumers don’t drive cars, instead they Suzuki to their destination and when they want to call a friend they sony them rather than make a call on their cell phone. The book was only published back in 2004 but the brand names already make it seem dated. In 2012 you wouldn’t sony your friends, you’d probably apple them instead.

But as Sarah goes on to write, this week brought several non-fictional reminders of how quickly market leaders in the Tech industry can find their position eroded.

Lack of innovation caused by poor support within a company’s divisions or an overly centralised structure disables the life-support systems that enable an innovative idea to be first fostered, then road tested and then aligned with the organisation’s overall objectives. This gradual ‘opening up to the light’ of an innovation is the key; stress-test the idea too early and it will fail – it needs patience and support to foster and a gradual exposure to the ultimate testing ground, the market, in order that it can adapt to customers’ needs.

It struck me that this is what the Informatica Marketplace is all about; it is a nursery for ideas. They can emerge into a supportive environment and attract attention by the ‘donut-effect’ of bringing vendors and innovators together into the same virtual neighbourhood. Some ideas will still fall by the wayside, many will be re-cycled into other ‘shapes’ that better meet customer demand and the best ideas will either take off in their own right or be considered so good by Informatica that they decide to incorporate a part or whole of that idea into their product suite. Without the Marketplace there is simply no easy route to market for vendors; they become isolated and their innovations wither. The Informatica community also loses the diversity needed to keep an exciting, dynamic and innovative culture alive.

However the jury is still out; the Informatica Marketplace is young; Informatica customers are not accustomed to buying Informatica-centric products from anyone other than Informatica, the economy is tough and it takes time to assess just how many valuable ideas will emerge from the Marketplace eco-system. But what is certain is that the Informatica World conference in May is a great chance to take the pulse of the Marketplace; how many vendors from the Marketplace attend or contribute to the conference, how much publicity does Informatica give to the Marketplace at the conference itself and most of all, how receptive are the Informatica customers who attend the conference to the stalls, talks and pitches of the Marketplace vendors who also attend.

Time will tell, but so far the signs are looking good. The first Marketplace Council meeting will take place just before the conference begins and a number of vendors are exhibiting or presenting at the conference. So if you are an Informatica customer who is planning to attend Informatica World 2012, take some time out to visit the Marketplace vendors’ stands – they’re relying on your feedback to help them develop and grow the innovations that will hopefully bring benefits to the Informatica community now and in the years to come.

The Bowels of the Cache Beast: Informatica Cache Build IO Profile

April 4, 2012

This is a short technical piece showing the IO profile of an Informatica PowerCenter session building a large lookup cache which has spilled to disk. Although you should always strive to avoid caches which spill to disk, if you’re in a position where you simply can’t avoid it then you may find this useful.

Staging, Statistics & Common Sense

March 1, 2012

Oracle Statistics Maintenance Strategy in an ETL environment

STEPHEN BARR

Statistics maintenance strategies are a dry subject, but an important one for guaranteeing SLAs and consistent performance in an Oracle environment. This paper presents a particular strategy for dealing with statistics in a staging area where joins are used across the staging tables, which forces the need for a good level of statistics to be in place.
Most ETL applications use a staging area to stage source system data before loading it into the warehouse or marts. When implemented within an Oracle environment, a partitioning strategy is usually employed so that data that is no longer required can be removed from the tables with the minimum amount of effort.
However, what sort of statistics maintenance strategy have your DBAs or ETL architects implemented for the staging area? Have they left everything at the defaults and used Oracle’s GATHER STALE option? In a large proportion of the sites I visit, this is the strategy most widely deployed. To be fair, in the majority of cases and for the majority of the time this can work perfectly well – but there are always those cases where people get called in the middle of the night because the ETL process is “running slow”. On the client sites I work on, I usually recommend a static approach to statistics in the staging area where possible: baseline your statistics up-front, and then leave them alone.
Note that this is not the only solution available. Others include partitioning strategies which directly match your loading strategy, or dynamic sampling (although this can also cause issues). It would be great to hear how other people handle statistics maintenance in their staging environment (if at all).
There are usually three distinct types of table within the staging schema:
Partitioned, time-series tables – these make up the majority of tables in a typical staging area. They are all partitioned on a date range and will all be interrogated by a date-range predicate.
Non-partitioned, time-series tables – usually a very small minority, which fall into the category of “unable to partition” for data-specific reasons.
Non-partitioned, non-time-series tables – mainly reference tables. Not usually queried by a predicate, but taken in totality for each load if required.
The staging tables usually get populated by an outside source, either pulling or pushing the data from the source systems. This process is usually insert-only and therefore does not rely on statistics for its successful execution.
With a monthly partitioning strategy in the staging schema and a daily batch load, partition-level statistics will always be used, as the query will never pull out more than a single day’s worth of data.
The biggest question for the staging area is: how do we keep the statistics up to date so that the statistics for a particular daily load are always available and reasonably accurate? This is more difficult than it sounds. If we were to use the generic “gather stale” option, the partitions would only be analyzed each night in the first quarter of the month, going to every other night and eventually each week, because of the 10% stale setting. This obviously leaves us with a problem. Also, when are the statistics to be gathered? In order to have the statistics available for the latest day loaded, they would have to be gathered after the staging tables have been loaded but before the ETL process starts.
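The slowing cadence is simple arithmetic: with roughly equal daily loads into a monthly partition, each new day modifies a smaller and smaller fraction of the partition, so the 10% staleness threshold is crossed less and less often. A minimal simulation (assuming idealised equal-volume daily loads into one monthly partition) shows the effect:

```python
def stale_gather_days(days_in_month=31, stale_pct=0.10):
    """Simulate GATHER STALE on a monthly partition loaded daily.

    Each day adds one day's worth of rows; the partition counts as
    'stale' once accumulated modifications reach stale_pct of the rows
    present at the last analyze. Returns the days on which statistics
    would be re-gathered.
    """
    gathered = []
    rows_at_last_analyze = 0  # in units of one day's load volume
    mods = 0
    for day in range(1, days_in_month + 1):
        mods += 1  # today's load
        if rows_at_last_analyze == 0 or mods >= stale_pct * rows_at_last_analyze:
            gathered.append(day)
            rows_at_last_analyze = day  # partition now holds `day` days of data
            mods = 0
    return gathered
```

Under these assumptions the partition is re-analyzed nightly early in the month, then every second night, then only every third night by month-end – and the gaps keep widening as the partition grows.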
For example, on day 24 of the month, the staging load has just completed and the statistics have just been collected:

explain plan for
select * from stin_finance_transactions
where last_modified_dt >= to_date('24-Mar-2005 00:00:00','DD-MON-YYYY HH24:MI:SS')
  and last_modified_dt <  to_date('25-Mar-2005 00:00:00','DD-MON-YYYY HH24:MI:SS')
/

Explained.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
Plan hash value: 2829342442
-----------------------------------------------------------------------------------
| Id | Operation                          | Name                         | Rows  |
-----------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                   |                              | 1205K |
|  1 |  PARTITION RANGE SINGLE            |                              | 1205K |
|  2 |   TABLE ACCESS BY LOCAL INDEX ROWID| STIN_FINANCE_TRANSACTIONS    | 1205K |
|* 3 |    INDEX RANGE SCAN                | UI_STIN_FINANCE_TRANSACTIONS | 1205K |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("LAST_MODIFIED_DT">=TO_DATE('2005-03-24 00:00:00','yyyy-mm-dd hh24:mi:ss')
              AND "LAST_MODIFIED_DT"<TO_DATE('2005-03-25 00:00:00','yyyy-mm-dd hh24:mi:ss'))

16 rows selected.

Effectively this means there are no maintenance requirements for the DW_STAGE statistics in the short to medium term. The only maintenance required will be when new partitions are added to the staging tables – which will simply require the application of the current baselines (handled by the supplied package). The possible negative implication of this approach is that the DW_STAGE schema will not support ad hoc end-user queries.
However, I personally believe this could be viewed as almost a positive – yet another reason not to allow end-users access to complicated “partly relational” datasets, thereby reducing the chances of people basing business decisions on poor-quality intelligence.

Copyright © 2012 Assertive Software Ltd. All rights reserved.

HIPPO 3.1 is here!

February 8, 2012

HIPPO version 3.1 shipped on Monday! It’s easy to install, ships with its own database and web server, and you’ll be up and running in 30 minutes!

HIPPO has some great new features, as you will see on the Media page, but for now there’s one thing I want to draw your attention to: HIPPO’s Timeline. On the same chart we overlay resource usage with all of the workflows or sessions – you choose which view you want – active at that point in time. You can scroll through time and zoom in on particular periods to see everything that is going on and exactly what percentage of CPU, memory, data movement, network, etc. is being used by each workflow or session.

It’s a great feature that makes drilling down into overnight batch problems or finding space in your Informatica schedule an absolute cinch! HIPPO’s Timeline – 21st century data visualization for Informatica!

The Twelve Days of Christmas from HIPPO: No.11 – Turbo-charged Informatica Administration

December 23, 2011

As we wind down for the holidays I am wrapping up this series with a bumper bundle of three gifts from HIPPO to get 2012 off to a great start. Are you ready for your next gift from HIPPO?

Gift No. 11 is a HIPPO Hub Manager Trial license. This version of HIPPO is designed for Informatica Administrators and Managers. It provides you with the tools you need to understand everything that is significant in your Informatica environment.

This means we include HIPPO’s Capacity Planning feature to trend and plan resource usage, HIPPO’s Vital Signs feature which provides a real-time monitor for the status of all of the Repositories, Domains and integration services across all of your Informatica environments, HIPPO’s File System, CPU and Memory monitoring features which enable you to see how much resource is available and how much is being consumed.

HIPPO Hub Manager also includes the Activity Monitor and allows you to set alerts via the Notification Centre on a wide variety of activity and resource thresholds. And lastly, the ability to understand the use of your environment by project to enable you to trend demand and recharge the use of your shared environment.

HIPPO’s penultimate gift this Christmas is to give the chance to all Informatica Administrators to enjoy a free trial of HIPPO Hub Manager in 2012 by visiting our website and signing up to enjoy an unrivalled ability to control and administer your Informatica environment.


The Twelve Days of Christmas from HIPPO: No.10 – Step up Developers & Testers!

December 22, 2011

As we all wind down for Christmas I thought that I would wrap up this series with a bumper bundle of three gifts from HIPPO to get 2012 off to a great start. Are you ready for your first parcel?

Gift No. 10 is a HIPPO for Projects trial license. This version of HIPPO is specifically designed to be used by Development and Testing teams and contains all of the features needed to understand workflow, mapping and session performance: including visibility of memory and CPU usage and trends from the big picture down to individual transformations, sources and targets.

Testers can set performance and resource thresholds that must be passed before go-live. HIPPO for Projects includes HIPPO’s Activity Monitor feature and the Notification Centre to alert you to events and execution issues in your environments.

Then there’s the Analysis feature in HIPPO: this is the deep dive down into the top workflow and session consumers by resource consumption, by elapsed time and by data movement. This is where your performance tuning work is likely to yield the largest benefit which is why HIPPO takes you from here to an intensive analysis of the Workflow by visualizing Execution behaviour, Workflow and Session trends and a historic analysis of workflow behaviour. HIPPO then drills down to profile individual Sessions by analysing individual Transformation behaviours, Data Movement characteristics, Task trends in terms of CPU, Cache, Data Movement, Time to First Row and thread profiles.

So Developers and Testers step forward! Unwrap your free trial of HIPPO for Projects by visiting our website to sign up for your trial in 2012 and prepare to see a step-change in the quality and performance of your Informatica applications.

The Twelve Days of Christmas from HIPPO: No. 9 – Unlock the Mystery of the Cache!

December 20, 2011

Browse many of the Informatica developer forums and what is most striking is how many posts relate to cache behaviour and cache sizing. Often kind folk reply, offering solutions to the struggling developers, and the range of their advice varies enormously: “just set it to auto”, “try doing X and adjusting Y and then let me know if it had any effect” or “re-design your mapping” are common responses. Some replies may even contain good advice, but what their variety makes clear is that there is little consensus.

Yet a shared understanding of cache behaviour is critical since it gives us the ability to gauge how much memory is actually being used, how much is actually required and to understand how it is split by cache type. This can shine a new light on an existing development project or even on a mature code-base. Understanding the make-up and nature of the code running in your environment allows you to make much better decisions – whether those decisions are capacity planning, buying new kit or just trying to squeeze more from your existing investment – knowledge is the key to making informed choices about resource usage.

That’s why HIPPO captures statistics from Informatica about the cache sizes for aggregators, joiners, sorters and lookups. HIPPO reports this alongside the actual size of the cache in memory, according to the OS, and whether that cache has spilled to disk. HIPPO also captures how long the cache took to build and presents this in a series of data visualizations that include how much memory is actually available when your session executes.

This means that you can optimize cache memory usage across the session starting in development to ensure that the session is highly performant at go-live. Then HIPPO will alert you on the need to improve performance for the mature production code as circumstances change; as more sessions run in contention; as data volumes in look-up caches grow in step with historical data growth; and so on. HIPPO then trends this cache behaviour over your chosen time span to help you plan ahead for future needs. If you are serious about your Informatica environment then you need HIPPO’s pro-active monitoring approach to sustain highly-performant applications that get the most from your infrastructure investment.
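As a toy illustration of the configured-versus-actual comparison described above (the field names are hypothetical, not HIPPO’s data model):

```python
def cache_report(sessions):
    """Classify each session's cache behaviour.

    sessions: list of dicts with hypothetical fields:
      name          - session name
      configured_mb - cache size configured in the session
      used_mb       - resident cache size as reported by the OS
      spilled       - True if the cache overflowed to disk
    """
    report = []
    for s in sessions:
        if s['spilled']:
            note = 'spilled to disk'  # raise the cache size or reduce the data
        elif s['configured_mb'] - s['used_mb'] > 0.5 * s['configured_mb']:
            note = 'oversized'        # memory reserved but mostly unused
        else:
            note = 'ok'
        report.append((s['name'], note))
    return report
```

Even a crude split like this separates the sessions worth tuning (spilled) from those quietly wasting shared memory (oversized).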

That’s why HIPPO’s ninth day of Christmas gift is the unlocking of the mystery of the Cache!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas!

The Twelve Days of Christmas from HIPPO: No. 8 – Interactive Investigation!

December 16, 2011

Interactive Data Visualisation is a hot topic right now so can we use it to analyse the multiple factors which affect Informatica performance?

Data visualisation is a technique that has actually been around for a long time. Perhaps the most famous data visualisation is of Napoleon’s infamous march on Moscow, produced by Charles Minard in 1869. What is so great about this visualisation is that it combines the four key elements of the story in one easily understood chart: the dates (a timeline), the geography (the army’s route), the huge loss of life and the massive temperature variation. It is clear that all four elements must be on the same chart to convey the whole story.

Is there a lesson there for those of us involved with Informatica?

Well, we were presenting HIPPO to the Informatica staff at a large bank recently. They really liked HIPPO but they felt that a great addition would be an overlay of the four key elements of an Informatica batch run on the same graph. Sure, the information was already in HIPPO, but wouldn’t it be great if you could combine these key elements into a single chart and make it interactive: adding and subtracting the key elements, as you need.

It was such a great idea that we just had to run with it!

This new data visualisation in HIPPO will combine Time, CPU usage, Memory Usage and Workflow activity in a single interactive chart. This means that when you need to know why your workflow overran, or missed its SLA, then you can quickly build a picture of what was going on when your Workflow executed. Start by selecting your timeframe, then overlay the graph with all of the workflows running in contention with yours, then overlay CPU usage to assess availability during this timeframe and finally overlay Memory usage.

In a few mouse clicks you’ll understand if factors outwith your workflow such as resource shortfall or workflow contention caused the performance dip. In many cases you will have your answer right there; but if not then you can use HIPPO to drill down to what was happening inside your workflow: were data volumes unusually high? Is the target database the issue? And so on.

It has been a busy few weeks in the HIPPO workshop turning this great idea from the audience at the bank into a late Christmas present for our customers. So why don’t you benefit from their suggestion too? Take a look at HIPPO in 2012 and get a faster way to solve those performance problems.

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas!