The Twelve Days of Christmas from HIPPO: No. 5 – Sunny Support Staff!

December 11, 2011

Application Support Analysts in an Informatica Grid environment have a tough task: they often need to monitor multiple Integration Services at the same time. The execution history they can access is limited, which makes it hard to put overrunning jobs into context, and they have limited time and access with which to diagnose issues and failures.

That’s why we have added some specially-created features for Support staff to HIPPO which enable Analysts to monitor their entire Informatica estate on a single screen and drill down from there to get all the details they need. The first of these is HIPPO’s Activity Monitor: a visualisation of the current status of every task running across every Integration Service in the Informatica environment. The Activity Monitor Live screen automatically refreshes, colour-coding every task: red for failure, amber for sessions with rejected rows and green for success. Every task stays on screen for ten minutes after it ends and, because HIPPO automatically extracts everything that is important from the Log File, you can drill down to the detailed level and examine, for example, how many rows were written to each of the targets in the session, how many rows were rejected and detailed diagnostic information for every failure, so the fault can be routed to the relevant authority.
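
If you are curious how such a traffic-light scheme hangs together, here is a minimal Python sketch of the colour-coding and ten-minute retention rules described above. It is an illustration only, with invented statuses and field names, not HIPPO’s actual implementation:

    from datetime import timedelta

    def task_colour(status, rejected_rows):
        """Map a task's outcome to the traffic-light scheme: red for
        failure, amber for rejected rows, green for success."""
        if status == "FAILED":
            return "red"
        if rejected_rows > 0:
            return "amber"   # the session succeeded, but rows were rejected
        return "green"

    def still_visible(end_time, now, retention=timedelta(minutes=10)):
        """A finished task stays on screen for ten minutes after it ends;
        a running task (end_time is None) is always shown."""
        return end_time is None or now - end_time < retention

    # Example: a session that succeeded but rejected 42 rows shows amber.
    print(task_colour("SUCCEEDED", 42))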

HIPPO’s Activity Monitor also provides an Historic View which means that you can put tonight’s overrun or failure into context. Has this incident ever occurred before? (HIPPO stores all history back to when it was first installed.) Has the session ever run this long before? What about last week’s, last month’s or last year’s execution? And those rejected rows – why is this happening? You can also drill down from the Historic View into every important metric recorded for each execution: from the high-level statistics all the way down to the % busy/idle for the Reader, Writer and Transformation threads, and the trends in data movement and resource consumption for the Task.
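
To make the “has this ever happened before?” question concrete, here is a rough Python sketch of one way to put tonight’s run time into historical context. The figures and the two-standard-deviations rule are illustrative assumptions, not a description of HIPPO’s internals:

    from statistics import mean, stdev

    def overrun_context(tonight_secs, history_secs, threshold=2.0):
        """Flag tonight's run if it sits more than `threshold` standard
        deviations above the historical average for this task."""
        mu = mean(history_secs)
        sigma = stdev(history_secs)
        z = (tonight_secs - mu) / sigma if sigma else 0.0
        return {
            "historical_mean_secs": round(mu, 1),
            "z_score": round(z, 2),
            "ever_run_this_long": any(h >= tonight_secs for h in history_secs),
            "overrun": z > threshold,
        }

    # Example: a 95-minute run against a history of roughly 60-minute runs.
    print(overrun_context(5700, [3600, 3540, 3720, 3660, 3600]))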

So, without leaving HIPPO, Support Analysts can enjoy a 360-degree view of the activity in their environment and can access all of the information they need to add value to the support they give to their Informatica stakeholders.

That’s why HIPPO is making Support Staff Sunny this Christmas!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run-up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before, but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No. 4 – Administrators Aglow!

December 8, 2011

Spare a thought for your Informatica Administrator: they need to combine serious technical ability with the kinds of deal-making skills that would get them fast-tracked in the Diplomatic Service!

A big problem for Administrators is how to ensure that the resources of a centralized Informatica Grid are shared fairly among their customers: a group of under-pressure Program and Project Managers with SLAs and delivery deadlines to meet.

To illustrate the problem, let’s turn the clock back to when the plans were first made to on-board these applications. Meetings were held and capacity requirements mapped out, often using a best guess of what would be needed plus a bit more for contingency. After all, who wants to risk going live and being unable to meet processing demand? And a recharge structure was probably agreed: someone, somewhere would pay for the additional resource required on the Grid to handle this increased workload. Perhaps the project would pay a monthly cost, or perhaps they would pay upfront for additional capacity to be added to the Grid. In both cases the estimates were made well before go-live, to allow for purchasing and commissioning work to take place.

What happens next? Well, everyone wants to feel that they are getting a fair deal, right? Program Managers are no different. But how does an Administrator calculate the aggregate Informatica resource usage for an Application and, by extension, substantiate the monthly fees paid by their internal customers? Harder still, what about the initial upfront investment: has it been justified by post-go-live use?

Now you know why an Administrator needs to be a Diplomat as well as a Techie!

HIPPO’s gift to the Administrator is to take the heat out of the recharge process. HIPPO aggregates CPU, Memory and Data Movement metrics by Project over time and precisely calculates the resource cost of each project per month. HIPPO can even support Peak and Off-Peak charging tariffs. And what about those over- or under-provisioned projects? With HIPPO you know exactly how much resource you need and when you need it, which makes for smarter provisioning decisions.
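
As a back-of-the-envelope illustration of that recharge arithmetic, here is a hypothetical Python sketch that aggregates usage by Project and prices it against Peak and Off-Peak tariffs. The records, field names and tariff figures are all invented for the example:

    from collections import defaultdict

    # Invented per-task usage records for one month.
    usage = [
        {"project": "Finance DW", "cpu_secs": 1200, "gb_moved": 40, "peak": True},
        {"project": "Finance DW", "cpu_secs": 800,  "gb_moved": 25, "peak": False},
        {"project": "Risk Mart",  "cpu_secs": 300,  "gb_moved": 10, "peak": True},
    ]

    # Illustrative tariffs: currency units per CPU-second and per GB moved.
    TARIFF = {True:  {"cpu": 0.020, "gb": 0.50},   # peak hours
              False: {"cpu": 0.008, "gb": 0.20}}   # off-peak hours

    def monthly_recharge(records):
        """Aggregate resource usage by project and price each tariff band."""
        bill = defaultdict(float)
        for r in records:
            t = TARIFF[r["peak"]]
            bill[r["project"]] += r["cpu_secs"] * t["cpu"] + r["gb_moved"] * t["gb"]
        return dict(bill)

    print(monthly_recharge(usage))   # one line per project: the monthly charge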

That’s why HIPPO is making Administrators Aglow this Christmas!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run-up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before, but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No.3 – the Delighted Developer!

December 7, 2011

There’s something for everyone in the latest release of HIPPO and for HIPPO’s third gift this Christmas let’s open a bumper present for every Informatica Developer out there. Remember all those hours spent Log Trawling to extract the information you need to understand why your session ran slowly, or spilt to disk, or how much Memory and CPU was used by your Session and by each Transformation, or whether your partitioning strategy did what you expected? Well, that’s history now!

The new release of HIPPO has a smarter way to get to the information that you need to understand the reasons for poor performance and the opportunities you have to make significant improvements. For instance, HIPPO will tell you:

  • how much Memory and CPU was actually available to Informatica when your job ran
  • what effect your session partitioning strategy has had
  • what else was running on the Node, Integration Service or entire Grid when your job ran, and how the resource profiles of these processes compare
  • the actual Memory used to run your Session according to the Host, not Informatica (illustrated in the sketch below)
  • the resource usage and elapsed time trends for your process execution over the past week, month or year
  • what’s happening in your Workflow and overall Project: their aggregated performance profile and where that trend is headed
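
To illustrate the fourth bullet above, here is a small hypothetical Python sketch comparing the Memory figure Informatica reports for a Session with what the Host actually observed; the session names, figures and the 1.5x flagging factor are assumptions made for the example:

    # Invented per-session figures: Informatica's reported memory against
    # what the Host actually observed for the same process.
    sessions = [
        {"name": "s_load_gl",  "infa_mem_mb": 512, "host_mem_mb": 1480},
        {"name": "s_stage_fx", "infa_mem_mb": 256, "host_mem_mb": 270},
    ]

    def memory_gap(records, factor=1.5):
        """Flag sessions whose Host-observed footprint far exceeds the
        Informatica-reported figure: a prompt to revisit cache sizing."""
        return [r["name"] for r in records
                if r["host_mem_mb"] > factor * r["infa_mem_mb"]]

    print(memory_gap(sessions))   # -> ['s_load_gl']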

The hours that you spend crawling logs, making notes and calculations, getting frustrated by having only two weeks of history in your Informatica Repository and being unable to access and correlate Operating System metrics with Informatica are over. HIPPO gives you all of the information that you need at your fingertips, sourced from across your infrastructure for the Informatica and non-Informatica processes that make up your Application. HIPPO presents the performance profile of your non-Informatica processes together with your Informatica Transformations, Sessions, Workflows, Projects, Nodes and Grids, and makes navigation between these levels easy: you can move effortlessly from the profile of an individual transformation all the way up to the bird’s-eye view of your entire Application and Environment.

HIPPO will make Developers Delighted this Christmas by taking the legwork – and the guesswork – out of Informatica Performance profiling. Get HIPPO and turn optimizing Informatica from an Art into a Science!

Check back tomorrow for number 4 in the series.

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run-up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before, but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No.2 – All Hail the Performance Czar!

December 7, 2011

For HIPPO’s second gift this Christmas, let’s look at a feature that has been created with your organization’s Performance Czar in mind but is equally useful for Informatica Developers, Testers and Administrators.

The new release of HIPPO has a unique feature that enables you to Search and Report by a wide variety of Performance metrics. So if you want to know which Session executions consumed more than 50 CPU seconds, or used more than 250 MB of Memory, or whose Cache spilt to Disk, or which scored highest in Time to First Row, then these and many more Performance statistics are available within HIPPO. You can set the Performance thresholds that make sense for your organization and then narrow your Search by Date & Time range, by Project, Node, Integration Service, Grid, Repository and Domain, and you can even include design features in your Search, such as the use of SQL Overrides!
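
In spirit, the Search behaves like a stack of optional filters over execution records. The Python sketch below mimics that idea with invented records and field names; it is a way to picture the feature, not HIPPO’s query engine:

    # Invented session-execution records; thresholds mirror the examples
    # in the text (50 CPU seconds, 250 MB of memory, cache spilt to disk).
    runs = [
        {"session": "s_load_gl",   "cpu_secs": 72, "mem_mb": 310, "spilled": True},
        {"session": "s_stage_fx",  "cpu_secs": 12, "mem_mb": 80,  "spilled": False},
        {"session": "s_risk_calc", "cpu_secs": 55, "mem_mb": 120, "spilled": False},
    ]

    def search(records, cpu_over=None, mem_over=None, spilled=None):
        """Apply whichever performance thresholds are set; worst first."""
        hits = records
        if cpu_over is not None:
            hits = [r for r in hits if r["cpu_secs"] > cpu_over]
        if mem_over is not None:
            hits = [r for r in hits if r["mem_mb"] > mem_over]
        if spilled is not None:
            hits = [r for r in hits if r["spilled"] == spilled]
        return sorted(hits, key=lambda r: r["cpu_secs"], reverse=True)

    # Session executions that consumed more than 50 CPU seconds.
    for r in search(runs, cpu_over=50):
        print(r["session"], r["cpu_secs"])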

Of course, we are not claiming that high scores in any of these categories are proof in themselves of poor performance, but what is certain is that these are the resource-intensive processes that should be top of your list for an optimization review. So if you are a Developer, a Tester or an Administrator, you can use HIPPO to rank Sessions by performance metrics and then drill down to see why they are so resource-intensive and what your options are to make them more efficient. Just make sure that you either make improvements or use HIPPO to have your explanation ready when the Performance Czar stops by to discuss the Performance Threshold report that they just ran using HIPPO!

So forget the usual tributes that you pay to your Performance Czar at this time of year and give them something different – something both they and you will find really useful – the HIPPO Performance report.

Check back tomorrow for number 3 in the series!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run-up to Christmas I’m going to choose twelve of my favourite new benefits. It is a tradition in many parts of the world to celebrate the twelve days of Christmas and we have a Christmas carol here in the UK that associates each of the twelve days with a gift. I acknowledge that the twelve days of Christmas occur after Christmas and not before, but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No.1 – A Happy DBA!

December 7, 2011

The latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run-up to Christmas I’m going to choose twelve of my favourite new benefits.

So, for the first day of Christmas, let’s start with something special for the DBA in your life! Version 3 of HIPPO has a unique feature which enables a DBA to trace an individual execution of a SQL statement, in seconds, all the way back from the database to the Session and Workflow that is responsible for it. And why will your DBA rate this their best Christmas gift ever? Well, DBAs see Informatica from the database end. It isn’t straightforward to find the Session owners of long-running SQL statements initiated by Informatica processes or, worse still, orphan SQL executions spawned by long-cancelled Sessions. So they have a tough call to make, made all the harder when they cannot identify the responsible Session, Project or Developer. And what about tuning advice? Your DBA wants to be pro-active; they can see how the SQL can be improved, but whom should they call?

Now they can simply open HIPPO, copy and paste the SQL from their management console straight into HIPPO’s Search screen, and the responsible Workflow and Session are returned. Armed with this information from HIPPO, a call is made and a decision taken about the SQL process. The result: one happy DBA!
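
For a feel of what that trace-back involves, here is a toy Python sketch: normalise the pasted SQL so that whitespace and case differences don’t matter, then look up its Informatica owner. The statement, workflow and session names are invented, and matching across a real estate is of course more involved:

    import re

    def normalise(sql):
        """Collapse whitespace and case so pasted SQL matches stored SQL."""
        return re.sub(r"\s+", " ", sql).strip().lower()

    # Invented mapping of captured SQL text to its Informatica owner.
    sql_owners = {
        normalise("SELECT acct_id, SUM(amt) FROM gl_postings GROUP BY acct_id"):
            {"workflow": "wf_finance_dw", "session": "s_load_gl"},
    }

    def trace_sql(pasted_sql):
        """Return the Workflow and Session responsible for a SQL statement."""
        return sql_owners.get(normalise(pasted_sql),
                              "no match - possibly an orphan execution")

    # The DBA pastes the statement straight from their management console.
    print(trace_sql("select acct_id, sum(amt)\n  from gl_postings group by acct_id"))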

Check back tomorrow for number 2 in the series!

Footnote: it is a tradition in many parts of the world to celebrate the twelve days of Christmas and we have a Christmas carol here in the UK that associates each of the twelve days with a gift. I acknowledge that the twelve days of Christmas occur after Christmas and not before, but I hope that you will allow me a little poetic license here!

Everything That You Wanted To Know About HIPPO But Were Too Polite To Ask…

November 2, 2011

It’s great when you get asked a challenging question which actually really helps you to explain what’s unique about your product. Shailesh Chaudhri did this yesterday in a related Informatica Group. Shailesh asked “Mark, I believe HIPPO is a great product but then does it just not fetch all this information from the Informatica Repository? Why invest so much when Informatica Reporting services, connected to the repository gives nearly similar results. A few tweaks here and there and you get Dashboards created which give you the necessary information.”

In my reply I agreed with Shailesh that Informatica has some great tools and let him know that this question arises quite regularly during WebEx demonstrations and conversations with prospective customers. I think that one of our existing customers from a major global bank put it best when he said that the Informatica solution and HIPPO are like two sides of a coin: the Informatica tools focus on the Informatica repository, while HIPPO looks outward from the Repository to what is happening in the infrastructure around Informatica.

And you know what? I think he hit the nail on the head. Only 25% of the information that HIPPO provides comes from Informatica; the remaining 75% comes from the Host CPUs, Memory, I/O, Storage and the databases that your PowerCenter processes interact with. The unique thing about HIPPO is that it puts this information into the context of Informatica and of your own projects. Let’s take an example: if you have an application called Finance Data Warehouse which is made up of various Informatica processes, stored procedures and scripts, then HIPPO allows you to create a logical grouping of these processes, Informatica and non-Informatica alike, and then produce a deep-grained analysis of the performance, cost and efficiency of this project and the trends of all of these key metrics. This isn’t available in the Informatica Reporting Services tool because its focus, good though it is, is on the Informatica repository.
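
A hypothetical Python sketch of that logical grouping, with invented names and numbers, may help to picture it: Informatica and non-Informatica processes sit in one Project, and the metrics are aggregated across the lot:

    from collections import defaultdict

    # One invented application: Informatica workflows, stored procedures
    # and shell scripts grouped under a single logical project.
    finance_dw = [
        {"name": "wf_load_gl",      "type": "informatica", "cpu_secs": 900, "gb": 30},
        {"name": "sp_refresh_dims", "type": "stored_proc", "cpu_secs": 240, "gb": 5},
        {"name": "ftp_extract.sh",  "type": "script",      "cpu_secs": 60,  "gb": 12},
    ]

    def project_profile(processes):
        """Aggregate cost metrics across every process type in the project."""
        totals = defaultdict(float)
        for p in processes:
            totals["cpu_secs"] += p["cpu_secs"]
            totals["gb_moved"] += p["gb"]
        totals["informatica_share"] = sum(
            p["cpu_secs"] for p in processes if p["type"] == "informatica"
        ) / totals["cpu_secs"]
        return dict(totals)

    print(project_profile(finance_dw))   # the whole application, not just Informatica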

All of the information that HIPPO stores is held in an open data model in a database of your choice, so if you would like to use Infa Reporting Services to build your own reports rather than use our browser-based reports then that’s great. We are completely open about our data model, so anyone who uses Reporting Services on HIPPO’s Repository gets our full support!

So, thanks Shailesh, yours was a really perceptive question that gets to the heart of what’s different about HIPPO!

HIPPO at the Nordics Informatica Day, October 11th, Stockholm

November 2, 2011

This is the HIPPO stand at the Informatica Day in Stockholm on October the 11th. It was great to meet so many people from the Nordic region and to receive so much positive feedback about HIPPO. Thanks to everyone who stopped at our stand to find out about HIPPO and our malt whisky competition; congratulations to the four people who each won a bottle of the finest malt, and commiserations to all those who lost out! Hope to see you all again next year!

“There’s an Abundance of Data but a shortage of thinking about Data”.

November 2, 2011

This is a line from Monday’s presentation by Steve Levitt, the author of FREAKONOMICS, at Teradata Partners 11. His presentation has been a real hit if the Twitter traffic is anything to go by. The flow of tweets on #TDPUG11 dried up for a while and then there was a deluge. This is the new metric of a quality presentation: when a presentation is so good that it ‘stops the traffic’ on Twitter!

Steve’s presentation is worth a whole article in its own right, but let’s focus on that one line: ‘there’s an abundance of Data but a shortage of thinking about Data’. I’d been reading recently about Process BI: the marrying of Business Process Management techniques and BI. It’s an idea that has been around for a while, but what struck me was the thought that IT gladly helps the business develop these types of applications to improve business operational efficiency by identifying unnecessary costs or to find ways to increase customer satisfaction.

However, in IT we rarely use the same techniques on our own applications and processes. Yet, by doing so, we can achieve a host of direct and indirect benefits through the streamlining of information to analysts and decision makers and by readying the ground for low-latency, information-rich BI.

OK, I hear you say, but we can barely keep up here, never mind stand back and survey what we’re doing. But before you dismiss the idea, just how far away are we from being able to realise this? We have the data; an abundance, as Steve put it, in the Informatica Repository and elsewhere in the infrastructure. So how can we think about the data? Fortunately, in the Informatica world we have the tools to interpret this data.

Consider this: Fred Hargrove, who writes about these matters for Information Management, identified five key things that are needed to achieve and maintain successful execution of process-oriented BI:

  • End-to-end business process knowledge
  • Continuous improvement mindset
  • BI capabilities of operations
  • Data governance discipline
  • Data latency reduction.

What does this mean for those of us at the Data Integration coalface? Well, put simply, we must:

  • Understand the end-to-end process from source system(s) to the BI reports, data feeds and the other outputs of your Data Integration
  • Be committed to making things work better
  • Give operational owners the user-friendly reporting and analysis tools they need to assess problems and issues without getting lost in the data
  • Put an effective data governance program in place, since without assurance of the quality of the data it cannot be made actionable
  • Right-size how quickly the information is produced, so that it is still relevant to the current business situation by the time it is used.

Now, if you’re still wondering what all of this has to do with you, your organisation or even Informatica, or if you suspect that I might just be day-dreaming of some ideal world, let me map these prerequisites to the toolsets of Informatica and my own company, Assertive:

  • MetaData Manager gives us the end-to-end process view
  • The Lean Integration, or Factory model, gives us the methodology and justification to implement a continuous improvement process
  • HIPPO from Assertive provides the Operational BI reporting capabilities
  • Informatica’s Data Governance tools are second to none
  • Informatica’s tools provide whatever Data Latency level you can justify

“Well, OK,” I hear you say, “we accept that we have the data in the Repository, and elsewhere, and that we can have the toolset, but how do we justify spending time and effort analysing how we can optimise performance when we are struggling with ever-increasing workloads as it is?”

In Data Integration, standing still is not an option. As I’m sure you are only too aware, data volumes are growing for all kinds of reasons, not least to support current BI trends like Predictive Analytics and social network traffic. Yet IT budgets are under challenge like never before, and increasingly the business, the pre-eminent commissioner of IT initiatives, sees its internal IT department as just one of several competing options for delivery. To even stay in the race we must keep getting better at what we do!

The challenge then is to improve our Data Integration productivity – in terms of volume, latency and quality. To do so, we need to make sure that daily routine and conventional wisdom don’t blind us to what’s in the data – we need Process BI to inform some clear thinking about our own processes. Just as importantly, we must put this Process BI into the hands of the operational process owners to action the changes needed. Otherwise significant operational improvement simply cannot be achieved within constrained budgets.

As Steve Levitt said, there’s an abundance of data but a shortage of thinking about data. We can change that in our Data Integration corner. We have the tools from Assertive and from Informatica.  We have an abundance of data and we have never had a more pressing imperative to take action. I think that it’s time to get those thinking caps on!

Do you run, or use, a shared Informatica Grid environment and does your company charge projects & programs to use that Grid? If so then I have a question for you: is your recharge model fair & accurate?

November 2, 2011

This isn’t an academic question; it has a real bottom-line impact. The setup and operation of a shared Grid environment involves considerable cost. When these costs are recharged, the Grid Administrator needs to be sure that projects are being billed for their actual share of them. Likewise, the project teams who get recharged for their consumption need to be sure that they are getting a fair deal. When the monthly bill arrives, the Project and Program Managers need to see the empirical evidence behind the charges: what percentage of CPU did my project actually consume, how much of my data was processed as a percentage share of the Grid total in that time period, and what are the I/O, network and storage metrics for my project versus the project next door, which is paying a different amount for its Grid usage?
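
The arithmetic behind that empirical evidence is easy to sketch. Here is an illustrative Python fragment, with invented figures, that turns a month of raw usage into each project’s percentage share of the Grid total:

    def grid_shares(project_usage):
        """Each project's consumption as a percentage of the Grid total:
        the evidence behind the monthly recharge bill."""
        totals = {k: sum(p[k] for p in project_usage.values())
                  for k in ("cpu_secs", "gb_processed")}
        return {name: {k: round(100.0 * u[k] / totals[k], 1) for k in totals}
                for name, u in project_usage.items()}

    # An invented month of usage for two projects sharing one Grid.
    print(grid_shares({
        "Finance DW": {"cpu_secs": 90000, "gb_processed": 1200},
        "Risk Mart":  {"cpu_secs": 30000, "gb_processed": 400},
    }))
    # Finance DW carries 75% of both metrics, so 75% of the recharged cost.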

The question is just as pressing for new IT projects planning to use the Grid. They need solid metrics to estimate projected ETL costs: how much Grid resource will their project consume, and what will their usage costs be? Is there spare, and therefore cheaper, capacity available on the Grid or must additional capacity be purchased? And they need to know whether their estimated costs match the reality after go-live.

This is a COSTLY business. At a time when cost sensitivity has never been higher, it seems that sharing costs across a Grid remains a black art. Often, not only do projects not pay their fair share, but a lack of understanding of Grid resource consumption causes a fear-driven approach to capacity planning, where potentially unnecessary or inappropriate expansion costs are incurred as an insurance policy against under-capacity. These are significant costs that feed all the way through to the profitability of the products and services that your business sells to its customers.

Well, I have good news. We have unlocked a proven method to give an accurate share of costs across a Grid; we can enable you to price total resource usage, from an individual workflow all the way to an entire project’s consumption, and we can tell you where you have unused capacity or when and where you need to invest. There is no longer an excuse for over- or under-charging, no more lack of visibility of spare capacity on your Grid and, for the biggest saving of all, no more fear-driven Grid investment decisions!

HIPPO’s hub management module gets your recharge model 100% right, 100% of the time. We produce summary reports of project usage by your chosen timeframe to enable you to aggregate project recharge totals, and detailed reports to enable you to drill down to the individual workflows and sessions and view their usage as a percentage of the whole. We can even summarise the critical failure picture for the month to help you identify which projects consume the most out-of-hours support costs.

Whether you are a Grid Administrator, a member of a project team that relies on a shared Grid environment or an analyst estimating the Informatica costs of a new business initiative, consider HIPPO’s unique Hub Manager functionality to optimise your recharge process and get the most out of your Grid environment. See HIPPO at https://community.informatica.com/solutions/1433

Solving the Metadata jigsaw puzzle

November 2, 2011

Recently Roger Nolan from Informatica wrote an interesting piece on the importance of metadata to the development process, highlighting how we can use great tools within Informatica like Metadata Manager and the Informatica Business Glossary. I’d like to extend this by considering the benefits of linking other kinds of metadata.

We’re seeing Informatica’s latest product developments helping to support a movement: a shift from IT-led to self-service Data Integration, where the business positions itself as the driver of technology solutions.

So can we go even further with metadata to support this? Can we link the metadata sources and the tools that Roger discussed to the wealth of operational metadata from Informatica and the technology layers that surround our Data Integration workflows, sessions and mappings? This metadata would tell us not just about last night’s successes and failures but about trends in performance, resource usage, volumes processed and rejected data. Can we go further still and add the metadata about the development process itself? Can we add the ICC metadata, the “Integration Factory”, into the mix and enable these new metadata sources to be linked to Metadata Manager’s understanding of sources, targets and mappings and to their business context stored within Informatica’s Business Glossary? If we can achieve this then we’ll be rewarded with a 360-degree perspective on our Data Integration environment.

But where’s the business benefit? Let’s consider how this metadata could be used. Take an everyday example: an overnight workflow running slowly at month-end. The Ops analyst is alerted. They recognise that it’s a DW process, which probably means that someone in the business is going to receive their reports late. But which reports are affected, which business users does this affect, and has this overrun happened before, making the problem part of a trend, perhaps caused by underlying infrastructure issues?

These questions can be answered by solving the Metadata jigsaw. This means linking operational metadata to those all-important metadata sources – Metadata Manager and the Business Glossary.
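
A toy Python sketch, with invented linkages, shows the shape of the answer: join the overrunning workflow’s operational record to its downstream reports (Metadata Manager territory) and then to the business owners of those reports (Business Glossary territory):

    # Invented linked metadata: lineage from Metadata Manager, business
    # ownership from the Business Glossary, keyed by workflow name.
    lineage = {"wf_finance_dw": ["rpt_daily_pnl", "rpt_month_end_close"]}
    glossary = {"rpt_daily_pnl": "Finance Ops",
                "rpt_month_end_close": "Group Finance"}

    def impact_of_overrun(workflow):
        """Answer the Ops analyst's question: which reports, and which
        business users, does tonight's overrun actually affect?"""
        return [(r, glossary.get(r, "unknown owner"))
                for r in lineage.get(workflow, [])]

    for report, owner in impact_of_overrun("wf_finance_dw"):
        print(f"{report} will be late -> notify {owner}")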

That’s why we are building a new version of HIPPO, for release at the end of 2011, to act as the link that enables a breakthrough in the use of metadata: extending its value beyond the design and development processes to include real-time and historical operational information.

The IT department and the Business units they serve will be linked by means of an operational metadata bridge between the two: the Business knows which Informatica processes are critical to its daily reports and can monitor their activity trends via HIPPO. The Business can check whether last night’s run was successful and what the trends for its daily report processes look like; will they overrun if data volumes continue to grow?
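
That “will they overrun if data volumes continue to grow?” question is essentially a trend projection. Here is an illustrative Python sketch, with an invented history, that fits a simple least-squares line of run time against volume and projects it forward; a real forecast would want far more data and care:

    def projected_runtime(volumes_gb, runtimes_secs, future_volume_gb):
        """Least-squares fit of run time against data volume, projected to
        a future volume to see whether the batch window still holds."""
        n = len(volumes_gb)
        mx = sum(volumes_gb) / n
        my = sum(runtimes_secs) / n
        slope = (sum((x - mx) * (y - my)
                     for x, y in zip(volumes_gb, runtimes_secs))
                 / sum((x - mx) ** 2 for x in volumes_gb))
        return my + slope * (future_volume_gb - mx)

    # Invented history: volumes trending up, run times with them.
    vols, times = [100, 120, 140, 160], [3000, 3500, 4100, 4600]
    print(projected_runtime(vols, times, 250))   # seconds expected at 250 GB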

Likewise, when IT is faced with an overnight incident they can prioritise those failures with the most business impact and keep the relevant business consumers informed. They will do this by using the bridge between HIPPO’s Operational metadata and Informatica’s Metadata Manager and Business Glossary tools.

So, if we have Informatica’s Business Glossary view linked to the Metadata Manager view of metadata about sources, targets and mappings, and both now linked to Operational metadata by HIPPO, what’s missing? What remains is the ICC, the “Integration Factory”, metadata. By adding that, we will be able to link a mapping to the developer and analysts who produced it (Factory), through the reusable components, sources and targets that it uses (Metadata Manager), to the business units and processes that rely upon it (Business Glossary) and on to the operational metadata on its performance profile and infrastructure dependencies (HIPPO).

This is what we are prototyping at the moment and we plan to release these features in the new release of HIPPO at the end of this year. Contact me to find out more.