
HIPPO Profiler – Log Files made Easy!

August 16, 2013

Trawling log files is no one's idea of fun, right? Especially in the middle of a production problem, when time is limited and you need to get the show back on the road. There's simply too much information to absorb and too many ways to interpret it!

Have you ever wished that you could quickly turn that PowerCenter session log file into a powerful problem-solving tool that gets you quickly to the answers you need?

The HIPPO Profiler does just that; it turns the data locked up in PowerCenter log files into actionable information displayed in an easy to understand graphical format.

HIPPO Profiler parses the log to produce a series of color-coded charts that take you straight to the heart of your session execution, identifying the primary cause of failure so that you know exactly where action is needed to resolve the issue and complete the session.

But what if your problem is poor session performance? HIPPO Profiler can help there too by unlocking the information that you need to target the underlying cause of poor performance. Color-coded indicators guide you to where performance is being impacted by caches that spill to disk or rows being rejected. HIPPO Profiler also graphically illustrates how execution time is split across your reader, writer and transformation threads so that you know where to spend your time to improve session performance.

At the transformation level, HIPPO Profiler tells you which transformations consume the most execution time and lets you correlate them against the session's CPU and memory resource usage by graphically displaying the CPU and memory profile for the session during the execution window.

Let's consider lookup caches, for example; cache behavior is a common cause of poor performance. When a wrongly configured cache spills to disk, HIPPO Profiler highlights this in red, and if a SQL override is involved it is displayed in yellow. You'll find actual and advised cache data and index memory sizes so that you can make the adjustments needed to right-size those cache files. You'll also find the actual SQL executed to build the cache, plus the cache build time, row count and size, so that you can identify poorly performing SQL and give your DBA the information needed to make improvements.

That’s just one of the features that you’ll find in the HIPPO Profiler. Try the tool for free and discover why spending hours analyzing log files has been consigned to history. Use HIPPO Profiler to turn your log files into actionable information to help you to resolve failures and improve performance.

Sign up for your free subscription to use our HIPPO Profiler service by sending an email to LogFilesMadeEasy@assertive-software.com.


Intelligent HIPPOs – whatever next?

June 25, 2013

When monitoring a large-scale production system, what is it that your best support analysts do that adds value? For us, the answer has always been that they leverage their knowledge of the system: its patterns, its nuances, its schedules. They do this by watching the system over time, building up a knowledge base about it, and then using that historical knowledge to identify and resolve issues – hopefully proactively!

Now HIPPO can do the same and we’re calling it Auto-Sense.

As part of HIPPO v5 we're building intelligent heuristic algorithms and machine learning techniques into HIPPO that leverage HIPPO's vast knowledge base about your environment – its patterns, its nuances, its schedules. HIPPO knows more about your system and how it operates than even your best support analysts, and now HIPPO can proactively warn you about anomalies and problems with your system without your ever having to set up and manage a complex set of rules.

· Automatically learns and monitors your scheduling patterns and warns on deviations or overruns
· Learns where to focus its gaze by monitoring and analysing patterns in failures and problem areas
· Recommends corrective action when anomalies are detected
· Identifies areas of opportunity to reduce your batch window
· ICC awareness means HIPPO can suggest architectural improvements e.g. node configuration, workload distribution, etc.
· When a problem occurs, HIPPO automatically collates and presents all relevant information on a single “Problem Dashboard” to help you resolve the issue quickly and easily

Auto-Sense is our vision of a truly autonomous monitoring system for Informatica.

HIPPO is evolving and we would love to hear your feedback about our vision for enterprise monitoring.

HIPPO 3.1 is here!

February 8, 2012

HIPPO version 3.1 shipped on Monday! It's easy to install, ships with its own database and web server, and you'll be up and running in 30 minutes!

HIPPO has some great new features, as you will see on the Media page, but for now there's one thing I want to draw your attention to: HIPPO's Timeline. On the same chart we overlay resource usage with all of the workflows or sessions – you choose which view you want – active at that point in time. You can scroll through time and zoom in on particular periods to see everything that is going on and exactly what percentage of CPU, Memory, Data Movement, network, etc. is being used by that Workflow or Session.

It's a great feature that makes drilling down into overnight batch problems or finding space in your Informatica schedule an absolute cinch! HIPPO's Timeline – 21st century data visualization for Informatica!

The Twelve Days of Christmas from HIPPO: No.11 – Turbo-charged Informatica Administration

December 23, 2011

As we wind down for the holidays I am wrapping up this series with a bumper bundle of three gifts from HIPPO to get 2012 off to a great start. Are you ready for your next gift from HIPPO?

Gift No. 11 is a HIPPO Hub Manager Trial license. This version of HIPPO is designed for Informatica Administrators and Managers. It provides you with the tools you need to understand everything that is significant in your Informatica environment.

This means we include HIPPO's Capacity Planning feature to trend and plan resource usage; HIPPO's Vital Signs feature, which provides a real-time monitor of the status of all of the Repositories, Domains and integration services across all of your Informatica environments; and HIPPO's File System, CPU and Memory monitoring features, which show you how much resource is available and how much is being consumed.

HIPPO Hub Manager also includes the Activity Monitor and allows you to set alerts via the Notification Centre on a wide variety of activity and resource thresholds. And lastly, it gives you the ability to understand the use of your environment by project, so that you can trend demand and recharge the use of your shared environment.

HIPPO's penultimate gift this Christmas gives every Informatica Administrator the chance to enjoy a free trial of HIPPO Hub Manager in 2012: visit our website and sign up to enjoy an unrivalled ability to control and administer your Informatica environment.


The Twelve Days of Christmas from HIPPO: No.10 – Step up Developers & Testers!

December 22, 2011

As we all wind down for Christmas I thought that I would wrap up this series with a bumper bundle of three gifts from HIPPO to get 2012 off to a great start. Are you ready for your first parcel?

Gift No. 10 is a HIPPO for Projects trial license. This version of HIPPO is designed specifically for Development and Testing teams and contains all of the features needed to understand workflow, mapping and session performance, including visibility of memory and CPU usage and trends from the big picture down to individual transformations, sources and targets.

Testers can set performance and resource thresholds that must be passed before go-live. HIPPO for Projects includes HIPPO’s Activity Monitor feature and the Notification Centre to alert you to events and execution issues in your environments.

Then there's the Analysis feature in HIPPO: a deep dive into the top workflow and session consumers by resource consumption, by elapsed time and by data movement. This is where your performance tuning work is likely to yield the largest benefit, which is why HIPPO takes you from here to an intensive analysis of the Workflow, visualizing execution behaviour, Workflow and Session trends and a historic analysis of workflow behaviour. HIPPO then drills down to profile individual Sessions, analysing individual transformation behaviours, data movement characteristics, and task trends in terms of CPU, cache, data movement, time to first row and thread profiles.

So, Developers and Testers, step forward! Unwrap your free trial of HIPPO for Projects by visiting our website to sign up in 2012, and prepare to see a step-change in the quality and performance of your Informatica applications.

The Twelve Days of Christmas from HIPPO: No. 9 – Unlock the Mystery of the Cache!

December 20, 2011

Browse many of the Informatica developer forums and what is most striking is how many posts relate to cache behaviour and cache sizing. Often kind folk reply, offering solutions to the struggling developers, and their advice varies enormously: "just set it to auto", "try doing X and adjusting Y and then let me know if it had any effect" and "re-design your mapping" are all common responses. Some may even contain good advice, but the sheer variety makes it clear that there is little consensus.

Yet a shared understanding of cache behaviour is critical, since it gives us the ability to gauge how much memory is actually being used, how much is actually required, and how usage is split by cache type. This can shine a new light on an existing development project or even on a mature code-base. Understanding the make-up and nature of the code running in your environment allows you to make much better decisions – whether on capacity planning, buying new kit or just squeezing more from your existing investment – because knowledge is the key to making informed choices about resource usage.

That's why HIPPO captures statistics from Informatica about the cache sizes for aggregators, joiners, sorters and lookups. HIPPO reports these alongside the actual size of the cache in memory, according to the OS, and whether that cache has spilled to disk. HIPPO also captures how long the cache took to build and presents all of this in a series of data visualizations that include how much memory is actually available when your session executes.

This means that you can optimize cache memory usage across the session starting in development to ensure that the session is highly performant at go-live. Then HIPPO will alert you on the need to improve performance for the mature production code as circumstances change; as more sessions run in contention; as data volumes in look-up caches grow in step with historical data growth; and so on. HIPPO then trends this cache behaviour over your chosen time span to help you plan ahead for future needs. If you are serious about your Informatica environment then you need HIPPO’s pro-active monitoring approach to sustain highly-performant applications that get the most from your infrastructure investment.

That’s why HIPPO’s ninth day of Christmas gift is the unlocking of the mystery of the Cache!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas!

The Twelve Days of Christmas from HIPPO: No. 8 – Interactive Investigation!

December 16, 2011

Interactive Data Visualisation is a hot topic right now so can we use it to analyse the multiple factors which affect Informatica performance?

Data visualisation is a technique that has actually been around for a long time. Perhaps the most famous example is Charles Minard's chart of Napoleon's infamous 1812 march on Moscow. What is so great about this visualisation is that it combines the four key elements of the story in one easily understood chart: the dates (a timeline), the geography (the army's route), the huge loss of life and the massive temperature variation. It is clear that all four elements must be on the same chart to convey the whole story.

Is there a lesson there for those of us involved with Informatica?

Well, we were presenting HIPPO to the Informatica staff at a large bank recently. They really liked HIPPO but they felt that a great addition would be an overlay of the four key elements of an Informatica batch run on the same graph. Sure, the information was already in HIPPO, but wouldn’t it be great if you could combine these key elements into a single chart and make it interactive: adding and subtracting the key elements, as you need.

It was such a great idea that we just had to run with it!

This new data visualisation in HIPPO will combine Time, CPU usage, Memory Usage and Workflow activity in a single interactive chart. This means that when you need to know why your workflow overran, or missed its SLA, then you can quickly build a picture of what was going on when your Workflow executed. Start by selecting your timeframe, then overlay the graph with all of the workflows running in contention with yours, then overlay CPU usage to assess availability during this timeframe and finally overlay Memory usage.

In a few mouse clicks you'll understand whether factors outwith your workflow, such as a resource shortfall or workflow contention, caused the performance dip. In many cases you will have your answer right there; if not, you can use HIPPO to drill down into what was happening inside your workflow: were data volumes unusually high? Is the target database the issue? And so on.

It has been a busy few weeks in the HIPPO workshop turning this great idea from the audience at the bank into a late Christmas present for our customers. So why don’t you benefit from their suggestion too? Take a look at HIPPO in 2012 and get a faster way to solve those performance problems.

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas!

Understanding how Informatica and Oracle interact: an Oracle perspective.

December 14, 2011

Monitoring Informatica from Oracle is a reasonably straightforward exercise if you know where to look and what to look for. I'll walk through a typical scenario that I encounter on a regular basis at client sites.

During the course of a daytime ETL run (during performance testing) I was called to have a look at a mapping that was "running slow". In the main, that's usually the extent of the information available.

The first thing I did was log in to Informatica's Workflow Monitor to have a look at the performance statistics. Sure enough, there was a mapping processing 100 rows per second.

The source row count pretty much matched the target row count, so in this scenario there is no evidence of rows queuing behind a bottleneck such as a joiner or aggregator. If the target has processed around the same number of rows as the source, then rows are definitely making it through the mapping. However, this doesn't rule out transformation-related bottlenecks.

SOURCE QUALIFIER

Next, let's have a look at the source in Oracle. I'm using the view v$session_wait to see what the session is waiting on at the moment I run the query.

DW_STATS@MIDDWHP> select event, wait_time, seconds_in_wait, state
  2  from v$session_wait
  3  where sid = 963
  4  /

EVENT                         WAIT_TIME  SECONDS_IN_WAIT  STATE
----------------------------  ---------  ---------------  -------
SQL*Net message from client           0                0  WAITING

Elapsed: 00:00:00.04

DW_STATS@MIDDWHP>/

EVENT                         WAIT_TIME  SECONDS_IN_WAIT  STATE
----------------------------  ---------  ---------------  -------
SQL*Net message from client           0                0  WAITING

Elapsed: 00:00:00.00

DW_STATS@MIDDWHP>/

EVENT                         WAIT_TIME  SECONDS_IN_WAIT  STATE
----------------------------  ---------  ---------------  -------
SQL*Net message from client           0                0  WAITING

In order to identify the SID relating to the source qualifier, in this case I simply viewed the SQL that each session on the database was executing until I found the relevant one. Of course, this isn't always possible – especially if the session you are looking for belongs to a cache build which hasn't started building its cache yet! But for this example, it sufficed.
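For completeness, here's a sketch of that check for 10g and above, where v$session exposes the SQL_ID of each session's current statement (the username filter is hypothetical – use whatever schema your mappings connect as):

select s.sid, s.username, q.sql_text
from   v$session s, v$sqlarea q
where  s.sql_id = q.sql_id           -- match each session to its current statement
and    s.username = 'DW_PROD'        -- hypothetical: the mapping's connection schema
order  by s.sid
/

Scanning the sql_text column for the source qualifier's SELECT quickly gives up the SID.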

So we can see from the above that at the time I executed the query against v$session_wait, the session was waiting on SQL*Net message from client. In other words, Oracle is idle, waiting for the client to tell it to do something; in this case the client is Informatica. To get a bigger picture of what is happening to the session we can go to another view, v$session_event. This view provides wait time information for the life of the session, aggregated by event, so using it we can see what the session has spent its time on during its entire life – not just in real time.

DW_STATS@MIDDWHP> select event, total_waits, time_waited
  2  from v$session_event
  3  where sid = 963
  4  order by 3 desc
  5  /

EVENT                         TOTAL_WAITS  TIME_WAITED
----------------------------  -----------  -----------
SQL*Net message from client          7497      8094790
db file scattered read               4268        11449
db file sequential read             11703         2685
SQL*Net more data to client          5013           53

Elapsed: 00:00:00.00

DW_STATS@MIDDWHP>/

EVENT                         TOTAL_WAITS  TIME_WAITED
----------------------------  -----------  -----------
SQL*Net message from client          7784      8095780
db file scattered read               4564        12864
db file sequential read             11978         2691
SQL*Net more data to client          5078           91

So we can see that over the course of its life this session has been waiting mainly on Informatica to send it something. However, we can also see that the counts for db file scattered read and db file sequential read are going up – so Oracle is actually doing work, not JUST waiting on Informatica. The event db file sequential read is a single-block access, usually indicating an index access path. The event db file scattered read is a multi-block read, usually indicating a full table scan or a fast full index scan. So just from looking at the wait events, we can start to see that the source qualifier isn't the bottleneck. Why?

If the Source Qualifier were the bottleneck we would see much more work going on in Oracle and much less time spent on the event SQL*Net message from client. One word of caution before we move on, though: there are situations where this sort of profile from v$session_event could still signify a problem – I'm not setting out a rule of thumb here. This is just one typical example, so when applying it to your own environment be careful to take all other factors into consideration.

LOOKUP

The next logical step would be to have a look at the cache build session. However, in this case we see that rows are actually reaching the target, suggesting that the cache build has completed and therefore couldn't be the bottleneck in this particular scenario. Still, I thought I would show you the profile of the cache build anyway, just for interest. When I ran the v$session_event SQL as above, I got the following:

EVENT                         TOTAL_WAITS  TIME_WAITED
----------------------------  -----------  -----------
SQL*Net message from client         87374      4878878
db file scattered read               3834         2634
db file sequential read               994          690
SQL*Net more data to client          8637           36

As can be seen, the cache build session actually waited a much greater percentage of its time on SQL*Net message from client. The reason lies in the way Informatica handles its connections to Oracle. When an Informatica session initializes, it creates all of the database connections to the source, lookups and target that it will require during the life of the mapping. In this example, that meant it created three Oracle sessions: one for the source qualifier, one for the cache build and one for the target. The first statement executed against the database is the source qualifier. Only when the source qualifier returns rows is the cache build SQL fired off against the database. And when the cache build finishes and the rows from the source qualifier begin their journey through the transformations on their way to the target, the cache build's Oracle session is not ended; it is kept open. It is therefore not unusual to see a cache build session wait on SQL*Net message from client for the majority of its life.
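Incidentally, you can observe this connection behaviour directly: all three Oracle sessions are created when the Informatica session initializes, so they share near-identical logon times. A sketch, with a hypothetical schema name again:

select sid, logon_time, status
from   v$session
where  username = 'DW_PROD'          -- hypothetical: the mapping's connection schema
order  by logon_time
/

Even though the cache build and target statements only fire later in the mapping's life, their sessions' logon_time values match the source qualifier's.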
Let's move on to the target, then.

TARGET

Let's execute the same SQL against the target and see what we get:

DW_STATS@MIDDWHP> select event, wait_time, seconds_in_wait, state
  2  from v$session_wait
  3  where sid = 854
  4  /

EVENT                         WAIT_TIME  SECONDS_IN_WAIT  STATE
----------------------------  ---------  ---------------  -------
db file sequential read               0                0  WAITING

Elapsed: 00:00:00.03

DW_STATS@MIDDWHP>/

EVENT                         WAIT_TIME  SECONDS_IN_WAIT  STATE
----------------------------  ---------  ---------------  -------
db file sequential read               0                0  WAITING

Elapsed: 00:00:00.01

DW_STATS@MIDDWHP>/

EVENT                         WAIT_TIME  SECONDS_IN_WAIT  STATE
----------------------------  ---------  ---------------  -------
db file sequential read               0                0  WAITING

Elapsed: 00:00:00.01

It looks like during our sample time Oracle is spending its time on db file sequential read. As we have already stated, this wait event is a single-block read. But this is a target – why are we seeing reads? In this case, the SQL being executed consisted of straight INSERT statements, and in order to modify a block Oracle has to read that block from disk if it is not already in the buffer cache. Perhaps this is what we are seeing – just an INSERT in action? Let's have a look at the overall life of the target session and see if that produces any more information:

DW_STATS@MIDDWHP> select event, total_waits, time_waited
  2  from v$session_event
  3  where sid = 854
  4  order by 3 desc
  5  /

EVENT                         TOTAL_WAITS  TIME_WAITED
----------------------------  -----------  -----------
db file sequential read           2043339       947769
SQL*Net message from client          7374         2878
log file switch completion             45          102
library cache pin                       1            3
latch: cache buffers chains           169            2

So we can see that the target is spending the majority of its time reading blocks. The SQL*Net message from client time is, again, more than likely the result of Informatica's connection handling, i.e. the time the target session was idle between the initialization of the mapping and the arrival of rows at the target. We can check this assumption by looking at another of Oracle's performance views: v$active_session_history (10g and above). This view holds data sampled from v$session_wait, providing a way to see a breakdown of wait events during a particular time period. So if our assumption about the reason for SQL*Net message from client is correct, then this event should NOT appear in the last 10 minutes of this session's history (the target has been active for 2 hours at this stage). Let's see:

DW_STATS@MIDDWHP> select event, sum(time_waited)
  2  from v$active_session_history
  3  where session_id = 854
  4  and sample_time between sysdate-1/24/6 and sysdate
  5  group by event
  6  order by 2 desc
  7  /

EVENT                         SUM(TIME_WAITED)
----------------------------  ----------------
db file sequential read                6647312
                                             0

Elapsed: 00:00:00.20

So we can see that in the last ten minutes 100% of this session's time was spent waiting on single-block reads, so our assumption holds up.
From the data we have collected so far, we can produce information like this:

[Chart: wait-time breakdown for the source qualifier, cache build and target sessions – the target dominates]

Not too difficult to pick out the bottleneck now. It’s obviously the target.
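For reference, the per-session totals behind a chart like this can be pulled in a single pass from v$session_event. A sketch – the first two SIDs are the ones we have been querying; the third, for the cache build session, is illustrative:

select sid, event, time_waited
from   v$session_event
where  sid in (963, 854, 901)        -- source qualifier, target, and (illustrative) cache build
order  by sid, time_waited desc
/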

But why is the target so slow? Can we break down that time spent in Oracle even further, to see exactly what it's doing? One way is to look at which objects it's actually reading: using v$session we can get the object_id of the current object being waited on –

DW_STATS@MIDDWHP> select owner, object_name, object_type
  2  from dba_objects
  3  where object_id in ( select ROW_WAIT_OBJ#
  4                       from v$session
  5                       where sid = 933 )
  6  /

OWNER       OBJECT_NAME            OBJECT_TYPE
----------  ---------------------  -----------
DW_PROD     PK_CUST_ACCOUNT_IDX    INDEX

Elapsed: 00:00:01.24

DW_STATS@MIDDWHP>/

OWNER       OBJECT_NAME            OBJECT_TYPE
----------  ---------------------  -----------
DW_PROD     AK_CUST_ACCOUNT_IDX    INDEX

Elapsed: 00:00:01.24

DW_STATS@MIDDWHP>/

OWNER       OBJECT_NAME            OBJECT_TYPE
----------  ---------------------  -----------
DW_PROD     AK_CUST_ACCOUNT_IDX    INDEX

Elapsed: 00:00:01.24

From the above we can see that the time isn't actually being spent on the table itself; it's being spent maintaining indexes. Just to check that assumption, we'll go back to v$active_session_history –

DW_STATS@MIDDWHP> select CURRENT_OBJ#, sum(time_waited)
  2  from v$active_session_history
  3  where session_id = 844
  4  and sample_time between sysdate-1/24/6 and sysdate
  5  group by CURRENT_OBJ#
  6  order by 2 desc
  7  /

CURRENT_OBJ#  SUM(TIME_WAITED)
------------  ----------------
     2340991           2409213
     2341005            146570

These object_ids correspond to the same indexes we identified through v$session. So we can now say categorically that the mapping is going only as fast as the target: make the target faster and we make the mapping faster.

The issue in this case turned out to be very simple: large concatenated indexes on the target table. When an insert is performed, Oracle has to maintain those indexes, and if those indexes are large then the time taken to perform that maintenance can be substantial. In this case the indexes were in fact the same size as the table itself! Marking the indexes unusable provided the desired result and produced a response profile that looked like this:

[Chart: wait-time breakdown after marking the indexes unusable – the target no longer dominates]

So from the above charts we can now see that the target is no longer the bottleneck in the system. Neither is the source. We're now in the position where the "bottleneck" is Informatica itself. However, the above profile could equally come from a mapping that is absolutely fine – there are no obvious bottlenecks here. In order to identify further bottlenecks, we need to look inside Informatica at the performance metrics it produces. I'll cover this in a later paper.

One thing to note is the rise in the cache build's share of active time. The actual time taken to build the cache didn't change, but because the overall time of the mapping went down dramatically, the cache build now accounts for around 10% of the overall mapping time – which in this case was reasonable.
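Incidentally, the index change itself is simple to apply. A minimal sketch, assuming the indexes identified earlier and a 10g+ database, where skip_unusable_indexes defaults to TRUE so inserts simply bypass the unusable index (note that an index enforcing an enabled unique or primary key constraint cannot be skipped – such a constraint would need to be disabled first):

alter index dw_prod.ak_cust_account_idx unusable;  -- suspend index maintenance during the load

-- ... run the load, then pay the one-off cost of a rebuild:
alter index dw_prod.ak_cust_account_idx rebuild;

For large bulk inserts, one rebuild at the end is usually far cheaper than maintaining the index row by row.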

Conclusion

We have seen from this simple example that it's possible to gather information about Informatica's performance directly from Oracle. By understanding how Informatica handles its connections, we can start to draw conclusions from the data we extract from Oracle. This essentially allows a developer or DBA to identify where the current bottlenecks are and where expensive development or support time should be spent in the future in order to gain the maximum benefit.

When this data is readily available to developers, testers and DBAs it can have a dramatic effect on the productivity and quality of the work produced, saving development and testing time and potentially negating the need to buy ever bigger boxes!

A Whitepaper brought to you by www.SeeTheHIPPO.com

For further information please contact stephen.barr@assertive-software.com.
Copyright Assertive Software Ltd, 2011.

The Twelve Days of Christmas from HIPPO: No. 7 – Something for Everyone!

December 14, 2011

HIPPO’s Notification Centre is a feature that everyone in the Informatica community can use to receive alerts on warning signs in their Informatica environment and applications.

There's a range of contexts in which HIPPO will generate an alert. The first is straightforward: you define an elapsed time threshold for an Informatica Workflow or Session, and when it is exceeded an alert is issued by email and to your HIPPO screen.

HIPPO raises the second category of Notifications when a Resource threshold is exceeded by a Workflow or Session. HIPPO allows you to define a CPU threshold in seconds or a Memory threshold in Megabytes. If a session, or the aggregated resource consumption by a workflow, exceeds this threshold then you receive an instant alert by email or to HIPPO running on your laptop, iPhone or iPad.

Similarly HIPPO will raise alerts when a named session or workflow spills cache to disk, or when the statistics for a session indicate that the execution profile is out of balance.

HIPPO will also monitor your Integration and Repository Services and alert you immediately should an outage occur.

These are just a few examples of the use of the Notification Centre. The good news is that extending these alerts is easy for us. Remember that HIPPO's user interface displays only one-third of the information that HIPPO captures and stores within its repository, so there is plenty of scope to extend the warning signs that HIPPO will alert you to, both within Informatica and in the infrastructure that Informatica interacts with.

So take a look at what HIPPO can do for you – there really is something for everyone!

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!

The Twelve Days of Christmas from HIPPO: No. 6 – Cheery Capacity Planners!

December 13, 2011

An enterprise-level Informatica environment is a serious investment that requires careful analysis to define initial capacity requirements and to plan capacity over time as new applications are implemented and existing ones grow, or shrink, in data volumes. The question is: how can a Capacity Planner gather the information needed to right-size an Informatica environment for today while keeping one eye on future needs? Is there a tool that can help?

Well there are many great enterprise-monitoring tools out there that can identify an Informatica process and the resource it consumes at the highest level. The trouble is that they see all of your workflows as simply a set of pmcmd processes running on a host. That might be enough if a broad trend is all you need because, by simple aggregation, these tools give a broad picture of resource consumption by generic Informatica processes.

But what if you need more than a broad-brush approach? That's where HIPPO comes in. HIPPO enables Capacity Planners to plot trends in terms of CPU, Memory usage and Data Movement by Node, as well as by any logical group of processes – a grouping that we call a 'Project' in HIPPO.

Projects can be defined at a high level – a domain, an integration service or even a Repository – or at an Application level: for example, a Data Warehouse application made up of Informatica (and even non-Informatica) processes, or a completely distinct entity such as a web-server process. HIPPO analyses, profiles and stores the CPU, Memory and Data Movement statistics for your Projects, and over time the capacity planning information stored in HIPPO becomes an ever-richer resource for predicting future capacity needs.

The strength of HIPPO comes from its ability to take the metadata within Informatica and look outwards to link this to the hardware and operating system metrics. This enables HIPPO to give you an Informatica-specific view of the trends in workload and resource consumption that are needed to plan for the challenges ahead, to free capacity where possible and to make wise investment decisions when necessary.

So make a Capacity Planner Cheery this Christmas! Tell them about HIPPO – the unique Capacity Planning tool for Informatica.

Footnote: the latest version of HIPPO has just gone on general release and it is packed with new and unique features to manage, optimize and control your Informatica environment. In the run up to Christmas I am choosing twelve of my favourite new benefits that HIPPO brings to the Informatica Community. I acknowledge that the twelve days of Christmas occur after Christmas and not before but I hope that you will allow me a little poetic license here!