All posts by confluoadmin


Criticality of Big Data Wins over Its Availability – Any Day!!

The discipline of analytics is really about telling a story through the data – sewing together pieces of data from different sources, discovering patterns both obvious and concealed, and telling a story in a way that’s easy to comprehend. The journey of analytics has moved from basic slicing and dicing of data along diverse dimensions into the realm of self-discovery and what-if scenario analysis. The distance between what’s interesting and what’s valuable is gradually shrinking. Much of that is possible due to the progression of both storage technology and data management capabilities, linked to the ability to process data in near-real time or real time. But much more is needed to realize the full value of data.

Is All Data Created Equal?
The movement around Big Data and associated technology would nearly make us assume that data is ubiquitous, hence easily affordable and nearly equally useful. The falling per-unit cost of data storage has encouraged this mindset probably more than anything else. But not all data is created equal, and as such, not all data is equally valuable. While analytics tools try to address this gap by letting users explore and consume data in a multi-dimensional way, they often fall short of linking the content of the data to its context, which is where the real utility comes in.

Context of the data is grounded in the meaning and usage of the data. It’s the environment that gives data its purpose and describes its role as part of a business process or decision-making step. The usefulness of data, whether a single component or a collection, increases when its trust, relevance, and timeliness match the job at hand. Such confidence in data improves not only its half-life but also how much it contributes downstream. The incremental awareness of what a data component contributes to the overall story is the measure of its usefulness in the bigger context. So the story unfolds one step at a time, enriched by the context and meaning associated with it. A recent CMO survey by the AMA and Duke’s Fuqua School of Business found that the share of marketing projects using marketing analytics to drive decisions decreased from an already low 37% in 2012 to below 30% a year later. Could it be because most of the data isn’t as useful as it was once thought, despite the fact that the volume and availability of data increased manifold during that one-year period?

Does Practicality Matter in the Big Data World?
Analytics performed on such rich, context-aware data is going to be more in line with what we actually want from valuable data, rather than analytics for the sake of it. This is even more obvious in the big data world. A recent Capgemini survey reveals that only 27% of respondents consider their big data initiatives “successful,” with a measly 8% describing them as “very successful.” Big data especially presents this conundrum: data is abundant, yet for many, finding what matters in it is a needle-in-a-haystack problem.

Correlation among data components across a myriad set of data doesn’t necessarily add up to causality or a meaningful explanation of underlying behavior: a storyline we so want the data to communicate, perhaps sometimes too eagerly. And we would almost like big data capabilities to provide the panacea for understanding what causes what – simply because more data is available. But is that the real key to detecting true meaning or diagnosing a root cause? Zoomdata CEO Justin Langseth noted a similar concern when he mentioned that when it comes to big data, design is as important as performance. The big, or volume, factor doesn’t amount to much, as it turns out, if it misses the practicality part. So, is it still a good enough reason to treat all data as identical simply because we can process, store, and access it? Or is much more needed to understand the relative value of each piece of data and how it contributes to the overall understanding of business issues? Is it time to treat big data, and other data for that matter, not as ubiquitous as experts may want you to believe, but rather as scarce objects to be handled with care and caution? Is it time to focus on the usefulness of each data component more than on the means of getting and processing it?

Internet of Things? Wait a second, haven’t we been at it for the past two decades?

Like a 17-year cicada, the Internet of Things (IoT) has risen from a lengthy larval stage and begun taking over the world. The buzz is nearly irresistible. “The Internet of Things Is attaining escape velocity,” TechCrunch reveals. “For Samsung, ‘Internet of Things’ is the insignia of things to come,” proclaims The Economic Times. “Center Stage for Devices Connecting the Home,” announces The New York Times.
If you trust the hype, Internet-connected sensors and applications will shortly be monitoring and even running every aspect of our lives – from our “smart” homes to our self-driving cars to our retail behaviors to our fitness and health. So sit back and relax: the IoT is about to give you much better control of your life, or might even take over for you.

And you’re going to love it.

All of which leaves me feeling bemused. On the one hand, I believe we really have reached an inflection point for the IoT; we are about to see it expand into new areas of our lives and play a more significant role. On the other hand, a little perspective is definitely in order.

The Internet of Things is not new. For the past 25 years, ever since the development of microprocessors and network-based instruments, companies in the process industries – such as oil and gas, chemicals, refining, pharmaceuticals, manufacturing and mining – have been avidly exploring how to use sensors to make their processes more reliable, efficient and safe.

Many of these enterprises work with products and materials that can be readily measured as they flow through pipes. And with the help of my company, Emerson Process Management, they have gained deep experience in the IoT, even if they haven’t always called it that. Since there was no public Internet when we started, we didn’t call it the IoT, but it was based on the same concept: Integration of very large amounts of data to achieve better decision-making.

Those of us who have long labored in this field know that wringing value from the IoT involves significant challenges, which some industries may not overcome. The IoT’s maturation process will therefore be checkered and evolutionary – more like the prolonged development of the Industrial Revolution than the introduction of a new killer app. Monitoring the performance of individual pieces of equipment with sensors and wireless communications is relatively simple, and can produce significant cost savings. Remote monitoring of steam traps, for example, can help us determine which ones are malfunctioning, and save as much as 5 percent of a plant’s energy cost for steam production.
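As a hedged illustration of what such remote monitoring can look like, here is a minimal Python sketch that classifies a steam trap from two wireless temperature readings. The thresholds, field names and trap IDs are hypothetical, not Emerson’s actual logic.

```python
# Minimal sketch of remote steam-trap monitoring (thresholds and field
# names are hypothetical, for illustration only).

def trap_status(inlet_temp_c: float, outlet_temp_c: float) -> str:
    """Classify a steam trap from two wireless temperature readings."""
    if inlet_temp_c < 100:
        return "cold trap - possibly blocked or out of service"
    if outlet_temp_c > inlet_temp_c - 5:
        # Outlet nearly as hot as inlet: live steam is likely blowing through.
        return "failed open - leaking steam"
    return "healthy"

readings = [("trap-101", 182.0, 95.0), ("trap-102", 180.0, 178.0)]
for trap_id, inlet, outlet in readings:
    print(trap_id, trap_status(inlet, outlet))
```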

But monitoring an entire process or operation is a different matter. Big assets like industrial plants are a lot like human beings: They’re complicated, mercurial, and different – every single one. In any given plant, the equipment keeps changing as it wears or gets replaced. The supervisor who ran the operations yesterday is off today and has been replaced by one who runs things differently.

Even the weather has an impact; when a warm front blows in, performance changes.

As a result, modeling most complex processes or operations requires subject matter experts with a really deep and comprehensive understanding of how everything works, separately and together. Analyzing the resulting data is no easy task either. It’s often both science and art – not unlike a doctor’s interpretation of a patient’s chart and own words. These kinds of interpretive skills do not grow on trees – and certainly not within most companies.

The upshot: Unless they’re willing to outsource the modeling of their operations as well as the collection and interpretation of their data, many industries will be limited in what they can derive from the IoT by their own in-house skills – at least until applications can be made more sophisticated.

Security is another major issue. How prepared is Walgreens to post sensitive information that might be captured by CVS? How prepared is a pharmaceutical or chemical company to post information related to its proprietary formulas and processes? They’re not. As a result, many companies will conceal information behind their firewalls. Unnecessary secrecy will stunt the IoT’s potential impact throughout entire industries. Some secrecy is understandable, but companies leveraging the IoT also need to be willing to share. There’s lots of data concealed behind firewalls that, if aggregated with similar information from other companies, would provide just the kind of comparative data all of them need to improve their performance. For instance, I can tell you that my heart beats 150 times a minute after I run a mile at an eight-minute pace. It’s just one data point. Is it good or bad? Who knows? But data becomes information when it is analyzed in context. My heart rate data becomes more valuable when it is compared with how fast the hearts of other men my age beat after a similar effort. For the benefit of my overall health and the health of others, I shouldn’t fear contributing my heart rate data to the pool of general human knowledge.
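As a toy illustration of that point – a single reading only becomes information once it is placed in context against a pooled sample – here is a tiny Python sketch. All of the numbers are made up.

```python
# Toy illustration of "data becomes information in context": compare one
# heart-rate reading against a pooled sample (all numbers are invented).

pooled_post_run_hr = [138, 142, 145, 149, 150, 153, 157, 160, 164, 171]
my_hr = 150

rank = sum(1 for hr in pooled_post_run_hr if hr <= my_hr)
percentile = 100 * rank / len(pooled_post_run_hr)
print(f"My 150 bpm sits at roughly the {percentile:.0f}th percentile of the pool")
```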

For these and other reasons, I think the IoT stands today about where the dot-com revolution stood in the late 1990s. Hundreds of ventures from that frothy period died in infancy, because no matter how clever the conceit behind them, they didn’t connect with the real world. Yet some huge successes — Amazon and eBay, for example — did indeed emerge as completely new business models, and the IoT will likewise produce its share of innovations. Driving these changes are increasingly inexpensive sensors, the maturation of the Internet, and the beginnings of enhanced analytics.

Sensors are now so cheap and easy to install – no drilling, no screws, practically “lick ‘n’ stick” in many cases – that we now refer to “pervasive sensing,” and use them especially in locations described within our industries as the four “d’s” – dull, dangerous, dirty and distant. The ubiquity of the Internet means companies can now count on service even in developing countries, and monitor operations remotely. And enhanced analytics are beginning to provide some of the applications companies need to interpret their data with sophistication.

In the process industries, these changes are leading to an expansion of the IoT’s role. Until recently, only process control and safety functions were monitored and connected. Now, with costs plummeting, areas like plant and equipment reliability, energy management, personnel safety and environmental compliance are increasingly being addressed.

So the IoT is going to have a major impact. It will expand on the progress already made in the last couple of decades, during which it developed without the benefit of the catchier name it enjoys today. Important innovations will be here soon in fields like healthcare, home automation and transportation.

But in many other industries, issues related to competition, security and sheer complexity will present significant obstacles. In these industries, simply having the capacity to collect large volumes of data will not clear the path for delivering the full potential of the IoT. These industries will need to figure out how to make the data work for them – painlessly, under whatever constraints and in whatever context they operate. Like the dotcom companies that actually succeeded, these industries will develop clear, pragmatic strategies – for what kinds of insights they can reasonably target, and how they can achieve them. Depending on the industry, this could take many years – perhaps even decades – to fully work out.

And when that happens – not if, but when – then all that hype you’re hearing today will be justified.


Business Intelligence vs Business Analytics: What Does It Really Mean??

What’s the difference between Business Analytics and Business Intelligence? The honest answer is: everybody has a viewpoint, but nobody knows for sure, and frankly you shouldn’t bother too much.

For example, when SAP says “business analytics” rather than “business intelligence”, it’s intended to signal that business analytics is an umbrella term encompassing business intelligence, data warehousing, enterprise performance management, enterprise information management, analytic applications, and risk, governance and compliance.

But other vendors (such as SAS) use “business analytics” to indicate some level of vertical or horizontal domain knowledge coupled with predictive or statistical analytics.

There are two things worth differentiating, at the end of the day:

The first is the business aspect of BI — the need to get the utmost value out of information. This need hasn’t really changed in over fifty years (although the increasing complexity of the world economy means it’s ever harder to deliver). And the majority of the real issues that stop us from getting value out of information (information culture, politics, lack of analytic competence, etc.) haven’t changed in decades either.

The second is the IT aspect of BI — the technology used to help deliver on that business need. This obviously does change over time — sometimes drastically.

The problems in nomenclature typically arise because “business intelligence” is commonly used to mean both of these, depending on the context, thus confusing the heck out of everybody.

In particular, as the IT infrastructure inevitably changes over time, analysts and vendors (especially new entrants) become uncomfortable with what increasingly strikes them as a “dated” term, and want to change it for a newer term that they think will differentiate their coverage/products (when I joined the industry, it was called “decision support systems” – which I still think is a better term in many ways).

When people introduce a new term, they inevitably (and deliberately, cynically?) dismiss the old one as “just technology driven” and “backward looking”, while the new term is “business oriented” and “actionable”.

This is complete rubbish, and I encourage you to boo loudly whenever you hear a pundit say it.

The very first use of what we now mostly refer to as business intelligence was in 1951, as far as I can tell, with the arrival of the first commercial computer ever, dubbed LEO for Lyons Electronic Office, powered by over 6,000 vacuum tubes. And it was already about “meeting business needs through actionable information”, in this case deciding the number of cakes and sandwiches to make for the next day, based on previous demand in J. Lyons Co. tea shops in the UK.

And it most definitely was not “only IT” or “only looking in the rear-view mirror”, as some people pompously try to dismiss “old-style BI”.

At the end of the day, nobody important cares what this stuff is called. If you’re in charge of a project, what matters is working out the best way to leverage the information opportunity in your organization, and putting in place appropriate technology to meet that business need — and you can call that process whatever you like: it won’t make any difference…


Data or Big Data or Jumbo Data?

I am sure a week doesn’t go by in your organization without a discussion of “data”. You probably talked about it in today’s presentation. But do you really know what you need from it, why, and what should be drawn out of it? Or are you a victim of the noise constantly being created around you about the essence of data, and how ignoring it might be suicidal?

Data is power, and smart use of data is the best insurance you can take out against failure. There is, however, a lot of concern and confusion about data: how much to collect, what to collect and how to use it. The easy answer is to accumulate everything, all the time – but this often only results in expensive time wasted storing, finding and then trying to examine information that actually leads you nowhere.

What kind of data will we be making use of in five, ten and thirty years? If you imagine every bit of information about you that isn’t already in the public domain, that is your answer. The digital age has not only allowed us to share information more effortlessly, but has made it a normal way of life that we happily go along with. Social media will dictate what information we collect because it is continuously redefining what we consider normal and acceptable to share. It is here that people learn what to share, and as soon as we are comfortable within the safety of our private online environments, we become much more likely to share with the wider world. By keeping an eye on what data your friends are sharing, and what you become happier or more willing to share, you are identifying the data that you should be using for your business.

Who is really in control?
You may be thinking that you make decisions about what you share – but do you really? When we first started using social media we just told our friends what we were up to; then Facebook told us to start sharing pictures, and we did, then videos, then our location, and every time we happily just started sharing more and more. It has not been our choice: it is what the technology companies are conditioning us to accept as normal behaviour, but only when they want us to adopt the behaviour. The point here is not that these companies are bad, but that they are constantly evolving what data is available, and we need to be much better equipped to begin collecting and using this data much sooner. They are also dictating the way we live our lives as we increasingly need to react to a global audience in real time. The 9-to-5 is becoming as common as the compact disc. Near-field technology, the Internet of Things, personalisation – all are typically thrown at us as hot marketing methods and tools to be using, but how can we use them and the data they generate to boost our business?

The next data challenge…
The greatest challenge we face today with regard to data is finding innovative ways to analyse the new types of content people are posting and translating it into meaningful information. On any social media channel today you will see hundreds, if not thousands, of posts. Most of these will include some kind of media, links, @s, #s, locations and the device that was used. There is a wealth of valuable information held in these brief posts which is probably not being recorded and used in the most effective way. We do of course use locations to find audiences and hashtags to find people who are talking about us, but how are these recorded and used as a measure of affinity with the brand? The greatest challenge is interpreting visual data into meaningful metrics. If someone takes a picture with your brand on show but no other reference, how do you find it and then use it to identify why they took the picture, where they were and how you might follow up with them? How will social media define how we continue to communicate? We will most likely pay for items in the future using nothing more than a post starting with £ – but what other symbols will replace our communications and transactions? And how are you going to use the vast amounts of data now being shared through Instagram, Snapchat and Periscope – which don’t comply with our standard definition of data?
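Some of those signals are at least straightforward to pull out of a post programmatically. The rough Python sketch below extracts hashtags, mentions and links with simple regexes; the sample post and patterns are illustrative only, and visual content is exactly the part this kind of parsing cannot reach.

```python
# Rough sketch of pulling structured signals out of a short social post
# (regexes and the sample post are illustrative, not production-grade).
import re

post = "Loving the new #espresso machine from @AcmeCoffee at our London office!"

hashtags = re.findall(r"#(\w+)", post)
mentions = re.findall(r"@(\w+)", post)
links = re.findall(r"https?://\S+", post)

print({"hashtags": hashtags, "mentions": mentions, "links": links})
```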

A picture paints a thousand words – how are you going to start collecting those words?


Myth Busted!! Small Data Is Driving the Internet of Things (IoT), Not Big Data

Nowadays, when people talk about the Internet of Things (IoT) they tend to think about big data technologies like Hadoop and Cloudera, where petabyte-scale datasets are stored and analyzed for both known and unknown patterns. What many people don’t realize is that many IoT use cases only require small datasets. So what is small data, you ask? Small data is a dataset that contains very specific attributes. Small data is used to determine current states and conditions, or may be generated by analyzing larger datasets. When we talk about smart devices being deployed on valves, wind turbines, small packages and pipes, or attached to drones, we are talking about collecting small datasets. Small data tells us about temperature, location, moisture, pressure, vibration, or even whether an item has been opened. Sensors give us small datasets in real time that we ingest into big datasets, which provide a historical view.

So why is small data significant? Small data can trigger events based on what is happening now. Those events can be merged with behavioral or trending information derived from machine-learning algorithms run against big datasets. Here are some examples:

Examples of Small and Big Data
A wind turbine has a variety of sensors mounted on it to measure vibration, velocity, wind direction, temperature and other relevant attributes. The turbine’s blades can be programmed to automatically adjust to varying wind conditions based on the information provided in real time by small data. These small datasets are also ingested into a larger data stream, where machine-learning algorithms begin to recognize patterns. These patterns can reveal the performance of certain components based on their historical maintenance records, how wind and weather conditions affect wear and tear on various components, and what the life expectancy of a particular part is.
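A minimal sketch of that small-data / big-data split might look like the following Python snippet, which reacts to a live turbine reading immediately and also appends it to a historical store for later pattern mining. The field names, thresholds and turbine IDs are hypothetical.

```python
# Sketch of the small-data / big-data split for a turbine sensor: react to
# the live reading now, and append it to a historical store for later
# pattern mining. Field names and thresholds are hypothetical.
from collections import deque

history = deque(maxlen=100_000)   # stand-in for the "big data" stream/lake

def on_reading(reading: dict) -> None:
    # Small data: act on the current state immediately.
    if reading["vibration_mm_s"] > 7.0:
        print(f"{reading['turbine_id']}: feathering blades, vibration high")
    # Big data: keep everything for ML/trend analysis later.
    history.append(reading)

on_reading({"turbine_id": "WT-17", "vibration_mm_s": 8.2, "wind_m_s": 14.5})
```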

Another example is the use of smart labels on medicine bottles. Small data can be used to determine where the medicine is located, its remaining shelf life, whether the seal of the bottle has been broken, and the current temperature conditions, in an effort to prevent spoilage. Big data can be used to look at this information over time for root cause analysis of why drugs are expiring or spoiling. Is it due to a certain shipping company or a certain retailer? Are there recurring patterns that point to problems in the supply chain and help determine how to minimize these events?

Big Data: Do You Need Big or Small Data?
Despite what people say, big data is not a prerequisite for all IoT use cases. In many instances, knowing the current state of a handful of attributes is all that is required to trigger a desired event. Are the containers in the refrigerated truck at the optimal temperature? Are the patient’s blood sugar levels too high? Is the valve leaking? Does the soil have the right mixture of nutrients?

Optimizing these business processes can save companies millions of dollars through the analysis of comparatively small datasets. Small data knows what a tracked object is doing. If you want to understand why the object is doing that, then big data is what you seek. So, the next time someone tells you they are embarking on an IoT initiative, don’t assume that they are also embarking on a big data project.
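To make the point concrete, here is a minimal Python sketch of what such small-data checks amount to: each one needs only the current value of a handful of attributes. Every threshold below is invented for illustration.

```python
# Minimal sketch of small-data event triggers: each check needs only the
# current value of a handful of attributes (all thresholds are invented).

def check_truck(temp_c: float) -> bool:
    return 2.0 <= temp_c <= 8.0          # refrigerated cargo in range?

def check_glucose(mg_dl: float) -> bool:
    return mg_dl < 180                   # patient's blood sugar acceptable?

def check_valve(pressure_drop_kpa: float) -> bool:
    return pressure_drop_kpa < 5         # no leak suspected?

alerts = []
if not check_truck(11.3):
    alerts.append("truck temperature out of range")
if not check_glucose(210):
    alerts.append("blood sugar too high")
print(alerts)
```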


Business agility enhanced by cloud computing: what it really means

Is cloud computing finally a genuine business strategy, or is it just an IT optimization strategy? It may well be the former, a recent survey suggests. However, this new second phase of cloud means things are going to be harder to measure, conceptualize and design in the big picture.

Business agility has become the key advantage delivered by cloud, according to a new survey by Harvard Business Review Analytic Services of 527 HBR readers in large and mid-size organizations. The study was underwritten by Verizon Enterprise Solutions.

Business agility leads the list of drivers for adopting cloud computing, with nearly a third of respondents (32 percent) saying it was their primary reason for pursuing cloud. This was followed by increased innovation (14 percent), lower costs (14 percent), and the ability to scale up and down in response to variations in business demand (13 percent).

Defining “business agility” is where things get challenging. In the first phase of cloud computing, relative success could be measured on the basis of cost savings. Such calculations are based on hard, obvious metrics: Before cloud, the enterprise spent $100,000 on on-premises servers and software licenses and equipment every year, along with $200,000 of staff time (hypothetically speaking). After cloud, it spends $50,000 on subscription-based services and $150,000 of associated staff time.
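Worked through in a few lines of Python, the hypothetical before-and-after numbers above look like this:

```python
# The hypothetical before/after numbers from the paragraph above, worked
# through as a simple annual TCO comparison.
before = 100_000 + 200_000   # on-prem servers/licences + staff time
after  = 50_000 + 150_000    # subscriptions + associated staff time
savings = before - after
print(f"Annual saving: ${savings:,} ({savings / before:.0%} of the old spend)")
```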

Now consider how you would measure cloud’s impact on business agility. Business agility before cloud was… ugh. After cloud, things are, well, snappier, faster, more responsive.

Wikipedia seems to be having problems defining “business agility,” with its existing definition flagged as having “multiple issues.” For the record, the site’s current definition of business agility starts off as “the ability of a business to adapt rapidly and cost efficiently in response to changes in the business environment.” Again, hard to measure. Business agility is one of those amorphous, ill-defined states that could mean any number of things: more profitability, faster time to market, greater ability to spin up new programs, create new products, hire new people, retain people — pick any. What’s important is not only speed, but doing the right things.

The HBR-Verizon survey report doesn’t define agility head-on, but it delivers meaningful clues about the outcomes agility provides: the ability to enter new markets, reduce complexity, increase employee efficiency, and, yes, decrease costs. The survey finds, for example, that enterprises more advanced in their cloud deployments are more likely than their less-advanced counterparts to have entered a new market in the past three years (49 percent) or to have been part of a merger or acquisition (49 percent).

In addition, almost three-quarters of executives say cloud will reduce business complexity (24 percent significantly and 47 percent somewhat). Sixty-six percent suggest cloud computing will reduce complexity in their company’s IT operations, another 61 percent suggest it will increase employee productivity, and 53 percent say it will increase responsiveness to customers.

Benefits already seen from current cloud deployments include the simplification of internal operations (37 percent); better delivery of internal resources (33 percent); and new ways for employees to work, connect, and collaborate (31 percent). Also on the list of cloud advantages realized are faster rollout of new business initiatives to exploit new opportunities (23 percent) and improved ability to acquire, share, analyze, and act on data (23 percent).

SaaS Companies: Are They Investors’ First Love??

SaaS companies are valued much more highly than software product sales companies. Public SaaS companies have a median Enterprise Value-to-Revenue ratio of 6.6x, compared with 3.0x for software companies, according to Software Equity Group. Do investors adore SaaS companies because they have higher valuations, or do SaaS companies have higher valuations because investors love them?

Well actually, this isn’t irrational exuberance. There are solid reasons why SaaS companies have high valuations. The valuations are a consequence of the superior SaaS business model.


*Data from Saugatuck Technology, 2014

The market environment has changed; customers want to lease rather than buy – OPEX, not CAPEX. They want the flexibility to try SaaS offerings and then expand them or leave them – business agility favors SaaS over software purchases.

Subscription-based organizations take multiple forms. Software as a Service (SaaS) companies such as Salesforce and Infrastructure as a Service (IaaS) companies such as Amazon have seen the highest growth, but there are many other subscription industries: telecom, media (such as Netflix), healthcare, and even the “wine of the month club.” Organizations with recurring subscription revenue are viewed favorably by the markets.

SaaS offers convenience, flexibility, and less lock-in for customers. It also needs less time and effort for implementation, decreasing both the cost and the time to value (TTV) for customers.
Customers see these benefits, generating market demand, which powers the accelerated growth of SaaS companies.

Legacy product vendors have an advantage in their resources, partners and customers. It takes a disruptive offering to win these customers away from the incumbents – a SaaS offering can provide that disruption, with new capabilities, delivery mechanisms and pricing models, and greater flexibility for customers.

SaaS companies are structurally able to deliver advances more rapidly than product vendors. New capabilities can be delivered in a SaaS update, with customers benefiting from the most recent innovations quickly. The ability to offer new benefits to customers faster gives SaaS companies a competitive advantage.

These benefits allow SaaS companies to enter markets long dominated by companies with a product-sale mentality and compete successfully. Organizations without a SaaS offering won’t get a seat at the table – SaaS is usually a prerequisite in new projects.

Investors look for organizations that disrupt the status quo.

*Software Equity Group, 2014

And investors adore growth. The median growth rate for SaaS companies is over three times that of software product companies.

Steady growth in revenue is followed by growth in the organization’s value. SaaS is a disruptive paradigm that allows companies with a SaaS offering to grow briskly at the expense of the incumbents.

Investors understand that SaaS companies are gaining market acceptance, and are even preferred, in most industries (such as health care) and applications (such as CRM). Markets once thought unviable for SaaS subscriptions (such as ERP) are now seeing high and growing demand. Investors want to identify the new leaders serving these new markets, for their potentially high investment returns.

While initial SaaS revenues are lower than product sales, over the life of the customer the recurring SaaS revenue exceeds the value of a product sale. This higher customer value, delivered through a long-term recurring revenue stream, eventually leads to greater revenue for the organization.

Investors value organizations that are growing rapidly. High SaaS revenue growth results from expanding and retaining the existing customer base along with revenue from new customers. Software product companies must add more customers each quarter just to grow, while SaaS companies add new customers on top of the existing base.

Investors see the rapid growth of SaaS companies’ valuations. For example, over the last five years the share price of Salesforce, the largest-cap SaaS stock, has increased more than twice as much as that of Microsoft, Oracle, or SAP, the three software companies with the largest market value.


SaaS companies benefit from Moore’s law: their cost of service decreases year after year as public cloud costs fall. Amazon, Microsoft Azure, and Google reduced the cost of their compute instances by 36% and their storage by 65% in 2014. SaaS companies that host their product in the public cloud saw their costs drop dramatically; those that purchased servers did not.

The chart on the right shows the areas where aggressive competition reduced the hosting costs of SaaS companies in 2013.

Investors now understand how to assess SaaS finances beyond simple income statements. They look beyond traditional product-sale-based financial analysis to assess a SaaS vendor’s financial strength. They look at the value of the future revenue stream not included in the financial statements, not just the last quarter’s profit and loss, to assess the financial prospects of SaaS companies. The long-term profit potential of fast-growing SaaS companies is now well understood, thanks to more sophisticated analysis of the increases in a SaaS company’s Customer Lifetime Value and cash flow.
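As a toy illustration of why that longer view matters, the sketch below compares a one-time licence sale with the lifetime value of a SaaS subscription. Every figure in it is invented for illustration.

```python
# Illustrative-only comparison of a one-time licence sale versus a SaaS
# subscription over the life of a customer (all figures are invented).
licence_sale = 100_000                      # recognised once, up front

monthly_fee   = 4_000
monthly_churn = 0.02                        # 2% of customers lost per month
expected_lifetime_months = 1 / monthly_churn
saas_customer_lifetime_value = monthly_fee * expected_lifetime_months

print(f"One-time sale: ${licence_sale:,}")
print(f"SaaS lifetime value: ${saas_customer_lifetime_value:,.0f} "
      f"over ~{expected_lifetime_months:.0f} months")
```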

Investors also highly value the long term revenue stability that a SaaS revenue stream provides.

SaaS companies have lower revenues than product sales companies selling comparable solutions to the same customers, until the recurring revenue stream builds up to exceed the value of discrete sales. Product companies that shift to SaaS see reduced revenue and cash flows until sufficient SaaS revenue builds up to exceed that of their product sales. But investors now understand this and reward the transition to SaaS even with a short-term drop in profits and revenues.

Adobe transitioned its Creative Suite to a SaaS model, Creative Cloud, in May 2013. Despite an 8% decline in overall revenue – but with a near doubling of SaaS revenue – Adobe stock soared 55% in 2013.


Reduce TCO of SAP Deployments and Upgrades with an Integrated Approach

Throughout the course of any SAP implementation or upgrade, enterprises should use a business technology optimization (BTO) approach to achieve reduced total cost of ownership (TCO) and higher-quality applications. One such offering is Mercury for SAP Solutions: a pre-packaged solution comprising integrated software, services, and best practices.

IT organizations today are under growing pressure to cut costs and increase revenue — while reducing risk and implementing new applications to meet business initiatives. Adding to the challenge is the growth in highly complex and visible Web-based applications, which can destabilize a previously stable technology infrastructure. Integrated offerings like Mercury for SAP Solutions deliver cost savings by helping customers prevent downtime, drive cost out of their existing SAP landscape and improve quality. Over 1,400 customers have successfully used Mercury for SAP Solutions to deploy, upgrade, and maintain their SAP systems — and reduce total cost of ownership (TCO).

Mercury for SAP Solutions includes several integrated offerings called Mercury Optimization Centers — pre-packaged business technology optimization (BTO) solutions that optimize IT strategy, application readiness, system performance, business availability, and problem resolution. These Centers enable an organization to apply integrated models across its IT organization and to optimize critical IT activities. They include the Resolution Center, Business Availability Center, IT Governance Center, Quality Center and Performance Center. The products comprising the Centers are also available individually; for example, organizations can use Mercury products to solve a specific departmental issue or improve a particular process or project.

Automating tests and quality assurance
Quality Center improves application quality by automating and managing the entire testing and quality assurance process, including functional testing of the business process workflow. It enables testing SAP applications the same way end users interact with them in real life, and reduces time to test by an average of 60 percent. The QuickTest Professional 6.5 tool, part of the Quality Center, has been certified by SAP for integration with the SAP extended Computer Aided Test Tool. Using the SAP NetWeaver integration and application platform and QuickTest Professional, customers can run quality tests in environments that extend beyond Windows and SAP solution-based environments, including advanced, multi-platform, highly integrated composite, legacy, and proprietary enterprise applications.

Two core Quality Center products are Quality Management (formerly TestDirector) and WinRunner. WinRunner captures and replays user interactions automatically to identify defects and verify that business processes in SAP applications work correctly the first time. To create a test, WinRunner simply records a typical business process by emulating user actions, such as ordering an item or opening a vendor account. Next, testers can add checkpoints, which compare expected and actual outcomes from the test run. WinRunner offers a variety of checkpoints, including text, GUI, bitmap, and web links. It can also verify database values to ensure transaction accuracy and database integrity, highlighting records that have been updated, modified, deleted, or inserted. With WinRunner, organizations gain several advantages, including reduced testing time through the automation of repetitive tasks. Support for diverse environments enables organizations to test any component mix or industry solution.
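WinRunner’s record-and-checkpoint workflow is proprietary, but the underlying idea is easy to illustrate generically. The Python sketch below, in a pytest-style test, drives a hypothetical OrderSystem stand-in through a business process and then asserts “checkpoints” of expected versus actual outcomes; none of these names come from the Mercury products themselves.

```python
# Generic illustration of the record-and-checkpoint idea (not the Mercury/
# WinRunner API). OrderSystem is a hypothetical stand-in for the app under test.

class OrderSystem:
    def __init__(self):
        self.orders = {}

    def place_order(self, item: str, qty: int) -> str:
        order_id = f"SO-{len(self.orders) + 1:04d}"
        self.orders[order_id] = {"item": item, "qty": qty, "status": "OPEN"}
        return order_id

def test_place_order_checkpoints():
    app = OrderSystem()
    order_id = app.place_order("widget", 3)

    # Checkpoint 1: the transaction produced a well-formed document number.
    assert order_id.startswith("SO-")
    # Checkpoint 2: the "database" reflects the expected values.
    assert app.orders[order_id] == {"item": "widget", "qty": 3, "status": "OPEN"}
```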

By linking requirements, planning, execution, and defect tracking, the Quality Management tool structures and controls the testing process to determine whether SAP applications are ready for deployment. Quality Management quickly translates a business process analysis into a comprehensive framework of tests that includes manual and automated functional and regression tests, as well as load test scenarios. Through its open architecture, Quality Management also integrates seamlessly with leading application lifecycle tools — from configuration management to helpdesk.

AWS: Is It as Great as Amazon Claims? Yes, Maybe, Maybe Not!!

Having gathered information from teams using Amazon Web Services (AWS), we have come to understand a lot about why it is so celebrated in corporate circles these days. In the course of that knowledge gathering, we have also formed a fair idea of what’s not so good about it. What we have managed to build is a high-availability, high-performance system that is slightly different from what Amazon advises.

Let’s look at two related things:
1. For folks who are trying to get their hands on AWS, or want to know more about it, we thought we would share some benefits and challenges, having encountered them ourselves.

2. For those who are already using AWS, we know that the priority is always uptime. So we thought we would share some best practices for running a high-performance service.

It would be fair, and not over-enthusiastic, to say that AWS has brought about a revolution in the sheer economics of running a technology startup. Nobody notices how many companies are using Amazon’s Elastic Compute Cloud (EC2) somewhere in their stack until it has an outage, and suddenly it seems like half the Internet goes away. But it’s not like Amazon managed that by fluke: they have an awe-inspiring product. Everyone uses AWS because EC2 has drastically simplified running software, by enormously lowering the amount you need to know about hardware in order to do so, and the amount of money you need to get started.

EC2 is a modern method of running software
The primary and most essential thing to know about EC2 is that it is not “just” a virtualized hosting service. Another way of thinking about it is like hiring a fractional system and network administrator: instead of retaining one very expensive resource to do a whole lot of automation, you pay a little bit more for every box you own, and whole classes of problems are abstracted away. Network topology and power, vendor differences and hardware costs, network storage systems — these are things you had to think about back in 2000-2004. With AWS you don’t have to pay them much heed, or at least not until you become a mammoth.

By far the biggest advantage of using EC2 is its flexibility. We can spin up a new box very, very rapidly — about five minutes from thinking “I think I need some hardware” to logging in for the first time and being ready to go.

This allows us to do some things that just a few years ago would have been crazy, for example:
• We can roll out major upgrades on new hardware. When we have a large upgrade, we spin up completely new hardware, get all the configs and dependencies right, then just rotate it into our load balancer — rolling back is as easy as resetting the load balancer, and moving forward with the new system means just shutting down the old boxes. Running twice as much hardware as you need, but for just 24 hours, is simple and cheap.

• Our downtime plan for some non-critical systems, where up to an hour of infrequent downtime is acceptable, is simply to monitor the box and, if it falters, build a new box and restore the system from backups.

• We can ramp up in response to load events, rather than in advance of them: when your monitoring detects high load, you can spin up additional capacity, and it can be ready in time to handle the current load event — not the one after (see the sketch after this list).

• We don’t need to agonize over pre-launch capacity calculations: we spin up what looks at a gut level to be sufficient hardware, launch, and then if we discover we’ve got it wrong, spin boxes up or down as required. This kind of iteration at the hardware level is one of the greatest benefits of AWS, and is only possible because they can provision (and de-provision) instances near-instantly.
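As a rough illustration of that “spin up capacity in response to load” pattern, here is a minimal Python sketch using boto3 (a more recent AWS SDK than anything available when this was written). The AMI ID, instance type and load threshold are placeholders, and real use would need AWS credentials and your own images.

```python
# Rough sketch of reactive scaling on EC2 via boto3; the AMI ID, instance
# type and threshold are placeholders, not a recommendation.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_up_if_needed(current_load: float, threshold: float = 0.8) -> None:
    # If monitoring reports load above the threshold, add a couple of boxes
    # that can then be rotated into the load balancer.
    if current_load > threshold:
        ec2.run_instances(
            ImageId="ami-xxxxxxxx",      # placeholder AMI
            InstanceType="m3.large",
            MinCount=1,
            MaxCount=2,
        )

scale_up_if_needed(current_load=0.92)
```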

But EC2 has some problems
While we admire EC2 and couldn’t have got where we are without it, it’s important to be honest that not everything is sunshine and roses. EC2 has serious performance and reliability limitations that it’s important to be aware of, and build into your planning.

First and foremost is their whole-zone failure pattern. AWS has multiple locations worldwide, known as “regions”. Within those regions, machines are divided into “availability zones”: these are geographically close to one another, but (in theory) isolated from each other in terms of networking, power, and so on.

There are a few important things we’ve learned about this region-zone pattern:
Virtual hardware doesn’t last as long as real hardware. Our average observed lifetime for a virtual machine on EC2 over the last 3 years has been about 200 days. After that, the chances of it being “retired” rise hugely. And Amazon’s “retirement” process is unpredictable: sometimes they’ll notify you ten days in advance that a box is going to be shut down; sometimes the retirement notification email arrives 2 hours after the box has already failed. Rapidly-failing hardware is not too big a deal — it’s easy to spin up fresh hardware, after all — but it’s important to be aware of it, and invest in deployment automation early, to limit the amount of time you need to burn replacing boxes all the time.

You need to be in more than one zone, and redundant across zones. It’s been our experience that you are more likely to lose an entire zone than to lose an individual box. So when you’re planning failure scenarios, having a master and a slave in the same zone is as useless as having no slave at all — if you’ve lost the master, it’s probably because that zone is unavailable. And if your system has a single point of failure, your replacement plan cannot rely on being able to retrieve backups or configuration information from the “dead” box — if the zone is unavailable, you won’t be able to even see the box, far less retrieve data.

Multi-zone failures happen, so if you can afford it, go multi-region too. US-east, the most popular (because oldest and cheapest) AWS region, had region-wide failures in June 2012, in March 2012, and most spectacularly in April 2011, which was nicknamed the cloudpocalypse. Our take on this — and we’re probably making no friends at AWS saying so — is that AWS region-wide instability seems to frequently have the same root cause, which brings me to our next point.

To maintain high uptime, we have stopped trusting EBS
This is where we differ sharply from Amazon’s marketing and best-practices advice. Elastic Block Store (EBS) is fundamental to the way AWS expects you to use EC2: it wants you to host all your data on EBS volumes, and when instances fail, you can switch the EBS volume over to the new hardware, in no time and with no fuss. It wants you to use EBS snapshots for database backup and restoration. It wants you to host the operating system itself on EBS, known as “EBS-backed instances“. In our admittedly anecdotal experience, EBS presented us with several major challenges:

I/O rates on EBS volumes are poor: I/O rates on virtualized hardware will necessarily suck relative to bare metal, but in our experience EBS has been significantly worse than local drives on the virtual host (what Amazon calls “ephemeral storage”). EBS volumes are essentially network drives, and have all the performance you would expect of a network drive — i.e. not great. AWS have attempted to address this with provisioned IOPS, which are essentially higher-performance EBS volumes, but they’re expensive enough to be an unattractive trade-off.

EBS fails at the region level, not on a per-volume basis. In our experience, EBS has had two modes of behaviour: all volumes operational, or all volumes unavailable. Of the three region-wide EC2 failures in us-east that I mentioned earlier, two were related to EBS issues cascading out of one zone into the others. If your disaster recovery plan relies on moving EBS volume around, but the downtime is due to an EBS failure, you’ll be hosed. We were bitten this way a number of times.

The failure mode of EBS on Ubuntu is extremely severe: because EBS volumes are network drives masquerading as block devices, they break abstractions in the Linux operating system. This has led to really terrible failure scenarios for us, where a failing EBS volume causes an entire box to lock up, leaving it inaccessible and affecting even operations that don’t have any direct requirement of disk activity.

For these reasons, and our strong focus on uptime, we abandoned EBS entirely, starting about six months ago, at some considerable cost in operational complexity (mostly around how we do backups and restores). So far, it has been absolutely worth it in terms of observed external up-time.

Lessons learned
If we were starting awe.sm again tomorrow, I would use AWS without thinking twice. For a startup with a small team and a small budget that needs to react quickly, AWS is a no-brainer. Similar IaaS providers like Joyent and Rackspace are catching up, though: we have good friends at both those companies, and are looking forward to working with them. As we grow from over 100 to over 1000 boxes it’s going to be necessary to diversify to those other providers, and/or somebody like Carpathia who use AWS Direct Connect to provide hosting that has extremely low latency to AWS, making a hybrid stack easier.


Legacy Application Modernization – Why we need it

Information Technology has always been the spearhead of agility and innovation. State-of-the-art, innovative applications fuel an organization’s productivity and growth, sharpen its competitive edge and automate key business processes. But the trailblazing applications and systems of yesteryear become the legacy applications of today. These applications support and enable core business practices and make up the bulk of an organization’s application assets. However, the languages they are written in are often outdated, and they may be running on platforms that are no longer scalable or no longer integrate with the company’s newer technology ecosystem.

Historically, enterprises chose to write custom software rather than purchase IT systems. The bigger and more profitable the organization, the larger its IT teams, developing customized applications uniquely tailored to meet its distinctive business demands. As organizations grew, expanded their product offerings, and acquired and merged new business entities, they began to lose control of their custom-built applications, software and databases. Unable to cope with the potential risks of retiring old systems and replacing them with new, consolidated software and applications, many organizations are now confronted with having to sustain and support original legacy systems that can date back up to three decades. Keeping these systems running requires dedicated teams with expert skill sets, which are getting harder to find in the fast-changing world of Information Technology. Yet many organizations do not have a clear strategy for retiring legacy applications and continue to spend their IT budgets aggressively on supporting redundant, outdated, sometimes completely obsolete systems.

Significantly, the tangled, sprawling application landscape leaves almost no space for innovation and impedes business agility. For instance, without a single, consolidated view of the customer database, it is difficult to create custom-made profile-raising campaigns or introduce tailored product offerings. Likewise, without a centralized reporting system, organizations may make critical mistakes in processing payments, tracking shipments, or managing product inventory. And with many outdated systems running almost the same types of transactions, it becomes very difficult to detect the root cause of a problem, or whether an application has a functional or performance error.

Another very common concern with application growth is an exponential increase in data. Even the simplest IT systems are capable of generating large quantities of data, including shipping details, customer information and transaction records. Without appropriate archiving procedures, stored data can grow exponentially – by up to five percent per month on a large system (a quick compounding illustration follows the list below). The three key factors that contribute to uncontrollable data growth are:

• Acquisitions
• Poor archiving methods
• Lack of clear internal guidelines on data retention and compliance
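To see how quickly that compounds, here is a toy calculation of the “up to five percent per month” growth figure mentioned above; the starting size is invented.

```python
# Quick illustration of 5% monthly data growth: unchecked, it compounds to
# roughly 80% more data in a single year.
data_tb = 10.0                 # starting size (hypothetical)
for month in range(12):
    data_tb *= 1.05
print(f"After one year: {data_tb:.1f} TB")   # ~17.9 TB from 10 TB
```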

Many organizations do not have a well-defined process for eliminating historical data prior to merging application instances; as a result, they often find themselves holding out-of-date transaction records far beyond the required retention period.

Most C-suite executives cannot accurately estimate the number of out-of-date systems running in their application landscape and being kept alive by IT staff. Significantly, they have neither visibility into the true cost of maintaining individual applications, nor a clear picture of the ideal number of applications needed to provide real business value and support future growth. Finding the way out of ever-increasing application sprawl and unrestrained data growth is not only a matter of de-cluttering the IT ecosystem and cutting operational costs; it is about finding room for innovation in IT budgets, increasing agility and efficiency, and better aligning IT with business needs.

Conclusion
Senior tech executives must constantly perform a balancing act, satisfying the evolving demands of the business while getting the most out of their companies’ existing IT systems. With 70 percent of the typical organization’s global transactions running on legacy applications, doing more with less is no easy feat.

Legacy Application Modernization allows IT to strip out redundant operating costs while reducing capital spend, and frees IT staff to create value for the business. It helps address whether to migrate, re-platform or remediate legacy applications. The outcome: added value from existing applications, with reduced costs, limited business disruption and decreased risk.