Startup Sutra: To Scale Quick, Ride A Cloud

Small is Big makes a catchy label for a startup to stick at the office water cooler. But Small is Big with cloud computing makes for genuine business wisdom. To put it another way, Startup + Cloud = another Facebook-sized valuation in the works (read on to know how). So think big. Work smart. Keep it lean and mean. Deliver stuff that works straight off the shelf. That’s what the cloud is all about, particularly for a startup: enabling anyone to do any work or any play anywhere, anytime. Is that not why, when people say they are on the cloud, they mean they are on cloud nine, eight times out of nine?

Reverse the equation for a moment. What if you are a startup actually offering cloud services? Impossible is nothing! You can potentially set investors’ pulses racing and have over-eager venture capitalists knocking on your doors. Workday, a young Californian firm selling cloud-based software, hit pay dirt managing the back offices of large companies and ended up with a valuation of nearly $4 billion on the New York bourse. Another company, Yammer, which offers social networking software, was snapped up by Microsoft for $1.2 billion.

Let’s rewind to ground zero, when you have just buckled up and are starting from scratch. As a startup, you cannot afford to be straitjacketed. You need to keep your options open, so that one door opens when another closes.

Suppose you start by investing big in an all-purpose, fully loaded virtual architecture, and the model ends up a white elephant? All the more sensible, therefore, to keep your own investment in virtual architecture lean and to a minimum, and to leverage cloud services to the maximum for application infrastructure, processing, storage and the rest.

Unless you are starting your enterprise with a billion dollars (!), your number one concern will be how to spread your costs thin. Remember Google’s pay-per-click (PPC) concept? It’s the same with startups using cloud services: you pay only for what you use, whether per user or per unit of processing and storage.
With cloud services, your resources are “elastic”, and you enjoy out-of-the-box mobility by way of easy, instant access to IT facilities from any suitably configured device, including faster access to the latest software and hardware upgrades. For instance, days after your new state-of-the-art server farm arrives on its pallets, the market is abuzz with the launch of a new server that has double the processing power at half the cost of yours. But if you have adopted the cloud model, you can access up-to-date hardware resources and software functionality, newly added features included, at little or no extra cost.
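
To make the pay-per-use arithmetic concrete, here is a back-of-the-envelope sketch in Python. All prices and hours are illustrative assumptions, not vendor quotes:

    # Pay-per-use vs. owning hardware: a rough, illustrative comparison.
    OWN_SERVER_COST = 12_000            # assumed upfront purchase price
    SERVER_LIFETIME_YEARS = 3           # assumed useful life

    CLOUD_RATE_PER_HOUR = 0.50          # assumed rate for a comparable instance
    HOURS_ACTUALLY_USED = 8 * 250       # 8 hours a day, 250 working days a year

    own_cost_per_year = OWN_SERVER_COST / SERVER_LIFETIME_YEARS
    cloud_cost_per_year = CLOUD_RATE_PER_HOUR * HOURS_ACTUALLY_USED

    print(f"own:   ${own_cost_per_year:,.0f}/yr, paid whether used or idle")
    print(f"cloud: ${cloud_cost_per_year:,.0f}/yr, paid only while used")
    # own: $4,000/yr vs. cloud: $1,000/yr -- the gap is the idle time you no longer buy.

The point of the sketch is not the exact numbers but the shape of the deal: the on-premises cost is fixed, while the cloud bill tracks actual usage.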

However, many startups would like to cross the bridge to the cloud only when it becomes par for the course and not when it is still a fashion statement.

For instance, in situations where data requirements are huge, working on a smartphone view is like watching the spectacular Avatar on a 9-inch screen and writing a review of it!

When a startup relies on a network provider for most, if not all, of its IT needs, how will it cope in the event of a network disruption? How will you ensure uptime if you lose connectivity to your data? How will you manage your Windows Active Directory servers?

The cloud for startups has its advocates and its critics, and it would be fair to say that it is an idea whose time will not pass for some time to come. Wish we had Steve Jobs to ask the right questions and provide better answers. Or is it that he is already on a cloud?

If you want to bootstrap your way to scale, your ticket is a cloud away.

Mapping Organizations in the Year 2020

Year 2020: Don’t bet your company will be the same as now. And by the way, don’t bet your company will change beyond recognition. There will be change, but it won’t be disruptive.

Recent research by the Economist Intelligence Unit says companies will be larger and more globally integrated, with better information flow and collaboration across borders, less centralization, a flatter hierarchy and more empowered employees.

Employees will not just be the knowledge workers of today, but active stakeholders in decision-making. They will double as data scientists, not because of a decree from the boss, but because of their ability to play multiple roles. For example, a LinkedIn employee used analytics to come up with the popular “People You May Know” feature. A Facebook team created a new coding language. And the boss cannot turn around and say, ‘I told you so’.

Size will not matter. It doesn’t really even today. Anecdotal stories of David vs Goliath will become more routine than rare, more fact than fiction. In fact, size could well be a disadvantage. Value creation will depend not on a company being an 800-pound gorilla, but on the ability of individuals to connect with one another.
Speed to market and speed to work will be the new dynamic on demand. To see the contrast, consider “spinning the tape”, a fashionable bit of jargon among balance sheet accountants. Spinning the tape refers to the static way of analyzing years of accounting data. The new paradigm could be described as “speeding the tape”: you could be working to a deadline that was yesterday and be expected to deliver just in time.
Employee loyalty will become virtually extinct. Blame it on global operations, emerging markets, and demographic pressure. 360-degree appraisals will be the norm: the boss will review your performance, and you will be reviewing his. It cuts both ways.

Management could be localized while the company outlook is globalized. Cross-cultural hires will be more frequent, and people with poor soft skills will not be able to get a foot in the door. Perform or perish will be the universal credo of organizations.

More organizations will invest in R&D and use data silos to test product launches. The metrics will vary from division to division. For instance, Google already manages its offices in Paris and New York in different ways, for there is no such thing as one-size-fits-all for the organizations of the future.
But not everything will be hunky-dory; it never is, anywhere in enterprise history. Serving different kinds of customers in different countries through a workforce equally drawn from different lands and speaking different languages creates a whole new set of challenges for organizations. Consider working at odd hours: outsourcing to call centers began as a great cost-cutting idea – and still is – but intangible costs such as employee migration, employee retention, and the emotional toll of graveyard shifts will pose formidable challenges.

The future workplace calls for leaders with a holistic view of conducting business and managing people. Organizations will have to keep up with the science, step out of the fast lane and work on themselves. We shall be reminded often that success, as Bill Gates famously said, is a lousy teacher…

Cloud-based QA Infrastructure

A silver bullet to ward off traditional challenges

If you have some spare time at the office, spare a thought for the CIO in the IT industry. A blitzkrieg of challenges awaits the CIO every day as he settles down at his desk, after his colleagues have, rather ironically for him, wished him a “good morning”. Here’s how the dice roll for him every day at work:

Existing Scenario:

a) Shrinking budget
b) Increasing cost pressures

Expectations:

a) Cut IT spend
b) Deliver value and a technology edge

Preferred Solution:

a) Enhance the ROI generated from IT components
b) Increase focus on QA infrastructure and maintenance costs
c) Lean on test managers to reduce QA infrastructure costs, as they form a major chunk of the IT infrastructure budget

Cutting costs, a Catch-22 situation

On the other side, test managers face a Catch-22 situation, since a cut in QA infrastructure spend could potentially impact the quality of software deliverables. Here are a few examples of the challenges that drive the cost of IT upwards while creating and managing QA infrastructure:

  • Testing operations are recurring but non-continuous. This means test infrastructure is sub-optimally utilized, which has a significant impact on ROI.
  • Testing work spans a wide spectrum: on-time QA environment provisioning for multiple projects, decommissioning and reassigning QA environments to other projects, QA environment support, managing incidents, and managing configurations for multiple projects. All of these require an organization to allocate and maintain proportionately skilled resources at all times, which in turn drives costs upwards.
  • CIOs and test managers are expected to ensure testing is commissioned on recommended hardware, because most issues found at later stages of the quality gate are attributed to testing on inadequate hardware. This again accounts for a significant chunk of the total IT budget.
  • Getting an appropriately defined QA infrastructure up and running in time (including procurement and leasing of its elements) to meet set timelines demands more IT staffing.
  • Many test managers give the go-by to the staging environment and deploy directly to production because of budget constraints, even though a staging environment that mimics production is critical to the quality of software in production. Creating such an environment also consumes a huge chunk of the total IT budget.
  • Today’s complex application architectures involve multiple hardware and software tools, which require a lot of investment in time, money and resources for coordination, managing SLAs and procurement with multiple vendors. Taken together, all of these add more allocations to the budget.
  • For performance testing, test managers need to set up a huge number of machines in the lab to generate the desired number of virtual users, demanding still more budget from CIOs.

The Case for QA Infrastructure as a Service in the Cloud

All the above challenges push CIOs and test managers to move away from on-premises QA infrastructure and scout for alternatives such as cloud computing for creating and managing QA environments. Organizations are leveraging cloud computing to significantly lower IT infrastructure spend on QA environments while delivering value, quality and an efficient QA lifecycle. Already, many players big and small, such as Amazon, IBM, Skytap, CMT, Joyent and Rackspace, offer QA infrastructure as a service in the cloud. Using this service, organizations can set up QA infrastructure in the cloud, shifting the focus from CAPEX to OPEX. CIOs, too, are able to significantly squeeze both CAPEX and OPEX, thereby meeting the budget cap without compromising on the quality of the solution.

How does it work?

Assume that a QA team needs a highly complex test environment configuration in order to test a new application. Instead of setting up an on-premises QA environment (which requires hardware procurement, setup and maintenance), a QA team member logs in to the QA infrastructure service provider’s self-service portal and:

* Creates an environment template with each tier of the application and the network elements: web server, application servers, load balancer, database and storage. For example, a QA team member can fill in the web server template as “web server with a large instance and Windows Server 2008”.

* Submits the request through the IaaS provider’s portal.

* The service provider provisions this configuration and hardware in minutes and emails the QA team.

* The QA team uses the test environment for the required time and completes the testing.

* The QA team releases the test environment at the end of the testing cycle.

* For subsequent releases, the environment can simply be set up again from the same template, and the QA team can deploy the new code and start testing.

* The service provider bills only for actual usage of the QA environment (a provisioning sketch in code follows).
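
In code, the whole cycle can be surprisingly small. The sketch below is one way it might look, assuming AWS CloudFormation as the provider’s provisioning API and a hypothetical qa_environment.yaml template file; other providers’ portals expose similar create/release calls:

    # Provision and release a templated QA environment (illustrative sketch).
    import boto3

    cfn = boto3.client("cloudformation")

    # One reusable template describes every tier: web, app, load balancer, DB, storage.
    TEMPLATE = open("qa_environment.yaml").read()   # hypothetical template file

    def provision_qa_env(release_tag):
        """Create a fresh QA environment for a release; returns when it is ready."""
        stack_name = f"qa-env-{release_tag}"
        cfn.create_stack(StackName=stack_name, TemplateBody=TEMPLATE)
        # Block until the provider reports the environment is up (the "email" step).
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
        return stack_name

    def release_qa_env(stack_name):
        """Tear the environment down at the end of the cycle; billing stops here."""
        cfn.delete_stack(StackName=stack_name)
        cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)

For the next release, provision_qa_env("r2") rebuilds an identical environment from the same template.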

How does it help?

Elastic and scalable data center with no CAPEX investment: CIOs and test managers don’t have to worry about budgeting, procuring, setting up and maintaining the QA environment. Organizations simply develop their applications, create a template of the required environment and request the service provider, who enables the test environment. The QA team then deploys the application in a production-like environment, saving time and expense over traditional on-premises deployment. This shifts IT infrastructure spending from CAPEX to OPEX.

QA teams can provision their own environments: With this facility, QA teams can provision their own environments on demand, rather than going through a long IT procurement process to set up an on-premises test environment.

Multiple parallel environments: QA teams can create different environments with different platforms and application stacks, with no CAPEX investment in multiple sets of hardware, reducing go-to-market time.

Minimize resource hoarding: Instead of setting up on-premises test environments and sinking capital into hardware, QA teams can deploy environments in the cloud on a need basis and release the resources after testing is complete. Some service providers offer a ‘suspend and resume’ facility, in which case QA teams can suspend an environment, saving its entire state including memory, and resume it at a later stage when required.
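
A minimal sketch of what suspend and resume can look like, assuming the environment runs on AWS EC2 instances tagged Environment=qa (provider-native suspend facilities that also preserve memory state vary by vendor):

    # Suspend/resume an EC2-backed QA environment by tag (illustrative sketch).
    import boto3

    ec2 = boto3.client("ec2")

    def qa_instance_ids():
        """Find all instances belonging to the QA environment via its tag."""
        resp = ec2.describe_instances(
            Filters=[{"Name": "tag:Environment", "Values": ["qa"]}]
        )
        return [i["InstanceId"]
                for r in resp["Reservations"] for i in r["Instances"]]

    def suspend_qa():
        ec2.stop_instances(InstanceIds=qa_instance_ids())   # compute billing stops

    def resume_qa():
        ec2.start_instances(InstanceIds=qa_instance_ids())  # pick up where you left off

Plain stop/start preserves disks but not memory; on EC2, passing Hibernate=True to stop_instances (on hibernation-enabled instances) is the closer analogue of a full ‘suspend’.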

The bottom line: QA environments in the cloud are lifesavers for companies. CIOs are slowly adopting cloud-based QA infrastructure and moving away from on-premises QA infrastructure, which demands huge CAPEX and OPEX and yields less ROI. Cloud-based QA infrastructure, if managed smartly, is a silver bullet that can neutralize most of the challenges CIOs and test managers face with traditional QA infrastructure.

Big Data: The Engine Driving the Next Era of Computing

You are at a conference. Top business honchos are huddled together with their Excel sheets and paraphernalia. The speaker whips out his palmtop and mutters ‘big data’. There follows an impressive hush. Everyone plays along. You feel emboldened to ask, “Can you define it?” Another hush follows. The big daddies of business are momentarily at a loss. Perhaps they can only Google it. You get it? Everyone knows, everyone accepts, that big data is big, but no one really knows how, or why. At any rate, no one knows enough straight off the bat.

In the Beginning was Data. Then data defined the world. Now big data is refining the data-driven world. God is in the last-mile detail. Example: in the number-crunching world of accountancy, intangibles are invading balance sheet values. “Goodwill” is treated as an expense; it morphs into an asset only when it is acquired externally, say through a market transaction. Data scientists now ask: why can’t we classify Amazon’s vast pool of customer data as an “asset”? Think of it as the latest straw in the wind showing how big data is getting bigger.

Big data is getting bigger and bigger because data today is valued as an economic input as well as an output. The time for austerity is past; now is the time for audacity. Ask how. Answer: try crowdsourcing your data-defining skills.

When you were not watching, big data was changing the way the technology enablers play the game in the next era of computing. Applications are doing a lot more for a lot less.

Big data isn’t about bits or even gigabytes. It’s about talent. Used wisely, it helps you take decisions you can trust. Naysayers, of course, see the half-full glass as if it were under threat of spilling over. They insinuate that big data leads to relationships that are unreal. But the reality is that we don’t know what lies behind all that big data. It is, after all, a massy and classy potpourri: part math, part data, with some intuition thrown in. It’s OK if you can’t figure out the math in big data, because it is all wired in the brain, and certainly not fiction or a figment of the imagination.
Just to F5 (we mean refresh…):
You and I can flaunt a dirt-cheap $50 computer the size of your palm AND use the same search analysis software that is run by the obscenely wealthy Google.

Every physical thing is getting connected, somewhere, at some time or the other, in some way or the other. AT&T claims a staggering 20,000% growth in wireless traffic over the past five years. Cisco expects IP traffic to leapfrog ahead and grow four-fold by 2016. And Morgan Stanley breezes through an entire gamut of portfolio analysis, sentiment analysis, predictive analysis, et al. for all its large-scale investments with the help of Hadoop, the top dog for analyzing complex data. Retail giant Amazon uses a million Hadoop clusters to support its affiliate network, risk management, machine learning, website updates and lots more stuff that works for us.

Data critics, though, are valiantly trying to hoist big data with its own petard by demanding proof of its efficacy. Proof? Why? Do we really need to prove that we have never had greater, better-analyzed, more pervasive or more expansively connected computing power and information, at a cheaper price, in the history of the world? Give the lovable data devil its due!

In the Cloud, Don’t KISS.

Remember the Y2K dotcom era, when every Tom, Dick and Harry rushed to ride the Internet bubble? It looks like many of us forgot the lesson the instant Internet 2.0 (or is it 3.0?) made a comeback on a cloud: signing up for cloud services like you are applying for a credit card. The herd mentality, you know.

To get smarter, faster and better, go easy first, and then act with speed. That’s how you win the race. Just because your competitor, your associate or your vendor is moving to the cloud doesn’t mean you mimic them without a second thought. Think before you ink an SLA. Is your CSP (Cloud Service Provider) capable of delivering standards-based cloud solutions designed from the ground up to meet your specific enterprise requirements? Does your Service Level Agreement with your CSP also cover your requirements for monitoring, logging, encryption and security? Do you have the domain-specific IT knowledge and expertise, and the corresponding environment, in place before signing up for a cloud solution? And are your security protocols in optimum functional mode?

Security protocols: keep a hawk’s eye on them. In CIO circles, they warn you not to KISS (Keep It Stupid & Silly) when you sign up for the cloud. KISS refers to common mistakes in an enterprise such as failing to register your passwords and individual IDs with the enterprise, turning a deaf ear to demands for secure Application Programming Interfaces (APIs), and wrongly assuming that you are outsourcing your risk, accountability and compliance obligations to the cloud as well.

The ironic part of this business of securing the cloud is the challenge of arriving at an ideal tradeoff between the enterprise’s need for security and the consumer’s need for privacy. The Economist, in “Keys to the Cloud Castle”, succinctly sums up this dilemma faced by cloud-based internet storage and synchronization providers like Dropbox using a house metaphor. Which do you prefer: access through a master key held by an authorized internal security team, or access whereby you choose your own key? The problem with the former is the key falling into the wrong hands; with the latter, the danger is losing all access if you lose the key through negligence. So cloud security scientists constantly look for a middle path that combines privacy with security.

Does this mean that perfectly secure cloud computing is still a chimera? Happily for us, recent research in cryptography shows homomorphic encryption – an approach that would enable a Web user to send encrypted data to a server in the cloud, which in turn would process it without decrypting it and send back a still-encrypted result – is well on its way to becoming a pursuit of wow among CIOs.
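
To see what “processing without decrypting” means, here is a toy demonstration in Python. It exploits the multiplicative homomorphism of textbook RSA with tiny, utterly insecure numbers; real homomorphic encryption schemes are far more sophisticated, but the principle is the same:

    # Toy demo: the server multiplies ciphertexts, never seeing the plaintexts.
    # Textbook RSA with a tiny key -- for illustration only, NOT secure.
    n, e, d = 3233, 17, 2753          # toy RSA key: n = 61 * 53

    def enc(m): return pow(m, e, n)   # encrypt on the client
    def dec(c): return pow(c, d, n)   # decrypt on the client

    a, b = 12, 5
    c = (enc(a) * enc(b)) % n         # server-side work on encrypted data only
    assert dec(c) == (a * b) % n      # client recovers a*b; server never saw a or b
    print(dec(c))                     # -> 60

Fully homomorphic schemes extend this idea from a single operation to arbitrary computation, which is exactly what makes them so tantalizing for cloud workloads.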

A clearly demarcated delegation of tasks between cloud providers and security providers could serve as a rule of thumb for ensuring both security and privacy. Cloud providers should focus on providing access anywhere, anytime, while security providers should focus on core encryption. An integration of the two services can lead to a seamless and secure user experience. For example, you as a user encrypt your files directly on your laptop, desktop or phone, and then upload the encrypted documents to the cloud.
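
A minimal sketch of that “encrypt locally, then upload” pattern, using Python’s cryptography package; the file name is illustrative, and the upload step is whatever storage API you already use:

    # Encrypt on the client so the cloud only ever stores ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # keep this key OFF the cloud, e.g. in a local vault
    fernet = Fernet(key)

    with open("report.xlsx", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("report.xlsx.enc", "wb") as f:
        f.write(ciphertext)         # upload this file; the provider cannot read it

Note the house metaphor at work: lose the key through negligence and, like the second kind of house, your own data is locked away from you for good.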

Bottom line: don’t sign up for the cloud like you are applying for a credit card. Outsourcing your ideas doesn’t mean you also outsource your thinking.

For Better Cloud Security, Wheel It Differently Instead of Reinventing the Wheel!

SaaS served as sauce? Wow. But only as long as it’s secure, and that’s where the penny drops. No matter. Big money is now way too big on cloud services; we can’t roll back the Age of Participation. The jury may be pondering how secure the cloud is, but the verdict is only going to tweak “how secure is the cloud” into “how to secure the cloud”.

Yes, there is a cloud over the cloud. Less than a year ago, hackers stole six million passwords from the dating site eHarmony and from LinkedIn, fueling the debate over cloud security. Dropbox, a free online service that lets you share documents online, became “a problem child for cloud security” in the words of one cloud services expert.

The “Notorious Nine” threats to cloud computing security, according to the Cloud Security Alliance (CSA), a not-for-profit body, are: data breaches, data loss, account or service traffic hijacking, insecure interfaces and APIs, denial of service, malicious insiders, abuse of cloud services, insufficient due diligence, and shared technology vulnerabilities.

However, a problem is an opportunity in disguise, and so the algorithm waiting to be discovered is how to outsmart the hackers and overcome the threats to cloud security. More so since the advantages that accrue from cloud services – flexibility, scalability and economies of scale, for instance – far outweigh the associated risks.

One route to better cloud security is to use a tried, tested and trusted Cloud Service Provider (CSP) rather than self-design a high-availability data center. A CSP also yields greater economies of scale.
Virtualized servers, though less secure than the physical servers they replace, are getting more secure than before. According to Gartner, 60% of virtualized servers were less secure than the physical servers they replaced in 2012; by 2015, that figure is expected to drop to 30%.

To do the new in cloud security, we could begin by reinventing the old. The traditional layers of data security – logical security, physical security and premises security – also apply to securing the cloud. Logical security protects data using software safeguards such as password access, authentication and authorization, and by ensuring the proper allocation of privileges.

The risk in cloud service offerings arises because a single host running multiple virtual machines may be attacked through one of the guest operating systems, or one guest operating system may be used to attack another. Cloud services are accessed over the Internet and so are also vulnerable to denial-of-service attacks and widespread infrastructure failure.

Traditional security protocols can also be successfully mapped to a cloud environment. For example, traditional physical controls such as firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) and Network Access Control (NAC) products that enforce access control can continue to be critical components of the security architecture. However, these appliances no longer need to be physical pieces of hardware. A virtual firewall, such as Cisco’s virtual security gateway, performs the same functions as a physical firewall but has been virtualized to work with the hypervisor. This is catching on fast: Gartner researchers predict that by 2015, 40% of security controls in data centers will be virtualized.
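
As a concrete illustration of a firewall rule living in software rather than in a rack, here is a sketch using AWS security groups via boto3; the group ID and address range are hypothetical placeholders:

    # A classic firewall rule ("allow HTTPS from the office network"), virtualized.
    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",     # hypothetical security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",   # example office range
                          "Description": "HQ office network"}],
        }],
    )

The control is the same one a physical firewall would enforce; only its form factor has changed, which is precisely the “wheel it differently” point.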

Moral of the cloud: you don’t have to reinvent the wheel to secure the cloud. But we do need to keep talking – to wheel it differently.

Relevance of Knowledge Management for a Testing Center of Excellence

Knowledge management (KM), in simple terms, can be defined as a collection of strategies and practices used to assimilate, articulate, share, sustain, reuse and retire knowledge. Undoubtedly, KM is vital for prolonging the life of knowledge. Many of the beliefs we hold today (from religion to our understanding of how prehistoric humans lived) are strongly influenced by reading and interpreting the documentation (scriptures, carvings and drawings) created by the early KM pioneers. KM very much applies, and is relevant, to the present day. In the new economy, the achievement of sustained competitive advantage depends on an organization’s capacity to develop, deploy and efficiently use its KM strategies.

Every piece of knowledge that is managed through KM is a Knowledge Asset (KA). In today’s organizations, these knowledge assets can include anything from business processes, innovative ideas and lessons learned to FAQs. Industries like manufacturing, logistics and health care have employed KM for centuries. In the modern world, many of us have seen the importance and relevance of KM for operations like the help desk and customer care. But it is equally important and relevant for the whole IT services industry, and particularly for Testing Centers of Excellence (TCoEs).
Many organizations set up TCoEs with the intent of creating a centralized team that can independently verify and validate IT solutions from the business standpoint. While there are many advantages to having a TCoE (such as the sharing of best practices, tools, techniques and standards, which are knowledge assets that can be efficiently managed through KM), the biggest advantage is that this is the team that understands and evaluates IT solutions from the business standpoint. This very premise gives the team a distinct advantage in gaining a deeper understanding of business processes and their interdependence.

So, in my opinion, TCoEs are well equipped to champion KM for the ‘business processes’ knowledge assets (with review and approval from the business). A TCoE can roll ‘business processes’ into its KA portfolio, alongside full life-cycle management for TCoE-related assets (such as processes, innovations, tools, best practices and lessons learned).

In my personal experience, I have set up and worked in many TCoEs, and I have seen a tremendous change in the way TCoEs work over the last 15 years. The challenges TCoEs face include flexing resourcing up and down (just-in-time resourcing) to meet changing business needs, optimizing cost, and improving the quality of delivery. While those challenges are business-driven, there is another that plagues the IT industry, especially offshore companies: constant churn in resources. One solution that can effectively address all of these challenges is the efficient implementation and use of KM.

The following are some of the key points that must be kept in mind for KM (a sketch of one possible KA record follows the list):
1. Define which KAs will be managed through KM.
2. Work closely with the business to capture business processes, and get them reviewed and approved before managing them in KM.
3. Map business processes to IT solutions/architecture so there is a correlation between these KAs.
4. Assign champions for each KA area who are responsible for managing the life cycle of the KA.
5. Use a platform that enables publication and sharing of the KAs (there are many commercial and open source tools that can be customized for this purpose, such as SharePoint and Redmine).
6. Use the KM portal as a key source of knowledge acquisition for new entrants to the project/program.
7. Make sure the KAs are constantly reviewed to ensure they are accurate and up to date.
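
As an illustrative sketch only (the field names are assumptions, not a standard schema), a KA record that makes points 2, 3, 4 and 7 explicit might look like this:

    # One possible shape for a Knowledge Asset record in a KM portal.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class KnowledgeAsset:
        title: str          # e.g. "Order-to-cash business process map"
        category: str       # business process, tool, best practice, lesson learned
        champion: str       # owner of the asset's life cycle (point 4)
        approved_by: str    # business reviewer who signed off (point 2)
        linked_systems: list  # IT solutions this asset maps to (point 3)
        last_reviewed: date   # drives the periodic accuracy check (point 7)

        def is_stale(self, today: date, max_age_days: int = 180) -> bool:
            """Flag assets overdue for their periodic accuracy review."""
            return (today - self.last_reviewed).days > max_age_days

However the record is modeled, the essentials are the same: every asset has a named champion, a business sign-off, and a review clock that never stops ticking.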

The bottom line is that successful TCoEs have a strong KM framework and practices in place, and world-class TCoEs include business process KAs as part of their KM strategy.

I am the CEO. I am the Employee.

No Management. No Subordinates. Is Workplace 2.0 ready to take self-management to its logical conclusion?

How much flatter can an organization get in the context of Workplace 2.0, which hates hierarchy? From the Buck Stops Here to the Buck Starts Here? The knowledge worker of today doesn’t bat an eyelid cracking Dilbert jokes with his boss at the water cooler. All this is the new normal (?), and that’s why management tomes like ‘First, Let’s Fire All the Managers’ fly off the shelves faster than you can actually do that.

All this is oh-so romantic and utopian. But would Apple run by a million CEO-workers have been better than Apple under Steve Jobs? Jobs, according to people who worked closely with him, was more of an autocrat than not. Damn! Not fashionable at all. Jack Welch, too, by all accounts, can hardly be a pinup idol for zero hierarchy. Of course they were all for empowering the down line, but finally, the nuclear button rested with them.

From a historical perspective, there’s nothing “new” about self-management. Peter Drucker, way back in 1954 in his classic ‘The Practice of Management’, called for empowering workers as managers and, more importantly, said it requires new tools and far-reaching changes in traditional thinking and practices.

Now that we have the new tools and think outside the traditional box, more and more companies are putting the Drucker test into practice. Gary Hamel, “the world’s most influential business thinker” according to the Wall Street Journal and author of the recently released What Matters Now, showed in an analysis published in the Harvard Business Review how Morning Star, a leading food processor, had made it to the top without bosses, titles or promotions.
The California-based Morning Star, the world’s largest tomato processor, handling nearly 30% of the tomatoes processed each year in the United States, with annual revenues of $700 million, is not just a Drucker dream come true but a company that has registered phenomenal success. And that’s why it makes you pause and ponder. Is it really possible for an organization to become a global market leader with no CEO, where everyone can spend the company’s money, and where each employee is responsible for acquiring the tools needed to do his or her work?

As Hamel explains in his case study, Morning Star is built on the edifice of five fundamentals: make the mission the boss; let employees forge agreements; empower everyone – truly; don’t force people into boxes; and encourage competition for impact, not for promotions.

So why aren’t we seeing more Morning Stars shine bright? If self-management has evolved from a nice-sounding slogan into a sound, workable business strategy that can take organizations to the pinnacle, why is it still unfamiliar to most of us? The view from the cubicle (and even from the CEO’s penthouse) is that self-management is an idea whose time has come as much as it is an idea yet to really take off. Contradictory? Yes, but check your premises – and we mean your company premises. People are not ready yet. Paul Green Jr., who helps run the Self-Management Institute launched by Morning Star, is on record admitting that not even one company to his knowledge has “fired all the managers all the way” à la Morning Star and hit pay dirt.

Self-management à la Morning Star is still an outlier. Of course organizations swear by empowering people, and are eager and willing to go the extra mile to bat for a minimalist hierarchy, but as to whether to go the whole hog, they must wait for a nod from their Board. It’s still for the boss to decide whether he or she can be fired…
A truer answer would perhaps be to go for the middle ground: go for self-management, and retain the boss. You can’t win Wimbledon on the strength of your forehand alone…