
A Flexible Delivery Model


Recently, I was participating in a conversation with a CEO group regarding consulting and how few companies can deliver what they commit to. Somewhere in the conversation, I mentioned the importance of delivery flexibility, and one CEO challenged me to explain what a flexible delivery model really was. I answered, and we moved on to a very productive conversation.

But later I started to think through my answer, and I began to ask myself what a flexible delivery model really means, and why it really matters to successful delivery. So in this blog, I am going to explore what I think a flexible delivery model is, what types of delivery methods make up a flexible model, and if/how the model can help. I hope it is informative and helpful. I also hope you will provide your feedback and thoughts. So with that, let’s get to it.

What is a Flexible Delivery Model?

The simplest explanation of a flexible delivery model is that you change the way you deliver a project depending on the client’s needs and the project dynamics. You use different methods to deliver a project based on its business requirements and technical demands. While this is true, and it makes complete sense, it really does not go far enough.

Flexible delivery goes beyond just changing the method of delivery. A flexible delivery model also refers to what type of project structure you will use, or more accurately, to how you build the team and assign resources. Having flexibility in how you build and engage the team is a key component of delivery flexibility. Delivery flexibility also encompasses how you bill, or charge back, a project.

Depending on the needs of the customer or internal departments, you may decide to bill, or charge back, using various methods. By providing this flexibility, you align cost with actual behavior, and you may enable the client to begin a project that would otherwise be rejected or put on hold. So financial flexibility is part of overall delivery flexibility. In fact, all of these individual elements are components of a flexible delivery model.

In short, a flexible delivery model refers to how you attack solving the problem, how you build your team to deliver the solution, and how best to charge those costs to the client (whether internal or external).

What types of basic delivery options are there?

There are five types of delivery options that can be used in a flexible delivery model. The delivery options are based on how the delivery team is structured. They are:

  • Assigned Resource(s) – One or more resources are assigned to a client or project for an extended period of time. An example would be assigning three Workday resources to a two-year implementation. There are various billing options, such as hourly, daily, weekly, monthly, or quarterly rates.
  • Pooled Resources – An allocation amount of effort (usually in terms of man hours) are agreed to for an extend amount of time. An example would be when you hire a company to provide one thousand hours of Java development per quarter for the next three years. The vendor will allocate those hours to you and ensure that the resources are available as planned.
  • On-Demand Resources – Resources are provided as needed. This is usually to provide temporary support for peak times. For example, you may buy a support contract for 100 hours a month to cover any network outages. Or you may pay for 100 hours of call center support to cover those times where there are more than 10 calls in a queue. This is usually paid on a per usage basis or as a managed contract with a minimum amount per period (weekly, monthly, or quarterly). Companies who have seasonal fluctuations tend to need elastic resources to manage changing demand levels.

They will forecast their needs in advance and schedule resources to cover those forecasts. Once they know if they need more or less than the forecasting models predicted, they will rapidly increase or decrease resource loads.

  • Elastic Resources – Elastic resources allow you to spin resources up or tear them down within a short period. Think of a resource cloud: you add more resources as you need them, and if you ever have too much bandwidth, you reduce the team size. Staffing is one way of doing this, but it is not an efficient way. Staffing takes a while to get someone started, and you do not want to get a reputation for treating contractors badly by eliminating them quickly. There are better ways to provide the elasticity.

Companies with seasonal fluctuations tend to need elastic resources to manage changing demand levels. Most of the companies that are effective at this model keep someone at least part time at the client so they understand the culture and requirements.

They are, in effect, the engagement manager and the HR manager. When a client needs to add a new person, all they need to do is contact the onsite manager with an authorization. The onsite lead will coordinate identifying the right person and starting them on the project quickly, and will transfer all the relevant customer culture and procedural information to the new resource.

  • Usage-Based Resources – Think of a resource as a service. Rather than paying for resources directly, you pay by the task, and the vendor charges you a price for the service. The vendor then becomes responsible for managing to the service cost and SLA. In a sense, fixed-price projects fit this model.

This can be a good model for companies that are mature and have a handle on their cost accounting. Vendors like this model because it allows them to manage for better efficiency. It is a shared-risk model.

What types of project delivery are there?

There are also at least six types of project delivery methods. They are:

  • Augmentation – The vendor provides their resources to augment your team
  • Outsourcing – The vendor assumes responsibility and they become your team
  • Project Delivery – The vendor uses their resources to deliver the outcome you need
  • Product Development – The vendor team manages the development of, the QA for, or both development and QA for a recurring release schedule. This is a must-deliver scenario: the vendor is charging a set price for a set of agreed-to functionality on a recurring basis.

This becomes a shared-risk model, but there must be reliability, as the company is marketing the functionality in a release well before it ships.

  • Managed Services – The vendor provides a certain number of resource units per specified time period (e.g., 30 hours a month for network support). This is typically used to manage ongoing systems and applications. Help desk services are the most common of the managed services categories. Managed services are also moving into the …as a service model, for example, mobility as a service, IT as a service, or procurement as a service.
  • Block of Hours – Much like a managed service, the client buys a certain number of units for a specified period. However, there is no regular interval for the hours; they are used when needed. For example, if a client buys 5,000 hours of development time for a six-month period, they may use all of the hours in the first month, the last month, or anywhere in between.
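To make the contrast between those last two options concrete, here is a toy sketch in Python. All rates and hour counts are hypothetical illustrations, not real pricing:

```python
# Toy comparison of two billing structures: a managed service
# (fixed allotment per period) versus a prepaid block of hours.
# All numbers are hypothetical, purely for illustration.

def managed_service_cost(rate_per_hour, hours_per_month, months):
    """Fixed allotment every period, billed whether fully used or not."""
    return rate_per_hour * hours_per_month * months

def block_of_hours_cost(rate_per_hour, hours_purchased):
    """A prepaid pool of hours, drawn down whenever needed."""
    return rate_per_hour * hours_purchased

# Same 180 total hours at the same rate: the cost is identical,
# but the managed service meters them out at 30 hours a month,
# while the block can be consumed in any pattern within the term.
print(managed_service_cost(150, 30, 6))   # 27000
print(block_of_hours_cost(150, 180))      # 27000
```

The point of the sketch is that the financial flexibility discussed above is mostly about *when* units are consumed, not the total spend.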

Does a Flexible Delivery Model even matter?

For years I have been touting a flexible delivery model as key to delivery success. Was I right? Well, to give a consulting answer: it depends. It will help in some cases, and in others it will not. When you get right down to it, a partner’s delivery model does not matter one iota if they do not have quality resources.

It always comes down to the people. If you have good ones, you will be a good delivery company. If you do not, you will never be good at delivery no matter what model or methodology you use. So, if you do not have good consultants, a flexible delivery model is irrelevant.

But what about those companies that do have good consultants? Will a flexible delivery model help them, and if so, how? I believe it will, for several reasons.

  • First, it allows you to match the project demands and the client’s capabilities with the best way to meet those demands. If the project has periods with no work followed by rapid deployments, then an elastic delivery model is perfect. A vendor familiar with this model will have the appropriate tools to help predict the demand and processes for starting up a team quickly.
  • Second, a flexible delivery model will balance cost and risk. For example, if a company faces high risk because it has no experience with application development projects in a new technology, it can use a fixed-price, assigned-resource approach to offset that risk. It may even be able to use a product development approach. This will cost a little more than the other methods, but it will reduce, if not eliminate, some of the risk.
  • Third, a flexible delivery model can help clients get projects to market faster, and it may allow them to do more projects overall. I recently had a client who wanted to outsource a project, but there were severe accounting constraints on capital expenditures. It was an important support contract, and the CIO was in a bind looking for a solution. We sat down together and looked at various delivery options. We finally found one that met his budget, his goals, and his accounting requirements. One of my partners signed a three-year support contract with him, and everyone is happy. But without having a lot of options available to me, I probably would never have found the right one for the client.

So in summary, I do think that a flexible delivery model can be helpful, assuming the delivery company has good people, processes, and methodologies. It allows the delivery team to customize an approach to meet all the needs of the client organization.

Thanks for reading this. I hope it was entertaining and helpful. Please let me know if you have anything you would like to add, or you just have feedback or comments. Until next time, I wish you health, happiness, and prosperity.

Tommy Simon
President & COO
Care Analytics

Care Analytics Moves Into New Offices In Medellin, Colombia


We are happy to announce that we have moved our operations to a brand new office in one of the most exclusive areas of Medellin, Colombia, the so-called Golden Mile. This is an excellent location that will allow us to welcome our customers, fulfill our company’s goals, and further develop our corporate culture.

Julio Aconcha – Solutions Architect – Colombia
“The fact that we have such a splendid environment will empower our team to reach higher levels of quality in the services we provide to our customers, by ensuring that we have an optimal space that enhances our resources’ effectiveness.”

Care Analytics is a subsidiary of TEAM International and was launched in recognition of current technology trends and the need for expertise in cloud computing, applications, migration, and more. It is a US-owned healthcare cloud solutions and analytics partner, specializing in migrating, building, integrating, and managing applications in the cloud, specifically in the healthcare industry.

Our brand-new office, which we share with TEAM International, was unveiled on August 23rd. With an outstanding view of the city, our offices offer a modern design and a comfortable style, and they sit in the epicenter of the business district, with easy access to transport and commercial centers.

Our office has space for more than one hundred employees, kitchen amenities, amusement areas, conference rooms, TVs, bean bags, electronic entertainment, and much more.

Andres Gómez – Operational Manager – Colombia
“We have created a place where our passion for technology, our keen interest in creating new relationships, and our tenacity in meeting our current business partners’ needs are reflected.”


We had the pleasure of hosting as guests other innovative institutions and companies in the IT sector, as well as the president of TEAM International, Chris Walton, and our Board Chairman, Matt Moore. The opening was a complete success, and we gave a warm welcome to all those who are part of our Care Analytics family.

Today, our operations are running smoothly. Our president, Tommy Simon, visited on October 16th and reviewed the last details of Care Analytics’ operation in the new location; with his final approval, we could mark this chapter as complete.

Tommy Simon – President of Care Analytics
“Our history is one of people, whether it is the employees that work for us, you as the customers that we serve, the caregivers that we support, the community as a whole, or the most important person, the patient whose life we can help change by giving them better outcomes. Our new location will allow us to better fulfill the needs of the people who have shaped our history.”


Our Care Analytics family has become better and will keep on growing to continue to support all of our clients’ needs.

Is The Cloud For Everyone?


So, a funny thing happened to me at the airport the other day, and it dawned on me that something I had thought was common knowledge within the community of people I will label “people who run computers” was not so common after all. The common knowledge, or so I thought, was that the Cloud is for everyone.

Now, when I say “people who run computers,” I am not referring to the masses of us who have figured out that the little Android helpers in our pockets, or the bedazzled sequin Siris in our purses, are little computers. No, I am referring to those people who, for one unknown reason or another, have decided to make computers the focus of their professional lives. I mean the people who endeavor to keep our global thirst for ever-increasing information processing and storage running, even when we wake up at 3:00 am and decide to embark on an electronic version of the “Midnight Munchies.”

So there I was, at the airport, listening to two of these very people, who were contemplating the 21st-century electronic equivalent of the philosophical “Who am I, and why am I here?”, the question being, “Why do I need the Cloud?”

It was a fascinating question, and one that many people are currently asking, given that you cannot be awake for a whole day without coming across some small reminder of someone, somewhere, doing something “in the Cloud.”

After hearing the younger man, probably in his thirties, explaining to the older man how he prefers “Azure to AWS,” I grew more interested in the conversation. Most people would think they were discussing a chocolate bar, or perhaps the latest fitness routine, but they were, in fact, discussing “the Cloud” and the merits of Microsoft as a provider of cloud services as opposed to Amazon’s cloud platform, more commonly referred to as “Amazon Web Services.”

Care Analytics is an AWS partner. 

It wasn’t until the older gentleman voiced the comment, “Why do I need a Cloud full of servers when I already have a cupboard full?”, that I was busted, as my smile must have visibly shown that I was eavesdropping on the conversation. So I deflected and warmly said: “But EVERYONE needs the Cloud.”

For the next hour, while I waited for my flight, we had a great conversation that covered every aspect of how the older gentleman had purchased servers from Dell several years back for his call center, only to find that the relatively large purchase was just the beginning of his journey. Even though he considered himself a technical person, he soon discovered that he needed a pretty sophisticated router setup to connect all of the servers and give them access to the Internet. He initially thought he could use a device similar to the one he had been using in his home office. He even admitted that in less than two years he had maxed out the connections and processing power of his router after winning a massive timeshare contract for his call center business, and had to go to the “next level” of router, which was $1,500, even on eBay.

I asked him if he had set that router up himself, and he said no, as it was a little more complicated; he had paid a friend of his son $300 for two days of work setting it up.

I explained how all of that is built into the Cloud and Amazon Web Services for literal pennies, as it comes with your very first deployed computer, or “instance,” as Amazon likes to call them.

So Rick (as I came to know him) continued to tell me all the tales of woe that he had managed to find himself in over the several years since his purchase. They had been a de facto education in computers, forced upon him by the simple business evolution he had undergone during that time.

From routers to network cables, from power supplies to power losses, poor Rick had no end of battle scars.

He even had to replace four servers after a flood, because some of them were not in a rack and he managed to get ten inches of water into his “cupboard” one time. His “cupboard,” by the way, was a small office within his call center that his company had re-purposed as a “temporary” computer room that never made it out of the “temporary” category.

Clearly, Rick was not going into competitive business with Amazon anytime soon.

I had so much fun with Rick that the hour flew by, and he gave me so many reasons for the comment that I left him with as I got up to leave for my flight.

Looking over my shoulder as I started to walk away, I winked at Rick and said, “You know, Rick, for a guy that did not need the Cloud, you sure seem like you need it to me!”

As I got almost out of earshot, Rick shouted: “How much is Amazon’s Cloud? It sounds expensive!”

“It can be,” I fired back, “but the first year is FREE!”

I strolled on, not sure if I had helped poor Rick, whether my comments had sunk in, or whether I had left him with a new conclusion about if, or why, he may just have a need for “the Cloud.”

So I end with the beginning of my blog entry, and perhaps a small contribution to the question, “Is the Cloud really for everyone?”

Well, maybe, and maybe not, but certainly it was for Rick, so Rick, if you are reading this, thank you for making my airport wait time so entertaining.

For anyone else reading this blog entry, is the Cloud for everyone, including you?


If you’re not sure, Contact Us.


~ Mark Richards

AWS Solutions Architect – Care Analytics


The Evolution Of Computer Storage


So, nope, I am not talking about where you keep your laptop or your iPhone; I am talking about computer storage, the “hard disk,” the one that is in your laptop or iPhone. It is just another one of those things that we all take for granted in today’s world of IoT (the Internet of Things) and the Cloud.

But here is the history of how it came to be a Terabyte in your pocket. For that matter, here is even a history of what a Terabyte really is, depending on whom you talk to.

Of course, the Terabyte in your pocket wasn’t always that way, but just like rings counted in a freshly cut tree trunk, if you have been around the block a few times, you can almost tell someone’s age by the first disk or device that they remember.

For me, that is ancient … I can start at the very beginning, which is before the hard disk even existed.

I wondered the other day just how many of our bright young generation actually know what a “floppy disk” was, or is. And this led me to write this blog so that we can remember the “good ole days,” the days of vinyl 78s, Betamax, and a shiny ’62 Chevy … well, actually I digress, I am really not THAT old … I just wanted to see if you were paying attention.

So let’s get back to the floppy disk. The very first floppy disk I saw was in 1976, so there you go, I really aged myself there didn’t I?

Anyway, in 1976, I remember being amazed at the floppy disk. I was still in school, and I spent my early days playing with everything that remotely resembled a technical toy and, of course, any computer that I could find and dismantle to figure out how it worked, and sometimes didn’t, after breaking more than a few. But it was worth it, and it was very educational, just an expensive hobby at the time. It was good I found a job collecting old bike parts and building bikes to sell, because I am pretty sure that after about five broken computers, my parents would have given up funding that exercise. I am certain they would never have understood that to “fix” a computer there was a need to open it up and look inside, and sometimes even to “break” it.

Anyway, I had some good grounding in the early days. I had worked with punch tape before floppy disks, and in the days when I really had nothing else to impress the girls with, there was always my fallback trick of actually being able to look at eight holes on a paper tape and tell them what letter it was. I don’t say this to impress, because back then that’s all I had, but to tell you that in the beginning, there were eight holes. Those eight holes, in all their combinations, represented all of the characters of a typewriter and more, including some special characters that the computer the paper tapes were fed into could “read.” Now pay attention, this is not the boring bit, this is the beginning….

Each one of those rows of holes represented a character, or a “byte,” of information. If holes one through three were punched, it was a certain letter, and if there was no hole where hole two might have been, it was another letter.
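That hole-pattern-to-character mapping is essentially what character encodings still do today. A minimal sketch, treating each row of eight holes as a byte and decoding it as ASCII (the example rows are my own, purely for illustration):

```python
# A punched row of eight holes read as a byte: hole = 1, no hole = 0.
# These rows are a hypothetical example, decoded here as ASCII.
rows = [
    "01001000",  # holes at certain positions form one letter...
    "01001001",  # ...a different hole pattern forms another...
    "00100001",  # ...and some patterns are punctuation.
]

word = ""
for row in rows:
    byte = int(row, 2)      # interpret the eight holes as one byte
    word += chr(byte)       # decode that byte as an ASCII character

print(word)                 # HI!
```

Reading the tape by eye, as described above, is exactly this loop done in your head.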

As the days of punch tape and punch cards gave way to the first hard drives, the original makers became enormous corporations very quickly.

Companies like IBM and Western Digital are both still well-known names today, even 50 years later, and they both owe their fortune and fame to the original hard drives.

It took a while, and I don’t remember the very first drive, though I know it was called a Winchester. For the life of me, I still don’t know why you would name a hard drive after a gun; being British, that may be lost on me. I guess it’s the same reason why Google today gives Android operating system releases names like Cupcake and Gingerbread. Strange people, computer people.

So back to the floppy, and focus!

I don’t remember the 3340 model Winchester disk, but I did start work on the 3370, its successor. I think that was in 1979. I remember they were the size of a fridge, and I think I even remember that they cost tens of thousands of dollars, or good old “pounds sterling,” as they were to me in Britain then.

That was about the time I also came across an 8-inch floppy disk, and I remember reading the label as if it were yesterday: it held 102 kilobytes of information.

So, this is the point where I have to digress, or you’ll get bored by the numbers. But stay with me, because this will all become relevant; there is madness in my method!

I will jump in and out of the floppy drive and hard drive in this blog post, but the storage concept is the same for both.

So I think most of us know that there are 8 “bits” in a byte. If you don’t, but you were paying attention above (see, don’t skip), you will have found that those eight “bits” represent a single character.

Now, a computer works with those bits and sees them on magnetic storage (whether floppy or hard disk) as ones and zeros. Or, to be more precise, the magnetism is there to be used by the computer to “switch” on and off the precise dot on the disk that represents exactly one-eighth of a character. So if it reads eight “dots,” or “bits,” in a row, it then moves those bits into memory to perform a task with them.

So, now we know – there are 8 “bits” in a “Byte.”

I suppose I should tell you that half a “byte,” four bits, is a “nibble.” Not that there is any relevance at all, except that I really think you should know that. You never know when that may save your life, or at the very least make you the next Slumdog Millionaire.

Ok, so I feel that you are getting this, so we must go faster now that the training wheels are off….

So, even though 8-inch floppy disks had been around for several years prior, the public never saw them, and to all intents and purposes, the first floppy disk anyone outside of the boffins with the pencils in their pockets and the broken glasses ever saw was the 5 ¼-inch diskette, introduced in 1976.

For most people, it stored 360 kilobytes or 360,000 characters.

It was shortly after that the disk manufacturers figured out not only how to make a “double density” floppy disk, but those clever little beggars had also written to the magnetic medium on both sides. So now we were looking at a whopping 1024 kilobytes, and we entered a new realm: the Megabyte!

Most people think of “mega” as “big” or “magnificent”; in terms of computers, it represents a million bytes, or a thousand kilobytes.

I won’t go into the “1024” versus “1000” thing in depth … because then I really will bore you. So we forge ahead!
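For the curious, the “1024” versus “1000” distinction is just powers of two versus powers of ten, and the gap between them is small but real:

```python
# SI (decimal) prefixes count in powers of 1000;
# the binary convention counts in powers of 1024 (2**10).
decimal_megabyte = 1000 ** 2    # 1,000,000 bytes
binary_megabyte = 1024 ** 2     # 1,048,576 bytes (strictly, a "mebibyte")

# The gap is why a marketed "1 MB" can mean two slightly different sizes.
print(binary_megabyte - decimal_megabyte)   # 48576
```

That roughly 5% gap grows at every step up the ladder, which is why a drive labeled in decimal gigabytes always looks a little smaller once your operating system counts in binary.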

So in 1980, at the same time that Apple brought out their floppy disk and the Apple II disk (I had, and broke, one of them), Atari had their own floppy disk version and computer (I had, and broke, two of them). There were multiple versions and iterations, but over a period of five years a standard evolved where the 5 ¼-inch “floppy disk,” or “diskette” as it came to be known, went from 100 kilobytes in the lab version to a retail version that was double-sided, double-density, and contained the first and famous “Megabyte.”

But as quickly as it arrived, the media companies were constantly seeking more capacity and more creative ways to achieve that goal. It wasn’t long until the inevitable happened, and they made it smaller.

They didn’t make it smaller in the amount it held, though they did shrink it to a 3 ½-inch drive by 1983. What’s more, they had now made the “floppy disk” NOT SO FLOPPY!

The 3 ½-inch disk was born. It wasn’t very floppy; in fact, it was not floppy at all, but the name stuck, even though the magnetic disk was encased in hard plastic and, well, no longer “flopped.”

So here we are, from 1983 to 1986, when multiple versions of the 3 ½-inch diskette arrived: first a 720 KB version, and then the most popular version of all time, the 1.44 MB diskette.

I know that one well because I walked around the office with my “boot disk” in my pocket. I could see in those days, so I never did the broken-glasses thing, but I do vaguely remember having a pocket full of pencils, if that means anything to you.

In 1983, as the floppy was evolving, so was the hard drive. I remember being shocked that the 5 MB hard disk was quickly replaced with a 10 MB version. The good ole floppy, or diskette, was still 1.44 MB; to this day they are still available on eBay. They did eventually evolve to a higher density, the usual doubling-of-technology effect, which made some of the 3 ½-inch disks hold as much as 2.8 MB. Not many of the media companies’ R&D dollars went into the floppy diskette after 1985, because the media industry had invented a new toy: the CD-ROM!

Ok, so here we go again…. In 1991, “CD-ROMs” came out, so called because they were “Compact Discs” with ROM meaning “Read-Only Memory.”

Unlike the floppy disks and even the hard disks, CD-ROMs were no longer magnetic media; they were optical. This meant that although they were still switching bits on and off to represent binary data, the drives now did this optically, using very small laser light technology built into the drives that could pinpoint a very tiny area on the disc and change a single bit.

The 10 MB hard drive was also galloping ahead with technology improvements at this time, from 10 to 20, and 20 to 40, and so on. By 1991, IBM had introduced the first 1,000-megabyte drive, thus entering the age of the “Gigabyte.” How do I know this?

Because I paid 1,000 dollars for it in 1992! Ssh, don’t tell my wife!

How did they get from the late-’80s 40 MB drive to that magic 1 Gigabyte number and density by 1992? By that time they had managed to build drives that were essentially eight disks, or eight “platters,” on a spindle, while also doubling the density of the magnetic medium. More “bits per inch,” as the industry calls it.

From this point forward, the competition to produce drives really started heating up. Huge companies like Fujitsu, Maxtor, Quantum, Hitachi, IBM, Seagate, and Western Digital were in a never-ending race to be the biggest or the fastest drive maker in the industry, all the while reducing cost and doubling capacity.

In 1992, Seagate was the first to market with the 2.1 GB Barracuda drive, able to spin its disks at a whopping 7,200 rpm to get the fastest read and write capability of its time.

Not to be outdone, IBM was working on technology that would soon cross the billion-bits-per-inch marker, an astonishing feat back then, requiring clean rooms without a single speck of dust for fear of contamination and costing hundreds of millions of R&D dollars.

The leapfrog game in 1996 again went to Seagate, who had by then reached a drive capable of spinning at 10,000 rpm with the Cheetah family of disk drives, only to be outdone in 1997 by IBM, once again bringing a technology leap with magnetoresistive heads (yes, that is a real word) and boosting capacity to a huge 16.8 gigabytes.

In the year 2000, the big guys figured if you can’t beat them, buy them!

Maxtor bought Quantum, the number two drive maker in the world, to surpass Seagate as number one.

Seagate, not to be outdone, introduced an even faster drive in 2002: the 15,000 rpm Cheetah. The fastest drive in the world could read the disk and retrieve information in an average seek time of 3.9 milliseconds while pulling 48 MB every second from the drive. I never did really understand why it gave a “more enjoyable online experience,” as it was marketed; they probably just got fed up with saying it was “fast” all the time.

I feel like this is a good time to take a break and get an update on just how big these drives can get.

So … we established earlier that there are 1024 kilobytes in a megabyte, and we also touched on the Gigabyte, being a thousand megabytes.

But, did you know….

That 1,000 Gigabytes is a Terabyte (or 10^12 bytes)?

And 1,000 Terabytes is a Petabyte? 1,000 Petabytes being an Exabyte …

And 1,000 Exabytes is a Zettabyte?

But I bet you knew that 1,000 Zettabytes is a Yottabyte – right?
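The whole ladder of prefixes above can be generated mechanically; a small sketch, using the decimal (power-of-1000) convention this post uses:

```python
# Each prefix in the ladder is 1,000x (three powers of ten) above the one before it.
prefixes = ["Kilo", "Mega", "Giga", "Tera", "Peta", "Exa", "Zetta", "Yotta"]

for power, name in enumerate(prefixes, start=1):
    print(f"1 {name}byte = 10^{3 * power} bytes")
# The first line printed is "1 Kilobyte = 10^3 bytes";
# the last is "1 Yottabyte = 10^24 bytes".
```

So the Yottabyte at the top of the ladder is a trillion trillion bytes, eight factors of a thousand above a single byte.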

It wasn’t until 2007 that we saw the very first 1 Terabyte hard drive from Hitachi.

In the following four years we saw the leapfrogging continue, with Seagate and Western Digital increasing the capacity to 1.5 Tb, 2.0 Tb, 3.0 Tb respectively, and in 2011 Seagate finally trumped WD with the 4 Tb drive.

It is no wonder then that IBM decided that the playing field got too crowded, and decided to sell its hard drive business to Hitachi for an “undisclosed sum” in 2012.

The merger of the IBM intellectual property with Hitachi produced a new heavyweight to combat Western Digital and Seagate.

Hitachi announced the first 6 TB drive in 2014, and claimed the crown once again in 2015 with the world’s first 10 TB drive, the helium-filled Ultrastar He10.

The drive reportedly has an average of 2.5 million hours between failures, which is about four times more time than the average human has. If you ever wanted to bury a boatload of data in a time capsule for the future benefit of humankind, this is certainly the drive for you.

The sad part is, this drive sells today for under $500, or half of what I paid for that 1 GB drive in 1992. No, don’t tell the wife.

Sixty years ago, data storage cost $640 per megabyte, per month. At IBM’s 1956 rates for storage, a new iPhone 7 would cost you about $20.5 million – a month.
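That monthly figure is straight multiplication; a back-of-the-envelope sketch (assuming the 32 GB base model of the iPhone 7, which the post does not actually specify):

```python
# 1956 IBM storage pricing applied to a modern phone's capacity.
# The 32 GB base-model iPhone 7 is my assumption, not stated in the post.
cost_per_mb_month = 640                 # dollars per megabyte per month, 1956
iphone_capacity_mb = 32 * 1000          # 32 GB in decimal megabytes

monthly_bill = cost_per_mb_month * iphone_capacity_mb
print(f"${monthly_bill:,} per month")   # $20,480,000 per month
```

That comes to $20,480,000, which rounds to the "about $20.5 million" quoted above.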

In 1965, the US Government planned the world’s first data center to store 742 million tax returns and 175 million sets of fingerprints on magnetic tape. None of that magnetic tape could have survived to today, yet you could store all those records on your iPhone.

These days you can fit 2 TB onto an SD card the size of a postage stamp.

So where do we go from here? Who knows; with Virtual Reality finally a reality, and every hour from James Cameron’s new 800 mm stereoscopic 3D movie cameras for Avatar containing more information than the Library of Congress, no one really does.

But I do know that I have spun up 30 computers in the last week on Amazon’s cloud computing platform, with a total of 5 terabytes of storage, and all without leaving my desk.

Care Analytics is an AWS partner.

If you are looking for more storage and considering the Cloud, Contact Us.

~ Mark Richards

AWS Solutions Architect – Care Analytics