SDLC Series: NoOps Primer

Team Polyrific

If you haven't already heard the term "NoOps" as it pertains to enterprise software development and delivery, you probably will soon. NoOps is an emerging movement that seeks to relieve a bottleneck created by traditional IT operations and on-premise application hosting by utilizing solutions rooted in automation and cloud-based infrastructure. At Polyrific, we have developed an outstanding NoOps solution called Catapult and we offer this article in hopes that it helps you better understand why Catapult is such a big deal.

From DevOps to NoOps

Perhaps the best way to begin understanding the NoOps movement is to first understand the DevOps movement. The term "DevOps" is an amalgamation of "Development" and "Operations" and refers to the interplay between software developers and IT operations during the process of deploying applications to the world. In every enterprise, it is necessary for these two departments to stay close to one another in order to best serve the needs of the business.

At most enterprises, responsibilities for developers generally include the following:

  • Work with stakeholders to understand the needs of the business
  • Distill those needs into requirements and specifications
  • Develop applications that fulfill said requirements

By contrast, IT operations is generally responsible for the infrastructure side of the house:

  • Allocation & management of server resources
  • Fault planning & monitoring
  • Security & compliance
  • Device management

Obviously, applications that are developed to suit the needs of the business have to be deployed somewhere so that they can be consumed, and this is where the interplay between the developers and IT operations managers comes in: they must work together to take the developers' work and deploy it to the world on their enterprise's resources. This would make perfect sense if the picture were really so simple but, as we will see in the next section, reality is a bit more complicated.

Agile & Continuous Deployment

In the early days of enterprise software solutions, very few enterprises created custom software solutions or applications of their own. However, as workplace environments have become more dynamic and reliant on smart hardware and software solutions, the demand for rapid release of custom software applications has grown dramatically. The Agile movement was largely a response to this exponential growth in application demand, and it is founded on principles inspired by the Silicon Valley "fail fast & fail early" philosophy. Gone are the days of months of planning, tedious software architecture design, and waterfall release schedules that flowed into a deployment phase given equal weight by the IT operations team. Today's software development teams are expected to respond immediately to a seemingly never-ending stream of features and demands requested by the business.

Often, projects are started as bare-bones applications that are immediately thrust into production environments where they will be constantly updated and expanded upon as the business requirements evolve. This sounds great, but it presents a few challenges to software development and IT operations teams, especially with regards to quality of the end-user experience and application uptime. To counter this, the development and ops teams employ a set of automation tools and checkpoints, collectively referred to as "Continuous Integration" or "Continuous Deployment," that smooth out the problems caused by rapid iterations in the software development life cycle. For example, when properly configured, a CI pipeline can trigger a series of automated tests whenever a developer checks in new code to ensure that the new code does not break anything or cause "regression" bugs.
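The check-in-triggers-tests flow can be sketched as a tiny gate function. This is a toy illustration, not any particular CI product's API; the function, stage names, and checks below are all invented stand-ins for real unit and regression test suites.

```python
# Minimal sketch of CI gating: every check-in runs a series of automated
# checks, and the change is only promoted when all of them pass.

def run_ci_pipeline(change_id, checks):
    """Run each named check against a change; stop at the first failure."""
    for name, check in checks:
        if not check(change_id):
            return {"change": change_id, "status": "rejected", "failed": name}
    return {"change": change_id, "status": "promoted", "failed": None}

# Two hypothetical checks; the second simulates a regression suite that
# catches a change which breaks the login flow.
checks = [
    ("unit-tests", lambda change: True),
    ("regression-tests", lambda change: change != "breaks-login"),
]

print(run_ci_pipeline("feature-42", checks)["status"])    # promoted
print(run_ci_pipeline("breaks-login", checks)["failed"])  # regression-tests
```

The point of the gate is that promotion is automatic when checks pass and impossible when they fail, with no human coordination in between.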

The (Traditional) IT Bottleneck

IT operations experts are fantastic but, in our view, their role is best executed when the evidence of their work is everywhere while their presence is not so apparent. A good server at a restaurant will keep your glass full and your food coming without you noticing them much at all, and it should be the same with IT operations managers. Sometimes--often through no fault of their own--this is not the case. Without considerable depth of automation in your software development life cycle (SDLC), it becomes necessary for the development team to spend significantly more time with the IT operations team in order to coordinate downtime, deployments, rollbacks, and so forth. This is especially true in the case of on-premise deployments. This close coupling between IT ops folk and the developers is bad for at least three reasons:

  1. It takes the developer's focus away from understanding the needs of the business stakeholders
  2. It cuts into development time
  3. It can influence the engineering and delivery schedule of the application

Given the above, you can probably start to see where this is headed: interaction between development and IT operations should be automated so that the software engineers can remain focused on what they do best: delivering application-based solutions that serve the immediate needs of the business.

NoOps Produces Better Outcomes

So, in order to respond to the ever-changing demands of the business, development teams must be capable of quickly organizing the stakeholders' needs into business requirements and then parlaying those requirements into working code that is tested, quality assured, accepted by the end user, and deployed into a production environment on a frequent and recurring basis, all without being slowed down or distracted by hardware and deployment challenges on the IT ops side of things. Does this mean that IT operations professionals must be removed from the SDLC? Of course not. What it does mean is that IT operations personnel should join forces with the developers to implement game-changing solutions that help to automate the business of getting the developers' changes into production with very little interfacing required between development and operations.

In a NoOps world, developers don't check with IT operations before deploying code or to schedule downtime. In fact, they don't deploy code at all--they simply check their changes into source control and the rest happens automatically, behind the scenes, just like the server who always keeps your drink full without your noticing they were there at all. Similarly, developers do not need to request allocation of new resources from the IT department. They can, in theory, "spin up" a new ecosystem of server and database environments for a special-purpose app while they sit with the stakeholder during a requirements gathering session.
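The "spin up an ecosystem during a meeting" idea can be illustrated with a toy self-service provisioner. The Provisioner class and its methods are entirely hypothetical, standing in for real infrastructure-as-code tooling on platforms such as Azure or AWS.

```python
# Toy illustration of self-service provisioning: a developer requests an
# environment in code and gets it back without filing a ticket with IT.
# The Provisioner class is hypothetical.

class Provisioner:
    def __init__(self):
        self.environments = {}

    def spin_up(self, name, services):
        """Create a named environment running the requested services."""
        env = {"name": name, "services": list(services), "status": "running"}
        self.environments[name] = env
        return env

    def tear_down(self, name):
        """Release the environment's resources when no longer needed."""
        self.environments.pop(name, None)

infra = Provisioner()
env = infra.spin_up("special-purpose-app", ["web-server", "database"])
print(env["status"])  # running
```

The design point is that provisioning becomes an API call rather than a conversation, which is what removes the ops interaction from the developer's critical path.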

The Catapult Digital Developer & NoOps Solution

As previously mentioned, we have developed a software solution called Catapult that takes automation of enterprise software delivery to the extreme. Using Catapult, even non-technical stakeholders can create new application projects on a meta-level that immediately spin up server resources using popular cloud platforms such as Azure and AWS. Catapult then allows entry of high-level data models in order to populate databases (or it can connect to existing ones) and generates and deploys comprehensive codebases, all without the user needing to know how to write the simplest of SQL queries.

Like the restaurant server that deftly keeps your needs satisfied without making his or her presence known, Catapult allocates hardware resources, creates code bases, sets up source control repositories, allows stakeholders to manage content and seed test data, manages branching strategies, communicates with engineering team members to let them know of code changes, and pretty much anything else a competent developer and IT operations professional on your team would do. That is why we refer to Catapult as the "enterprise digital developer".

If you'd like to learn more about Catapult or any of our other software development solutions, please contact us or call us at 833-POLYRIFIC.

Team Polyrific | Mar 10, 2021

Way back in 1947, Rear Admiral Grace "Amazing Grace" Hopper documented the first recorded programming bug. More specifically, it was a moth that had decided to party between two solenoid contacts, shorting out an electro-mechanical relay at the Harvard Computation Lab. Amazing Grace, also known as "Grandma COBOL," didn't have Trello, Asana, or Jira back then, so she documented the bug on a piece of graph paper.

A lot has changed since Amazing Grace's day, and bugs--of the digital variety--are far more common because software is exponentially more complex and touches every aspect of our daily lives. Bugs are viewed differently from developer to project manager to product owner to end-user, and being mindful of those different viewpoints is critical to stopping any bug infestation. If you find that bugs, like the ants on your kitchen counter, are a bit too abundant for comfort, here are five ways you can manage:

1. Have a requirements document

A requirements document, also known as a functional specification, is critical to the success of any project. In their own headspace, product owners and stakeholders usually have a crystal clear vision of their product. But that vision can lack practical details about how the app should work in the real world.  When the development team makes a good-faith effort to bring that vision to life, the product owner may be confronted with details that don't fit the way that they think the app should work and that is frustrating from their perspective.

It is the job of a business analyst and/or the project manager to tease out of the stakeholder a complete map of their vision--including aspects that they may not have considered--and to document that vision in the form of a detailed functional specification that tells the developers exactly how the app should operate.

This may seem obvious, but in our hyper-agile world this step of creating a blueprint for the project is skipped more often than you might think and too frequently the developers get blamed for not using their own "common sense" because there was a disconnect between their vision and that of the product owner.

2. Know the difference between enhancements and bugs

It is very difficult for stakeholders to articulate the optimal way in which an app should behave when they are working with their imagination alone. Often, it takes iterations of development, then feedback, then further development before a final release. This is perfectly normal--the stakeholders need to see the app, experience it, play with it a bit, before they can say, "this registration path is a bit more cumbersome than I intended". The problem in this scenario is that sometimes a project manager will take such feedback from the stakeholder and label it as a "bug" when in fact it is an enhancement.

As a rule of thumb, if the issue being documented isn't breaking the app and the end-user can still complete the user journey (e.g. successfully register, successfully add a product to their cart, etc), then the issue is not a bug, it is an enhancement and should be labeled as such.

Some developers interpret the term "bug" as "you made a mistake here" and rightfully so: developers are often the only party blamed when things go wrong even though they are only one part of the software development team. With this in mind, you can see how being told that they made a mistake because they didn't anticipate how a stakeholder's vision would evolve over time can be a bit irritating. 

As a PM or stakeholder, the best way to alleviate this situation when documenting new issues is to substitute (in your head) the term "bug" with "mistake". In fact, before you log it, think of the developer who will get the ticket and say to yourself, "Johnny, you made a mistake because _________.". If the sentence sounds preposterous in your head, then it will sound preposterous to your developers as well--label it an "enhancement". On the other hand, if the sentence is not ridiculous ("Johnny, you made a mistake because you didn't add a logout button when it was clearly in the functional spec and wireframes. Now the users cannot log out.") then you have a true bug on your hands.

3. You know what they say about ASSumptions. . .

We recently had a project that included a form with a text field for entering a date. The PM and stakeholders assumed that this field would have a date-picker (calendar control) fly out when gaining focus, but this was never documented in the functional specs or shown in the wireframes. As a consequence, the developers simply made it a text box that validated for a date and moved on. The QA team signed off on this because, after all, it met the requirements of the functional spec.

During UAT, the stakeholders were visibly irritated that there was no calendar control for this field and the issue got bounced back to development as a bug. Was it a bug? No. It was an assumption made about which design pattern should be selected by the development team and never documented. 

If a requirement is important, the PM should be certain that it has been included in either the functional or non-functional specs. If that doesn't happen, the issue should simply be labeled as an "enhancement" and prioritized in the backlog.

4. Not all bugs are in the code

Occasionally, an app that was working fine will suddenly display several prominent bugs, causing a five-alarm fire among the maintenance engineering team. This is very rare, but it does happen. Almost without fail, we have found that the cause can be traced to a change in the code's dependencies rather than a defect in the code itself. A few examples:

  • A DBA makes a change to the schema of a production database. The DBA thought this would not be of consequence--they simply changed the datatype of a column to save space, or dropped a column that contained no data. Unfortunately, this type of change can have an effect on the code and should always be discussed with the development team before it is made.
  • A third party API on which the code depends has gone down. For example, there are APIs that calculate sales tax for eCommerce apps. If that API is having an outage, or has bugs of its own, then it can break the app consuming the service.
  • A maintenance engineer updates dependencies or frameworks without consulting the dev team. The app breaks. Again, any changes to software connected to the app should be thoroughly discussed with the engineers before they are implemented.
  • The app is migrated to a new server that does not have all of the necessary dependencies installed.

Essentially, this one is all about communication. If any changes are to be made that might affect the code, they should be discussed beforehand. The primary engineering team should include a dependency profile with their turnover documentation to provide greater visibility into potential issues for the maintenance engineers. Also, there should always be a backout plan when making such changes.

5. Software can be hard

This may sound simple and silly but it is valuable to remember. The software game is full of risks--that's why the financial rewards can be so high. It is a complex game with layers upon layers of systems and considerations. You should always allow time in your plan for things to go wrong so that when they do, you can have enough time and budget to step back, carefully consider whether what you are seeing is a technical defect or just the product of an evolving vision, and then properly label the issue.

At Polyrific, we take pride in the fact that we invest ourselves fully into our clients' projects. We know how this game is played, and we know how to shepherd our clients through the process precisely because we plan for and deal with issues, like bugs, that will inevitably come up.

If you'd like to discuss your vision for a software product, please do not hesitate to contact us.

Team Polyrific | Jul 09, 2018

Despite the ever-increasing pace of advancements in the software world, there are still no out-of-the-box solutions that work perfectly for every enterprise. If you think about it, this is actually a good thing: if your enterprise operates in exactly the same manner as your competition, then there is nothing to competitively separate you from them. That is one of many reasons we aren't fans of one-size-fits-all solutions for systems that are critical to your core business operations.

If you have lived through or are in the midst of rolling out a big name software package, we'll wager that it has caused you your fair share of heartburn. Here are ten reasons why your off-the-shelf implementation isn't going so well:

1. You are paying for features you don't need and won't use

Take a look at any off-the-shelf software solution and you are likely to find myriad features that have no relevance to your business. This makes sense because the software in question was undoubtedly built to "work" for the widest possible market segment and this means inclusion of features that may make sense for others but not for you. The problem is, your license fee is covering the cost of those features you will never use. A pervasive example of this is the reporting features that get bundled with most off-the-shelf ERPs & CRMs. It's likely that most of those reports aren't being used at your enterprise.

2. Hidden costs

Off-the-shelf rollouts (especially ERP and CRM) are never as easy or fast as the well-funded salespeople told you they'd be. Many ERP rollouts take between 6 and 12 months, which is time that you are paying consultants to make this "out-of-the-box" solution do what you paid a license fee to have it do. Think of the irony there for a moment--shouldn't an "out-of-the-box" solution just work right out of the box?

Apart from the high cost of implementation there is also the cost of penetration/security testing, upgrade charges, the cost of extension packages that make the solution behave in a way that better works for your business, recurring annual licenses, and the cost to train your staff to use the solution that may not operate in a manner they are used to.

3. The software doesn't fit the business

No off-the-shelf solution will perfectly match the way your organization does business. At best, it can only come close. This means that your stakeholders are going to have to shape their business practices around the way that the software operates.  

A dangerous aspect of off-the-shelf solutions is the temptation to underinvest in requirements gathering and business analysis because you figure that the "off-the-shelf" software you just paid a mint for will cover their needs. This creates resentment among your stakeholders who feel they are not being heard and it stymies good ideas that they may have brought to the table for boosting productivity, innovation, and profits.

4. "Configuration" means customization

If you think that the "off-the-shelf" approach means you won't be doing any customization, think again. In the COTS (commercial off-the-shelf) world, the word "configuration" might as well be "customization," and it will cost you just as much as or more than a solid custom solution would have in the first place.

5. Lack of free and fast support

There are going to be times that you encounter an edge-case bug in your software, or you have simply made a mistake such as deleting data that you need to retrieve. In the "off-the-shelf" world, you will either have to pay for fast tier 3 support or you will have to wait--sometimes for weeks--for the vendor to resolve your problem. Also, since the software's source code is closed to your team, your in-house engineering staff cannot tackle the problem for you; their hands are tied even though they may have been able to resolve the issue quickly.

6. No control over the product roadmap

Have a great idea for a feature that would be hugely valuable in your off-the-shelf software package? The vendor might be willing to hear about it and, if so, might be willing to implement it *if* they think it is in line with their wider market sector. Even if they do decide to move forward with developing the feature, it could be months or even years before you see it.

7. No competitive separation

Think of the industry in which you and your competitors work as a gene pool. When you are all using the same big name off-the-shelf products, there is a great likelihood that you are all being channeled into doing business in the exact same way. You can probably see where I am going with the gene pool analogy: lack of diversity in the way you and your competitors do business means lack of evolution and competitive separation.

Competitive separation is further limited through the limits that off-the-shelf solutions place on innovation. Very few in your organization are going to come forward with innovative ideas for bettering your business if they know they would be impossible to implement within the context of your off-the-shelf solution. What would be the point?

8. Lack of usability leads to lack of adoption

Can you imagine foisting an out-of-the-box ERP interface on a field mechanic who has to log his or her equipment inspections? It happens every day, though--gloved or greasy hands having to manipulate a stylus to fill out forms better displayed on a desktop in a cubicle. Swearing ensues, and sometimes tablets grow wings. Your field personnel are good at what they do, and they do not want to spend too much time living in your world. A much better solution for them might have been a custom application offering a voice interface that they can run on their smartphone. Both Google and Microsoft offer APIs that make this an easy feature to incorporate into custom applications, but you don't have that ability with your off-the-shelf solution.

9. Lack of integration with existing software and data

There are off-the-shelf products that now ship with integration capabilities, often for other big-name off-the-shelf products, but many still offer no such integration, and none of them have a way to interface with your enterprise's native data sources. For that to work, you need yet another custom engineering effort, often in the form of "glueware" built to tie your new off-the-shelf product to your existing data stores.

There is an alternative--new off-the-shelf "integration platforms" help you tie all of your packages together but require high amounts of implementation time and consulting. At some point, it becomes easier to just build a more focused custom application in the first place.

10. High total cost of ownership (TCO)

One thing that will certainly put a sour taste in your mouth is the rising cost of your application over time. Thought you were done spending money when you wrote the initial licensing check? Not even close. Before you have used your expensive off-the-shelf product for a few years, you will likely have spent money on annual licensing fees, upgrades, security testing, marketplace extensions, support, and so on. You also might have to allocate funds for internal staff to monitor and maintain the off-the-shelf product so that you can be sure it stays up-to-date and compliant. Be sure to create a five-year total cost of ownership projection when considering off-the-shelf software solutions.
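A five-year projection of this kind is just an arithmetic exercise. Here is a back-of-the-envelope sketch; every dollar figure below is a made-up placeholder, so substitute your own vendor quotes and staffing estimates.

```python
# Back-of-the-envelope five-year TCO: the up-front license fee plus five
# years of recurring costs. All figures are illustrative placeholders.

def five_year_tco(license_fee, annual_costs):
    """Initial license fee plus five years of recurring annual costs."""
    return license_fee + 5 * sum(annual_costs.values())

annual = {
    "license_renewal": 50_000,
    "support_contract": 20_000,
    "security_testing": 15_000,
    "upgrades_and_extensions": 25_000,
    "internal_staff": 60_000,
}

print(f"${five_year_tco(250_000, annual):,}")  # $1,100,000
```

Even with modest placeholder numbers, the recurring costs quickly dwarf the initial license check, which is exactly the point of running the projection before you buy.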


As you can probably tell, we aren't big fans of "off-the-shelf" solutions at Polyrific because, in the best case, we think they are agents of averageness and, in the worst case, we think they negatively impact the clients who we care about. That's not to say that all off-the-shelf solutions are bad, of course, but you do need to carefully consider your decision before going in that direction. Don't be afraid to ask around--other folks you know might have had a run-in with the software you are considering and can give you the real story. We don't know too many people who are happy with their off-the-shelf purchase decisions, but we know of plenty who are delighted with our custom solutions!

If you are interested in developing a custom application to better your enterprise, please feel free to contact us for help. We are experts in developing world class custom enterprise solutions.

Team Polyrific | Feb 14, 2018

You may have heard the term "continuous integration" or "continuous deployment" or even "continuous delivery" tossed about in your department as a catch-all phrase for "we need to ship code quickly and constantly". It's true that a well-honed continuous integration (CI) program can result in rapid, hyper-agile delivery of software, but in order to reap the philosophy's rewards you have to establish and adhere to a disciplined protocol that is based on a true understanding of what CI actually is.

In order to understand CI, let's look at the way software used to be shipped in the years leading up to the golden age of Agile, say, the aughts (2000 to 2010-ish). During these years, even the smallest feature change to an in-place application was a major undertaking. Budgets had to be approved, designs made, code written and tested, bugs fixed, user acceptance granted, and then a big monolithic chunk of code was released as a new version of the software. Because making a release was such a big undertaking, the needs of most stakeholders from the business fell by the wayside; there simply wasn't enough time or budget to cater to all of their needs.

Things began to change, however, as we entered the teens (2013 to present). Web and native apps intended for consumption on smartphones exploded, filling more and more niche needs, and the typical business stakeholder became ever savvier in all things software as they grew accustomed to having myriad software features to solve problems in their personal lives. This created a demand that spilled over into the workplace and became common in just about every conference room around the world: "Amazon sends me updates about the location of my order every step of the way! Why can't we do that with our replacement part orders??" or "Searching for information on Google is so intuitive--it should be the same when we search our inventory" or how about, "We should make a mini-game like Angry Birds to promote this new ad campaign". Overnight, software delivery professionals--from developers to quality assurance to analysts--were overwhelmed with requests and outnumbered by throngs of stakeholders with wishlists a mile long. The age of carefully planned, waterfall-like software release schedules was over, and the age of "I want it all and I want it now" Agile methodology had begun.

In the years since that critical inflection point, Agile--a stream-of-consciousness approach to software delivery--has proliferated in response to stakeholder demand (and impatience). This, in turn, has given rise to CI: the process of streamlining and automating the business of requirement specification, development, quality assurance, testing, user acceptance and, finally, production deployment.

In a CI world, a stakeholder may express a desire for a new feature in the company intranet during the Monday morning meeting. By lunchtime, the business analyst has gathered detailed requirements and placed them into a ticketing system such as Visual Studio Team Services or Jira. This alerts the dev team automatically so that they can step away from the foosball table and get back to their workstations. By Monday afternoon, the developers have accepted the ticket and used its automatic integration with the source control repository to create a new "branch" of the code. The developer's job is done within the hour and her code is checked in, which triggers an automatic execution of unit and end-to-end tests, then an automatic build to the QA environment and a Slack notification to the QA testers. Once the QA staff has approved the build, the CI pipeline takes over once more and automatically handles the placement of the new branch into the production environment while maintaining the ability to easily roll back to the previous build if necessary. By Tuesday morning, the stakeholder is happily using the feature he requested during the previous day's morning meeting. This would never be possible without an established CI program in the organization.

A well-developed CI program isn't just for the benefit of the stakeholders; it has plenty of deep technical advantages as well. For example, most projects have multiple developers working in isolation. Adherence to a CI protocol forces a degree of work atomization which limits the ability for discrete tasks to become too large. This means more frequent code check-ins and integration with the production environment which means fewer nasty merge conflicts and bugs.

By now you probably get what CI does, but you may be asking yourself what exactly it is. Is it a tool? A platform? A philosophy? In reality, it's a little bit of everything. DevOps professionals create "build definitions" using popular build engines like Visual Studio Team Services, TeamCity by JetBrains, Jenkins, or Octopus. You can think of these definitions as scripts that have hooks into both the source control repository where your application's code resides and the environments (servers) that run the working code. In a sense, these build definitions are a collection of IF THIS THEN THAT statements: "If a new ticket is added, then create a new branch and email the dev team", "If a developer checks in their code, then run unit tests", and then, "If all unit tests pass, then deploy to the QA environment and email the testers".
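The IF THIS THEN THAT flavor of a build definition can be sketched as a small event-to-action table. Real build engines express this in their own configuration formats; the event and action names here are invented for illustration.

```python
# A build definition reduced to its "if this then that" essence: a table
# mapping pipeline events to the actions the build engine should trigger.
# Event and action names are hypothetical.

BUILD_DEFINITION = {
    "ticket_created":  ["create_branch", "email_dev_team"],
    "code_checked_in": ["run_unit_tests"],
    "tests_passed":    ["deploy_to_qa", "email_testers"],
    "qa_approved":     ["deploy_to_production"],
}

def handle(event):
    """Return the actions a build engine would trigger for this event."""
    return BUILD_DEFINITION.get(event, [])

print(handle("code_checked_in"))  # ['run_unit_tests']
```

Chaining the rows together gives the Monday-to-Tuesday flow described earlier: each completed action emits the event that triggers the next one.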

Different build engines have different strengths, and it is possible that your organization will use more than one of them. In fact, we developed our NoOps/digital developer product called Catapult to help abstract and alleviate the stress of managing multiple build servers and other resources in order to further streamline the continuous integration process.

The other aspect of a solid CI program is enforcing a protocol to be followed by all team members. This is very important because if the team does not use the correct toolchain, the CI program won't work and its benefits are lost. For example, if the stakeholder from the Monday morning meeting had simply emailed his request directly to the developer, that developer--eager to please--might have coded the feature and then checked it directly into the source control repository without following the proper branching protocol. This could cause merge conflicts that require manual review, and might fail to trigger the automatic tests, creating the very real possibility that bugs slip through to production environments and force site downtime. The good news is, there are some pretty great tools out there to make adherence to a CI protocol pretty easy. We are, of course, partial to Catapult, but regular ol' VSTS or Jira are pretty good as well.

If you are interested in instituting a CI program at your enterprise but don't know where to start, please feel free to contact us for help. We are experts in the field of CI and we can either help you design and roll out a custom CI program or implement a licensed instance of Catapult to make CI (and devOps in general) feel like magic.

Team Polyrific | Jan 24, 2018

The 2018 Consumer Electronics Show is now underway in Las Vegas, Nevada. Each year CES brings forth emerging technologies to the world stage that will soon power the way we live, work, and play. Here are the buzz-worthy technology trends at CES this year:

5G

Of notable buzz is the expansion of 5G New Radio (NR) cellular data transfer and millimeter wave technology. Five years ago, the upgrade to 4G felt like a big deal, but 5G is like nothing we have seen before. Whereas 4G can transfer data at 100 Mbps, by 2020 5G will transfer data at a searing 10 Gbps. To put this into perspective, 10 Gbps data rates will allow you to download a two-hour-long high definition movie to your smart device in about three seconds.
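The three-second figure checks out under a reasonable assumption about file size. Assuming a two-hour HD film is roughly 4 GB (an assumed, typical figure for 1080p video):

```python
# Sanity-checking the download-time claim: a 2-hour HD movie assumed to
# be about 4 GB, downloaded over a 10 Gbps link.

movie_gigabytes = 4                   # assumed size of a 2-hour HD film
movie_gigabits = movie_gigabytes * 8  # 1 byte = 8 bits
link_gbps = 10                        # quoted peak 5G rate

seconds = movie_gigabits / link_gbps
print(f"{seconds:.1f} seconds")  # 3.2 seconds
```

Note that 10 Gbps is a peak theoretical rate; real-world throughput will be lower, so "about three seconds" is best-case arithmetic.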

Such high data transfer rates should catch the US up to other areas of the world that have newer (and therefore faster) data infrastructure like South Korea, Japan, and Singapore. The importance of 5G speed isn't in the fact that we can download more media in less time--5G is important because of the industries it will enable such as streaming 8K video for digital medicine, data streaming for self-driving cars, mega-encryption for the Internet of Things, and so forth.

AI (again)

Be prepared to hear more about AI now and for the next several CES conferences. Specific intelligence--that is, intelligence trained for a very specific purpose--is now a mature technology and one that you most likely already use on a daily basis. There is a heavy focus this year on the application of AI to building better and more conversational digital assistants like Alexa, Siri, Cortana, and "Hey Google" (it seems Google dropped the additional syllable in "OK Google").

As AI goes from specific to general (a process that will take many more years), conversational interfaces become more, well, conversational. For example, instead of "Hey Google, find Italian restaurants", we would have, "Hey Google, I want to go out tonight. The weather is going to be bad so I don't want to travel far from home. Just go ahead and make a reservation somewhere close--you know I love Italian food but Mexican is fine as well".


Robotics

AI and robotics are the peanut butter and jelly of the tech world. You can't have efficacious robots without strong AI. AI has come a long way in the last few years, and this is giving rise to a whole new family of robotics here at CES this year. There have already been unveiling events for several humanoid robots which, like their predecessors, have been clunky and prone to errors; however, the more purpose-built robots geared towards specific industrial or practical purposes are faring much better. Among such technologies are "smart baggage" and self-driving vehicles. Check back for more detailed articles on such robotics in the future.

Virtual & Augmented Reality

Virtual reality is still limping its way to mass adoption, with Sony announcing that just under 3 million PlayStation VR units have been shipped as of the Holiday 2017 season. Many of the big names such as Oculus and HTC have announced lower-cost and self-contained VR units in a move to catch up with Sony, which currently dominates the space. In our view, VR seems to still be a ways off in terms of mass commercial adoption; however, there are interesting applications such as therapy for post-traumatic stress disorder that we believe will be useful in the near term.

By contrast, augmented reality technology is just beginning to sprint towards mass commercial adoption. When you think of augmented reality, think about viewing the world through the window that is your smartphone rather than through special glasses (though both are happening). What we are seeing here at CES are several applications wherein ordinary smart phone owners can use the phone to overlay useful information onto the real world like where the nearest restroom is. We will be adding more articles about augmented reality in the coming weeks.

Digital Therapeutics

Digital therapy is another big topic at CES 2018. The term "digital therapeutics" encompasses all types of sensor-based diagnostics that enable virtual medicine. At Polyrific, we view emerging technologies in digital therapeutics and virtual medicine as essential for the well-being of US citizens in our changing healthcare landscape. We will be publishing articles on digital therapy in the future, but essentially this topic involves the gathering of personal health data from a variety of sensors in our smart devices and checking that information against oceans of data to indicate trends and even perhaps make a diagnosis. Additionally, with your permission digital therapy enables doctors from across the world to review your medical history and deliver a consultation which, depending on your healthcare situation, might be critical to your well being.

Internet of Things (IoT)

The Internet of Things is nothing new to CES and is prevalent once again this year as it continues to expand and serve as the world's digital nervous system. Of particular focus this year are the IoT implementations that drive smart cities and energy conservation.

Various Improvements to Consumer Electronics

As you might imagine, there are many fun updates to consumer technology being announced at CES 2018. We won't go too deep into these areas but a few highlights include 8k video, thinner, lighter, and more powerful laptops, hand-held mini-camcorders with built-in stabilization gimbals, and new ways to enjoy sports in virtual reality.

So these are the primary trends driving CES 2018! Stay tuned throughout the week and follow @Polyrific on Twitter for more CES coverage.

Team Polyrific | Jul 05, 2017

The story of Polyrific began back in 2011 when company founder Matt Cashatt was thinking of a name for a polymorphic database concept and landed on the portmanteau "Polyrific" as a great way to describe a product that could make many different facets of enterprise data management faster and easier. It didn't take long for Matt to decide that the name, and the concept behind it, was bigger than any single product: so many different facets of enterprise software creation and management need to be made faster and easier. And with that, a brand was born.

Since those early days we have grown into an enterprise-focused technology company that specializes in software development, machine learning, and devOps. Our original vision is woven into everything we do: we constantly streamline and perfect the way custom software is designed and delivered so that the process becomes faster, easier, and more economical with each project. Our imperative is to stay close to our clients and understand their needs clearly while continuing to develop the game-changing technologies that delight them.

This latest website of ours was designed to give our clients, colleagues, and friends insight into contemporary technology topics that today's enterprises must embrace if they hope to stay relevant in the marketplace as well as to stimulate ideas related to these technologies. Here you will find engaging articles intended to quickly get you up-to-speed on such topics, as well as the ways in which Polyrific can help guide your enterprise into territory that, for many, may be unfamiliar. We have also created high-level pages to help our new guests understand the types of services that Polyrific can offer them such as custom software development, general technology consulting, and on-premise devOps automation.

Perhaps our most important corporate value is that "we go farther together". This value is meant not only for our internal team members, but for our clients and friends as well. We hope to be a catalyst for positive and impactful change that helps your enterprise soar to new heights by aggressively growing our expertise and offerings in machine learning, data science, bots, personal assistants, and new form factors such as the Amazon Echo Show, which we believe will have far-reaching uses in the enterprise environment. We'll bring to the table the knowledge, expertise, and even some good ideas. You bring the desire, imagination, and vision for an incredible future.

We are glad you are here, and hope to see you back often. We would like to hear your feedback about our new website and hope you will share your thoughts and suggestions about any section you find interesting.

Team Polyrific | Apr 13, 2017

X-Ray vision. Telepathy. Telekinesis. These are the powers that have captured many childhood--and adult--imaginations since the first Superman comic book hit store shelves in 1938. Who hasn't dreamed of being a superhero? Thanks to advancements in wearable technology, we might all have our chance. 

Back in 2015, wearable technologies, or "wearables", hit the consumer market in a full-on assault led by tech giants such as Google, Apple, and Samsung. They bet big on fast consumer adoption resulting in record-breaking profits...and they lost. Then, something interesting happened that is rarely seen in the world of tech: enterprises began adopting technologies originally intended for consumers on a massive scale, thereby keeping the market for the erstwhile "game-changing technology" alive.

DHL began incorporating smart glasses in the warehouse to speed up the process of picking orders; Quebec City International Airport equipped its duty managers with Apple Watches, enabling them to receive real-time operational alerts with a quick glance at the wrist so they can make better decisions and decrease delays; and Buffalo Wings & Rings restaurant put the new Samsung Gear S3 to work notifying servers when customers need attention without requiring the use of their hands, shaving critical minutes off of the table-to-check cycle and boosting turnover and revenue.

The thing is, wearables solve a problem more critical to the enterprise than to the individual consumer: they multiply the capability and productivity of a worker while keeping his or her hands free. In essence, wearable technology gives workers an extra set of hands. In the case of Buffalo Wings & Rings, this means that servers are notified the moment a new table is seated and immediately when service is needed. This translates to faster turn-around time on tables, and when the result is a 10-20% boost in nightly sales, that's pretty significant.

The use of wearables in the enterprise uncovers the potential to take leaps in productivity the likes of which have not been seen since the industrial revolution. Imagine equipping construction workers with smart glasses that allow them to see inside walls or underground so they can locate existing utilities or using smartwatches on knowledge workers to detect when they are at their desk (and therefore available for calls) or when they are becoming stressed or losing the ability to pay attention in a meeting which has carried on for too long.

Wearable technology is going to be the most popular trend in the enterprise over the next couple of years, with sales expected to reach $53.2 billion by 2019. This technology has many different workplace benefits and plenty of options available to suit the individual use cases of almost any enterprise. In addition to smart watches and glasses, smart clothing is now emerging. In March 2017, Levi's and Google announced a partnership to develop a smart jacket that allows its owner to interact via gestures such as brushing a hand on a sleeve. This may sound silly at first glance, but imagine a field worker, soldier, firefighter, or any other type of worker who is wearing bulky clothing and possibly gloves that make interacting with a tablet difficult--to them it's not so silly of an idea.

Wearables can also collect myriad biometrics from their users, which may be the subject of privacy debate in your enterprise. Assuming that your employees are willing to grant access to their personal biometric data, there are many interesting insights that may come from it, such as levels of stress and fatigue, including stress when in proximity to another specific worker.

Wearable technology indeed has the ability to grant superpowers to workers of all types in your enterprise. Your only limitation is your imagination (and ours if you hire us) but one thing is for certain: you will need a great technology consultant and application developer to make your vision become a reality and we want to be the consultant that helps you become a superhero in your enterprise.

PRO TIP:  Begin integrating wearable technology into your enterprise by identifying tasks that require employees to use their hands while at the same time requiring them to refer to various data sources. Whenever you find a situation like this, there is almost certainly a solution that can be provided by the use of wearable technology.

Please contact us today to learn more about wearable technology and how it can be used at your enterprise.

Team Polyrific | Apr 10, 2017

The use of bots, or "chatbots", is an immensely impactful, rapidly growing trend in enterprise operations and customer interaction, one that is changing the way we work.

What is a bot?

Bots are a lightweight application of machine learning that converts unstructured human language into structured data, which serves as instructions for the software that consumes it. For example, the following statements all initiate the same software action of adding an appointment to the user's calendar:

  • "Put a meeting on my calendar for tomorrow at 10am"
  • "Add appointment called 'Meet with Bob' tomorrow on my calendar"
  • "Invite Bob to a meeting tomorrow at 10am"

In each case, the bot parses out the user's intent as well as the parameters that complete the request. In the above examples, the bot understands that the intent is to create a meeting. The parameters are the date ("tomorrow"), the time ("10am"), and a meeting participant to invite ("Bob"). If there are any required parameters missing from the command, the bot will follow up with the user by requesting that information: "What is the subject for this meeting?".

 This interaction is fundamentally the same as filling out a form that, upon validation, alerts you that you missed a required field. In fact, the code that consumes the structured data from the bot is processing your meeting request in exactly the same way. 
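The intent-and-parameter extraction described above can be sketched in a few lines of JavaScript. The function name and the regular expressions below are purely illustrative; production bots use trained language-understanding models rather than hand-written patterns.

```javascript
// Illustrative intent parser (hypothetical names; real bot platforms
// extract intents and parameters statistically, not with fixed patterns).
function parseCommand(utterance) {
  var result = { intent: null, parameters: {}, missing: [] };

  // Detect the user's intent from trigger words
  if (/\b(meeting|appointment|invite)\b/i.test(utterance)) {
    result.intent = "CreateMeeting";
  }

  // Extract the parameters that complete the request
  var time = utterance.match(/\b(\d{1,2}(?::\d{2})?\s*(?:am|pm))\b/i);
  if (time) result.parameters.time = time[1];

  if (/\btomorrow\b/i.test(utterance)) result.parameters.date = "tomorrow";

  var invitee = utterance.match(/\bInvite\s+([A-Z][a-z]+)\b/);
  if (invitee) result.parameters.invitee = invitee[1];

  var subject = utterance.match(/called '([^']+)'/i);
  if (subject) result.parameters.subject = subject[1];

  // Follow up on any required parameter that is still missing
  if (!result.parameters.subject) {
    result.missing.push("What is the subject for this meeting?");
  }
  return result;
}

var parsed = parseCommand("Invite Bob to a meeting tomorrow at 10am");
// parsed.intent is "CreateMeeting"; parsed.parameters holds the time,
// date, and invitee; parsed.missing holds the follow-up question.
```

Note how the output is exactly the structured "form data" described above: an intent plus named parameters, with validation questions for anything missing.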

It's a big deal

It may not seem to be very earth-shattering at first glance, but bots are quite monumental in the evolution of human-computer interaction (HCI). Think about it--bots represent a total inversion of control wherein we, the users, command the computer in a way that's natural for us rather than needing to conform to a series of steps that the software dictates. This can save untold amounts of time and frustration by not having to learn the detailed work-flows of a given application or website and instead get right to what you need by simply "talking" to the program.

Enterprise bots

Bots have become pretty common in our personal lives. We use them for everything from scheduling doctor's appointments and shopping for the latest fashion styles to playing online games and sending money to friends. But bots hold a tremendous value proposition for the enterprise as well, which is why heavy-hitters like Microsoft and Google's GSuite are becoming major players in the space.

People often associate bots with customer service when thinking about them within the enterprise context. While it's true that bots can be tremendously helpful in directing your customers to the resources they need without incurring the cost of human assistants, there is tremendous value to be had in regular enterprise operations as well. For example, an IT department can utilize bots to provision new user accounts, automate devOps tasks, and request security scan reports. Executives can use bots to request sales reports and financial forecasts. Field staff can use bots to conduct inspections, request supplies and materials, update stock levels, and report progress.

Implementing a custom bot specifically for your enterprise will make your team more efficient, increase data accuracy, and refocus your human resources on higher-value work. Bots get "smarter" over time and as they do, more obstacles are removed between us users and the outcomes we seek. As you use any application today, think about the steps you take to get the outcome you need and ask yourself whether a bot could have improved the process by allowing you to get straight to what you needed with a single command stated in your own way.

Bots and your customers

Regardless of whether you are in healthcare, retail, travel, hospitality, or any other industry you can’t afford to ignore the bot concept. You have to "meet the customer where they are" by ensuring that your applications work in a way similar to what they have become accustomed to in their personal lives. Whether the medium is a messenger app, SMS, or your own application interface, you need to provide your customer with a way to simply "tell" the application what they need.

Facebook CEO Mark Zuckerberg agrees with this notion. Facebook is investing heavily into bot integration for their Messenger product. Says Zuckerberg, “You should just be able to message a business the same way that you message a friend, you should get a quick response and it shouldn’t take your full attention like phone calling and you shouldn’t install a new app”.

The time is now

Whether your enterprise is prepared or not, bots have arrived and are being further interwoven into our cultural fabric each day. It is critical that you act now to implement bots of your own before you lose customers to a competitor who offers a better experience, or employees to a workplace that allows them to do their job with less frustration. At Polyrific, we have a special affinity for bots and machine learning in general, and we have the experience necessary to successfully implement a family of bots throughout your enterprise's application ecosystem.

Please contact us today if you are ready to join the bot revolution and take your enterprise operations and sales to new heights!



Team Polyrific | Apr 10, 2017

Machine learning is a system of algorithms aimed at detecting patterns in big data and then learning from those patterns without being explicitly programmed by a human operator. These algorithms take a probabilistic, rather than a deterministic, approach to accomplishing goals. Let's take a quick look at what that means by way of example:

Deterministic Approach

A human programs software in no uncertain terms to remind him to bring his umbrella to work if the chance of precipitation in his area is greater than 40%:

//Get the weather forecast
var chanceOfRain = myWeatherForecast().precipitationChance;

//Send the message if rain is likely
if (chanceOfRain > 40) {
    sendEmail("Bring your umbrella to work!");
}
This approach is deterministic because the outcome is predetermined by the author of the code in no uncertain terms.

Probabilistic (Machine Learning) Approach

An algorithm crawls big datasets such as tweets about the weather, weather news, forecasts, umbrella sales, and supervised feedback (we'll get to that later) such as, "did you bring an umbrella to work today?".


As you can see, machine learning is about statistics and probability. Whereas humans typically write algorithms with a predetermined output for a given set of a few inputs, machine learning algorithms consume a vast number of inputs and surface the most likely output. Over time, machine learning algorithms get better by adjusting their own internal methods to predict outcomes based on a growing body of data or knowledge. Eventually, a machine learning algorithm can become better at predicting outcomes than its human counterparts, because human beings, and the deterministic algorithms written by humans, cannot factor the massive quantities of inputs that machine learning algorithms easily digest.
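The probabilistic approach can be illustrated with a toy model that combines weak signals into a single likelihood. The signal names and weights below are invented for illustration; a real machine learning system would learn them from historical data and supervised feedback rather than have them hard-coded.

```javascript
// Toy probabilistic model: combine weak signals into a rain probability.
// The signals and weights are made up for illustration; a real model
// learns them from data instead of having a human write them in.
function rainProbability(signals) {
  // Each weight reflects how strongly that signal has predicted rain before
  var weights = {
    forecastSaysRain: 0.5,
    weatherTweetsMentionRain: 0.2,
    umbrellaSalesSpiked: 0.15,
    userBroughtUmbrellaYesterday: 0.15
  };
  var score = 0;
  for (var signal in weights) {
    if (signals[signal]) score += weights[signal];
  }
  return score; // 0.0 (very unlikely) .. 1.0 (very likely)
}

// Supervised feedback ("did you bring an umbrella today?") is what a real
// system would use to nudge these weights up or down over time.
var p = rainProbability({ forecastSaysRain: true, umbrellaSalesSpiked: true });
// p is roughly 0.65, so the umbrella reminder would probably be sent
```

The key contrast with the deterministic snippet above is that no single input decides the outcome; the answer emerges from the weighted evidence.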

Machine learning is nothing new

Believe it or not, the early concepts of machine learning and statistical analysis were proposed by British mathematician Alan Turing more than sixty years ago. So why is machine learning just now becoming so popular? There are several factors for this but chiefly it is because processors are more powerful, data storage is incredibly cheap, and thanks to the Internet, everything is now connected. In other words, it is now common for the phone in your pocket to display to you a message such as "bring your umbrella" as it detects you are leaving your home for the day as a result of a server to which it is connected crunching potentially millions of connected data points. Even more impressive, this happens in less than a second and it is all thanks to machine learning.

You use it every day

You might not have realized it, but you already benefit from machine learning every day. Here are a few of the common areas in which machine learning is making our daily lives better:

Search Results

Think that Google and Bing rank your search results simply by how many times your search term appears on a given web page? Think again. Such a deterministic approach would never help you wade through the universe of irrelevant information to get to what you actually need. Search engines use complicated machine learning algorithms that factor information such as your current location, the time of day, your recent searches, the page you last visited, your recent purchases, and so on. This probabilistic approach usually surfaces what you were probably after within the top ten results on your page. A deterministic approach would never succeed in doing the same.

Spam Filters

Every time you classify an email as "spam" you are training a machine learning algorithm to recognize similar emails as having a high probability of being spam.
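That training loop can be sketched as a simple word-counting model. This is a drastic simplification with hypothetical function names; real spam filters use proper Bayesian statistics over far more features than individual words.

```javascript
// Drastically simplified spam scoring: each "mark as spam" click makes the
// words in that message slightly stronger spam indicators.
var spamCounts = {}; // word -> times seen in flagged spam
var hamCounts = {};  // word -> times seen in normal mail

function train(text, isSpam) {
  var counts = isSpam ? spamCounts : hamCounts;
  text.toLowerCase().split(/\W+/).forEach(function (word) {
    if (word) counts[word] = (counts[word] || 0) + 1;
  });
}

function spamScore(text) {
  var spam = 0, ham = 0;
  text.toLowerCase().split(/\W+/).forEach(function (word) {
    spam += spamCounts[word] || 0;
    ham += hamCounts[word] || 0;
  });
  // Smoothed fraction of the evidence pointing at spam (0.0 .. 1.0)
  return (spam + 1) / (spam + ham + 2);
}

// Every user classification is a training example
train("win a free prize now", true);
train("meeting notes attached", false);

// A new message full of previously flagged words scores high
var score = spamScore("free prize");
```

Each additional flagged email shifts the counts, which is why the filter keeps improving the more you use it.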

Recommendation Engines

Netflix, Amazon, and even Spotify use your searches and feedback as fodder for machine learning engines that in turn offer you better recommendations.

News Feeds

You may think that your Facebook feed is already full of posts that don't hold your interest, but it would be even worse without machine learning, which at least promotes to the top of your feed the people and topics you seem to care about most.


Real-Time Translation

You may have already heard of Google Translate, which helps you translate text from one language to another, but it gets even better: Microsoft recently capitalized on machine learning algorithms to translate spoken Skype conversations in real time. This means you can literally talk to someone in English while they hear Mandarin, and vice versa.

Fraud Detection

Ever had your credit card declined when you are traveling or making a large purchase? Machine learning algorithms help credit card companies detect events that seem out of character for you and decline the card to prevent fraudulent use.

Will machine learning replace humans in the workplace?

Probably, but this should be considered good news because it will promote productivity and creativity. In some cases, machine learning will augment workforces where there are currently shortages of skilled workers.

Consider the example of a general practice physician. Currently, you will need to schedule an appointment with your GP when you have a medical issue and, once you get in to see the doctor, you have to carefully explain your symptoms and answer the doctor's questions. Drawing on her previous experience the doctor may be able to make a diagnosis, but if she happens to not have prior experience or knowledge about your particular situation, then she may refer you to a specialist or send you to a lab for tests. 

Now consider a digital doctor which is trained by vast quantities of shared data and experience. At any given time, this digital doctor may be processing--and learning from--millions of cases. This doctor becomes more experienced and better at making correct diagnoses by the second. So when this doctor interviews you and makes a diagnosis, it is drawing on millions upon millions of past cases, symptoms, and outcomes in order to let you know what is ailing you in a matter of seconds. There is likely no need for further tests or a visit to a specialist.

Human doctors can use such a digital doctor to augment, rather than replace, their care of you. Now they can receive a report and know immediately whether your appointment should be expedited, or whether they can simply write a prescription for you to come pick up.

Get started!

Machine learning is a productivity multiplier. You can think of it as a way to offload cognitive tasks to the computer so you can free your staff's minds to focus on higher-value work. Machine learning is no longer the property of big tech companies alone; we can all now capitalize on the amazing things it can accomplish. Contact us today to learn more about how machine learning can benefit your business.

Team Polyrific | Apr 10, 2017

You probably already know that machine learning is an incredibly powerful technology that has the ability to solve difficult problems in a surprisingly effective manner. What you may not have realized, however, is that since machine learning algorithmically builds its "gray matter" by learning from previous patterns, trends, and data models, we are at present witnessing only the very early stages of what machine learning can do for us.

Recently, the science behind machine learning hit significant milestones in fields that hadn't really moved the needle for some time, like speech recognition and image understanding. With the recent proliferation of sufficiently capable computing hardware, we are witnessing a BIG BANG in machine learning technology that represents a major step forward in how computers can learn and perform.

For several years now, machine learning has been used as a form of automation for low-value tasks that are easy to do but time-consuming when carried out by human hands. As we move into the near future, expect to see an explosion of applied machine learning as the necessary computing power and software implementations proliferate around you. It won't all be easy, though; machine learning algorithms tend to have errors, and it falls to humans-in-the-loop to "coach" those errors out of the result sets through training and deep instruction of neural networks.

With that said, machine learning will have a great impact on all areas of business. One of the important things for enterprises to bear in mind is that they need to look beyond the AI hype for practical ways to incorporate machine learning into their operations. Expect too much too fast and we will find ourselves in another "AI Winter": a season we have witnessed before, during which confidence in machine learning plummets and investment stops. Machine learning algorithms should be regarded as a child in need of time and instruction to become truly effective. Goldcorp--a mining company that uses immense vehicles to haul tailings and other debris away from mining sites--is taking this step-by-step approach with great results, iterating a machine learning algorithm over time that now predicts with over 90% accuracy when their machines will need maintenance. Since a vehicle breakdown can cost Goldcorp over $2 million per day, it's hard to argue with the economics of this kind of applied machine learning; however, had Goldcorp expected machine learning to first make all of their monster vehicles self-driving, it is very possible that the program would have failed and the simpler, but extremely useful, algorithms would never have been implemented.

Short Term Predictions

More enterprises will begin their machine learning journey over the next 18 months than at any other time in history. The smarter ones will create competitive separation for their enterprise by getting started with machine learning now while learning from others' mistakes. Resisting the urge to expect too much too fast will pay off handsomely--as was the case with Goldcorp--while machine learning quietly takes hold beneath a cacophony of AI marketing speak.


Some of the gloomier predictions will end on a higher note: machine learning will automate some human jobs out of the equation, but those jobs will be replaced with higher-value, more stimulating work. Retail and sales jobs are primed for machine learning implementation and automation. We will see robots in hospitals delivering medicines, materials, and meals. Self-driving cars will rule the highways in the next few years. In fact, we will see autonomous trucks, tractors, taxis, forklifts, cargo handlers, and more.

Automation of such large parts of our workforce is going to require that our governments come together in a very bipartisan way to avoid economic strife, but on the positive side, the world will see a tidal wave of creativity and innovation like never before due to the freeing of creative thought afforded by machine learning-based automation.

Culture Shift

Machine learning will become so powerful in the future that it will shape culture: it will drive us to make better decisions, provide a more profound vision for the pursuit of happiness, and show us the outcomes, explanations, or evidence that we might be missing in topics both big and small. And it will not only show us those missing elements but will also support us in weighing and making sense of them.

Machine learning will also bring about revolutionary personalization in the services and products based on your tastes, historical choices, location, even your DNA. This of course changes the way products are made, consumed, and marketed.

In conclusion, machine learning is quietly changing everything at the moment, with the volume increasing dramatically over the next two years. Ignoring the technology is not an option, but it is important to temper your expectations and have a long game for machine learning in order to reap the highest rewards.

It's impossible to predict exactly where this phenomenon will lead us, but in the words of Peter Thiel,

"Not being able to get the future exactly right doesn’t mean you don’t have to think about it". 

We are here to help you think about your future and how machine learning can become a part of it as soon as possible. Please contact us to get started.