Making Quality transparent
How a Bioanalytical Company succeeds in the Digital Age
Micro-Biolytics is a bioanalytical applications company from Germany. They revolutionize chemical analysis through the digitization of liquid samples. Their AquaSpec™ technology instantly creates globally transferable and comparable results. Their method is quicker and easier to use than other analytical technologies, detecting all ingredients in a single measurement at marginal cost. Micro-Biolytics guarantees quality performance independent of processing location, operator, or device.
This helps their world-leading customers quality-check received and manufactured material, improve their production processes, and gain decisive know-how for the development of new products. Micro-Biolytics allows customers to exploit the full potential of digitalized analytics by offering intelligent AI and big data solutions that pave the way for their journey to Industry 4.0.
To further enrich their customers' experience, Micro-Biolytics is building a cloud platform that enables customers to manage and operate analyzers across teams and organizations.
On this SaaS offering, customers can collaborate in real time on the analytical data gathered by their analyzers, in a modern, mobile-first web UI backed by a highly secure, highly available cloud infrastructure in German data centers. Micro-Biolytics accelerates their customers' productivity in working with these complex datasets by providing additional business and supply chain interconnectedness and state-of-the-art security.
Micro-Biolytics went out of its way to make sure the platform follows industry best practices in terms of security, reliability, resilience, and future-proofing. As a company with a more scientific background, Micro-Biolytics was relentless in adopting modern technologies and methodologies to ensure they can provide a service platform that lives up to the high quality standards that made their analyzers so successful.
Making quality transparent. Instantly, for everyone.
Micro-Biolytics had a clear vision of what they wanted to achieve when we first talked. Their cloud platform needed to:
Implement industry best practices
Be easy to operate without dedicated staff
Be flexible in terms of infrastructure
Give them confidence in its safety and reliability
Of course, one of the most challenging aspects of building SaaS software is settling on the right architecture to support operation and scaling. This means designing a service architecture and aligning infrastructure accordingly by picking the right orchestration, automation, and storage architecture.
To make good use of any platform, you not only need to know the platform inside out; you also need a complementary skill set when designing applications on top of it. This includes an optimized development workflow, established strategies for onboarding new technologies, and a culture of learning and collaboration.
In the cloud age, customer expectations constantly change with technological innovations throughout the industry. If everyone is offering feature X, chances are good that you'll be expected to offer it, too. Today, you want to develop at startup speed while delivering enterprise-grade security and capability, or you will have a hard time staying competitive. A modern cloud platform needs to support this.
When dealing with customer data, you are expected to keep it safe and available at all times. This poses a huge challenge and risk for companies operating cloud platforms, and it requires a well-chosen storage architecture to enable state-of-the-art persistence, safety, and availability of data.
Cost efficiency relates directly to profitability. Companies increasingly move their business to the cloud to gain quicker access to resources that accelerate it, but this often comes at a cost. The challenge is to build technology that can be operated with a minimum of staff and expert knowledge, leveraging software-defined cloud resources at maximum profitability. The goal is to keep maintenance costs low in order to invest in innovation.
When designing scalable, future-proof systems, a big challenge is vendor lock-in. The ability to run and operate a platform on any given infrastructure is a key concept of modern cloud architecture and something that needs to be addressed early in the design process.
After our initial conversations, we soon had a good idea of the key technologies we would use to achieve the desired outcome. We'll go into detail on two of the technological choices made throughout the design and implementation process: Docker Swarm and Storidge.
Micro-Biolytics used the six months before our project to get their feet wet with modern, cloud-native technologies, gaining a good idea of the available toolchain and what fits their development process best. They opted for a container-based infrastructure, leveraging continuous integration, continuous deployment, and continuous delivery along with agile project management and a cloud-native service architecture.
This was a huge hill to climb, as most of the team did not have a cloud/SaaS background. So they rolled up their sleeves and designed their platform in small iterations, incorporating new insights along the way, and once they knew what the outcome should look like, we came together to discuss how their vision could be implemented.
Docker Swarm – native Docker Cluster Management
Why not K8s?
Kubernetes may be the coolest kid on the block in cloud-native environments, but it is also a huge paradigm shift if you have never worked with containers or are used to standalone Docker. We chose not to start with K8s in order to keep the complexity of operations and scaling as low as possible. Still, the platform is prepared to swap Docker Swarm for Kubernetes once it makes sense to do so.
Setting up K8s in a transparent and reliable way is a huge task and requires expert staff for deployment and maintenance. Starting off with Kubernetes requires far more upfront preparation and post-deployment fine-tuning than choosing a less versatile but ultra-solid platform like Docker Swarm.
Cost of Operation
Running Kubernetes isn't for everyone. You need expert staff and a high tolerance for breaking changes and maintenance. While K8s can be consumed as a service through cloud provider offerings, running and scaling a Kubernetes cluster in the cloud isn't cheap either, and it imposes limitations in terms of flexibility and transparency.
Kubernetes is hard to get familiar with if you have no prior experience with clusters, orchestrators, and the concept of containerization. It's even harder to learn how to use and operate it efficiently, which makes it a problematic choice for fast-paced environments where reliability is more important than features like auto-scaling.
As with every platform, there's a certain risk in depending on its functionality or offering. Kubernetes adds lots of abstraction to container management and thus has a significant impact on your architectural decisions.
When talking about K8s, people often refer to it as a black box. It's hard to fully grasp what happens behind the curtain and how to deal with certain failure scenarios.
Docker Swarm is a reliable orchestration engine that understands plain Docker commands. There's no need to learn new DSLs or paradigms, which lowers the entry barrier for onboarding services to the platform. Development environments can replicate production environments with ease and no additional abstraction.
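As a sketch of what this looks like in practice, a single plain Compose file can drive both local development and a Swarm deployment; the service name, image, and volume below are illustrative, not taken from the actual platform:

```yaml
version: "3.7"
services:
  api:
    image: registry.example.com/analytics-api:1.0   # hypothetical image
    ports:
      - "8080:8080"
    deploy:                      # honored by `docker stack deploy`, ignored by plain Compose
      replicas: 3
      restart_policy:
        condition: on-failure
    volumes:
      - results:/var/lib/results
volumes:
  results:
```

The same file runs locally with `docker compose up` and goes to production with `docker stack deploy -c stack.yml analytics`; no new DSL is involved.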
And Docker Swarm sticks to its guns: it consistently runs code according to developer specifications and doesn't try to handle the entire application lifecycle as well. It helps collect application and infrastructure metrics and offers plenty of ways for applications to integrate with the infrastructure. Because Swarm is native Docker, it also integrates well with everything that integrates with Docker, such as storage plugins.
While Docker Swarm makes it easy to operate cloud platforms in a scalable and reliable way out of the box, it doesn't solve one of the biggest challenges of SaaS platforms: data. Any software handling customer data needs a reliable way of storing, securing, and recovering that data. Docker Swarm makes use of Docker volumes and can easily provide data persistence to your containers, but it doesn't handle data availability or data safety in distributed environments. If you run a stateful service with five containers on different nodes, how can you make sure the persistent volumes are available on all nodes in case of failure or rescheduling? You can't with Docker Swarm alone. Enter Storidge.
Storidge – Automated Storage for Containers
Modern apps run in orchestrated environments, with operations teams often removed from the provisioning cycle. This is why Storidge developed automated storage with developers as its key users. Their developer-centric approach immediately appealed to us, and the way they integrated their software into the Docker ecosystem made it easy to onboard onto our project.
Automated storage means:
- Storage infrastructure as code
- Docker-integrated, on-demand volume provisioning
- Automated data locality for rescheduled containers
- Automated volume capacity extension
- Automatic failover to ensure high availability for applications
- Automatic data recovery
We evaluated multiple options for data availability, among them NFS, Portworx, and S3FS. We finally settled on Storidge Container IO as the storage backend for the platform. After integrating it into our infrastructure code (which was a breeze thanks to Storidge's excellent demo code) and ironing out a few issues together with the Storidge team on Slack, we had a distributed storage cluster backing our Swarm, and it has been running smoothly ever since.
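To illustrate what this integration looks like from a service's perspective, here is a minimal sketch of a Compose volume definition using Storidge's Docker volume plugin. The volume name and the `profile` value are assumptions for illustration; available profiles depend on how the Storidge cluster is configured:

```yaml
volumes:
  db-data:
    driver: cio                # Storidge Container IO volume plugin
    driver_opts:
      profile: MYSQL           # hypothetical performance/redundancy profile
```

A service referencing `db-data` then gets its volume provisioned on demand, and Storidge keeps the data available to the container wherever the scheduler places it.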
Here’s why we chose Storidge as our storage backend:
Best practices for cloud native storage, automated.
Storidge automates storage infrastructure so enterprises can deliver stateful apps efficiently, faster, and with less effort.
In times of continuous delivery, data persistence must be delivered as a service so developers and operators can spend their best energies solving business problems and creating value. The storage backend should be an invisible service that is easily integrated into the application stack.
Storidge delivers storage as a software abstraction layer that automates away the complexity of storage infrastructure management to deliver storage as a service. Tight integration with orchestration systems enables block, file, and object datastores to be provisioned on demand through declarative, programmatic interfaces for both cloud-native and legacy applications.
Purpose-built for orchestration systems, Storidge automates many of the tasks that previously required expert staff.
Storidge automatically provisions volumes on demand, moves Docker volumes to their containers when they are rescheduled, and expands volume capacity as needed. This automation greatly simplifies data availability in terms of storage operations and recovering data from infrastructure failures, with no operator effort.
Vertical and Horizontal Scaling
Storidge enables easy capacity and performance scaling by allowing operators to add and remove nodes on demand.
Additionally, by allowing operators to define simple, granular performance constraints, Storidge can guarantee consistent performance for critical applications while making it easy to scale predictably.
Orchestration systems reschedule applications to new nodes for various reasons. This introduces variable network latency and inconsistent performance, creating issues for latency-sensitive applications.
Storidge automatically rebuilds data on the same node as your container. This ensures consistent performance and conserves network resources, while removing the need to set node constraints or install API extensions to ensure data locality.
Data safety is a challenge for any SaaS business. Storidge enables effortless backups and snapshots and keeps production data safe on disk. Its auto-healing and auto-scaling capabilities ensure safe operations and automated recovery.
Among all the pieces composing the platform, Docker Swarm and Storidge played the most critical role in the success of Micro-Biolytics' initial mission for their platform.
They provide the necessary infrastructure for operating a customer-facing SaaS offering. An infrastructure to build services upon.
With German privacy laws and the sensitivity of the managed data in mind, the platform was designed to be adaptable, auditable, and scalable even in highly regulated environments.
DevOps by Default
Infrastructure as code as a design principle made it easy to collaborate on the entire system and reduce the complexity of operations.
Having a reliable infrastructure with simple APIs accelerated the onboarding of platform services, as fewer resources had to be dedicated to operation.
The pluggable architecture enabled easy integration of legacy resources such as root servers, as it doesn't depend solely on cloud resources.
Automating basic operational tasks saves time, and that time can be used to improve operational security and reliability. Fixing errors became preventing errors.
Making Quality transparent
Micro-Biolytics applied the quality standards and paradigms that underpin their scientific work to their approach to building a modern cloud platform, and they succeeded.
In their efforts to innovate towards a data-led future, the team leaves no stone unturned to build a platform their customers can trust. Even more, the team decided to invest in technology that makes software developers first-class citizens and supports rapid innovation in a strictly regulated environment with a mixed technical skill set.
Over the past couple of months, Micro-Biolytics built the foundations for new business opportunities and acted on their promise to make quality transparent. The company placed a big bet on new technologies and talent to stay competitive and innovative in the digital age. The trust Micro-Biolytics puts in its team and their culture of exploration and learning are outstanding and the biggest accelerators of their mission.
I’m grateful to be part of this – it’s an opportunity to explore something new and exciting, but it’s also a chance to grow for all of us. We’re living in a world of high standards and expectations but – as Micro-Biolytics showed – it’s absolutely possible to meet them without making compromises, if you’re willing to go out of your way for it. And so they did!
With Mirantis recently announcing continued support and development for Docker Swarm, Micro-Biolytics' straightforward approach to decision making has proven to be not only technologically innovative but also future-proof. Putting trust into something is always a risk, and in this case a risk well worth taking. After Mirantis acquired Docker Enterprise in November 2019, right in the middle of our project, and the whole world went bonkers about the future of Docker Swarm, the team stuck to their decision. And they were right.
It's hard to stay ahead of the curve and true to your values. Micro-Biolytics manages both, adopting modern technologies and paradigms to enrich their customers' experience and keep up their innovative pace, a perfect example of how to succeed in the digital age.
Want to learn more?
Take a dive into our DevOps Application Platform zero.
Based on what we've learned from our customers' needs, zero is your cloud-native hybrid-cloud solution for heterogeneous application workloads.
zero features a dedicated and automated Storage Backplane as well as support for running Kubernetes and Swarm on mixed Linux and Windows clusters.
Challenge us
If you're thinking "there's more potential here" but don't know what or how, let's talk! In a no-obligation initial conversation, we can explore your options together, entirely risk-free.