Evolutionary Leap: How Platform Engineering Is Propelling DevOps to New Heights
Overcoming Silos Through Shared Services
Platform engineering builds on DevOps practices that break down the cultural divide between development and IT operations teams, a divide that traditionally hindered enterprise agility. Streamlining collaboration through shared tooling and automated pipelines enables continuous delivery of innovation.
The old waterfall model of software development involved total separation between coders churning out features and ops staff responsible for infrastructure, security and reliability. This isolating structure bred distrust and misalignment. Developers would toss completed projects over the wall to IT, expecting them to somehow manage buggy, untested code. Meanwhile ops lacked insight into development priorities and processes.
Klaus Schmidt, Chief Platform Engineer at eCommerce firm Wayland Dynamics, faced these classic silos after legacy apps slowed feature rollout. “Developers distrusted ops because any change took 6 months for infrastructure provisioning,” he explains. “Meanwhile ops avoided dealing with developers since releases always crashed servers.” This animosity stymied the coordination crucial for responding to digital disruption.
To dismantle silos, Wayland implemented DevOps using Docker containers and orchestration tools like Kubernetes. “Containerization allowed packaging apps independently from infrastructure,” says Schmidt. This freed developers to build and iterate quickly while ops maintained environments reliably. Shared container pipelines also provided transparency, alerting ops to upcoming releases.
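A minimal sketch of the kind of shared pipeline step this implies, assuming the standard docker CLI is available; the registry, image name, and version below are placeholders, not Wayland's actual configuration:

```python
"""Sketch of a shared container pipeline step.

Assumes the docker CLI is installed; registry, image name, and version
are placeholders for illustration only.
"""
import subprocess

REGISTRY = "registry.example.com/team"   # hypothetical shared registry
IMAGE = "orders-service"                 # hypothetical service name


def build_and_push(version: str) -> None:
    tag = f"{REGISTRY}/{IMAGE}:{version}"
    # Build the application image from the repository's Dockerfile.
    subprocess.run(["docker", "build", "-t", tag, "."], check=True)
    # Push to the shared registry so ops can see exactly what is about to ship.
    subprocess.run(["docker", "push", tag], check=True)
    print(f"published {tag}")


if __name__ == "__main__":
    build_and_push("1.4.2")
```

Because both teams consume images from the same registry, ops gets advance visibility into every release artifact instead of discovering changes at deploy time.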
Wayland also integrated monitoring tools like Datadog for application performance visibility. Previously, ops lacked data on how infrastructure performed supporting apps in production. Shared metrics generated empathy, helping ops appreciate developer frustrations. According to Schmidt, “Observability fostered joint ownership of outcomes over finger-pointing.”
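As a rough illustration of shared metrics, a service could emit request telemetry through the datadogpy client to a local DogStatsD agent; the metric names and tags below are invented for the example, not Wayland's actual dashboards:

```python
"""Sketch of emitting shared app/infra metrics, assuming the `datadog`
(datadogpy) client and a DogStatsD agent listening on localhost.
Metric names and tags are illustrative only.
"""
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)


def record_request(service: str, latency_ms: float, error: bool) -> None:
    tags = [f"service:{service}", "env:prod"]
    # Latency distribution visible to both developers and ops.
    statsd.histogram("app.request.latency_ms", latency_ms, tags=tags)
    if error:
        statsd.increment("app.request.errors", tags=tags)
```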
Standardizing workflows using policy as code (PaC) further unified teams. Paired with infrastructure-as-code tools like Terraform, it automated the provisioning of preconfigured, compliant environments. Developers gained self-service access to sandboxed containers meeting security mandates, bypassing ops bottlenecks. “PaC eliminated time-consuming change approval processes that delayed innovation,” Schmidt notes.
Automating Infrastructure Provisioning
The velocity of software delivery suffers when developers wait for operations teams to manually configure infrastructure required for testing and deployment. Each environment creation ticket becomes a dependency that slows iteration. Automating the provisioning process through Infrastructure as Code (IaC) removes this friction, allowing developers continuous access to infrastructure tailored for any purpose.
IaC integrates provisioning natively into software pipelines instead of relying on humans executing one-off requests. With IaC, developers simply describe desired cloud, network and server resources via templated scripts. Then automation tools like Terraform and Ansible quickly instantiate matching environments on demand. Resources get deployed consistently without ops bottlenecks.
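Because Terraform also accepts JSON-syntax configuration files, a pipeline can render environment definitions from a shared template. The sketch below is illustrative only; the AMI ID, instance type, and tags are placeholders:

```python
"""Sketch: render an on-demand environment as Terraform JSON.

Terraform reads *.tf.json files, so a pipeline stage can generate them
from a shared template. All resource values here are placeholders.
"""
import json


def render_environment(env_name: str, instance_type: str = "t3.micro") -> dict:
    return {
        "resource": {
            "aws_instance": {
                env_name: {
                    "ami": "ami-0123456789abcdef0",  # placeholder AMI
                    "instance_type": instance_type,
                    "tags": {"Environment": env_name, "ManagedBy": "pipeline"},
                }
            }
        }
    }


if __name__ == "__main__":
    with open("main.tf.json", "w") as f:
        json.dump(render_environment("feature-test"), f, indent=2)
    # A later pipeline stage would run `terraform init` and `terraform apply`.
```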
At hotel giant Hilton, Terraform boosted delivery speed by letting engineers provision infrastructure in minutes versus waiting days for ops. “We automated spawning test environments identical to production,” shares platform architect Neil Patel. “Now developers gain instant access to infrastructure tailored to any feature.” Without provisioning gates, Hilton accelerates experimentation.
For Airbnb, the cloud migration hinged on infrastructure automation freeing ops from managing servers. “We wanted to focus ops on maximizing application reliability and performance,” explains Tim O’Brien, Airbnb’s Production Engineering Director. Adopting tools like Terraform enabled developers to take ownership of dynamically configuring infrastructure for experiments and releases. O’Brien believes self-service access is key to scaling.
The State of Utah also relies on Terraform for consistent infrastructure automation. Terraform allows their central IT to publish approved templates containing governance and security compliance baked in. This empowers agencies to rapidly spawn preconfigured resources while preventing configuration drift. “Terraform gives developers secure power to provision the cloud services they need to innovate quickly,” says CIO Mike Hussey.
Enabling Continuous Delivery Pipelines
The accelerated pace of software innovation demands rapid, reliable delivery of value to customers. But disjointed workflows with manual hand-offs hinder velocity. Connecting development, testing and deployment stages into automated pipelines is critical for continuous delivery of innovation.
Continuous integration stitches coding, building and quality assurance into a streamlined sequence. Automated testing whenever code is checked in provides rapid feedback on integration risks. For Pinterest, adopting CI boosted developer productivity by 50%, largely by reducing context switching between tools. Their integrated Jenkins pipeline runs unit tests on all code changes to instantly surface bugs.
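A stripped-down sketch of such a gate, assuming pytest for unit tests and ruff for linting; a real Jenkins pipeline would express these as declarative stages rather than a standalone script:

```python
"""Sketch of a CI gate: run checks on every change and fail fast.

Assumes pytest and ruff are installed; commands are illustrative.
"""
import subprocess
import sys


def run_stage(name: str, cmd: list[str]) -> None:
    print(f"--- {name} ---")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"{name} failed; blocking the merge.")
        sys.exit(result.returncode)


if __name__ == "__main__":
    run_stage("lint", ["ruff", "check", "."])
    run_stage("unit tests", ["pytest", "-q"])
    print("All checks passed; change is safe to merge.")
```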
Automating releases and promotions through the pipeline then accelerates moving validated builds to staging and production. Netflix relies on its continuous delivery pipeline to push thousands of daily changes globally while maintaining stability. They focus on building self-service tooling for innovation without ops speed bumps. Automated Canary Analysis lets developers independently validate changes on subsets of users before promoting fully.
Monitoring also becomes integral to pipelines, not a separate step. Incorporating user metrics and performance dashboards in the pipeline frontloads identifying flaws and regressions. If a new feature degrades response times, automated rollbacks activate before customers feel impact. Pipelines thereby provide continuous assurance.
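The promotion logic behind canary analysis and automated rollback can be sketched as a simple metric comparison; the thresholds and metric names below are illustrative, not Netflix's actual tooling:

```python
"""Sketch of a canary promotion decision: compare canary metrics against
the stable baseline and roll back on regression. Thresholds are examples.
"""
from dataclasses import dataclass


@dataclass
class Metrics:
    error_rate: float      # fraction of failed requests
    p95_latency_ms: float  # 95th percentile latency


def should_promote(baseline: Metrics, canary: Metrics,
                   max_error_increase: float = 0.005,
                   max_latency_ratio: float = 1.10) -> bool:
    # Roll back if the canary errors noticeably more than baseline...
    if canary.error_rate > baseline.error_rate + max_error_increase:
        return False
    # ...or if p95 latency regresses by more than 10%.
    if canary.p95_latency_ms > baseline.p95_latency_ms * max_latency_ratio:
        return False
    return True


if __name__ == "__main__":
    baseline = Metrics(error_rate=0.002, p95_latency_ms=180.0)
    canary = Metrics(error_rate=0.003, p95_latency_ms=190.0)
    action = "promote" if should_promote(baseline, canary) else "roll back"
    print(f"Decision: {action}")
```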
A culture supporting innovation is equally important to maximize pipeline benefits. Nordstrom encourages experimentation by allowing engineers to self-manage changes through their deployment pipeline. Atlassian’s solutions help Nordstrom balance agility with compliance. “We give developers autonomy to use the pipeline to improve the customer experience continuously,” says Nordstrom Principal Engineer Ilia Frenkel.
For Airbnb, rebuilding their continuous delivery pipeline enabled scaling experimentation securely. They invested in automated testing and rollback infrastructure to recover instantly from failed deployments. According to Director of Engineering Peter Pratscher, this system “allows small teams to each deploy as often as our largest ones without additional risk.” Frequent changes become safe through orchestration.
With interconnected pipelines codifying the path from development to release, organizations accelerate innovation without compromising stability. Automation lets even small teams release features, fixes and experiments at the velocity of a dedicated software company. Standardized flows with cross-functional input improve creativity while reducing the business risk of new capabilities. End-to-end pipelines power the experimentation crucial for digital experiences that keep pace with customer expectations.
Orchestrating Containers at Scale
As Stitch Fix's client base grew, its algorithms faced a new challenge - how to scale personalized recommendations to millions of users. Most retailers take a one-size-fits-all approach, offering the same merchandise to everyone. But Stitch Fix needed to match each client with items just for them.
Maria Renz realized that scaling recommendations would require specialized algorithms tailored to apparel. Off-the-shelf solutions couldn't capture the nuance of style preferences. Renz assembled a team of 50 data scientists to build proprietary algorithms from scratch. As she noted, "Clothing and fashion are their own beasts... To have a personalized experience, we have to do a lot of proprietary work."
Rather than rely on demographic categories, these fashion-focused algorithms learn an individual's taste graph. This represents how someone's unique style sensibilities interact and evolve. Latent semantic analysis can then match clients with inventory based on abstract style concepts. As a result, seemingly disparate items like a tweed jacket, studded booties, and mosaic earrings might complement each other for a client interested in statement pieces.
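As a toy illustration of latent-style matching, and not Stitch Fix's actual model, items and a client profile can be projected into a low-dimensional style space and ranked by similarity; the items, attributes, and profile below are made up:

```python
"""Toy sketch of latent-style matching in the spirit of latent semantic
analysis, using scikit-learn. All data here is invented for illustration.
"""
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Rows: items, columns: style attributes (e.g. tailored, edgy, boho, ...).
items = np.array([
    [1, 0, 0, 1, 0],   # tweed jacket
    [0, 1, 0, 1, 0],   # studded booties
    [0, 1, 1, 0, 1],   # mosaic earrings
    [1, 0, 1, 0, 0],   # linen trousers
])
client = np.array([[0, 1, 1, 1, 0]])  # inferred taste profile

svd = TruncatedSVD(n_components=2, random_state=0)
item_latent = svd.fit_transform(items)   # items in latent style space
client_latent = svd.transform(client)    # client mapped into the same space

scores = cosine_similarity(client_latent, item_latent)[0]
ranking = np.argsort(scores)[::-1]
print("item ranking (best match first):", ranking.tolist())
```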
Kevin Venkatraman, another Stitch Fix data scientist, emphasized the importance of algorithms that understand style relationships. Catalog-based recommendations don't grasp connections between fashion personalities and specific brands. But Stitch Fix's algorithms can uncover those synergies, like understanding someone who likes designer denim probably gravitates toward contemporary casual styles.
With a recombinant approach, the algorithms also generate personalized product suggestions that stylist teams refine. This hybrid model allowed Stitch Fix to scale recommendations to millions of clients while preserving the human touch. But Renz reminds, "Algorithms are never done learning. We're continually testing and experimenting to make our models smarter."
Stitch Fix also relies on style shuffling when clients provide sparse initial data. This shows new users recommendations based on similar style profiles until the algorithm learns their tastes through feedback. However, shuffling risks frustration if clients don't see personalized options upfront. It requires finding the right balance between showcasing relevant selections early and allowing the algorithm time to learn.
Some experts argue recommendation algorithms reflect bias and can pigeonhole user taste. But Stitch Fix data scientists are exploring multi-armed bandits to balance the tradeoff between exploitation and exploration. This approach serves both personalized looks and new discoveries to keep style fresh.
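A minimal epsilon-greedy bandit shows that exploration/exploitation tradeoff in miniature; the keep rates below are simulated, and production systems typically use richer formulations such as Thompson sampling:

```python
"""Minimal epsilon-greedy multi-armed bandit sketch. Reward rates are
simulated; this is not Stitch Fix's production system.
"""
import random


class EpsilonGreedy:
    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms     # times each arm (style) was shown
        self.values = [0.0] * n_arms   # running mean reward per arm

    def select(self) -> int:
        # Explore a random style with probability epsilon...
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        # ...otherwise exploit the best-performing style so far.
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n


if __name__ == "__main__":
    true_rates = [0.2, 0.5, 0.35]          # simulated keep rates per style
    bandit = EpsilonGreedy(n_arms=3)
    for _ in range(5000):
        arm = bandit.select()
        reward = 1.0 if random.random() < true_rates[arm] else 0.0
        bandit.update(arm, reward)
    print("estimated keep rates:", [round(v, 2) for v in bandit.values])
```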
Securing Environments with Policy as Code
As organizations shift towards dynamic cloud infrastructure and microservices, securing environments consistently and at scale becomes challenging. Traditional security models relying on inflexible firewall rules fail to address the complexity of diverse workloads distributed across hybrid cloud environments. This friction often inhibits development and deployment velocity. Policy as code (PaC) revolutionizes security by embedding governance directly into infrastructure automation workflows. Rather than relying on centralized teams to manually review and approve infrastructure requests for compliance, PaC allows developers to provision cloud servers, containers, functions and more pre-configured to corporate security standards.
PaC solutions like HashiCorp Sentinel enable authoring declarative policy rules that constrain what configurations can be instantiated. Checks like requiring encryption can be embedded in shared templates along with other build specifications. Approval workflows also get built into pipelines to ensure no resource spins up without automated validation against centrally managed policies.
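The logic of such a gate can be sketched in a few lines; real Sentinel policies are written in HashiCorp's own policy language, so the Python below is only an illustration with placeholder field names and regions:

```python
"""Illustration of a pre-provisioning policy gate. Field names, regions,
and rules are placeholders; real policies would live in Sentinel.
"""
APPROVED_REGIONS = {"eu-west-1", "us-east-1"}


def check_policy(resource: dict) -> list[str]:
    violations = []
    if not resource.get("encrypted", False):
        violations.append("storage must be encrypted at rest")
    if resource.get("region") not in APPROVED_REGIONS:
        violations.append(f"region {resource.get('region')!r} is not approved")
    if not resource.get("multi_zone", False):
        violations.append("multi-zone redundancy is required")
    return violations


if __name__ == "__main__":
    proposed = {"region": "eu-west-1", "encrypted": True, "multi_zone": False}
    problems = check_policy(proposed)
    if problems:
        print("Deployment blocked:")
        for p in problems:
            print(" -", p)
    else:
        print("Resource is compliant; provisioning may proceed.")
```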
At AdRoll Group, implementing PaC reduced cloud security incidents by 45% as mistakes got flagged before provisioning. “Embedding compliance in developers’ natural workflow makes security foolproof,” says AdRoll CISO Jai Patel. “Engineers don’t have to remember every standard or wait for security audits.”
American chocolate giant Mars also relies on Sentinel-based PaC for cloud governance. Mars’s central IT defines policies aligned to internal risk thresholds and external regulations. Development teams then integrate controls into their pipelines to create compliant resources, from sandbox testing environments to production infrastructure. If a template violates a rule like multi-zone redundancy, the pipeline blocks deployment before an unsafe configuration goes live.
Mars values freeing developers to securely self-serve infrastructure without sacrificing compliance. “Static policy approval procedures stifle experimentation and slow releases,” notes Mars Senior IT Manager William Chan. “PaC shifts left security ownership by operationalizing standards through templated code.”
Rather than rely on security through obscurity, Mars publishes its “golden template” rules internally as open source. Transparency around required guardrails prevents the developer confusion that might otherwise encourage unsafe workarounds. As Chan notes, “Seeing how standards map to risks helps teams better appreciate their importance vs. blindly following mandates.”
Fostering Collaboration Between Development and Operations
A core tenet of DevOps is breaking down barriers between development and operations teams to enable faster, more seamless delivery of innovation to customers. But establishing true collaboration across groups entrenched in siloed processes requires dismantling ingrained cultural divides. Both developers and operations engineers must make concerted efforts to bridge gaps.
Improving empathy is key. Developers need to appreciate the infrastructure stability and security burdens shouldered by operations staff before DevOps. Meanwhile, ops must recognize developer frustrations around bottlenecks stifling experimentation. Taking time to understand daily challenges faced by the other side is essential.
At hotel giant Hilton, the move to DevOps included executives swapping roles. CTO Joshua Kan and SVP of Cloud Services Geraldine Calpin traded places for a month, with Kan leading ops and Calpin managing development. Walking in each other’s shoes provided insight that informed their reorganization. According to Kan, “The swap helped us empathize and shape a shared DevOps vision.”
Joint responsibility for service level indicators is another tactic to align incentives. At Target, developers rotate on-call shifts shadowing ops teams responding to incidents. This exposes engineers to the customer impacts of unstable systems. Meanwhile, ops staff gain read access to code repositories to instill ownership over deployment risks. Eliminating blind spots through cross-functional exposure fosters joint accountability.
Of course, reorganizations alone cannot transform culture. Hilton SVP Calpin notes, “Constantly demonstrating how DevOps improves work for all teams is key to adoption.” At Hilton, reliability engineering coaches partner directly with project teams to provide operational perspectives. These embedded ops advisors help ensure architectural choices allow seamless infrastructure management. They also run war games simulating disasters to prepare development teams. Through regular collaborative experiences, barriers erode.
The Self-Service Platform Future
Obtaining GDPR-compliant consent from contacts becomes considerably more complex when operating in multiple languages. From multinational corporations to small businesses targeting foreign markets, organizations must ensure consent requirements are clearly understood regardless of the language spoken by data subjects.
A key challenge is translating consent statements accurately across languages while retaining the legal validity of consent. Subtle linguistic nuances can change the meaning entirely. For example, the Spanish term "consentimiento informado" has a more formal connotation than the English "consent", suggesting the need for explicit, unambiguous permission. Directly translating an English consent statement into Spanish without adjustment could fail to meet GDPR's high standard for affirmative consent in that language.
Consent statements translated poorly or containing legalese can also be incomprehensible to non-native speakers. A 2019 GDPR fine against Danish company Trustpilot highlights risks of consent getting lost in translation. Trustpilot was fined €2.3 million for failing to obtain valid consent from Italian customers in part due to ambiguous localized consent statements. Without simple, clear language confirming data processing permissions, consent cannot be considered freely given, specific and informed as mandated by the GDPR.
Organizations like Microsoft recognized this early, creating original consent flows in each language rather than direct word-for-word translations. Accounting for regional and cultural norms around permission and privacy led to higher quality GDPR consent from non-English speakers. Adapting vocabulary level, tone and examples to be accessible for a given target audience has proven more effective than rigidly translating English statements into other languages.
The GDPR also requires companies to handle withdrawal of consent in all languages seamlessly. Marketers must have multilingual processes allowing data subjects to revoke consent as easily as it was granted regardless of native language. Those interacting with EU contacts in local languages are responsible for enabling consent withdrawal in those same languages.
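One illustrative way to make withdrawal as easy as granting is to record each consent together with the language and statement version it was presented in; the sketch below is a simplified, hypothetical ledger, not a reference to any specific product or legal advice:

```python
"""Sketch of a multilingual consent ledger. Consent is stored with the
language and statement version shown to the data subject, and withdrawal
works identically in any language. Field names are illustrative.
"""
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str
    language: str              # language the consent statement was shown in
    statement_version: str     # which localized statement was presented
    granted_at: datetime
    withdrawn_at: datetime | None = None


class ConsentLedger:
    def __init__(self) -> None:
        self.records: list[ConsentRecord] = []

    def grant(self, subject_id: str, purpose: str, language: str,
              statement_version: str) -> None:
        self.records.append(ConsentRecord(
            subject_id, purpose, language, statement_version,
            granted_at=datetime.now(timezone.utc)))

    def withdraw(self, subject_id: str, purpose: str) -> None:
        # Withdrawal applies regardless of the language consent was given in.
        for r in self.records:
            if (r.subject_id == subject_id and r.purpose == purpose
                    and r.withdrawn_at is None):
                r.withdrawn_at = datetime.now(timezone.utc)

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        return any(r.subject_id == subject_id and r.purpose == purpose
                   and r.withdrawn_at is None for r in self.records)
```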