Serverless Stack Development with SST - William Smith

Serverless Stack Development with SST (eBook)

The Complete Guide for Developers and Engineers
eBook Download: EPUB
2025 | 1st edition
250 pages
HiTeX Press (publisher)
978-0-00-102724-4 (ISBN)
€8.52 incl. VAT
(CHF 8.30)

'Serverless Stack Development with SST'
'Serverless Stack Development with SST' is an authoritative guide that navigates the rapidly evolving landscape of serverless computing, blending foundational principles with practical, modern techniques. Through its comprehensive chapters, the book unpacks the serverless paradigm, examining its core concepts, key patterns, and architectural evolution, before focusing on the emergence of SST (Serverless Stack Toolkit) as an essential extension of the AWS Cloud Development Kit (CDK). Readers will benefit from clear, critical comparisons between SST and other leading frameworks, gaining a nuanced understanding of SST's role in unlocking scalable, modular cloud solutions for organizations of all sizes.
Delving deeply into real-world technical challenges, the book addresses advanced API development, event-driven design, and multi-environment project structuring. It explores sophisticated data layer management techniques, including DynamoDB modeling, securing configuration and secrets, and integrating with relational and in-memory databases. Security, compliance, and DevOps concerns are treated with equal rigor, offering practical strategies for IAM role design, end-to-end encryption, continuous compliance, automated deployments, zero-downtime rollouts, and full-stack observability. Through actionable blueprints and best practices, technical leaders and practitioners are empowered to elevate their serverless infrastructure to the highest standards of reliability and efficiency.
The journey culminates with advanced engineering topics and future-focused insights. Readers explore performance tuning for cold starts, resource scaling, bottleneck analysis, and ecosystem extensibility through plugins and custom constructs. Case studies illuminate real-world SST deployments, from multi-tenant SaaS to legacy modernization, while a closing survey of emerging trends ensures readers are equipped for the next wave of innovation. Whether architecting new greenfield systems, modernizing legacy workloads, or scaling global platforms, 'Serverless Stack Development with SST' is an indispensable resource for mastering next-generation cloud development.

Chapter 1
Serverless Fundamentals and the Role of SST


How did serverless reshape not only architecture, but the limits of developer productivity itself? This chapter traces the provocative journey of serverless thinking, revealing the motivations, principles, and technical innovations underlying this paradigm. By dissecting common patterns and confronting modern challenges, we prepare to contrast conventional approaches with the philosophy and architecture of SST. Whether you’re refining cloud operations or seeking a framework that bridges ambition and best practice, this is where that evolution begins.

1.1 Serverless Paradigm: Concepts and Evolution


The serverless paradigm represents a profound shift in cloud computing, characterized by its abstraction from traditional server management and its pay-per-execution economic model. Unlike conventional infrastructure models where developers provision and maintain virtual machines (VMs) or containers, serverless computing inherently conceals these layers. This abstraction elevates the developer experience by enabling a focus on code and business logic rather than operational concerns, thereby accelerating innovation cycles and reducing time-to-market.

Historically, the emergence of the serverless model is rooted in the evolving needs of cloud infrastructure economics and software architecture complexity. Early cloud services primarily replicated on-premises infrastructure in virtualized form, emphasizing Infrastructure as a Service (IaaS). This approach, while flexible, imposed significant operational overhead and complexity in resource management. Subsequently, Platform as a Service (PaaS) aimed to alleviate these burdens by abstracting runtime environments, though it often constrained flexibility and portability. Serverless computing—often exemplified by Functions as a Service (FaaS)—represents an evolutionary leap by offering fine-grained, event-driven execution units that scale automatically and incur costs solely based on actual usage.

The initial wave of FaaS, introduced by platforms such as AWS Lambda (launched in 2014), signaled a pivotal moment. It enabled developers to deploy discrete functions triggered by various events, from HTTP requests to cloud storage changes, without provisioning or managing servers. This innovation drastically reduced both capital and operational expenditure, aligning closely with cloud economics principles that emphasize elasticity and cost efficiency. The pay-as-you-go billing model transformed cloud utilization from fixed expenses based on provisioned capacity to variable expenses tied directly to consumption patterns.

From a technological perspective, the success of serverless hinged on several innovations. First, high levels of automation in function lifecycle management—encompassing deployment, scaling, load balancing, and failure recovery—were critical. These capabilities relied on advances in container orchestration, lightweight virtualization, and event-driven architectures, enabling near real-time scaling down to zero instances. Second, improvements in runtime environments and isolation technologies preserved security and performance without sacrificing agility. The advent of microVMs, language sandboxing, and specialized FaaS runtimes contributed to this balance.

Organizationally, serverless facilitated a transformation in software delivery models. It fostered the decomposition of traditional monolithic applications into loosely coupled, event-driven microservices. This decomposition was not merely a technical shift but an enabler of agile development practices, continuous integration and continuous delivery (CI/CD), and domain-driven design. Teams could independently develop, deploy, and scale functions aligned with discrete business capabilities, reducing interdependencies and accelerating iteration cycles.

The evolutionary trajectory from monoliths and VMs to serverless architectures also entailed a redefinition of software boundaries and resource ownership. Traditional VMs encapsulated entire applications or services with dedicated operating systems, leading to over-provisioning and underutilization. Serverless functions, in contrast, often encapsulate atomic units of execution without persistent state, calling for new paradigms in state management, event orchestration, and data consistency. Technologies such as managed state stores, event streaming platforms (e.g., Apache Kafka, AWS Kinesis), and workflow orchestrators (e.g., AWS Step Functions) emerged to complement serverless compute, enabling complex distributed applications.

Notable milestones illustrate this evolution. The 2014 launch of AWS Lambda popularized FaaS, followed by Google Cloud Functions and Microsoft Azure Functions, each expanding platform capabilities and regional reach. Concurrently, the ecosystem matured with the introduction of frameworks like Serverless Framework and AWS SAM, which simplified function deployment and infrastructure as code. Around the late 2010s, hybrid architectures combining serverless, containers, and traditional services became prevalent, driven by the need to balance cold start latency, runtime limitations, and legacy system integration.

The narrative extends to multi-service architectures where serverless functions interoperate with managed databases, messaging systems, identity services, and analytics platforms. This integration realizes the promise of composability and scalability, exemplified in architectures underpinning large-scale applications such as streaming media, real-time analytics, and IoT backends. Consequently, serverless is now a core component of modern cloud strategies, enabling enterprises to harness agility, reduce operational burden, and optimize costs amid accelerating digital transformation.

In summary, serverless computing emerged as a response to the economic imperatives of cloud scalability and the complexity of software delivery, advancing through technological innovation and organizational adoption. Its evolution from early FaaS implementations to sophisticated, multi-service ecosystems reflects a durable shift in cloud architecture paradigms, one that continues to reshape how applications are conceived, developed, and deployed.

1.2 Key Serverless Patterns and Workflows


Serverless architectures fundamentally reshape the design and operation of distributed applications by abstracting away infrastructure management and promoting fine-grained, event-driven execution models. Core to exploiting serverless benefits is understanding the predominant patterns that govern function interaction, state management, and system decomposition.

Event-driven design lies at the heart of serverless, where functions, typically stateless compute units such as AWS Lambda functions, are invoked by events rather than direct calls or persistent connections. Events range from HTTP requests, message queue arrivals, and blob storage changes to custom application signals. This paradigm decouples producers and consumers both temporally and spatially, enabling improved scalability and fault tolerance. For example, an S3 file upload event can trigger image processing asynchronously, without adding latency to the upload itself.
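To ground the pattern, here is a minimal sketch of such an asynchronous consumer in TypeScript with AWS SDK v3; the bucket contents and the processing step are illustrative assumptions, not code from the book.

// Sketch: a Lambda function invoked by S3 "object created" events.
// The processing step is a placeholder; the point is that the uploader's
// request has already completed by the time this function runs.
import type { S3Event } from "aws-lambda";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export const handler = async (event: S3Event): Promise<void> => {
  // A single invocation may carry several records, one per uploaded object.
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    const object = await s3.send(
      new GetObjectCommand({ Bucket: bucket, Key: key })
    );

    // Placeholder for the real work, e.g. resizing the image and writing
    // the result to another bucket.
    console.log(`processing ${key} (${object.ContentLength ?? 0} bytes) from ${bucket}`);
  }
};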

Event-driven architectures require careful event schema design and, ideally, idempotent handlers to safeguard against duplicate invocation. Event routing is often managed by cloud services such as API Gateway or event buses like Amazon EventBridge, which support filtering and transformation. Patterns such as event sourcing and CQRS (Command Query Responsibility Segregation) complement event-driven setups by preserving an immutable log of events and separating read and write workloads effectively.
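One common way to make a handler idempotent is to record each event's unique id with a conditional write and skip redeliveries. The sketch below assumes an EventBridge-delivered event and a hypothetical DynamoDB table named ProcessedEvents used purely as a deduplication ledger.

// Sketch: idempotent EventBridge consumer using a conditional put on the
// event id; a duplicate delivery fails the condition and is ignored.
import type { EventBridgeEvent } from "aws-lambda";
import {
  DynamoDBClient,
  PutItemCommand,
  ConditionalCheckFailedException,
} from "@aws-sdk/client-dynamodb";

const db = new DynamoDBClient({});

export const handler = async (
  event: EventBridgeEvent<"OrderPlaced", { orderId: string }>
): Promise<void> => {
  try {
    // Succeeds only the first time this event id is seen.
    await db.send(new PutItemCommand({
      TableName: "ProcessedEvents",
      Item: { pk: { S: event.id } },
      ConditionExpression: "attribute_not_exists(pk)",
    }));
  } catch (err) {
    if (err instanceof ConditionalCheckFailedException) {
      return; // duplicate delivery: already handled
    }
    throw err;
  }

  // Apply the side effect exactly once per logical event.
  console.log(`handling order ${event.detail.orderId}`);
};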

Serverless functions emphasize statelessness, which minimizes coordination overhead and enables arbitrary horizontal scaling. Since execution contexts are ephemeral and not guaranteed to persist across invocations, application state must reside in external durable stores such as databases, distributed caches, or object storage. This constraint enforces a separation of concerns: business logic is parameterized solely by input events and external state retrieval.
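The following sketch illustrates this stateless discipline for an HTTP-triggered function: nothing survives between invocations in the handler itself, and the only durable state is an atomic counter in DynamoDB. The table name Counters and the route shape are assumptions for the example.

// Sketch: a stateless API handler; all state lives in an external table,
// so any number of concurrent instances can serve requests.
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const db = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const id = event.pathParameters?.id ?? "default";

  // Atomically increment the counter in the durable store; the function
  // instance holds no state of its own and may be recycled at any time.
  const result = await db.send(new UpdateCommand({
    TableName: "Counters",
    Key: { pk: id },
    UpdateExpression: "ADD hits :one",
    ExpressionAttributeValues: { ":one": 1 },
    ReturnValues: "ALL_NEW",
  }));

  return {
    statusCode: 200,
    body: JSON.stringify({ id, hits: result.Attributes?.hits ?? 0 }),
  };
};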

Statelessness simplifies retry policies and failure handling by eliminating concerns about local in-memory state reconciliation. However, it introduces latency and complexity tied to database round-trips or eventual consistency of storage systems. Solutions to mitigate these limitations include utilizing distributed caches for session-like state or embedding small context blobs inside event payloads to reduce external calls.
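The second mitigation, carrying a small context blob in the event itself, is sketched below: the producer denormalizes the few fields the consumer needs into the EventBridge detail so the consumer avoids one database round-trip. The bus name, source, and detail shape are illustrative assumptions.

// Sketch: publish an event whose detail already carries the consumer's
// required context, trading a slightly larger payload for one fewer lookup.
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const bus = new EventBridgeClient({});

export async function publishOrderPlaced(order: {
  orderId: string;
  customerId: string;
  customerEmail: string; // denormalized context embedded in the payload
}): Promise<void> {
  await bus.send(new PutEventsCommand({
    Entries: [{
      EventBusName: "default",
      Source: "shop.orders",
      DetailType: "OrderPlaced",
      Detail: JSON.stringify(order),
    }],
  }));
}

The trade-off is that embedded context can go stale, so this suits attributes that are effectively immutable for the lifetime of the event.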

Serverless architectures naturally align with microservices, where granular functions encapsulate discrete capabilities. The decomposition can follow domain-driven design principles, creating bounded contexts with loosely coupled responsibilities. Each function or group thereof acts as an autonomous service, enabling independent deployment, scaling, and evolution.
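As a concrete illustration of this decomposition, the sketch below assumes SST v2's sst/constructs API (route paths, handler files, and permissions are hypothetical): two capabilities sit behind one API, but each route is backed by its own function that can be configured, permissioned, and scaled independently.

// Sketch: an SST stack in which each route maps to a separate function,
// so the "orders" and "billing" capabilities deploy and evolve independently.
import { StackContext, Api } from "sst/constructs";

export function ShopStack({ stack }: StackContext) {
  const api = new Api(stack, "ShopApi", {
    routes: {
      // Orders capability: a plain handler reference with default settings.
      "POST /orders": "packages/functions/src/orders/create.handler",

      // Billing capability: its own function definition and scoped permissions.
      "POST /payments": {
        function: {
          handler: "packages/functions/src/billing/charge.handler",
          permissions: ["dynamodb"],
        },
      },
    },
  });

  stack.addOutputs({ ApiEndpoint: api.url });
}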

This decomposition is facilitated by the modular nature of serverless platforms, which allow per-function policies, resource limits, and versioning. However, over-decomposition can increase operational complexity and invocation overhead, while under-decomposition may cause monolithic behaviors negating serverless benefits. The design must balance cohesion and...

Publication date (per publisher): 20.8.2025
Language: English
Subject area: Mathematics / Computer Science > Computer Science > Programming Languages / Tools
ISBN-10 0-00-102724-7 / 0001027247
ISBN-13 978-0-00-102724-4 / 9780001027244
File format: EPUB (Adobe DRM)
Size: 910 KB
