What Is The Resolution
This article answers the question "what is the resolution" in the context of live streaming, and gives concrete, production-ready guidance for picking resolution, bitrate, encoder and SRT transport settings when you need low latency, predictable quality and reliable delivery. If this is your main use case, this practical walkthrough helps: Best Camera For Live Streaming. For this workflow, teams usually combine Player & embed, Video platform API, and Ingest & route. Before full production rollout, run a Test and QA pass with a test app for end-to-end validation, using generated test videos, a streaming quality check and video preview.
What it means (definitions and thresholds)
Resolution is the spatial size of a video frame expressed as pixel dimensions (width x height). In streaming engineering we care about resolution together with frame rate, bitrate and chroma/bit depth because these four define the delivered quality and bandwidth. Below are definitions and practical thresholds you will use when designing pipelines. For an implementation variant, compare the approach in Obs Recording Software.
Common resolution categories
- SD: 640x360 or 854x480 (360p - 480p)
- HD: 1280x720 (720p)
- Full HD: 1920x1080 (1080p)
- QHD: 2560x1440 (1440p)
- UHD/4K: 3840x2160 (2160p)
Pixel count vs perceived quality
Higher resolution requires more bits for the same visual quality, especially with motion. Bitrate per pixel is the practical metric: for similar quality, required bitrate scales roughly with pixel count, so reducing resolution reduces the required bitrate roughly proportionally. If you need a deeper operational checklist, use Internet Speed For Streaming Twitch.
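The bitrate-per-pixel idea can be sketched as a quick calculation. This is a planning aid, not a quality model; the example bitrates are assumptions for illustration, and values around 0.05–0.15 bpp are a common rule of thumb for H.264 live content.

```python
def bits_per_pixel(width: int, height: int, fps: int, bitrate_bps: int) -> float:
    # Bits spent per pixel per frame: bitrate divided by pixel throughput.
    return bitrate_bps / (width * height * fps)

# 1080p30 at 4.5 Mbps and 720p30 at 2 Mbps land in the same quality band:
print(round(bits_per_pixel(1920, 1080, 30, 4_500_000), 3))  # 0.072
print(round(bits_per_pixel(1280, 720, 30, 2_000_000), 3))   # 0.072
```

If two renditions have similar bpp, dropping to the lower resolution saves bandwidth without a large perceived-quality gap for most content.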
Chroma and color depth
Most live streams use 4:2:0 chroma subsampling and 8-bit color. Moving to 4:2:2 or 10-bit increases bandwidth and encoder load but helps chroma-heavy content (graphics, studio feeds). A related implementation reference is Low Latency.
Frame rate (fps)
Common live frame rates are 24/25/30/50/60. Higher fps increases motion smoothness and bitrate requirements. For the same latency budget, moving from 30fps to 60fps halves the per-frame processing budget if you keep a 1s keyframe interval. Pricing path: validate with bitrate calculator, self hosted streaming solution, and AWS Marketplace listing.
Keyframe (IDR) and GOP
GOP length is the number of frames between keyframes. For low-latency live streaming set keyframes frequently: typical target is 1 second, so for 30fps GOP=30, for 60fps GOP=60. Long GOPs (2-4s) save bandwidth but increase recovery time after packet loss and can hurt latency-sensitive switching.
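The GOP arithmetic above is simple but worth encoding once, since it feeds directly into the encoder's `-g` and `-keyint_min` flags:

```python
def gop_length(fps: int, keyframe_interval_s: float = 1.0) -> int:
    # Frames between IDR keyframes; pass this value as -g / -keyint_min to x264.
    return round(fps * keyframe_interval_s)

print(gop_length(30))       # 30  -> ffmpeg: -g 30 -keyint_min 30
print(gop_length(60))       # 60
print(gop_length(25, 2.0))  # 50  (broadcast-style 2 s GOP)
```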
Bitrate thresholds (H.264, typical target ranges)
- 480p@30fps: 600 kbps – 1.2 Mbps
- 720p@30fps: 1.5 Mbps – 3 Mbps
- 720p@60fps: 3 Mbps – 5 Mbps
- 1080p@30fps: 3 Mbps – 6 Mbps
- 1080p@60fps: 6 Mbps – 10 Mbps
- 1440p@30fps: 6 Mbps – 12 Mbps
- 2160p@30fps: 14 Mbps – 25 Mbps
HEVC/H.265 reduces these by roughly 30-50% for the same perceived quality at the cost of decoder availability and higher encode complexity.
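For planning HEVC ladders from the H.264 table above, a rough estimate using the 30–50% savings band is often enough. The midpoint inputs below are taken from the ranges in the table; the 40% default saving is an assumption within the stated band.

```python
H264_TARGETS_KBPS = {   # midpoints of the H.264 ranges listed above
    "720p30": 2250,
    "1080p30": 4500,
    "2160p30": 19500,
}

def hevc_estimate_kbps(h264_kbps: int, saving: float = 0.4) -> int:
    # saving: fraction of bitrate saved vs H.264 (0.3-0.5 per the text above)
    return round(h264_kbps * (1 - saving))

for name, kbps in H264_TARGETS_KBPS.items():
    print(name, hevc_estimate_kbps(kbps))
```

Always validate against real decoder availability on your target devices before committing to an HEVC-only ladder.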
Part size / segment size
For chunked streaming: standard HLS segments are usually 2-6 seconds. Low-latency CMAF/LL-HLS uses partial/fmp4 part durations of 100–500 ms; shorter parts reduce latency but increase request/CPU overhead and CDN origin pressure.
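The request-overhead trade-off of shorter parts is easy to quantify. This sketch ignores playlist polling and CDN request collapsing, so treat it as an upper-bound planning figure:

```python
def edge_requests_per_sec(part_ms: int, viewers: int) -> float:
    # Each player fetches one part per part duration from its current rendition.
    return viewers * 1000 / part_ms

def origin_parts_per_sec(part_ms: int, renditions: int) -> float:
    # The packager emits one fMP4 part per rendition per part duration.
    return renditions * 1000 / part_ms

print(edge_requests_per_sec(200, 1000))  # 5000.0 req/s for 1000 viewers at 200 ms parts
print(edge_requests_per_sec(500, 1000))  # 2000.0 req/s at 500 ms parts
print(origin_parts_per_sec(200, 4))      # 20.0 parts/s for a 4-rendition ladder
```

Halving part duration roughly doubles edge request volume, which is why 100 ms parts are reserved for cases where every millisecond of latency matters.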
Decision guide
Pick resolution by balancing three constraints: the acceptable end-to-end latency, the typical viewer bandwidth, and the content motion/clarity requirements. Use the following decision flow.
What latency do you need?
- Interactive (sub-second to 1s, e.g., auctions, gaming, remote camera control): favor lower resolution and WebRTC or SRT+gateway to WebRTC. Target 480p–720p and 500 kbps–3 Mbps.
- Low-latency live (<3s, e.g., sports highlights, live news with near-real-time experience): use SRT for contribution, transcode to LL-HLS/CMAF or WebRTC for players. Target 720p–1080p and 1.5–8 Mbps.
- Broadcast-grade (5–30s latency acceptable): you can target 1080p–4K with higher bitrates and larger segments for CDN efficiency.
What is audience bandwidth?
- If more than 70% of viewers are on 5 Mbps+ networks, 1080p30 is feasible with adaptive streaming.
- If many mobile users on 2–4 Mbps, prefer 720p30 or multi-bitrate ladder that includes 720p and 480p renditions.
Is interactivity required?
- For two-way video and low-latency viewer interaction, sacrifice resolution to reach latency goals. Use 480p–720p, 15–30 fps.
Budget and infrastructure
- High-resolution multi-bitrate with low latency requires more transcoding capacity and higher CDN costs. Consider offloading to a managed API like /products/video-api or using the multi-streaming features at /products/multi-streaming.
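The decision flow above can be encoded as a small starting-point helper. The tiers and numbers mirror the text; the function name and return shape are illustrative only, not part of any product API.

```python
def starting_profile(latency_s: float, viewer_mbps: float) -> dict:
    # Map a target end-to-end latency and typical viewer bandwidth to a
    # first-pass resolution/transport choice, per the decision guide above.
    if latency_s <= 1.0:
        return {"resolution": "480p-720p", "transport": "WebRTC or SRT->WebRTC gateway"}
    if latency_s <= 3.0:
        res = "1080p30" if viewer_mbps >= 5 else "720p30"
        return {"resolution": res, "transport": "SRT contribution + LL-HLS/CMAF"}
    return {"resolution": "1080p-4K", "transport": "SRT contribution + HLS/DASH"}

print(starting_profile(2.0, 6))  # low-latency tier, 1080p30 over SRT + LL-HLS
```

Use the output as a starting template, then adjust per the latency budgets in the next section.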
Latency budget / architecture budget
Design latency budgets top-down. Below are sample budgets for three end-to-end goals. Each item is actionable and uses measured ranges suitable for production planning.
Target A: Sub-second interactive (250–500 ms)
- Capture + camera pipeline: 30–50 ms (30fps frame interval = 33 ms)
- Encoder (hardware low-latency): 10–80 ms
- Transport RTT + SRT jitter buffer: 100–200 ms (set SRT latency=100–200 ms)
- Server processing (ingest/transcode minimal): 20–50 ms
- Player decode & render: 40–80 ms
- Total target: ~200–460 ms
Target B: Low-latency streaming (500 ms – 3 s)
- Capture: 30–50 ms
- Encode: 30–200 ms
- Transport + SRT jitter buffer: 200–800 ms (latency param 200–800 ms)
- Transcode / packaging / CDN: 100–500 ms
- Player buffer & decode: 100–500 ms
- Total target: 500 ms – 2 s (typical)
Target C: Broadcast-grade (5–30 s)
- Capture: 30–100 ms
- Encode: 50–400 ms (more complex encoding, higher quality)
- Transport & segmentation: 2–10 s (longer HLS or DASH segments)
- CDN caching and player buffer: 2–20 s
- Total target: 5–30 s
Action: pick a target and allocate numbers for each component; instrument RTT, packet loss and P95 latency per stage.
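The budget-allocation step can be instrumented as data. The per-stage ranges below are copied from sample Target A; swap in your measured P95 values per stage and re-check the total against your goal.

```python
BUDGET_A_MS = {  # (min, max) milliseconds per stage, from Target A above
    "capture": (30, 50),
    "encode": (10, 80),
    "transport_srt": (100, 200),
    "server": (20, 50),
    "player": (40, 80),
}

def total_range(budget: dict) -> tuple:
    # Sum the per-stage minima and maxima into an end-to-end range.
    lo = sum(v[0] for v in budget.values())
    hi = sum(v[1] for v in budget.values())
    return lo, hi

print(total_range(BUDGET_A_MS))  # (200, 460) -- matches the ~200-460 ms total
```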
Practical recipes
Concrete, copy-paste-friendly recipes. Replace endpoints and bitrates to suit your network and CDN.
Recipe 1: Remote contributor to studio via SRT, low-latency 720p30
- Encoder: software or hardware encoder at contributor site.
- Encoder settings (libx264 example with ffmpeg):
ffmpeg -re -i input -c:v libx264 -preset veryfast -tune zerolatency -profile:v high -level 4.1 -g 30 -keyint_min 30 -bf 0 -b:v 2500k -maxrate 3000k -bufsize 5000k -c:a aac -b:a 128k -f mpegts 'srt://STUDIO_IP:PORT?pkt_size=1316&latency=200'
Notes:
- pkt_size=1316 reduces UDP fragmentation for MTU ~1500.
- latency=200 sets the SRT latency (jitter buffer) to 200 ms on the sender side. SRT negotiates the higher of the two peers' configured values, so either match it on the receiver or leave the receiver at its default and let negotiation apply.
- bf 0 disables B-frames for lower decode latency.
- Network tuning at contributor and ingest server:
- Increase UDP socket buffers: net.core.rmem_max and net.core.wmem_max to 33554432 (32 MB) on Linux.
- Ensure firewall allows the SRT port and avoids deep packet inspection that reorders packets.
- At studio ingest: terminate SRT, validate RTT and packet loss, then handoff to transcoder.
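When the SRT output URL is assembled programmatically rather than hard-coded, a small builder avoids query-string typos. The host/port are placeholders from the recipe, and `streamid` is an optional, deployment-specific assumption (not required by the recipe above):

```python
from urllib.parse import urlencode

def srt_url(host: str, port: int, latency_ms: int = 200,
            pkt_size: int = 1316, streamid: str = "") -> str:
    # Build an srt:// URL with the transport parameters used in Recipe 1.
    params = {"pkt_size": pkt_size, "latency": latency_ms}
    if streamid:
        params["streamid"] = streamid
    return f"srt://{host}:{port}?{urlencode(params)}"

print(srt_url("STUDIO_IP", 9000))
# srt://STUDIO_IP:9000?pkt_size=1316&latency=200
```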
Recipe 2: Low-latency live to viewers using SRT contribution + LL-HLS output
- Contribution: use SRT with latency 300–800 ms depending on network variability.
- Transcode/packager: transcode to a multi-bitrate ladder and generate CMAF fMP4 parts with part size 200–400 ms.
- Example ladder: 1080p30 @ 5 Mbps, 720p30 @ 3 Mbps, 480p30 @ 1.2 Mbps, 360p30 @ 700 kbps.
- Segment config: part duration 200 ms, target window 3 parts before availability (LL-HLS player typically needs 2–3 parts before playback).
- CDN: choose a CDN that supports low-latency CMAF or use a real-time gateway to WebRTC for sub-second hotspots. If using a CDN-based LL-HLS workflow, expect 0.5–3 s depending on part size and CDN performance.
- Player: configure minimal buffer and use an LL-HLS-capable player with 2–3 part prefetch.
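Before shipping the ladder from Recipe 2, it helps to verify that all renditions spend a similar bits-per-pixel budget, so quality stays consistent across ABR switches. The dimensions and bitrates below are the recipe's example values:

```python
LADDER = [  # (width, height, fps, bitrate_bps) from the Recipe 2 example ladder
    (1920, 1080, 30, 5_000_000),
    (1280, 720, 30, 3_000_000),
    (854, 480, 30, 1_200_000),
    (640, 360, 30, 700_000),
]

for w, h, fps, br in LADDER:
    bpp = br / (w * h * fps)   # bits per pixel per frame
    print(f"{h}p: {bpp:.3f} bpp")
```

If one rung lands far below the others, viewers will see a visible quality cliff when ABR steps down to it; raise its bitrate or drop the rung.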
Recipe 3: Interactive multi-party calls (very low latency)
- Use WebRTC for last-mile interactivity. Use SRT for high-quality contribution to a central mixer if you need reliable contributor feeds.
- Resolution targets per camera: 480p30 for remote participants, 720p30 for presenters. Bitrates: 400–1500 kbps per camera depending on resolution.
- Encoder settings: minimize buffering, set keyframe interval to 1s, disable B-frames, and prefer hardware encoders for CPU efficiency.
- Topology: local mixer or SFU with adaptive simulcast to reduce uplink from each client. Map the studio clean feeds via SRT into the SFU or mixer.
Practical configuration targets
Targets you can use as baseline templates. Start here, then run load and network tests and iterate.
Ultra-low interactive
- Resolution: 640x480 or 854x480 (480p)
- Framerate: 30 fps
- Bitrate: 500–1,200 kbps
- GOP/keyframe: 30 frames (1s)
- Encoder tune: zerolatency, bf=0, refs=1
- Transport: WebRTC or SRT with latency=100–200 ms
Low-latency streaming (viewer-facing)
- Resolution: 720p30 or 1080p30
- Bitrate: 1.5–6 Mbps (use adaptive ladder)
- GOP: 30 frames (1s)
- Part size (LL-HLS/CMAF): 200–400 ms
- SRT latency: 200–800 ms for contributor links
Broadcast-quality
- Resolution: 1080p60 or 4K depending on use case
- Bitrate: 6–50 Mbps
- GOP: 2s typical for bandwidth savings (shorten the keyframe interval if you need fast channel switching)
- Segment: 2–6s for HLS
Limitations and trade-offs
Every choice forces trade-offs. Make these explicit in design documents and SLOs.
Resolution vs latency
Higher resolution increases encoder latency and bitrate, and the higher bitrate makes the stream more sensitive to network variation, which pushes you toward a larger transport buffer (raising latency) or a lower resolution. For sub-second interactivity, reduce resolution first.
Bitrate vs error recovery
SRT uses retransmission; this improves quality at the cost of added jitter-buffer-based latency. Lower bitrates make streams more sensitive to packet loss and motion complexity.
CPU vs hardware
Software encoders provide the highest quality per bit at the cost of encode latency and CPU. Hardware encoders (NVENC, QuickSync) reduce latency and CPU load, sometimes at the cost of slightly lower compression efficiency. Measure for your scene complexity.
CDN constraints
Some CDNs do not fully support LL-HLS or will buffer parts at the edge. Verify CDN support for part-level delivery if you need <3s latency.
Common mistakes and fixes
Actionable list of common misconfigurations and how to resolve them quickly.
Mistake: Using long segment durations for low-latency needs
Fix: Reconfigure packager to use CMAF parts 200–400 ms or LL-HLS with shorter parts; expect higher request rates and scale CDN accordingly.
Mistake: Keyframe interval too long
Fix: Set keyframe interval to 1s (g = fps) so that player switching and error recovery are faster.
Mistake: B-frames enabled by default
Fix: Disable B-frames (bf 0) for low-latency workflows where decode/render delay matters.
Mistake: Not tuning socket buffers
Fix: Increase net.core.rmem_max and net.core.wmem_max to 16–32 MB on encoder and ingest servers; set appropriate UDP buffer sizes on the encoder process if available.
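A back-of-envelope way to size those buffers is from the data in flight during the SRT latency window, plus headroom for retransmits. The 4x headroom factor below is an assumption, not an SRT requirement:

```python
def udp_buffer_bytes(bitrate_bps: int, srt_latency_ms: int, headroom: float = 4.0) -> int:
    # Bytes in flight during one SRT latency window, scaled for retransmit headroom.
    in_flight = bitrate_bps / 8 * (srt_latency_ms / 1000)
    return int(in_flight * headroom)

# 10 Mbps contribution with latency=800 ms:
print(udp_buffer_bytes(10_000_000, 800))  # 4000000 bytes -> well under a 32 MB sysctl cap
```

If the computed value approaches your `net.core.rmem_max`, raise the sysctl before raising the SRT latency.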
Mistake: Expecting SRT to appear in browsers
Fix: Use SRT for contribution and terminate/republish via WebRTC or LL-HLS for browser playback; Callaba supports pipelines that convert SRT to player-side protocols through the /products/video-api and /products/multi-streaming services.
Rollout checklist
Before switching a live stream to production, validate each item below.
- Set measurable SLOs: target E2E latency, P95 viewer join time, acceptable packet loss.
- Confirm contributor link stability: run 24-hour SRT tests with packet loss injection at 0.5% and 2% using netem.
- Validate encoder config: reproduce under CPU and GPU limits; ensure encoder stays within real-time bounds without frame drops.
- Test CDN: publish load test and confirm LL-HLS part propagation within your latency budget.
- Instrument metrics: capture RTT, packet loss, encode time, round-trip SRT latency, segment generation time and player buffer occupancy.
- Enable graceful fallback renditions and ABR ladders for variable networks.
- Document emergency steps: increase SRT latency to 800 ms, reduce resolution by one step, or switch to a lower-bitrate rendition.
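The documented emergency steps work best as an ordered runbook that operators (or automation) walk through while the stream stays degraded. The step names below are illustrative; wire them to your own control plane:

```python
EMERGENCY_STEPS = [  # applied in order, per the emergency steps above
    ("raise_srt_latency", {"latency_ms": 800}),
    ("drop_resolution_one_step", {}),          # e.g. 1080p -> 720p
    ("switch_to_lower_rendition", {"floor": "480p"}),
]

def next_step(applied: int):
    # Return the next mitigation to apply, or None when the list is exhausted.
    return EMERGENCY_STEPS[applied] if applied < len(EMERGENCY_STEPS) else None

print(next_step(0)[0])  # raise_srt_latency
```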
Example architectures
Three tested architectures with product mapping and where to use them.
Architecture 1: Contributor SRT -> Studio -> LL-HLS -> CDN -> Browser
- Use SRT for the contributor hop to the studio (latency 200–800 ms).
- At ingest, transcode to a low-latency CMAF packager (part size 200–400 ms) and push to CDN edges as LL-HLS.
- Call to action: use /products/video-api to manage ingest, transcoding and packaging programmatically.
Architecture 2: Multi-party interactive -> SFU -> WebRTC clients
- Use WebRTC for browser-native sub-second interactivity.
- For higher-fidelity feeds, contributors send SRT to a media server, which then injects into SFU streams.
- Map to product: /products/multi-streaming for rebroadcast and restream tasks after mixing.
Architecture 3: High-quality studio -> SRT uplink -> CDN for VOD
- Use SRT for contributor and uplink reliability to ingest points.
- Record and archive into VOD store with transcoded renditions for adaptive playback; use /products/video-on-demand to provision storage and playback manifests.
- For private/self-hosted deployments, consider /self-hosted-streaming-solution or our partner marketplace listing on AWS for preconfigured images: https://aws.amazon.com/marketplace/pp/prodview-npubds4oydmku
Troubleshooting quick wins
When things go wrong, apply these quick checks in order of impact.
- Measure end-to-end: record timestamps at capture, ingest, packaging and player to localize latency.
- If packet loss is visible, temporarily raise SRT latency to 500–1000 ms to allow retransmits; then tune up socket buffers and investigate network path.
- If encoder CPU spikes and drops frames, lower resolution or move to hardware encoder; for software x264, move preset from veryfast to superfast or ultrafast as needed.
- If players report long startup, check part availability at CDN edge and reduce initial parts-to-play threshold; verify CDN supports LL-HLS parts propagation.
- If viewers on mobile complain of buffering, add lower-bitrate renditions (360p and 480p) and lower the ABR minimum threshold so players can switch down to them reliably.
Next step
Resolution is one variable in a full streaming stack. Translate your target latency into concrete encoder and transport configs, then automate tests and monitoring. If you want an operational path forward:
- Programmatic ingest, transcoding and packaging: explore our /products/video-api for controlling ingest and generating low-latency outputs.
- Multi-destination and restreaming: use /products/multi-streaming to reach social platforms and CDNs while keeping contribution stable via SRT.
- Archival and on-demand workflows: use /products/video-on-demand to store and repackage recorded streams at multi-bitrate targets.
- Self-hosted options and enterprise deployments: read /self-hosted-streaming-solution and see our AWS Marketplace offering for deployable instances at https://aws.amazon.com/marketplace/pp/prodview-npubds4oydmku
- Technical references and how-tos: follow these docs to implement the recipes above: /docs/srt-setup, /docs/encoder-configuration, /docs/latency-tuning
If you want hands-on help mapping your exact use case (audience bandwidth distribution, target latency and available encoder hardware) to a production configuration and a rollout plan, contact our engineering team through the /products/video-api page or start with the multi-streaming product for quick tests at scale.


