[OCS] Support for dynamic bucket size based on rate of data consumption. #811


Open
vihangpatil opened this issue Sep 4, 2019 · 1 comment

Comments

@vihangpatil
Member

Problem description

We end up configuring a large bucket size on the PGW, OCSGW or OCS so that it can handle the fastest data consumption.
But most users do not consume data at a peak rate.
Due to the large bucket size, consumption happens in large steps (of 100 MB, for example).
There is also a risk of a large amount of data going unaccounted for due to technical issues such as component restarts or delayed/lost messages between components.

Solution

Instead of using a large bucket size tuned for the fastest data consumption, the bucket size should be proportionate to moderate data consumption, but able to increase dynamically when someone consumes at higher rates.

@jigarpatel1007

Really insightful observation — this hits a common tradeoff in real-time charging platforms: fixed large buckets ensure service continuity but reduce granularity, especially for low- and average-traffic users.

Here’s a possible approach I’ve seen work in similar telecom infrastructures:


🧠 Proposal: Sliding Window Bucket Resizing Strategy

Instead of statically assigning a large bucket upfront, the system could apply an adaptive window model:

  1. Start with a moderate base bucket size, e.g., 10 MB or 25 MB
  2. Monitor actual consumption rate over a short, rolling window (e.g., 15–30 seconds)
  3. If consumption rate exceeds a threshold, bump the bucket size up (e.g., double it)
  4. If the rate drops back below threshold for a defined cooldown, shrink the bucket again
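The four steps above could be sketched roughly as follows. This is a minimal illustration, not code from the project — all names, thresholds, and the doubling/halving policy are assumptions for the sake of the example:

```python
# Illustrative adaptive bucket sizing. All constants and names are
# hypothetical; real values would come from operator configuration.

BASE_BUCKET = 10 * 1024 * 1024   # moderate starting bucket (10 MB)
MAX_BUCKET = 100 * 1024 * 1024   # hard upper bound to prevent overgrowth
HIGH_RATE = 1 * 1024 * 1024      # bytes/sec threshold that triggers growth
COOLDOWN_WINDOWS = 3             # consecutive low-rate windows before shrinking


def next_bucket_size(current, rate_bps, low_rate_windows):
    """Return (new_bucket_size, updated_low_rate_window_count).

    Called once per monitoring window with the observed consumption rate.
    """
    if rate_bps > HIGH_RATE:
        # Step 3: consumption exceeds the threshold -> double, capped.
        return min(current * 2, MAX_BUCKET), 0
    low_rate_windows += 1
    if low_rate_windows >= COOLDOWN_WINDOWS:
        # Step 4: sustained low rate -> halve, but never below the base size.
        return max(current // 2, BASE_BUCKET), 0
    return current, low_rate_windows
```

The cooldown counter is what prevents flapping: a single quiet window does not shrink the bucket, only a sustained drop in rate does.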

This allows the system to:

  • Stay lean for normal usage
  • Scale up smoothly when users download large files or video streams
  • Minimize unaccounted consumption during outages or crashes

🔧 Implementation Outline

  • Add a lightweight RateTracker component (in-memory or Redis) keyed by user/session
  • Track deltas in consumption timestamps
  • Trigger bucket size suggestion per session (or per RAT, if needed)
  • Keep hard upper/lower bounds to prevent flapping or overgrowth
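An in-memory version of the RateTracker outlined above might look like this (a sketch only — the class name, window length, and API are assumptions; a production version keyed by user/session could back this with Redis instead):

```python
import time
from collections import defaultdict, deque


class RateTracker:
    """Tracks consumption deltas per session over a rolling window.

    Hypothetical sketch: stores (timestamp, bytes) samples in memory
    and computes the average consumption rate over the window.
    """

    def __init__(self, window_seconds=30):
        self.window = window_seconds
        self.samples = defaultdict(deque)  # session_id -> deque of (ts, bytes)

    def record(self, session_id, consumed_bytes, ts=None):
        ts = time.monotonic() if ts is None else ts
        q = self.samples[session_id]
        q.append((ts, consumed_bytes))
        # Drop samples that have fallen out of the rolling window.
        while q and q[0][0] < ts - self.window:
            q.popleft()

    def rate_bps(self, session_id):
        """Average bytes per second over the window (0.0 if no samples)."""
        q = self.samples.get(session_id)
        if not q:
            return 0.0
        return sum(b for _, b in q) / self.window
```

The `rate_bps` value would feed the resize decision per session, with the hard upper/lower bounds applied on top.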

✅ Benefits

  • Preserves accuracy for average users
  • Reduces risk of unbilled traffic due to component failure
  • Provides a smoother UX without needing config changes per PGW

Happy to help sketch out a design doc or contribute some testing strategy for how this could be piloted in a pre-prod cluster.
