Lambda execution: Why use now-bridge with local http-server instead of just calling the render function? #231
Hi, while there are existing mocking solutions for request & response around (e.g. …), we use a real local HTTP server in the bridge. Since the server is frozen by AWS Lambda between calls, the server is also only generated and started on a Lambda cold start.
"Full Next.js server" means that you will use the Next.js custom server API?
It also seems that @dphang from the serverless-next.js repo is working on a simple compat layer for normal Lambdas right now:
Not exactly the Next.js custom server, but the core of it (called …). I have an ongoing PR that implements it: #89. Yes, sls-next is currently in the process of moving its Lambdas from Lambda@Edge to regional Lambdas, to follow an architecture similar to ours (and Vercel's). It's kind of crazy that each community Next.js serverless project creates its own builder (this project, sls-next, @netlify, Amplify, Flightcontrol.dev), given that we all deploy to AWS Lambda eventually 😅
👍, any roadmap/ETA?
You're absolutely right. Kind of a bummer that Vercel doesn't provide the full build tools for us. It leaves a bittersweet taste, given that Next.js is promoted as "open-source". Nevertheless, I also understand that it's their business model to kind of force people to use their hosting platform.
Hey all, just chiming in since I was mentioned - definitely an interesting project here that I didn't know about yet. Yes, that's right: the build/packaging logic is mostly done (thanks in large part to Jan Varho for the initial work), but I'm currently working on testing the deployment logic using CDK for Terraform and ironing out bugs.
Yeah, that is definitely a lot of duplication; I wish we could collaborate more across projects. For serverless-next.js, though, I guess the original author made the logic very coupled/custom to Lambda@Edge to optimize for serverless platforms, instead of trying to use parts of …. In fact, https://github.com/serverless-stack/serverless-stack does use our …. One thing I've been focused on is also trying to simplify the architecture and improve perf: one Lambda for routing, getting data from S3, etc. Previously, for Lambda@Edge to do fallbacks or add headers, it needed an origin response handler, which was CloudFront-specific and made development more complex. These Lambdas actually just wrap generic handlers with a compat layer and platform-specific logic to retrieve files (so we can reuse them for Azure, GCP, etc.). So it can be either: CloudFront -> Lambda@Edge origin request handler -> S3/SQS or SSR rendering
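The "generic handler wrapped by a compat layer" idea mentioned above can be sketched roughly like this. All names here are hypothetical illustrations of the pattern, not serverless-next.js APIs: the core handler depends only on an abstract platform interface, and each target (AWS, Azure, GCP, ...) supplies its own adapter:

```javascript
// Generic handler: knows nothing about S3, CloudFront, or any one cloud.
// Platform-specific file retrieval is injected via the `platform` argument.
function createGenericHandler(platform) {
  return async function handle(request) {
    const file = await platform.getFile(request.path);
    if (file !== undefined) {
      return { statusCode: 200, body: file };
    }
    return { statusCode: 404, body: 'not found' };
  };
}

// Example adapter with an in-memory "bucket" standing in for S3;
// a real AWS adapter would fetch the object from S3 instead.
const awsPlatform = {
  files: { '/index.html': '<h1>hello</h1>' },
  async getFile(path) {
    return this.files[path];
  },
};

const handler = createGenericHandler(awsPlatform);
```

Swapping `awsPlatform` for an Azure or GCP adapter reuses the same core logic, which is the portability benefit described in the comment.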
Yeah, I definitely understand too, since their business model is hosting on Vercel. But considering Next.js is open source, I believe we should also have a bunch of open-source self-hosted options (AWS, etc.), which is why I continued maintaining serverless-next.js after the original maintainer left...
The plan is to ship it together with ISR mode in the 0.11 milestone by the end of November.
Always welcome! Nice to hear your thoughts on this topic, thanks for sharing! 👍 I think Vercel has learned some lessons from the Gatsby story: they invested a lot of time (of their then-small team) improving the open-source Markdown parsing / MDX projects, and then Gatsby came in, put a GraphQL API on top of MDX, and scored their first big VC money from it. However, since they pulled the deployment code from the public repo, they have always worked towards removing special Vercel code from Next.js (which resulted in the deprecation of the serverless target mode in v12). It's still hard to keep up to date with the latest Next.js features, though, since most changes to internal Next.js behavior are not public (or are hidden in patch releases), so reverse engineering new features takes a lot of time.
These might be some points where working together would make sense:
I'm going to close this issue because it has been inactive for 30 days ⏳. This helps to find and focus on the active issues.
First of all, I'm just here because `tf-next build` enables me to actually use my Next.js app in a manually configured environment using CloudFormation/SAM. Good job 👍.

I want to understand why you made the decision to implement the launcher and bridge (the runtime of the Lambda proxy) in a way that creates a local HTTP server and calls it directly from inside the code. It seems to me that this just creates massive unnecessary overhead. The only thing that is needed is the routing stuff from the generated `now__launcher.js`; the render function can be called directly with mocked req/res objects: