GraphQL Schema First (Complete stack)
This repository includes a full cycle NestJS stack to develop with containers:
- NestJS + GraphQL Schema First (default application generated by `@nestjs/cli`) with some configurations:
  - `class-validator` package with middleware configured to show errors when the input data doesn't match the criteria (see the sketch after this list);
  - `nestjs-pino`: replaces the built-in Nest logger to write output in JSON syntax, which is easy to collect with ElasticSearch or similar tools;
  - Modules to work with Redis:
    - `infra/redis-cache/redis-cache.module.ts`: to use Redis as a key/value store, useful for caching; needs `cache-manager` (already installed);
    - `infra/redis-pubsub/redis-pubsub.module.ts`: to use Redis as Pub/Sub;
    - `infra/redis-queue/redis-queue.module.ts`: to use Redis as a queue and execute async tasks;
  - `ms`: allows easily converting string values, like `1d`, `1m`, etc., to milliseconds;
  - `@golevelup/ts-jest`: allows easily creating mocks from providers and using them in tests;
- Dockerfile with multi-stage build and `docker-compose` for local development with live reload and debugger;
- Redis (via Kubernetes with StatefulSet);
- Dynatrace (via Kubernetes) - you will need to configure OneAgent following the Dynatrace docs; OpenTelemetry will be used for this;
- Kubernetes deployments - application, ingress and service;
- Docker Desktop StorageClass fix for Kubernetes (only when running locally);
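
As a rough illustration of the `class-validator` usage mentioned above, an input class declares its validation rules with decorators and the configured middleware/`ValidationPipe` rejects data that doesn't match them. The class and field names below are hypothetical, not taken from this repository:

```typescript
// Hypothetical input type; it only illustrates how class-validator decorators are declared.
import { IsInt, IsNotEmpty, IsString, Min } from 'class-validator';

export class CreateItemInput {
  @IsString()
  @IsNotEmpty()
  name: string;

  @IsInt()
  @Min(1)
  quantity: number;
}
```

When a mutation receives an input that violates these rules, the validation layer answers with a descriptive error instead of reaching the resolver.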
The GraphQL setup includes:
- Subscription handlers already configured;
- `directive @deprecated` configured on `schema.graphql` for example purposes;
- `graphql-scalars`: allows working with custom types and validating data (see the sketch after this list);
  - `scalar DateTime` declared on `schema.graphql` only for example purposes;
- a `withCancel()` function to listen for when a client disconnects from a `Subscription`.
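
A minimal sketch of how the items above can be wired together in a schema-first `GraphQLModule`, assuming the Apollo driver; the module name, the `graphql-ws` transport and the exact options are assumptions and may differ from this repository's actual setup:

```typescript
// Sketch only: option values here are assumptions, not this repository's exact configuration.
import { Module } from '@nestjs/common';
import { ApolloDriver, ApolloDriverConfig } from '@nestjs/apollo';
import { GraphQLModule } from '@nestjs/graphql';
import { DateTimeResolver } from 'graphql-scalars';

@Module({
  imports: [
    GraphQLModule.forRoot<ApolloDriverConfig>({
      driver: ApolloDriver,
      // Schema-first: load type definitions from .graphql files
      typePaths: ['./**/*.graphql'],
      // Map "scalar DateTime" from schema.graphql to the graphql-scalars implementation
      resolvers: { DateTime: DateTimeResolver },
      // Enable Subscription support over WebSocket
      subscriptions: { 'graphql-ws': true },
    }),
  ],
})
export class GraphqlConfigModule {}
```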
Requirements:
- Node.js (v18);
- Docker (with Docker Compose);
- Kubernetes cluster (if you want to use it);
- Dynatrace account (if you want to use it).
Environment variables are defined in:
- `.env` - for local development, via Docker and NodeJS. Read more in this section.
- `ConfigMap` - for production or another environment using Kubernetes. In this location.

See all available environment variables in the `environments.ts` file.
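
The real variable list lives in `environments.ts`; the sketch below only shows the usual pattern of centralizing `process.env` reads with defaults. Apart from `SERVER_PORT`, `REDIS_HOST` and `REDIS_PREFIX`, which appear elsewhere in this README, every name here is hypothetical:

```typescript
// Hypothetical shape; check the real environments.ts for the actual variables.
import { name as packageName } from '../package.json'; // requires "resolveJsonModule" in tsconfig

export const environments = {
  serverPort: Number(process.env.SERVER_PORT ?? 3000),
  redisHost: process.env.REDIS_HOST ?? 'redis',
  // Defaults to the "name" property of package.json, as described below
  redisPrefix: process.env.REDIS_PREFIX ?? packageName,
};
```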
The Redis auth credentials were changed to:
```
user default on +@pubsub ~* nopass
user admin on +@all ~* >adminpassword
```
Credentials:

| Username | Scope | Password | ACL (permissions) |
|---|---|---|---|
| `default` | pubsub | nopass (make it empty) | `+@read +@write +@list +@set +@string +@connection +@transaction +@stream +@pubsub +@scripting +@slow +@fast` |
| `admin` | all | adminpassword | `+@all` |
Note: you can change the credentials in the `redis-configmap.yaml` file on these lines.

Local tests with Docker: to apply the permissions on localhost, you will also need to change the `redis.conf` file in the Docker folder.
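
For reference, this is roughly how a client authenticates against those ACL users; whether this repository connects through `ioredis` directly (rather than only via `cache-manager` and the queue/pubsub libraries) is an assumption:

```typescript
import Redis from 'ioredis';

// "default" user (nopass): no AUTH is needed, Redis applies its limited ACL automatically
const defaultClient = new Redis({
  host: process.env.REDIS_HOST ?? 'redis',
  port: 6379,
});

// "admin" user: full access (+@all), authenticated with the password from redis-configmap.yaml
const adminClient = new Redis({
  host: process.env.REDIS_HOST ?? 'redis',
  port: 6379,
  username: 'admin',
  password: 'adminpassword',
});
```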
The keys stored in Redis by all modules (Queue, PubSub, Cache) have a prefix to avoid conflicts with other applications. The value is defined by the `REDIS_PREFIX` environment variable. The default value is the value of the `name` property in the `package.json` file.
| Resource/Module | Key example |
|---|---|
| Cache / `RedisCacheModule` | `prefix:cache:key` |
| Queue / `RedisQueueModule` | `prefix:key` |
| PubSub / `RedisPubSubModule` | `prefix:key` |
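
A hypothetical helper showing how such keys are composed from the prefix; the repository's Redis modules may build them internally in a different way:

```typescript
// Hypothetical key builders matching the table above.
const prefix = process.env.REDIS_PREFIX ?? 'my-app'; // 'my-app' stands in for the package.json "name" default

export const cacheKey = (key: string): string => `${prefix}:cache:${key}`;
export const queueKey = (key: string): string => `${prefix}:${key}`;
export const pubsubChannel = (key: string): string => `${prefix}:${key}`;

// Example: cacheKey('user:42') === 'my-app:cache:user:42'
```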
If you don't want to use Dynatrace, you can remove the `opentelemetry.js` file; you will also need to edit the `Dockerfile`, removing `"--require", "./opentelemetry.js"` on this line.
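
For context, a file preloaded with `--require` to bootstrap OpenTelemetry typically looks like the sketch below (shown in TypeScript; the real `opentelemetry.js` is plain JavaScript and its actual contents may differ):

```typescript
// Sketch of an OpenTelemetry bootstrap; not necessarily what this repository's file contains.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  // The OTLP exporter reads the standard OTEL_EXPORTER_OTLP_* environment variables
  traceExporter: new OTLPTraceExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});

// Must run before the application code, hence the "--require" flag in the Dockerfile
sdk.start();
```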
You can start the application locally with:
```sh
# with Docker Compose
# -V recreates anonymous volumes
# --build rebuilds the images
docker-compose up --build -V

# You can specify the app port with SERVER_PORT (the default is 3000)
SERVER_PORT=4000 docker-compose up --build -V

# or simply with NodeJS
npm run start:dev
```
If you made changes to the Redis container, run Docker Compose with `--force-recreate` to reload the `redis.conf` file.
To access services (like mocks) on the host machine, you can simply use the DNS name `host.docker.internal`.
You can configure the environment variables in the `.env` file.

Note: the `.env` file will be used ONLY for local development. In production or another environment, you will need to change them via ConfigMap (if using K8S).
You don't need Docker Compose for development, but to use Redis you will need to install and configure it first:
```sh
# Create Docker network
docker network create redis

# Run Redis, attach network and expose port
docker run -d --rm --network redis --name redis -p 6379:6379 redis:6.2.3-alpine
```
In this case, running outside the Docker network, the Redis host (`REDIS_HOST` env) will be `localhost` or `127.0.0.1`. Replace it in the `.env` file.
Now you can simply start like any other NestJS app:
```sh
npm run start:dev
```
The advantage of using Docker to run the whole stack is that all the configuration is already in place and the connection between the app and Redis is set up automatically: the Redis host will simply be `redis` (the Docker network name).
In this repository we are using `integration` as the image name, but you can change it to whatever you want.
You can build the application with:
```sh
# Production build
# using multi-stage build
docker build --target production -t <image-name> .
docker run --rm -it -p 3000:3000 <image-name>
```
First, you will need to create a Kubernetes cluster. Use any engine you like, such as K3D, Minikube, Docker Desktop, etc.
You can get started (only the first time) by simply running:
```sh
# This command will create namespaces and apply deployments
sh ./infra/k8s/start-k8s-cluster.sh
```
Note: if you changed the Docker image name to another value (not `integration`), you will need to update the K8S deployments with the new image.
Additional notes:
- `.graphql` files need to be manually copied to the `dist` folder;
- Defining log points with Pino;
- Error when adding `redisStore` to `RedisCacheModule`: issue 1, issue 2.