Update documentation with clearer instructions for setup and deployment
Jason Huang committed Dec 17, 2023
1 parent 74a4dcc commit 777cc7e
Showing 1 changed file (README.md) with 55 additions and 17 deletions.
Fleet Telemetry is a server reference implementation for Tesla's telemetry protocol.

The service handles device connectivity as well as receiving and storing transmitted data. Once configured, devices establish a WebSocket connection to push configurable telemetry records. Fleet Telemetry provides clients with ack, error, or rate limit responses.

## Prerequisites
There are a few system prerequisites and recommendations for installing and deploying a Fleet Telemetry server. It is highly recommended that you run this service on a Debian server. For any Fleet Telemetry server to interact with vehicles, you will need to own a website domain and host Fleet Telemetry on that domain; this domain is the `partner_domain` defined in the [Fleet API documentation](https://developer.tesla.com/docs/fleet-api#fleet-telemetry).
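As a quick sanity check before onboarding, you can confirm that the domain resolves to the host that will run Fleet Telemetry. The domain below is a placeholder for your own `partner_domain`:

```sh
# Replace telemetry.example.com with your own partner_domain.
dig +short telemetry.example.com   # should print the public IP of the host that will run Fleet Telemetry
```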

### Minimum System Requirements
These are the minimum system requirements for a Fleet Telemetry server with 10 active vehicles connected, running on a Debian-based server under regular day-to-day usage. Note that requirements will be much higher depending on vehicle usage and the number of vehicles.

* 0.2 AWS vCPU
* 256MB of RAM
* 8GB of Storage
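If you run the service in a container and want to hold it to roughly these limits, Docker's resource flags can enforce them. The values below simply mirror the minimums above (and reuse the image tag shown later in this README); they are a sketch, not an official recommendation:

```sh
# Cap the container at ~0.2 CPU and 256 MB of RAM, mirroring the minimums above.
docker run --cpus="0.2" --memory="256m" \
  -v /etc/fleet-telemetry:/etc/fleet-telemetry \
  tesla/fleet-telemetry:v0.1.8
```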

### Dependencies
Fleet Telemetry has the following dependencies:

* Go 1.20+
* libzmq
* Docker*
* Kubernetes/Helm*

\*Docker and Kubernetes/Helm are only required if you deploy by using the prebuilt Docker image or Helm charts (recommended).
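On a Debian-based host, the non-container dependencies can typically be installed as sketched below; the package names are the usual Debian/Ubuntu ones and may differ on other distributions:

```sh
# libzmq development headers plus a basic toolchain for building the server.
sudo apt-get update && sudo apt-get install -y libzmq3-dev build-essential

# Verify Go 1.20+ is on the PATH (install a newer release from https://go.dev/dl if needed).
go version
```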

### Vehicle Compatibility

Vehicles must be running firmware version 2023.20.6 or later. You can find the firmware version under your VIN in the Tesla mobile app or in the Software tab of the vehicle's infotainment system. Some older Model S/X vehicles are not supported.

## Configuring and running the service
As mentioned in the prerequisites, you must own a public domain and host this server on it so that vehicles can communicate with your Fleet Telemetry server. You may generate [self-signed certificates](https://en.wikipedia.org/wiki/Self-signed_certificate) for your domain; the generated TLS certificate and private key pair is used both to authenticate connecting vehicles and to let devices establish connections to your Fleet Telemetry server. Tesla devices rely on a mutual TLS (mTLS) WebSocket to create a connection with the backend. Fleet Telemetry is designed to operate on top of Kubernetes, but you can run it as a standalone binary if you prefer.

Once you have generated or obtained a TLS certificate and private key for your domain (the `partner_domain`), you can onboard your Fleet Telemetry server by following the [Fleet API documentation](https://developer.tesla.com/docs/fleet-api#fleet-telemetry).

You may generate a self-signed certificate using OpenSSL (have your domain/CNAME ready):

```sh
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
```
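To skip the interactive prompts, you can pass the subject directly; the domain below is a placeholder for your own `partner_domain`:

```sh
# Non-interactive variant; replace telemetry.example.com with your partner_domain.
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem \
  -subj "/CN=telemetry.example.com"

# Verify what was generated.
openssl x509 -in certificate.pem -noout -subject -dates
```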

Before you deploy your Fleet Telemetry server, you will also need to configure it (the Docker image reads the config mounted at `/etc/fleet-telemetry/config.json`). Here is a template:
```
{
"host": string - hostname,
  // ...
"flush_period": int - ms flush period
}
},
"kafka": { //librdkafka kafka config, seen here: https://raw.githubusercontent.com/confluentinc/librdkafka/master/CONFIGURATION.md
"kafka": { // librdkafka kafka config, seen here: https://raw.githubusercontent.com/confluentinc/librdkafka/master/CONFIGURATION.md
"bootstrap.servers": "kafka:9092",
"queue.buffering.max.messages": 1000000
},
  // ...
"enabled": bool,
"message_limit": int - ex.: 1000
},
"records": { list of records and their dispatchers, currently: alerts, errors, and V(vehicle data)
"records": { // list of records and their dispatchers, currently: alerts, errors, and V(vehicle data)
"alerts": [
"logger"
],
  // ...
}
```
Example: [server_config.json](./examples/server_config.json)
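As a rough illustration of how the template above might be filled in, here is a minimal sketch that only uses the fields visible in this excerpt; the real server needs additional fields that are elided above (TLS certificate/key paths and so on), so treat [server_config.json](./examples/server_config.json) as the authoritative reference:

```sh
# Hypothetical minimal config using only fields shown in the template above;
# the full set of required fields is in examples/server_config.json.
cat > config.json <<'EOF'
{
  "host": "telemetry.example.com",
  "kafka": {
    "bootstrap.servers": "kafka:9092",
    "queue.buffering.max.messages": 1000000
  },
  "records": {
    "alerts": ["logger"]
  }
}
EOF
sudo mv config.json /etc/fleet-telemetry/config.json
```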

### Deploy using Kubernetes with Helm Charts (recommended for large fleets)
If you already have a [Kubernetes](https://kubernetes.io/) stack or plan to build a service for large fleets, we recommend running Fleet Telemetry on Kubernetes. Helm Charts help you define, install, and upgrade applications on Kubernetes. You can find a reference helm chart [here](https://github.com/teslamotors/helm-charts/blob/main/charts/fleet-telemetry/README.md).

You must have [Kubernetes](https://kubernetes.io/docs/setup/) and [Helm](https://helm.sh/docs/intro/install/) installed for this deployment method.
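A typical install with the reference chart looks roughly like the sketch below; the repository URL, chart name, namespace, and values file are assumptions here, so follow the chart's README linked above for the exact commands:

```sh
# Repo URL and chart name are assumptions -- confirm both in the helm-charts README.
helm repo add teslamotors https://teslamotors.github.io/helm-charts
helm repo update
helm install fleet-telemetry teslamotors/fleet-telemetry \
  --namespace fleet-telemetry --create-namespace \
  -f values.yaml   # your certificate, key, and config.json wiring
```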

### Deploy using Docker

1. Pull the Tesla Fleet Telemetry image (all tags are listed on [Docker Hub](https://hub.docker.com/r/tesla/fleet-telemetry/tags)):
```sh
docker pull tesla/fleet-telemetry:v0.1.8
```
2. Run your Docker image with your mTLS certificate, private key, and config.json mounted locally at `/etc/fleet-telemetry`:
```sh
sudo docker run -v /etc/fleet-telemetry:/etc/fleet-telemetry tesla/fleet-telemetry:v0.1.8
```
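In practice you will usually also want to name the container, run it detached, publish the listening port, and restart it automatically. The port below is a placeholder for whichever port your config.json binds:

```sh
# 443 is a placeholder -- publish whichever port your config.json binds.
sudo docker run -d --name fleet-telemetry --restart unless-stopped \
  -p 443:443 \
  -v /etc/fleet-telemetry:/etc/fleet-telemetry \
  tesla/fleet-telemetry:v0.1.8

# Tail the logs to confirm the server started cleanly.
sudo docker logs -f fleet-telemetry
```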
### Deploy manually
1. Build the server
```sh
make install
```

2. Deploy and run the server. This can be run as a binary via `$GOPATH/bin/fleet-telemetry -config=/etc/fleet-telemetry/config.json` directly on a server, or as a Kubernetes deployment. Example snippet:
```yaml
---
apiVersion: apps/v1
# ...
spec:
  type: LoadBalancer
```
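Once the manifest is filled in, it can be applied and checked like any other deployment; the file name and deployment name below are placeholders for your own manifest:

```sh
# fleet-telemetry.yaml and the deployment name are placeholders for your own manifest.
kubectl apply -f fleet-telemetry.yaml
kubectl get pods,svc
kubectl logs deploy/fleet-telemetry -f
```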
3. Create and share a vehicle configuration with Tesla.
```
{
"hostname": string - server hostname,
  // ...
}
```
Example: [client_config.json](./examples/client_config.json)


## Backends/dispatchers
The following [dispatchers](./telemetry/producer.go#L10-L19) are supported:
* Kafka (preferred): Configure with the config.json file. See implementation here: [config/config.go](./config/config.go)
