DGC-306 keyword endpoints #13

Merged: 22 commits, Sep 3, 2024

Commits
e059fe8
add database package deps
N-Clerkx Jul 16, 2024
5d80b8a
alembic init
N-Clerkx Jul 16, 2024
7caa455
Add postgres db to stack and setup use of alembic
N-Clerkx Jul 22, 2024
c932504
consistent casing
N-Clerkx Jul 23, 2024
67e902e
add database engine
N-Clerkx Jul 23, 2024
cce3d24
override errors to return as json instead of python object strings
N-Clerkx Jul 23, 2024
e056e35
refactor creation of app settings to only 1 location
N-Clerkx Jul 24, 2024
c1cca26
Add test framework
N-Clerkx Jul 24, 2024
020f16b
initial keyword models, endpoints and first test
N-Clerkx Jul 24, 2024
d7f816f
add keyword group endpoints and tests
N-Clerkx Aug 19, 2024
00838b8
add facility endpoints
N-Clerkx Aug 19, 2024
3d3af30
add keywordgroup linking and unlinking to facilities
N-Clerkx Aug 20, 2024
7b40c52
fix delete link test
N-Clerkx Aug 20, 2024
86121df
add get all keyword groups endpoint
N-Clerkx Aug 20, 2024
bace74a
update sqlmodel to use cascade delete functionality; fix uuid type er…
N-Clerkx Aug 20, 2024
dab5909
add keyword endpoints
N-Clerkx Aug 20, 2024
24933e0
add null constraint migration; update commands in readme to use docke…
N-Clerkx Aug 26, 2024
cbcc2aa
add keywords endpoint with facility and keywordgroup filter
N-Clerkx Aug 26, 2024
4e38ffe
add automated pytest running on Github
N-Clerkx Aug 26, 2024
b105b15
fix working directory GH action
N-Clerkx Aug 26, 2024
bce3a67
update dependencies after stac fastapi v3.0 release
N-Clerkx Aug 26, 2024
1ab9353
use new method to add middleware after stac fastapi v3.0 update
N-Clerkx Aug 26, 2024
68 changes: 68 additions & 0 deletions .github/workflows/backend_ci.yml
@@ -0,0 +1,68 @@
name: Backend CI
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
# When this action will be executed
on:
  # Automatically trigger it when changes are detected in the repo
  push:
    branches:
      - main
    paths:
      - "api/**"
      - ".github/workflows/backend_ci.yml"
  pull_request:
    branches:
      - main
    paths:
      - "api/**"
      - ".github/workflows/backend_ci.yml"

  # Allow manual trigger
  workflow_dispatch:

permissions:
  contents: read
  issues: read
  checks: write
  pull-requests: write

jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.11

      - name: Install poetry
        uses: abatilo/actions-poetry@v2

      - name: Setup a local virtual environment (if no poetry.toml file)
        run: |
          poetry config virtualenvs.create true --local
          poetry config virtualenvs.in-project true --local

      - name: Define a cache for the virtual environment based on the dependency lock file
        uses: actions/cache@v3
        with:
          path: ./api/.venv
          key: venv-${{ hashFiles('poetry.lock') }}

      - name: Install the project dependencies
        working-directory: api
        run: poetry install --with=dev

      - name: Run Pytest
        working-directory: api
        run: poetry run pytest tests/ --junit-xml=test-results.xml

      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: |
            api/test-results.xml
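The Run Pytest step writes a JUnit XML report that the publish action then posts back to the pull request. If you want to inspect such a report locally, a minimal stdlib-only sketch is below; the sample XML and test names are illustrative, not taken from this project.

```python
# Summarize a JUnit XML report like the one produced by
# `pytest --junit-xml=test-results.xml`. Stdlib only; the sample
# report below is illustrative, not from this repository.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<testsuites>
  <testsuite name="pytest" tests="3" failures="1" errors="0" skipped="0">
    <testcase classname="tests.test_keywords" name="test_create_keyword"/>
    <testcase classname="tests.test_keywords" name="test_delete_link"/>
    <testcase classname="tests.test_keywords" name="test_filter">
      <failure message="assert 404 == 200"/>
    </testcase>
  </testsuite>
</testsuites>"""

def summarize(report_xml: str) -> dict:
    """Sum the per-suite counters across all <testsuite> elements."""
    root = ET.fromstring(report_xml)
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for suite in root.iter("testsuite"):
        for key in totals:
            totals[key] += int(suite.get(key, 0))
    return totals

print(summarize(SAMPLE))  # {'tests': 3, 'failures': 1, 'errors': 0, 'skipped': 0}
```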
6 changes: 3 additions & 3 deletions api/Dockerfile
@@ -8,7 +8,7 @@
# PYTHON-BASE
# Sets up all our shared environment variables
################################
-FROM python:3.11-slim as python-base
+FROM python:3.11-slim AS python-base

# python
ENV PYTHONUNBUFFERED=1 \
@@ -44,7 +44,7 @@ ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"
# BUILDER-BASE
# Used to build deps + create our virtual environment
################################
-FROM python-base as builder-base
+FROM python-base AS builder-base
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
# deps for installing poetry
@@ -72,7 +72,7 @@ RUN --mount=type=cache,target=/root/.cache \
# DEVELOPMENT
# Image used during development / testing
################################
-FROM python-base as development
+FROM python-base AS development
ENV FASTAPI_ENV=development
WORKDIR $PYSETUP_PATH

46 changes: 45 additions & 1 deletion api/README.md
@@ -1,3 +1,47 @@
# API

This folder contains the API backend for the project. It is a FastAPI application that uses the stac-fastapi library to provide a STAC compliant API. It uses OpenSearch to store the STAC objects and Postgres to store information about users.

The following Python libraries are used in this project:

- [FastAPI](https://fastapi.tiangolo.com/)
- To generate the API including openapi documentation
- [stac-fastapi-opensearch]
- The base Stac API backed by OpenSearch
- [SQLAlchemy](https://www.sqlalchemy.org/)
- To interact with the Postgres database
- [alembic](https://alembic.sqlalchemy.org/en/latest/)
- To manage database migrations
- [sqlmodel](https://sqlmodel.tiangolo.com/)
- To define the database models that are Pydantic compatible allowing them to be returned by the FastAPI endpoints


## Database

To run the API you will need to have a Postgres database running. You can use the following command to start a Postgres database using Docker:

```bash
docker compose up postgres -d
```

This will start a Postgres database running on port 5432 with the username and password set to `postgres`.
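With those defaults, the SQLAlchemy connection URL the application needs can be assembled as sketched below; the helper name and defaults are assumptions for illustration, not project code.

```python
# Build a Postgres connection URL from the docker-compose defaults
# described above. Hypothetical helper -- not part of the project code.
from urllib.parse import quote_plus

def build_db_url(user: str = "postgres", password: str = "postgres",
                 host: str = "localhost", port: int = 5432,
                 db: str = "postgres") -> str:
    # quote_plus keeps special characters in credentials URL-safe
    return f"postgresql://{quote_plus(user)}:{quote_plus(password)}@{host}:{port}/{db}"

print(build_db_url())  # postgresql://postgres:postgres@localhost:5432/postgres
```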

To create the database tables you can run the following command:

```bash
docker compose exec stac-api alembic upgrade head
```

### Migrations

When developing new database models, you will need to create a new migration. You can do this by running the following command (replace `Add new table` with a short description of the migration):

```bash
docker compose exec stac-api alembic revision --autogenerate -m "Add new table"
```

This will create a new migration file in the `alembic/versions` folder. You can then apply this migration by running:

```bash
docker compose exec stac-api alembic upgrade head
```
113 changes: 113 additions & 0 deletions api/alembic.ini
@@ -0,0 +1,113 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts
# Use forward slashes (/) also on windows to provide an os agnostic path
script_location = dmsapi/alembic

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .

# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires Python>=3.9 or the backports.zoneinfo library.
# Any required deps can be installed by adding `alembic[tz]` to the pip requirements
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =

# max length of characters to apply to the "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to dmsapi/alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:dmsapi/alembic/versions

# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.

# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = --fix REVISION_SCRIPT_FILENAME

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
1 change: 1 addition & 0 deletions api/dmsapi/alembic/README
@@ -0,0 +1 @@
Generic single-database configuration.
86 changes: 86 additions & 0 deletions api/dmsapi/alembic/env.py
@@ -0,0 +1,86 @@
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context
from sqlmodel import SQLModel

import dmsapi.database.models  # noqa: F401 -- imported so the model tables register on SQLModel.metadata
from dmsapi.app import settings

url = settings.db_connection_url

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This sets up the loggers.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
target_metadata = SQLModel.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section, {}),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
        url=url,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            user_module_prefix="sqlmodel.",
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
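Note that env.py takes the connection URL from the application's settings object rather than from alembic.ini, so migrations and the API always target the same database. A sketch of how such a settings object might assemble the URL from environment variables follows; the field names and env-var names are assumptions, not the project's actual settings code.

```python
# Hypothetical settings object assembling db_connection_url from
# environment variables, as env.py expects. Field and env-var names
# are assumptions, not the project's actual settings implementation.
import os
from dataclasses import dataclass, field

def _env(name: str, default: str) -> str:
    return os.getenv(name, default)

@dataclass
class Settings:
    db_user: str = field(default_factory=lambda: _env("POSTGRES_USER", "postgres"))
    db_password: str = field(default_factory=lambda: _env("POSTGRES_PASSWORD", "postgres"))
    db_host: str = field(default_factory=lambda: _env("POSTGRES_HOST", "localhost"))
    db_port: str = field(default_factory=lambda: _env("POSTGRES_PORT", "5432"))
    db_name: str = field(default_factory=lambda: _env("POSTGRES_DB", "postgres"))

    @property
    def db_connection_url(self) -> str:
        # The URL shape SQLAlchemy/alembic consume
        return (
            f"postgresql://{self.db_user}:{self.db_password}"
            f"@{self.db_host}:{self.db_port}/{self.db_name}"
        )

settings = Settings(db_user="postgres", db_password="postgres",
                    db_host="localhost", db_port="5432", db_name="postgres")
print(settings.db_connection_url)  # postgresql://postgres:postgres@localhost:5432/postgres
```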
27 changes: 27 additions & 0 deletions api/dmsapi/alembic/script.py.mako
@@ -0,0 +1,27 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
import sqlmodel
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}


def upgrade() -> None:
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    ${downgrades if downgrades else "pass"}
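When `alembic revision --autogenerate` runs, it renders this Mako template by substituting the `${...}` placeholders. The idea can be sketched with the stdlib's `string.Template`; this is a simplification (real Mako also evaluates expressions such as `repr(...)`), and the revision IDs below are made up for illustration.

```python
# Conceptual sketch of template-based revision generation using only
# the stdlib. Placeholder values are illustrative, not real revisions.
from string import Template

TEMPLATE = Template('''"""$message

Revision ID: $up_revision
Revises: $down_revision
"""

def upgrade() -> None:
    $upgrades

def downgrade() -> None:
    $downgrades
''')

rendered = TEMPLATE.substitute(
    message="Add new table",
    up_revision="abc123def456",   # made-up revision id
    down_revision="None",
    upgrades="pass",
    downgrades="pass",
)
print(rendered.splitlines()[0])  # """Add new table
```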