Compare commits
28 Commits
| SHA1 |
|---|
| 2b455dece6 |
| b1b11b47f7 |
| c70d05b583 |
| e3ccd9d252 |
| 63e7e73360 |
| d4464461b2 |
| aec12ff6c5 |
| 7a16083a2e |
| 800ca832e7 |
| e1107256e8 |
| 6f873100ef |
| 5716d4adf6 |
| 59c9a719e0 |
| b833c2ac6e |
| 50fa9a5a22 |
| dda20f82e6 |
| 111ed726d8 |
| 2d1d93c455 |
| 26de435ff1 |
| cd63ea142f |
| 0c76e867ae |
| a666f1233d |
| e981a334ea |
| 6cbaab73dc |
| 96b5e8f40f |
| efe5907adc |
| e00280b653 |
| 6c26135cad |
Vendored
+2
```diff
@@ -1,2 +1,4 @@
 libs/geo-api-client/node_modules/
 api
+var/
+var/logs/
```
```diff
@@ -47,8 +47,23 @@ bun run build
 ## Editing guidance for agents
 
 - Prefer minimal changes and avoid unrelated refactors.
+- Treat `var/` as runtime data only; ignore it for code changes and commits.
+- Every change must be committed and pushed.
 - Add tests when behavior changes.
 - Verify Go tests after backend changes.
 - Verify Bun tests after TS client changes.
 - For DB-required tests, prefer embedded/ephemeral Postgres fixtures over relying on an externally managed database.
 - If CI fails due runner/network infrastructure, keep logs explicit in workflow output.
 
+## Agent skill memory (current behavior)
+
+- **Asset downloads stay on backend domain:** `GET /v1/assets/{id}/download` streams bytes from backend (no redirect to MinIO/internal URL).
+- **Asset uploads are backend-routed:** signed upload endpoint returns backend URL (`/v1/assets/{id}/upload`), browser never uploads directly to MinIO.
+- **Public features API exists:** use `GET /v1/features/public` with optional `kind` query (`3d` or `image`) to fetch globally visible features/assets.
+- **Feature geometry update API exists:** `PATCH /v1/features/{id}` updates point geometry (owner only).
+- **MapLibre demo expectations (`web/maplibre-demo.js`):**
+  - uses raster OSM tiles (not vector style),
+  - loads all public 3D features on map start,
+  - after login merges all owner collections,
+  - owner feature markers are draggable and persisted via `PATCH /v1/features/{id}`.
+- **Share-link behavior in demos:** "Copy Share Link" generates map URLs with coordinates so recipients open map context, not only raw asset URL.
```
````diff
@@ -20,13 +20,15 @@ go test ./...
 go run ./cmd/api
 ```
 
+`var/` is runtime data and is ignored by git.
+
 Run tests via Docker (avoids local permission issues, e.g. `var/`):
 
 ```bash
 docker compose --profile test run --rm test
 ```
 
-Primary deployed base URL: `https://momswap.produktor.duckdns.org/`.
+Primary deployed base URL: `https://tenerife.baby/`.
 
 Local default (for development): `http://localhost:8122`.
 
````
````diff
@@ -51,25 +53,28 @@ Or use `./bin/up.sh` which runs the key generation if needed.
 
 This starts:
 
-- `db` (`postgis/postgis`) on `5432`
+- `db` (`postgis/postgis`) on `5432` inside the container, exposed as **`7721`** on the host for remote access
 - `api` on `8122` — uses PostgreSQL via `DATABASE_URL` (migrations run on startup)
 - `minio` (S3-compatible storage) with admin UI on `8774` and internal S3 API on `9000`
 
+**Remote DB access** (e.g. `postgres://momswap:momswap@HOST_IP:7721/momswap?sslmode=disable`): The init script `etc/pg-init-remote.sh` configures `pg_hba.conf` for remote connections on fresh installs. If the DB was initialized before that was added, run once: `./bin/fix-pg-remote.sh`
+
 Stop the service:
 
 ```bash
 docker compose down
 ```
 
-For local development with auto-rebuild on file changes:
+For local development with file watching/rebuild:
 
 ```bash
-COMPOSE_BAKE=true docker compose --profile dev up --watch
+COMPOSE_BAKE=true docker compose up -d
+COMPOSE_BAKE=true docker compose watch
 ```
 
 Notes:
 
-- `api` service listens on `8122` inside the container, mapped to host `8122` (reverse proxy at `https://momswap.produktor.duckdns.org`).
+- `api` service listens on `8122` inside the container, mapped to host `8122` (reverse proxy at `https://tenerife.baby`).
 - `api` service uses the production `runtime` image target.
 - `api-dev` profile uses the `dev` image target and Docker Compose watch.
 - DB defaults can be overridden via `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`.
````
```diff
@@ -87,9 +92,10 @@ go run ./cmd/api
 
 Then visit:
 
-- Production: `https://momswap.produktor.duckdns.org/web/`
+- Production: `https://tenerife.baby/web/`
 - Local: `http://localhost:8122/web/`
 - Local Leaflet demo: `http://localhost:8122/web/leaflet-demo.html`
+- Local MapLibre GL + Three.js demo: `http://localhost:8122/web/maplibre-demo.html`
 
 ## Documentation
 
```
Executable
+20
```diff
@@ -0,0 +1,20 @@
+#!/bin/bash
+# Add pg_hba entries for remote connections to an already-initialized DB.
+# Run this once if you started the DB before adding etc/pg-init-remote.sh.
+set -e
+CONTAINER="${1:-momswap-backend-db}"
+if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER}$"; then
+  echo "Container ${CONTAINER} is not running. Start it with: docker compose up -d db"
+  exit 1
+fi
+docker exec "$CONTAINER" bash -c '
+PGDATA=/var/lib/postgresql/data
+if grep -q "0.0.0.0/0" "$PGDATA/pg_hba.conf" 2>/dev/null; then
+  echo "Remote access already configured."
+  exit 0
+fi
+echo "host all all 0.0.0.0/0 scram-sha-256" >> "$PGDATA/pg_hba.conf"
+echo "host all all ::/0 scram-sha-256" >> "$PGDATA/pg_hba.conf"
+psql -U "${POSTGRES_USER:-momswap}" -d "${POSTGRES_DB:-momswap}" -c "SELECT pg_reload_conf();"
+echo "Remote access configured. Reloaded PostgreSQL."
+'
```
+9
-1
```diff
@@ -56,8 +56,16 @@ func main() {
 	}
 
 	api := httpapi.NewAPI(service)
+	h := api.Routes()
+	if logDir := getEnv("LOG_DIR", "var/logs"); logDir != "" {
+		if wrapped, err := httpapi.WithRequestLogging(logDir, h); err != nil {
+			log.Printf("request logging disabled: %v", err)
+		} else {
+			h = wrapped
+		}
+	}
 	log.Printf("listening on %s", addr)
-	if err := http.ListenAndServe(addr, api.Routes()); err != nil {
+	if err := http.ListenAndServe(addr, h); err != nil {
 		log.Fatalf("listen: %v", err)
 	}
 }
```
```diff
@@ -2,12 +2,15 @@ services:
   db:
     image: postgis/postgis:17-3.5
     container_name: momswap-backend-db
+    ports:
+      - "7721:5432"
     environment:
       POSTGRES_DB: "${POSTGRES_DB:-momswap}"
       POSTGRES_USER: "${POSTGRES_USER:-momswap}"
       POSTGRES_PASSWORD: "${POSTGRES_PASSWORD:-momswap}"
     volumes:
       - ./var/posrgres:/var/lib/postgresql/data
+      - ./etc/pg-init-remote.sh:/docker-entrypoint-initdb.d/99-remote.sh:ro
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-momswap} -d ${POSTGRES_DB:-momswap}"]
       interval: 10s
```
```diff
@@ -74,6 +77,7 @@ services:
       S3_USE_TLS: "${S3_USE_TLS:-false}"
     volumes:
       - ./etc:/app/etc:ro
+      - ./var/logs:/app/var/logs
     depends_on:
       db:
         condition: service_healthy
```
```diff
@@ -84,6 +88,10 @@ services:
     ports:
       - "8122:8122"
     restart: unless-stopped
+    develop:
+      watch:
+        - action: rebuild
+          path: .
 
   api-dev:
     profiles: ["dev"]
```
```diff
@@ -108,6 +116,7 @@ services:
       S3_USE_TLS: "${S3_USE_TLS:-false}"
     volumes:
       - ./etc:/src/etc:ro
+      - ./var/logs:/src/var/logs
     depends_on:
       db:
         condition: service_healthy
```
```diff
@@ -30,7 +30,8 @@ Each `properties.assets` item includes:
 1. Create or reuse an asset record and link it to a feature:
    - `POST /v1/assets`
 2. Upload the binary to object storage:
-   - `POST /v1/assets/{id}/signed-upload` (returns signed PUT URL)
+   - `POST /v1/assets/{id}/signed-upload` (returns backend upload URL)
+   - `PUT /v1/assets/{id}/upload` (backend streams content to object storage)
 3. Read linked assets from feature responses:
    - `GET /v1/collections/{id}/features` (`properties.assets`)
 4. Download via service-relative link:
```
```diff
@@ -40,8 +40,8 @@ docker compose up --build -d
    - `http://localhost:8774`
 3. Confirm bucket exists (`momswap-assets` by default).
 4. Use API flow:
-   - create asset and get signed upload URL
-   - upload file with PUT
+   - create asset and request backend upload URL
+   - upload file with `PUT /v1/assets/{id}/upload`
    - request `/v1/assets/{id}/download`
 
 ## Quick verification script
```

```diff
@@ -67,6 +67,6 @@ fi
 - If bucket bootstrap fails, inspect:
   - `docker compose logs minio`
   - `docker compose logs minio-init`
-- If signed URLs are generated but upload fails, check:
+- If backend upload endpoint fails, check:
   - object key path style (`S3_USE_PATH_STYLE=true` for MinIO)
   - MinIO credentials (`S3_ACCESS_KEY`, `S3_SECRET_KEY`)
```
````diff
@@ -51,13 +51,13 @@ sequenceDiagram
 
 ```bash
 # Step 1: Get challenge
-CHALLENGE=$(curl -s -X POST https://momswap.produktor.duckdns.org/v1/auth/challenge \
+CHALLENGE=$(curl -s -X POST https://tenerife.baby/v1/auth/challenge \
   -H "Content-Type: application/json" \
   -d '{"publicKey":"txdkGKNdcZIEoQMJ0dqum3msjT6-2mO4yLVhtidRFJI"}')
 NONCE=$(echo "$CHALLENGE" | jq -r '.nonce')
 
 # Step 2: Sign "login:$NONCE" with your private key, then:
-curl -s -X POST https://momswap.produktor.duckdns.org/v1/auth/login \
+curl -s -X POST https://tenerife.baby/v1/auth/login \
   -H "Content-Type: application/json" \
   -d "{\"publicKey\":\"txdkGKNdcZIEoQMJ0dqum3msjT6-2mO4yLVhtidRFJI\",\"nonce\":\"$NONCE\",\"signature\":\"<your_signature_base64url>\"}"
 ```
````
````diff
@@ -91,10 +91,10 @@ Registers a new user without an invitation. User proves key ownership by signing
 
 ```bash
 # Step 1: Fetch service key
-SERVICE_KEY=$(curl -s https://momswap.produktor.duckdns.org/v1/service-key | jq -r '.publicKey')
+SERVICE_KEY=$(curl -s https://tenerife.baby/v1/service-key | jq -r '.publicKey')
 
 # Step 2: Sign $SERVICE_KEY with your private key, then:
-curl -s -X POST https://momswap.produktor.duckdns.org/v1/auth/register-by-signature \
+curl -s -X POST https://tenerife.baby/v1/auth/register-by-signature \
   -H "Content-Type: application/json" \
   -d "{\"publicKey\":\"<your_pubkey_base64url>\",\"signature\":\"<signature_base64url>\"}"
 ```
````
```diff
@@ -32,6 +32,7 @@ web/
 2. Open:
    - `http://localhost:8122/web/`
    - `http://localhost:8122/web/leaflet-demo.html` (Leaflet map demo for 3D/image placement + sharing)
+   - `http://localhost:8122/web/maplibre-demo.html` (MapLibre GL raster tiles + Three.js GLB/GLTF object rendering/placement)
 
 ### Runtime dependencies
 
```
```diff
@@ -59,7 +60,15 @@ web/
 - Leaflet map example:
   - click map to place object coordinates
   - create feature + upload/link `gltf`/`glb`/image asset
+  - upload via backend endpoint (`/v1/assets/{id}/upload`)
   - copy/open share link and toggle public/private visibility
+- MapLibre GL + Three.js example:
+  - raster OSM basemap via MapLibre style
+  - map click to place object position
+  - custom Three.js layer loads real `glb`/`gltf` assets via `GLTFLoader`
+  - private assets are loaded with bearer auth header when user is logged in
+  - fallback primitive is rendered if model load fails or no 3D asset is linked
+  - asset upload/link and share/visibility controls backed by API
 
 ## TypeScript client (`libs/geo-api-client`)
 
```
```diff
@@ -8,9 +8,9 @@ This document explains how frontend developers should integrate with the backend
 
 Primary backend URL for integration:
 
-- `https://momswap.produktor.duckdns.org/`
+- `https://tenerife.baby/`
 
-Deployment: API is proxied via reverse proxy from `https://momswap.produktor.duckdns.org` to backend at `172.17.0.1:8122`. Docker Compose maps port 8122 for the reverse proxy.
+Deployment: API is proxied via reverse proxy from `https://tenerife.baby` to backend at `172.17.0.1:8122`. Docker Compose maps port 8122 for the reverse proxy.
 
 ## Goals
 
```
````diff
@@ -33,60 +33,61 @@ bun test # unit + integration tests (docs flow)
 bun run build
 ```
 
-Integration tests in `test/integration.test.ts` cover the recommended flow: register, login, create collection, create point feature, list features.
+Integration tests in `test/integration.test.ts` cover both:
+
+- Base flow: register, login, collection and feature CRUD
+- Asset flow: create/link asset, request backend upload URL, upload binary, and toggle visibility
 
 ## Public API (current)
 
 ### Class: `GeoApiClient`
 
-Source: [GeoApiClient.ts](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts)
+Source: `libs/geo-api-client/src/GeoApiClient.ts`
 
 Constructor:
 
-- [`new GeoApiClient(baseUrl, storage, storageKey?)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L14)
+- `new GeoApiClient(baseUrl, storage, storageKey?)`
 
 Key methods:
 
-**Key storage**
-
-- [`ensureKeysInStorage()`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L20) — Ensure a keypair exists; if none found, generate and save. Use on app init.
-- [`getStoredKeys()`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L29) — Get stored keypair without generating. Returns null if none.
-- [`derivePublicKey(privateKey)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L33) — Derive public key from private key (Ed25519). Use when importing pk from backup/QR.
-- [`importKeys(keys)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L37) — Overwrite stored keypair (e.g. after import or restore from QR).
-- [`exportKeys()`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L41) — Read stored keypair for export/backup.
-- [`setAccessToken(token)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L45) — Set bearer token for authenticated requests. Call after login.
-
-**Auth**
-
-- [`getServicePublicKey()`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L69) — Fetch API service public key (for register-by-signature).
-- [`createChallenge(publicKey)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L73) — Request login challenge; returns nonce and messageToSign.
-- [`loginWithSignature(publicKey, privateKey)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L86) — Login via challenge-response. Returns bearer token; stores it internally.
-- [`registerBySigningServiceKey(publicKey, privateKey)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L76) — Register without invitation by signing the API service key. 409 if already registered.
-- [`createInvitation(payload, inviterPrivateKey)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L101) — Create invitation for new users. Inviter signs the payload.
-- [`registerWithInvitation(...)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L114) — Register using an invitation. Proves key ownership and redeems invite.
-
-**Collections**
-
-- [`listCollections()`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L133) — List collections for the authenticated user.
-- [`createCollection(name)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L137) — Create a new collection. Returns id and name.
-- [`updateCollection(collectionId, name)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L140) — Rename a collection.
-- [`deleteCollection(collectionId)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L148) — Delete a collection and its features.
-
-**Features**
-
-- [`listFeatures(collectionId)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L153) — List GeoJSON features in a collection.
-- [`createPointFeature(collectionId, lon, lat, properties)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L156) — Add a Point. lon ∈ [-180,180], lat ∈ [-90,90]. Returns feature id.
-- [`deleteFeature(featureId)`](https://git.produktor.io/momswap/backend/src/branch/main/libs/geo-api-client/src/GeoApiClient.ts#L172) — Delete a feature.
-
-## Asset API integration note
-
-Asset endpoints are currently available at backend API level (`/v1/assets...`) and can be called from frontend apps directly with authenticated `fetch` requests.
-
-Current frontend contract points:
-
-- Feature list responses include linked media under `feature.properties.assets`.
-- Each asset includes a backend-relative download path (`link`) like `/v1/assets/{id}/download`.
-- Frontend should use this relative path and avoid constructing direct S3 URLs.
+- **Key storage**
+  - `ensureKeysInStorage()`
+  - `getStoredKeys()`
+  - `derivePublicKey(privateKey)`
+  - `importKeys(keys)`
+  - `exportKeys()`
+  - `setAccessToken(token)`
+- **Auth**
+  - `getServicePublicKey()`
+  - `createChallenge(publicKey)`
+  - `loginWithSignature(publicKey, privateKey)`
+  - `registerBySigningServiceKey(publicKey, privateKey)`
+  - `createInvitation(payload, inviterPrivateKey)`
+  - `registerWithInvitation(...)`
+- **Collections and features**
+  - `listCollections()`
+  - `createCollection(name)`
+  - `updateCollection(collectionId, name)`
+  - `deleteCollection(collectionId)`
+  - `listFeatures(collectionId)` (typed with `properties.assets`)
+  - `createPointFeature(collectionId, lon, lat, properties)`
+  - `deleteFeature(featureId)`
+- **Assets (new)**
+  - `createOrLinkAsset({...})` — create metadata or reuse existing asset by checksum/ext and link it to a feature
+  - `getAssetSignedUploadUrl(assetId, contentType?)` — get backend upload endpoint (`PUT /v1/assets/{id}/upload`)
+  - `uploadAssetBinary(assetId, payload, contentType?)` — upload binary through backend endpoint
+  - `setAssetVisibility(assetId, isPublic)` — owner toggles public/private access
+  - `resolveRelativeLink(path)` — converts backend-relative asset links to absolute URLs for browser usage
+
+## Asset frontend contract
+
+- Feature responses include linked assets in `feature.properties.assets`.
+- Asset item fields include: `id`, `kind`, `name`, `description`, `checksum`, `ext`, `isPublic`, `link`.
+- `link` is always backend-relative (for example `/v1/assets/{id}/download`).
+- Frontend should call `resolveRelativeLink(link)` and must not build direct S3 URLs.
 
 ## Recommended integration flow
 
````
```diff
@@ -94,8 +95,15 @@ Current frontend contract points:
 2. Call `ensureKeysInStorage()` when app initializes.
 3. If not yet registered: call `registerBySigningServiceKey(publicKey, privateKey)` (signs the API service key and publishes your public key).
 4. Use `loginWithSignature()` to obtain and set a bearer token.
-5. Call collection/feature methods after authentication.
-6. Use `importKeys`/`exportKeys` in profile settings UX.
+5. Create collection and map features.
+6. For media upload:
+   - compute file checksum
+   - call `createOrLinkAsset`
+   - call `uploadAssetBinary` (or call `getAssetSignedUploadUrl` + manual `fetch`)
+   - upload file to backend endpoint
+7. Render and share assets from `properties.assets` links.
+8. Use `setAssetVisibility` to toggle sharing.
+9. Use `importKeys`/`exportKeys` in profile settings UX.
 
 ## Registration by signing service key
 
```
````diff
@@ -107,7 +115,7 @@ When `SERVICE_PUBLIC_KEY` (or `ADMIN_PUBLIC_KEY`) is set, users can register wit
 
 Server keys are generated with `./bin/gen-server-keys.sh` and stored in `etc/`.
 
-## Example (TypeScript app)
+## Example: place and upload a 3D object (`.glb`)
 
 ```ts
 import { GeoApiClient } from "../libs/geo-api-client/dist/index.js";
````
```diff
@@ -119,7 +127,7 @@ const storageLike = {
   removeItem: (key: string) => storage.removeItem(key),
 };
 
-const client = new GeoApiClient("https://momswap.produktor.duckdns.org", storageLike);
+const client = new GeoApiClient("https://tenerife.baby", storageLike);
 const keys = await client.ensureKeysInStorage();
 
 // Register (ignored if already registered); then login
```
````diff
@@ -131,9 +139,54 @@ try {
 await client.loginWithSignature(keys.publicKey, keys.privateKey);
 
 const created = await client.createCollection("My Places");
-await client.createPointFeature(created.id, -16.6291, 28.4636, { name: "Santa Cruz" });
+const feature = await client.createPointFeature(created.id, -16.6291, 28.4636, { name: "Santa Cruz" });
+
+// Assume this comes from: <input id="modelFile" type="file" accept=".glb,.gltf" />
+const fileInput = document.getElementById("modelFile") as HTMLInputElement;
+const file = fileInput.files?.[0];
+if (!file) throw new Error("Select a .glb/.gltf file first");
+
+const toSha256Hex = async (f: File): Promise<string> => {
+  const buffer = await f.arrayBuffer();
+  const digest = await crypto.subtle.digest("SHA-256", buffer);
+  return Array.from(new Uint8Array(digest))
+    .map((b) => b.toString(16).padStart(2, "0"))
+    .join("");
+};
+
+const checksum = await toSha256Hex(file);
+const ext = file.name.toLowerCase().endsWith(".gltf") ? "gltf" : "glb";
+
+// Create metadata (or reuse existing by checksum+ext) and link to feature
+const asset = await client.createOrLinkAsset({
+  featureId: feature.id,
+  checksum,
+  ext,
+  kind: "3d",
+  mimeType: file.type || (ext === "gltf" ? "model/gltf+json" : "model/gltf-binary"),
+  sizeBytes: file.size,
+  name: "Palm Tree Model",
+  description: "3D object placed on map",
+  isPublic: true,
+});
+
+// Upload binary through backend upload endpoint
+await client.uploadAssetBinary(
+  asset.asset.id,
+  file,
+  asset.asset.mimeType || "application/octet-stream"
+);
+
+// Read shareable relative link from feature payload
 const features = await client.listFeatures(created.id);
-console.log(features);
+const firstAsset = features.features[0]?.properties?.assets?.[0];
+if (firstAsset) {
+  const shareUrl = client.resolveRelativeLink(firstAsset.link);
+  console.log("Share URL:", shareUrl);
+}
+
+// Optional: owner can disable public access later
+await client.setAssetVisibility(asset.asset.id, false);
 ```
 
 ## Security notes
````
Executable
+5
```diff
@@ -0,0 +1,5 @@
+#!/bin/bash
+# Allow remote connections to PostgreSQL (for clients connecting via host IP:7721)
+set -e
+echo "host all all 0.0.0.0/0 scram-sha-256" >> "$PGDATA/pg_hba.conf"
+echo "host all all ::/0 scram-sha-256" >> "$PGDATA/pg_hba.conf"
```
+96
-7
```diff
@@ -6,6 +6,7 @@ import (
 	"encoding/json"
 	"errors"
 	"fmt"
+	"io"
 	"strings"
 	"time"
 
```
```diff
@@ -36,8 +37,9 @@ type Config struct {
 }
 
 type AssetURLSigner interface {
-	SignedPutObjectURL(ctx context.Context, objectKey string, expiry time.Duration, contentType string) (string, error)
 	SignedGetObjectURL(ctx context.Context, objectKey string, expiry time.Duration) (string, error)
+	PutObject(ctx context.Context, objectKey, contentType string, body io.Reader, size int64) error
+	GetObject(ctx context.Context, objectKey string) (io.ReadCloser, string, int64, error)
 }
 
 type Service struct {
```
```diff
@@ -103,7 +105,7 @@ func (s *Service) RegisterBySignature(publicKey, signature string) error {
 	return nil
 }
 
-func (s *Service) CreateChallenge(publicKey string) (string, error) {
+func (s *Service) CreateChallenge(publicKey, clientIP string) (string, error) {
 	if publicKey == "" {
 		return "", fmt.Errorf("%w: missing public key", ErrBadRequest)
 	}
```
@@ -115,6 +117,7 @@ func (s *Service) CreateChallenge(publicKey string) (string, error) {
|
|||||||
err = s.store.CreateChallenge(store.Challenge{
|
err = s.store.CreateChallenge(store.Challenge{
|
||||||
Nonce: nonce,
|
Nonce: nonce,
|
||||||
PublicKey: publicKey,
|
PublicKey: publicKey,
|
||||||
|
IP: clientIP,
|
||||||
ExpiresAt: time.Now().UTC().Add(s.config.ChallengeTTL),
|
ExpiresAt: time.Now().UTC().Add(s.config.ChallengeTTL),
|
||||||
Used: false,
|
Used: false,
|
||||||
})
|
})
|
||||||
@@ -124,7 +127,7 @@ func (s *Service) CreateChallenge(publicKey string) (string, error) {
|
|||||||
return nonce, nil
|
return nonce, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *Service) Login(publicKey, nonce, signature string) (string, error) {
|
func (s *Service) Login(publicKey, nonce, signature, clientIP string) (string, error) {
|
||||||
ch, err := s.store.GetChallenge(nonce)
|
ch, err := s.store.GetChallenge(nonce)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return "", fmt.Errorf("%w: challenge not found", ErrUnauthorized)
|
return "", fmt.Errorf("%w: challenge not found", ErrUnauthorized)
|
||||||
@@ -154,6 +157,11 @@ func (s *Service) Login(publicKey, nonce, signature string) (string, error) {
|
|||||||
PublicKey: publicKey,
|
PublicKey: publicKey,
|
||||||
ExpiresAt: time.Now().UTC().Add(s.config.SessionTTL),
|
ExpiresAt: time.Now().UTC().Add(s.config.SessionTTL),
|
||||||
})
|
})
|
||||||
|
s.store.SaveUserLogin(store.UserLogin{
|
||||||
|
PublicKey: publicKey,
|
||||||
|
IP: clientIP,
|
||||||
|
CreatedAt: time.Now().UTC(),
|
||||||
|
})
|
||||||
return token, nil
|
return token, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -387,6 +395,43 @@ func (s *Service) ListFeatures(ownerKey, collectionID string) ([]store.Feature,
|
|||||||
return features, nil
|
return features, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (s *Service) ListPublicFeatures(kind string) []store.Feature {
|
||||||
|
filterKind := strings.TrimSpace(strings.ToLower(kind))
|
||||||
|
features := s.store.ListFeaturesAll()
|
||||||
|
result := make([]store.Feature, 0, len(features))
|
||||||
|
for idx := range features {
|
||||||
|
featureAssets := s.store.ListAssetsByFeature(features[idx].ID)
|
||||||
|
assets := make([]map[string]interface{}, 0, len(featureAssets))
|
||||||
|
for _, linkedAsset := range featureAssets {
|
||||||
|
if !linkedAsset.IsPublic {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if filterKind != "" && linkedAsset.Kind != filterKind {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
assets = append(assets, map[string]interface{}{
|
||||||
|
"id": linkedAsset.ID,
|
||||||
|
"kind": linkedAsset.Kind,
|
||||||
|
"name": linkedAsset.Name,
|
||||||
|
"description": linkedAsset.Description,
|
||||||
|
"checksum": linkedAsset.Checksum,
|
||||||
|
"ext": linkedAsset.Ext,
|
||||||
|
"isPublic": linkedAsset.IsPublic,
|
||||||
|
"link": "/v1/assets/" + linkedAsset.ID + "/download",
|
||||||
|
})
|
||||||
|
}
|
||||||
|
if len(assets) == 0 {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if features[idx].Properties == nil {
|
||||||
|
features[idx].Properties = map[string]interface{}{}
|
||||||
|
}
|
||||||
|
features[idx].Properties["assets"] = assets
|
||||||
|
result = append(result, features[idx])
|
||||||
|
}
|
||||||
|
return result
|
||||||
|
}
|
||||||
|
|
||||||
func (s *Service) DeleteFeature(ownerKey, featureID string) error {
|
func (s *Service) DeleteFeature(ownerKey, featureID string) error {
|
||||||
feature, err := s.store.GetFeature(featureID)
|
feature, err := s.store.GetFeature(featureID)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -398,6 +443,23 @@ func (s *Service) DeleteFeature(ownerKey, featureID string) error {
|
|||||||
return s.store.DeleteFeature(featureID)
|
return s.store.DeleteFeature(featureID)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (s *Service) UpdateFeatureGeometry(ownerKey, featureID string, geometry store.Point) (store.Feature, error) {
|
||||||
|
feature, err := s.store.GetFeature(featureID)
|
||||||
|
if err != nil {
|
||||||
|
return store.Feature{}, ErrFeatureMiss
|
||||||
|
}
|
||||||
|
if feature.OwnerKey != ownerKey {
|
||||||
|
return store.Feature{}, ErrForbidden
|
||||||
|
}
|
||||||
|
if err := validatePoint(geometry); err != nil {
|
||||||
|
return store.Feature{}, err
|
||||||
|
}
|
||||||
|
feature.Geometry = geometry
|
||||||
|
feature.UpdatedAt = time.Now().UTC()
|
||||||
|
s.store.SaveFeature(feature)
|
||||||
|
return feature, nil
|
||||||
|
}
|
||||||
|
|
||||||
type CreateAssetInput struct {
|
type CreateAssetInput struct {
|
||||||
FeatureID string
|
FeatureID string
|
||||||
Checksum string
|
Checksum string
|
||||||
@@ -507,11 +569,24 @@ func (s *Service) SignedUploadURL(ownerKey, assetID, contentType string) (string
|
|||||||
if asset.OwnerKey != ownerKey {
|
if asset.OwnerKey != ownerKey {
|
||||||
return "", ErrForbidden
|
return "", ErrForbidden
|
||||||
}
|
}
|
||||||
url, err := s.assetSigner.SignedPutObjectURL(context.Background(), asset.ObjectKey, s.config.UploadURLTTL, contentType)
|
return "/v1/assets/" + asset.ID + "/upload", nil
|
||||||
if err != nil {
|
}
|
||||||
return "", err
|
|
||||||
|
func (s *Service) UploadAsset(ownerKey, assetID, contentType string, body io.Reader, size int64) error {
|
||||||
|
if s.assetSigner == nil {
|
||||||
|
return ErrStorageNotConfigured
|
||||||
}
|
}
|
||||||
return url, nil
|
asset, err := s.store.GetAsset(assetID)
|
||||||
|
if err != nil {
|
||||||
|
return ErrAssetMiss
|
||||||
|
}
|
||||||
|
if asset.OwnerKey != ownerKey {
|
||||||
|
return ErrForbidden
|
||||||
|
}
|
||||||
|
if contentType == "" {
|
||||||
|
contentType = "application/octet-stream"
|
||||||
|
}
|
||||||
|
return s.assetSigner.PutObject(context.Background(), asset.ObjectKey, contentType, body, size)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *Service) SignedDownloadURL(requesterKey, assetID string) (string, error) {
|
func (s *Service) SignedDownloadURL(requesterKey, assetID string) (string, error) {
|
||||||
@@ -531,3 +606,17 @@ func (s *Service) SignedDownloadURL(requesterKey, assetID string) (string, error
|
|||||||
}
|
}
|
||||||
return url, nil
|
return url, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (s *Service) OpenAssetDownload(requesterKey, assetID string) (io.ReadCloser, string, int64, error) {
|
||||||
|
if s.assetSigner == nil {
|
||||||
|
return nil, "", 0, ErrStorageNotConfigured
|
||||||
|
}
|
||||||
|
asset, err := s.store.GetAsset(assetID)
|
||||||
|
if err != nil {
|
||||||
|
return nil, "", 0, ErrAssetMiss
|
||||||
|
}
|
||||||
|
if asset.OwnerKey != requesterKey && !asset.IsPublic {
|
||||||
|
return nil, "", 0, ErrForbidden
|
||||||
|
}
|
||||||
|
return s.assetSigner.GetObject(context.Background(), asset.ObjectKey)
|
||||||
|
}
|
||||||
|
+168 -13

@@ -8,6 +8,7 @@ import (
     "encoding/base64"
     "encoding/json"
     "fmt"
+    "io"
     "net/http"
     "net/http/httptest"
     "testing"
@@ -32,14 +33,19 @@ func newTestServer(adminPublicKey string) *httptest.Server {
 
 type fakeSigner struct{}
 
-func (fakeSigner) SignedPutObjectURL(_ context.Context, objectKey string, _ time.Duration, _ string) (string, error) {
-    return "http://files.local/upload/" + objectKey, nil
-}
-
 func (fakeSigner) SignedGetObjectURL(_ context.Context, objectKey string, _ time.Duration) (string, error) {
     return "http://files.local/download/" + objectKey, nil
 }
 
+func (fakeSigner) PutObject(_ context.Context, _ string, _ string, _ io.Reader, _ int64) error {
+    return nil
+}
+
+func (fakeSigner) GetObject(_ context.Context, objectKey string) (io.ReadCloser, string, int64, error) {
+    payload := []byte("fake-download:" + objectKey)
+    return io.NopCloser(bytes.NewReader(payload)), "application/octet-stream", int64(len(payload)), nil
+}
+
 func mustJSON(t *testing.T, value interface{}) []byte {
     t.Helper()
     b, err := json.Marshal(value)
@@ -105,6 +111,25 @@ func patchJSON(t *testing.T, client *http.Client, url string, body interface{},
     return resp, out
 }
 
+func putRaw(t *testing.T, client *http.Client, url string, payload []byte, contentType string, token string) *http.Response {
+    t.Helper()
+    req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(payload))
+    if err != nil {
+        t.Fatalf("new request: %v", err)
+    }
+    if contentType != "" {
+        req.Header.Set("Content-Type", contentType)
+    }
+    if token != "" {
+        req.Header.Set("Authorization", "Bearer "+token)
+    }
+    resp, err := client.Do(req)
+    if err != nil {
+        t.Fatalf("do request: %v", err)
+    }
+    return resp
+}
+
 func loginUser(t *testing.T, client *http.Client, baseURL, pubB64 string, priv ed25519.PrivateKey) string {
     t.Helper()
     chResp, chData := postJSON(t, client, baseURL+"/v1/auth/challenge", map[string]string{"publicKey": pubB64}, "")
@@ -283,6 +308,28 @@ func TestCollectionOwnershipIsolation(t *testing.T) {
     if resp.StatusCode != http.StatusForbidden {
         t.Fatalf("expected 403, got %d", resp.StatusCode)
     }
+
+    featureID := createFeatureData["id"].(string)
+
+    patchOwnResp, patchOwnData := patchJSON(t, client, server.URL+"/v1/features/"+featureID, map[string]interface{}{
+        "geometry": map[string]interface{}{
+            "type":        "Point",
+            "coordinates": []float64{-16.6299, 28.4639, 11},
+        },
+    }, user1Token)
+    if patchOwnResp.StatusCode != http.StatusOK {
+        t.Fatalf("owner patch feature status=%d body=%v", patchOwnResp.StatusCode, patchOwnData)
+    }
+
+    patchOtherResp, patchOtherData := patchJSON(t, client, server.URL+"/v1/features/"+featureID, map[string]interface{}{
+        "geometry": map[string]interface{}{
+            "type":        "Point",
+            "coordinates": []float64{-16.6301, 28.4641, 12},
+        },
+    }, user2Token)
+    if patchOtherResp.StatusCode != http.StatusForbidden {
+        t.Fatalf("non-owner patch feature status=%d body=%v", patchOtherResp.StatusCode, patchOtherData)
+    }
 }
 
 func TestAssetLifecycleAndVisibility(t *testing.T) {
@@ -294,7 +341,6 @@ func TestAssetLifecycleAndVisibility(t *testing.T) {
     server := newTestServer(adminPubB64)
     defer server.Close()
     client := server.Client()
-    client.CheckRedirect = func(_ *http.Request, _ []*http.Request) error { return http.ErrUseLastResponse }
 
     adminToken := loginUser(t, client, server.URL, adminPubB64, adminPriv)
 
@@ -306,7 +352,6 @@ func TestAssetLifecycleAndVisibility(t *testing.T) {
     user2Pub, user2Priv, _ := ed25519.GenerateKey(rand.Reader)
     user2PubB64 := base64.RawURLEncoding.EncodeToString(user2Pub)
     registerUserViaAdmin(t, client, server.URL, adminPubB64, adminPriv, adminToken, user2PubB64, user2Priv, "invite-asset-u2")
-    user2Token := loginUser(t, client, server.URL, user2PubB64, user2Priv)
 
     createCollectionResp, createCollectionData := postJSON(t, client, server.URL+"/v1/collections", map[string]string{
         "name": "assets",
@@ -368,6 +413,15 @@ func TestAssetLifecycleAndVisibility(t *testing.T) {
     if uploadResp.StatusCode != http.StatusOK {
         t.Fatalf("signed upload status=%d body=%v", uploadResp.StatusCode, uploadData)
     }
+    if uploadData["url"] != "/v1/assets/"+assetID+"/upload" {
+        t.Fatalf("unexpected signed-upload backend url: %v", uploadData["url"])
+    }
+
+    putResp := putRaw(t, client, server.URL+"/v1/assets/"+assetID+"/upload", []byte("glb-bytes"), "model/gltf-binary", user1Token)
+    defer putResp.Body.Close()
+    if putResp.StatusCode != http.StatusNoContent {
+        t.Fatalf("upload proxy status=%d", putResp.StatusCode)
+    }
+
     featuresResp, featuresData := getJSON(t, client, server.URL+"/v1/collections/"+collectionID+"/features", user1Token)
     if featuresResp.StatusCode != http.StatusOK {
@@ -386,17 +440,21 @@ func TestAssetLifecycleAndVisibility(t *testing.T) {
     }
 
     reqDownloadPublic, _ := http.NewRequest(http.MethodGet, server.URL+"/v1/assets/"+assetID+"/download", nil)
-    reqDownloadPublic.Header.Set("Authorization", "Bearer "+user2Token)
     downloadPublicResp, err := client.Do(reqDownloadPublic)
     if err != nil {
         t.Fatalf("download public request failed: %v", err)
     }
-    if downloadPublicResp.StatusCode != http.StatusFound {
-        t.Fatalf("expected public asset redirect status, got %d", downloadPublicResp.StatusCode)
+    defer downloadPublicResp.Body.Close()
+    if downloadPublicResp.StatusCode != http.StatusOK {
+        t.Fatalf("expected public asset stream status, got %d", downloadPublicResp.StatusCode)
     }
-    expectedLocation := fmt.Sprintf("http://files.local/download/%s/%s.%s", user1PubB64, "abcdef1234", "glb")
-    if downloadPublicResp.Header.Get("Location") != expectedLocation {
-        t.Fatalf("unexpected redirect location: %s", downloadPublicResp.Header.Get("Location"))
+    body, err := io.ReadAll(downloadPublicResp.Body)
+    if err != nil {
+        t.Fatalf("read public download body: %v", err)
+    }
+    expectedBody := fmt.Sprintf("fake-download:%s/%s.%s", user1PubB64, "abcdef1234", "glb")
+    if string(body) != expectedBody {
+        t.Fatalf("unexpected download body: %q", string(body))
     }
 
     patchResp, patchData := patchJSON(t, client, server.URL+"/v1/assets/"+assetID, map[string]interface{}{
@@ -407,7 +465,6 @@ func TestAssetLifecycleAndVisibility(t *testing.T) {
     }
 
     reqDownloadPrivate, _ := http.NewRequest(http.MethodGet, server.URL+"/v1/assets/"+assetID+"/download", nil)
-    reqDownloadPrivate.Header.Set("Authorization", "Bearer "+user2Token)
     downloadPrivateResp, err := client.Do(reqDownloadPrivate)
     if err != nil {
         t.Fatalf("download private request failed: %v", err)
@@ -416,3 +473,101 @@ func TestAssetLifecycleAndVisibility(t *testing.T) {
         t.Fatalf("expected 403 for private asset, got %d", downloadPrivateResp.StatusCode)
     }
 }
+
+func TestListPublicFeatures(t *testing.T) {
+    adminPub, adminPriv, err := ed25519.GenerateKey(rand.Reader)
+    if err != nil {
+        t.Fatalf("generate admin key: %v", err)
+    }
+    adminPubB64 := base64.RawURLEncoding.EncodeToString(adminPub)
+    server := newTestServer(adminPubB64)
+    defer server.Close()
+    client := server.Client()
+
+    adminToken := loginUser(t, client, server.URL, adminPubB64, adminPriv)
+
+    user1Pub, user1Priv, _ := ed25519.GenerateKey(rand.Reader)
+    user1PubB64 := base64.RawURLEncoding.EncodeToString(user1Pub)
+    registerUserViaAdmin(t, client, server.URL, adminPubB64, adminPriv, adminToken, user1PubB64, user1Priv, "invite-public-u1")
+    user1Token := loginUser(t, client, server.URL, user1PubB64, user1Priv)
+
+    user2Pub, user2Priv, _ := ed25519.GenerateKey(rand.Reader)
+    user2PubB64 := base64.RawURLEncoding.EncodeToString(user2Pub)
+    registerUserViaAdmin(t, client, server.URL, adminPubB64, adminPriv, adminToken, user2PubB64, user2Priv, "invite-public-u2")
+    user2Token := loginUser(t, client, server.URL, user2PubB64, user2Priv)
+
+    c1Resp, c1Data := postJSON(t, client, server.URL+"/v1/collections", map[string]string{"name": "u1-public"}, user1Token)
+    if c1Resp.StatusCode != http.StatusCreated {
+        t.Fatalf("u1 create collection status=%d body=%v", c1Resp.StatusCode, c1Data)
+    }
+    c1ID := c1Data["id"].(string)
+    f1Resp, f1Data := postJSON(t, client, server.URL+"/v1/collections/"+c1ID+"/features", map[string]interface{}{
+        "geometry": map[string]interface{}{"type": "Point", "coordinates": []float64{-16.25, 28.46, 5}},
+        "properties": map[string]interface{}{
+            "name": "u1-public-feature",
+        },
+    }, user1Token)
+    if f1Resp.StatusCode != http.StatusCreated {
+        t.Fatalf("u1 create feature status=%d body=%v", f1Resp.StatusCode, f1Data)
+    }
+    f1ID := f1Data["id"].(string)
+    a1Resp, a1Data := postJSON(t, client, server.URL+"/v1/assets", map[string]interface{}{
+        "featureId": f1ID,
+        "checksum":  "pub3d111",
+        "ext":       "glb",
+        "kind":      "3d",
+        "isPublic":  true,
+    }, user1Token)
+    if a1Resp.StatusCode != http.StatusCreated {
+        t.Fatalf("u1 create public asset status=%d body=%v", a1Resp.StatusCode, a1Data)
+    }
+    a1ID := a1Data["asset"].(map[string]interface{})["id"].(string)
+
+    c2Resp, c2Data := postJSON(t, client, server.URL+"/v1/collections", map[string]string{"name": "u2-private"}, user2Token)
+    if c2Resp.StatusCode != http.StatusCreated {
+        t.Fatalf("u2 create collection status=%d body=%v", c2Resp.StatusCode, c2Data)
+    }
+    c2ID := c2Data["id"].(string)
+    f2Resp, f2Data := postJSON(t, client, server.URL+"/v1/collections/"+c2ID+"/features", map[string]interface{}{
+        "geometry": map[string]interface{}{"type": "Point", "coordinates": []float64{-16.3, 28.47, 7}},
+        "properties": map[string]interface{}{
+            "name": "u2-private-feature",
+        },
+    }, user2Token)
+    if f2Resp.StatusCode != http.StatusCreated {
+        t.Fatalf("u2 create feature status=%d body=%v", f2Resp.StatusCode, f2Data)
+    }
+    f2ID := f2Data["id"].(string)
+    a2Resp, a2Data := postJSON(t, client, server.URL+"/v1/assets", map[string]interface{}{
+        "featureId": f2ID,
+        "checksum":  "priv3d222",
+        "ext":       "glb",
+        "kind":      "3d",
+        "isPublic":  false,
+    }, user2Token)
+    if a2Resp.StatusCode != http.StatusCreated {
+        t.Fatalf("u2 create private asset status=%d body=%v", a2Resp.StatusCode, a2Data)
+    }
+
+    publicResp, publicData := getJSON(t, client, server.URL+"/v1/features/public?kind=3d", "")
+    if publicResp.StatusCode != http.StatusOK {
+        t.Fatalf("list public features status=%d body=%v", publicResp.StatusCode, publicData)
+    }
+    publicFeatures := publicData["features"].([]interface{})
+    if len(publicFeatures) != 1 {
+        t.Fatalf("expected 1 public feature, got %d", len(publicFeatures))
+    }
+    publicFeature := publicFeatures[0].(map[string]interface{})
+    if publicFeature["id"].(string) != f1ID {
+        t.Fatalf("expected public feature id=%s got=%v", f1ID, publicFeature["id"])
+    }
+    properties := publicFeature["properties"].(map[string]interface{})
+    assets := properties["assets"].([]interface{})
+    if len(assets) != 1 {
+        t.Fatalf("expected 1 public asset, got %d", len(assets))
+    }
+    publicAsset := assets[0].(map[string]interface{})
+    if publicAsset["id"].(string) != a1ID {
+        t.Fatalf("expected public asset id=%s got=%v", a1ID, publicAsset["id"])
+    }
+}
@@ -3,6 +3,8 @@ package httpapi
 import (
     "encoding/json"
     "errors"
+    "fmt"
+    "io"
     "net/http"
     "strings"
     "time"
@@ -39,10 +41,13 @@ func (a *API) Routes() http.Handler {
     mux.HandleFunc("DELETE /v1/collections/{id}", a.deleteCollection)
     mux.HandleFunc("POST /v1/collections/{id}/features", a.createFeature)
     mux.HandleFunc("GET /v1/collections/{id}/features", a.listFeatures)
+    mux.HandleFunc("GET /v1/features/public", a.listPublicFeatures)
+    mux.HandleFunc("PATCH /v1/features/{id}", a.patchFeature)
     mux.HandleFunc("DELETE /v1/features/{id}", a.deleteFeature)
     mux.HandleFunc("POST /v1/assets", a.createAsset)
     mux.HandleFunc("PATCH /v1/assets/{id}", a.patchAsset)
     mux.HandleFunc("POST /v1/assets/{id}/signed-upload", a.signedUpload)
+    mux.HandleFunc("PUT /v1/assets/{id}/upload", a.uploadAsset)
     mux.HandleFunc("GET /v1/assets/{id}/download", a.downloadAsset)
 
     mux.Handle("/web/", http.StripPrefix("/web/", staticFiles))
@@ -140,6 +145,17 @@ func (a *API) authUser(r *http.Request) (string, error) {
     return a.service.AuthenticateSession(token)
 }
 
+func (a *API) authUserOptional(r *http.Request) (string, error) {
+    if strings.TrimSpace(r.Header.Get("Authorization")) == "" {
+        return "", nil
+    }
+    token, err := bearerToken(r)
+    if err != nil {
+        return "", err
+    }
+    return a.service.AuthenticateSession(token)
+}
+
 func (a *API) health(w http.ResponseWriter, _ *http.Request) {
     writeJSON(w, http.StatusOK, map[string]string{"status": "ok", "time": time.Now().UTC().Format(time.RFC3339)})
 }
@@ -152,7 +168,7 @@ func (a *API) createChallenge(w http.ResponseWriter, r *http.Request) {
         writeErr(w, app.ErrBadRequest)
         return
     }
-    nonce, err := a.service.CreateChallenge(req.PublicKey)
+    nonce, err := a.service.CreateChallenge(req.PublicKey, clientIP(r))
     if err != nil {
         writeErr(w, err)
         return
@@ -170,7 +186,7 @@ func (a *API) login(w http.ResponseWriter, r *http.Request) {
         writeErr(w, app.ErrBadRequest)
         return
     }
-    token, err := a.service.Login(req.PublicKey, req.Nonce, req.Signature)
+    token, err := a.service.Login(req.PublicKey, req.Nonce, req.Signature, clientIP(r))
     if err != nil {
         writeErr(w, err)
         return
@@ -355,6 +371,15 @@ func (a *API) listFeatures(w http.ResponseWriter, r *http.Request) {
     writeJSON(w, http.StatusOK, map[string]interface{}{"features": features})
 }
 
+func (a *API) listPublicFeatures(w http.ResponseWriter, r *http.Request) {
+    kind := r.URL.Query().Get("kind")
+    if kind != "" && kind != "3d" && kind != "image" {
+        writeErr(w, app.ErrBadRequest)
+        return
+    }
+    writeJSON(w, http.StatusOK, map[string]interface{}{"features": a.service.ListPublicFeatures(kind)})
+}
+
 func (a *API) deleteFeature(w http.ResponseWriter, r *http.Request) {
     user, err := a.authUser(r)
     if err != nil {
@@ -369,6 +394,28 @@ func (a *API) deleteFeature(w http.ResponseWriter, r *http.Request) {
     w.WriteHeader(http.StatusNoContent)
 }
 
+func (a *API) patchFeature(w http.ResponseWriter, r *http.Request) {
+    user, err := a.authUser(r)
+    if err != nil {
+        writeErr(w, err)
+        return
+    }
+    featureID := r.PathValue("id")
+    var req struct {
+        Geometry store.Point `json:"geometry"`
+    }
+    if err := readJSON(r, &req); err != nil {
+        writeErr(w, app.ErrBadRequest)
+        return
+    }
+    feature, err := a.service.UpdateFeatureGeometry(user, featureID, req.Geometry)
+    if err != nil {
+        writeErr(w, err)
+        return
+    }
+    writeJSON(w, http.StatusOK, feature)
+}
+
 func (a *API) createAsset(w http.ResponseWriter, r *http.Request) {
     user, err := a.authUser(r)
     if err != nil {
@@ -459,17 +506,41 @@ func (a *API) signedUpload(w http.ResponseWriter, r *http.Request) {
     writeJSON(w, http.StatusOK, map[string]string{"url": url, "method": http.MethodPut})
 }
 
-func (a *API) downloadAsset(w http.ResponseWriter, r *http.Request) {
+func (a *API) uploadAsset(w http.ResponseWriter, r *http.Request) {
     user, err := a.authUser(r)
     if err != nil {
         writeErr(w, err)
         return
     }
     assetID := r.PathValue("id")
-    url, err := a.service.SignedDownloadURL(user, assetID)
+    if err := a.service.UploadAsset(user, assetID, r.Header.Get("Content-Type"), r.Body, r.ContentLength); err != nil {
+        writeErr(w, err)
+        return
+    }
+    w.WriteHeader(http.StatusNoContent)
+}
+
+func (a *API) downloadAsset(w http.ResponseWriter, r *http.Request) {
+    user, err := a.authUserOptional(r)
     if err != nil {
         writeErr(w, err)
         return
     }
-    http.Redirect(w, r, url, http.StatusFound)
+    assetID := r.PathValue("id")
+    reader, contentType, size, err := a.service.OpenAssetDownload(user, assetID)
+    if err != nil {
+        writeErr(w, err)
+        return
+    }
+    defer reader.Close()
+    if contentType != "" {
+        w.Header().Set("Content-Type", contentType)
+    }
+    if size >= 0 {
+        w.Header().Set("Content-Length", fmt.Sprintf("%d", size))
+    }
+    if _, err := io.Copy(w, reader); err != nil {
+        // Response stream may be interrupted by client disconnects; ignore write errors here.
+        return
+    }
 }
+95

@@ -0,0 +1,95 @@
+package httpapi
+
+import (
+    "fmt"
+    "net/http"
+    "os"
+    "path/filepath"
+    "strings"
+    "sync"
+    "time"
+)
+
+// WithRequestLogging wraps h with a handler that logs each request to dir/access.log.
+// Uses dir "var/logs" by default. Returns h unchanged if dir is empty.
+func WithRequestLogging(dir string, h http.Handler) (http.Handler, error) {
+    if dir == "" {
+        return h, nil
+    }
+    if err := os.MkdirAll(dir, 0755); err != nil {
+        return nil, err
+    }
+    f, err := os.OpenFile(filepath.Join(dir, "access.log"), os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+    if err != nil {
+        return nil, err
+    }
+    return &requestLogger{handler: h, file: f}, nil
+}
+
+type requestLogger struct {
+    handler http.Handler
+    file    *os.File
+    mu      sync.Mutex
+}
+
+func (l *requestLogger) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+    start := time.Now()
+    ip := clientIP(r)
+    method := r.Method
+    path := r.URL.Path
+    userAgent := r.Header.Get("User-Agent")
+    if userAgent == "" {
+        userAgent = "-"
+    }
+
+    wrapped := &responseRecorder{ResponseWriter: w, status: http.StatusOK}
+    l.handler.ServeHTTP(wrapped, r)
+
+    elapsed := time.Since(start).Milliseconds()
+    line := formatLogLine(ip, method, path, wrapped.status, elapsed, strings.ReplaceAll(userAgent, "\"", "'"))
+
+    l.mu.Lock()
+    _, _ = l.file.WriteString(line)
+    l.mu.Unlock()
+}
+
+type responseRecorder struct {
+    http.ResponseWriter
+    status int
+}
+
+func (r *responseRecorder) WriteHeader(code int) {
+    r.status = code
+    r.ResponseWriter.WriteHeader(code)
+}
+
+func formatLogLine(ip, method, path string, status int, elapsedMs int64, userAgent string) string {
+    // Common-log-style line: ip - - [timestamp] "method path" status size "referer" "user-agent" elapsed_ms
+    t := time.Now().UTC().Format("02/Jan/2006:15:04:05 -0700")
+    return fmt.Sprintf("%s - - [%s] \"%s %s\" %d %d \"-\" \"%s\" %dms\n",
+        ip, t, method, path, status, 0, userAgent, elapsedMs)
+}
+
+func clientIP(r *http.Request) string {
+    if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
+        // First IP in the list is the original client
+        for i := 0; i < len(xff); i++ {
+            if xff[i] == ',' {
+                return xff[:i]
+            }
+        }
+        return xff
+    }
+    if xri := r.Header.Get("X-Real-IP"); xri != "" {
+        return xri
+    }
+    host, _ := splitHostPort(r.RemoteAddr)
+    return host
+}
+
+func splitHostPort(addr string) (host, port string) {
+    if idx := strings.LastIndex(addr, ":"); idx >= 0 {
+        return addr[:idx], addr[idx+1:]
+    }
+    return addr, ""
+}
@@ -3,6 +3,7 @@ package storage
 import (
 	"context"
 	"errors"
+	"io"
 	"time"
 
 	"github.com/minio/minio-go/v7"
@@ -47,14 +48,6 @@ func bucketLookup(pathStyle bool) minio.BucketLookupType {
 	return minio.BucketLookupAuto
 }
 
-func (s *S3Signer) SignedPutObjectURL(ctx context.Context, objectKey string, expiry time.Duration, _ string) (string, error) {
-	u, err := s.client.PresignedPutObject(ctx, s.bucket, objectKey, expiry)
-	if err != nil {
-		return "", err
-	}
-	return u.String(), nil
-}
-
 func (s *S3Signer) SignedGetObjectURL(ctx context.Context, objectKey string, expiry time.Duration) (string, error) {
 	u, err := s.client.PresignedGetObject(ctx, s.bucket, objectKey, expiry, nil)
 	if err != nil {
@@ -62,3 +55,27 @@ func (s *S3Signer) SignedGetObjectURL(ctx context.Context, objectKey string, exp
 	}
 	return u.String(), nil
 }
+
+func (s *S3Signer) PutObject(ctx context.Context, objectKey, contentType string, body io.Reader, size int64) error {
+	_, err := s.client.PutObject(ctx, s.bucket, objectKey, body, size, minio.PutObjectOptions{
+		ContentType: contentType,
+	})
+	return err
+}
+
+func (s *S3Signer) GetObject(ctx context.Context, objectKey string) (io.ReadCloser, string, int64, error) {
+	obj, err := s.client.GetObject(ctx, s.bucket, objectKey, minio.GetObjectOptions{})
+	if err != nil {
+		return nil, "", 0, err
+	}
+	info, err := obj.Stat()
+	if err != nil {
+		_ = obj.Close()
+		return nil, "", 0, err
+	}
+	contentType := info.ContentType
+	if contentType == "" {
+		contentType = "application/octet-stream"
+	}
+	return obj, contentType, info.Size, nil
+}
@@ -8,6 +8,7 @@ type Store interface {
 	CreateChallenge(ch Challenge) error
 	GetChallenge(nonce string) (Challenge, error)
 	MarkChallengeUsed(nonce string) error
+	SaveUserLogin(ul UserLogin)
 	SaveSession(session Session)
 	GetSession(token string) (Session, error)
 	SaveInvitation(inv Invitation) error
@@ -19,6 +20,7 @@ type Store interface {
 	DeleteCollection(id string) error
 	SaveFeature(f Feature)
 	ListFeaturesByCollection(collectionID string) []Feature
+	ListFeaturesAll() []Feature
 	GetFeature(featureID string) (Feature, error)
 	DeleteFeature(featureID string) error
 	SaveAsset(a Asset)
@@ -195,6 +195,16 @@ func (s *MemoryStore) ListFeaturesByCollection(collectionID string) []Feature {
 	return result
 }
 
+func (s *MemoryStore) ListFeaturesAll() []Feature {
+	s.mu.RLock()
+	defer s.mu.RUnlock()
+	result := make([]Feature, 0, len(s.features))
+	for _, f := range s.features {
+		result = append(result, f)
+	}
+	return result
+}
+
 func (s *MemoryStore) GetFeature(featureID string) (Feature, error) {
 	s.mu.RLock()
 	defer s.mu.RUnlock()
@@ -336,3 +346,7 @@ func (s *MemoryStore) PruneExpired(now time.Time) {
 		}
 	}
 }
+
+func (s *MemoryStore) SaveUserLogin(ul UserLogin) {
+	// In-memory store: no-op for login history (persistence only in Postgres)
+}
@@ -21,7 +21,6 @@ func Migrate(databaseURL string) error {
 		return err
 	}
 	defer db.Close()
-
 	files, err := fs.ReadDir(migrationsFS, "migrations")
 	if err != nil {
 		return err
@@ -0,0 +1,21 @@
+-- Add ip column to challenges (idempotent)
+DO $$
+BEGIN
+    IF NOT EXISTS (
+        SELECT 1 FROM information_schema.columns
+        WHERE table_schema = 'public' AND table_name = 'challenges' AND column_name = 'ip'
+    ) THEN
+        ALTER TABLE challenges ADD COLUMN ip TEXT;
+    END IF;
+END $$;
+
+-- User login history (ip, created_at per user)
+CREATE TABLE IF NOT EXISTS user_logins (
+    id SERIAL PRIMARY KEY,
+    public_key TEXT NOT NULL REFERENCES users(public_key) ON DELETE CASCADE,
+    ip TEXT NOT NULL,
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
+);
+
+CREATE INDEX IF NOT EXISTS idx_user_logins_public_key ON user_logins(public_key);
+CREATE INDEX IF NOT EXISTS idx_user_logins_created_at ON user_logins(created_at);
@@ -54,8 +54,8 @@ func (s *PostgresStore) GetUser(publicKey string) (User, error) {
 
 func (s *PostgresStore) CreateChallenge(ch Challenge) error {
 	_, err := s.db.Exec(
-		`INSERT INTO challenges (nonce, public_key, expires_at, used) VALUES ($1, $2, $3, $4)`,
-		ch.Nonce, ch.PublicKey, ch.ExpiresAt, ch.Used,
+		`INSERT INTO challenges (nonce, public_key, ip, expires_at, used) VALUES ($1, $2, $3, $4, $5)`,
+		ch.Nonce, ch.PublicKey, nullStr(ch.IP), ch.ExpiresAt, ch.Used,
 	)
 	if err != nil {
 		if isUniqueViolation(err) {
@@ -69,9 +69,9 @@ func (s *PostgresStore) CreateChallenge(ch Challenge) error {
 func (s *PostgresStore) GetChallenge(nonce string) (Challenge, error) {
 	var ch Challenge
 	err := s.db.QueryRow(
-		`SELECT nonce, public_key, expires_at, used FROM challenges WHERE nonce = $1`,
+		`SELECT nonce, public_key, COALESCE(ip,''), expires_at, used FROM challenges WHERE nonce = $1`,
 		nonce,
-	).Scan(&ch.Nonce, &ch.PublicKey, &ch.ExpiresAt, &ch.Used)
+	).Scan(&ch.Nonce, &ch.PublicKey, &ch.IP, &ch.ExpiresAt, &ch.Used)
 	if errors.Is(err, sql.ErrNoRows) {
 		return Challenge{}, ErrNotFound
 	}
@@ -260,6 +260,29 @@ func (s *PostgresStore) ListFeaturesByCollection(collectionID string) []Feature
 	return result
 }
 
+func (s *PostgresStore) ListFeaturesAll() []Feature {
+	rows, err := s.db.Query(
+		`SELECT id, collection_id, owner_key, type, geometry, properties, created_at, updated_at
+		 FROM features ORDER BY created_at`,
+	)
+	if err != nil {
+		return nil
+	}
+	defer rows.Close()
+	var result []Feature
+	for rows.Next() {
+		var f Feature
+		var geom, props []byte
+		if err := rows.Scan(&f.ID, &f.CollectionID, &f.OwnerKey, &f.Type, &geom, &props, &f.CreatedAt, &f.UpdatedAt); err != nil {
+			return result
+		}
+		_ = json.Unmarshal(geom, &f.Geometry)
+		_ = json.Unmarshal(props, &f.Properties)
+		result = append(result, f)
+	}
+	return result
+}
+
 func (s *PostgresStore) GetFeature(featureID string) (Feature, error) {
 	var f Feature
 	var geom, props []byte
@@ -291,6 +314,13 @@ func (s *PostgresStore) DeleteFeature(featureID string) error {
 	return nil
 }
 
+func (s *PostgresStore) SaveUserLogin(ul UserLogin) {
+	_, _ = s.db.Exec(
+		`INSERT INTO user_logins (public_key, ip, created_at) VALUES ($1, $2, $3)`,
+		ul.PublicKey, nullStr(ul.IP), ul.CreatedAt,
+	)
+}
+
 func (s *PostgresStore) SaveAsset(a Asset) {
 	_, _ = s.db.Exec(
 		`INSERT INTO assets (id, owner_key, checksum, ext, kind, mime_type, size_bytes, object_key, is_public, created_at, updated_at)
@@ -403,7 +433,6 @@ func (s *PostgresStore) ListAssetsByFeature(featureID string) []FeatureAsset {
 	}
 	return result
 }
-
 func (s *PostgresStore) PruneExpired(now time.Time) {
 	_, _ = s.db.Exec(`DELETE FROM challenges WHERE expires_at < $1`, now)
 	_, _ = s.db.Exec(`DELETE FROM sessions WHERE expires_at < $1`, now)
@@ -11,10 +11,18 @@ type User struct {
 type Challenge struct {
 	Nonce     string
 	PublicKey string
+	IP        string
 	ExpiresAt time.Time
 	Used      bool
 }
 
+// UserLogin records a successful login for a user (ip, created_at)
+type UserLogin struct {
+	PublicKey string
+	IP        string
+	CreatedAt time.Time
+}
+
 type Session struct {
 	Token     string
 	PublicKey string
@@ -642,10 +642,36 @@ class GeoApiClient {
 		return this.request("/v1/assets", { method: "POST", body: input });
 	}
 	async getAssetSignedUploadUrl(assetId, contentType) {
-		return this.request(`/v1/assets/${assetId}/signed-upload`, {
+		const response = await this.request(`/v1/assets/${assetId}/signed-upload`, {
 			method: "POST",
 			body: { contentType: contentType ?? "application/octet-stream" }
 		});
+		if (response.url.startsWith("/")) {
+			response.url = this.resolveRelativeLink(response.url);
+		}
+		return response;
+	}
+	async uploadAssetBinary(assetId, payload, contentType = "application/octet-stream") {
+		const upload = await this.getAssetSignedUploadUrl(assetId, contentType);
+		const headers = new Headers;
+		if (contentType) {
+			headers.set("Content-Type", contentType);
+		}
+		if (this.accessToken) {
+			headers.set("Authorization", `Bearer ${this.accessToken}`);
+		}
+		const res = await fetch(upload.url, {
+			method: upload.method || "PUT",
+			headers,
+			body: payload
+		});
+		if (!res.ok) {
+			const maybeJson = await res.json().catch(() => ({}));
+			let msg = maybeJson.error ?? `Upload failed (${res.status})`;
+			if (maybeJson.hint)
+				msg += `. ${maybeJson.hint}`;
+			throw new Error(msg);
+		}
 	}
 	async setAssetVisibility(assetId, isPublic) {
 		return this.request(`/v1/assets/${assetId}`, {
@@ -29,7 +29,7 @@ export class GeoApiClient {
 	private accessToken: string | null = null;
 
 	/**
-	 * @param baseUrl - API base URL (e.g. https://momswap.produktor.duckdns.org)
+	 * @param baseUrl - API base URL (e.g. https://tenerife.baby)
 	 * @param storage - Storage adapter (localStorage-like: getItem, setItem, removeItem)
 	 * @param storageKey - Key for persisting keypair (default from DEFAULT_KEYS_STORAGE_KEY)
 	 */
@@ -260,15 +260,52 @@ export class GeoApiClient {
 		return this.request("/v1/assets", { method: "POST", body: input });
 	}
 
-	/** Request a signed upload URL for an existing asset. */
+	/**
+	 * Request a backend upload URL for an existing asset.
+	 * Backend returns a service URL (for example /v1/assets/{id}/upload), not a direct storage endpoint.
+	 */
 	async getAssetSignedUploadUrl(
 		assetId: string,
 		contentType?: string
 	): Promise<{ url: string; method: string }> {
-		return this.request(`/v1/assets/${assetId}/signed-upload`, {
+		const response = await this.request<{ url: string; method: string }>(`/v1/assets/${assetId}/signed-upload`, {
 			method: "POST",
 			body: { contentType: contentType ?? "application/octet-stream" },
 		});
+		if (response.url.startsWith("/")) {
+			response.url = this.resolveRelativeLink(response.url);
+		}
+		return response;
+	}
+
+	/**
+	 * Upload file/binary for an existing asset through backend upload endpoint.
+	 * Uses getAssetSignedUploadUrl internally and executes the upload request.
+	 */
+	async uploadAssetBinary(
+		assetId: string,
+		payload: BodyInit,
+		contentType = "application/octet-stream"
+	): Promise<void> {
+		const upload = await this.getAssetSignedUploadUrl(assetId, contentType);
+		const headers = new Headers();
+		if (contentType) {
+			headers.set("Content-Type", contentType);
+		}
+		if (this.accessToken) {
+			headers.set("Authorization", `Bearer ${this.accessToken}`);
+		}
+		const res = await fetch(upload.url, {
+			method: upload.method || "PUT",
+			headers,
+			body: payload,
+		});
+		if (!res.ok) {
+			const maybeJson = (await res.json().catch(() => ({}))) as { error?: string; hint?: string };
+			let msg = maybeJson.error ?? `Upload failed (${res.status})`;
+			if (maybeJson.hint) msg += `. ${maybeJson.hint}`;
+			throw new Error(msg);
+		}
+	}
 
 	/** Update asset visibility (owner only). */
@@ -231,7 +231,12 @@ async function createMockServer(): Promise<{ url: string; server: ReturnType<typ
 		// POST /v1/assets/:id/signed-upload
 		if (method === "POST" && path.match(/^\/v1\/assets\/[^/]+\/signed-upload$/)) {
 			const id = path.split("/")[3]!;
-			return Response.json({ url: `http://upload.local/${id}`, method: "PUT" });
+			return Response.json({ url: `/v1/assets/${id}/upload`, method: "PUT" });
+		}
+
+		// PUT /v1/assets/:id/upload
+		if (method === "PUT" && path.match(/^\/v1\/assets\/[^/]+\/upload$/)) {
+			return new Response(null, { status: 204 });
 		}
 
 		// PATCH /v1/assets/:id
@@ -325,7 +330,9 @@ describe("GeoApiClient integration (docs flow)", () => {
 
 		const upload = await client.getAssetSignedUploadUrl(createdAsset.asset.id, "model/gltf-binary");
 		expect(upload.method).toBe("PUT");
-		expect(upload.url).toContain(createdAsset.asset.id);
+		expect(upload.url).toContain(`/v1/assets/${createdAsset.asset.id}/upload`);
+
+		await client.uploadAssetBinary(createdAsset.asset.id, new Blob(["fake-glb"]), "model/gltf-binary");
 
 		const toggled = await client.setAssetVisibility(createdAsset.asset.id, false);
 		expect(toggled.asset.isPublic).toBe(false);
@@ -4,9 +4,20 @@ import { scanQRFromCamera } from "./scanner.js";
 
 const { createApp, ref, reactive, onMounted, watch } = Vue;
 
+function normalizeInitialApiBase() {
+	const saved = localStorage.getItem("geo_api_base") || "";
+	const host = window.location.hostname.toLowerCase();
+	if (saved) {
+		const pointsToLocalhost = /^https?:\/\/(localhost|127\.0\.0\.1)(:\d+)?(\/|$)/i.test(saved);
+		const runningHosted = host !== "localhost" && host !== "127.0.0.1";
+		if (!(runningHosted && pointsToLocalhost)) return saved;
+	}
+	return window.location.origin;
+}
+
 createApp({
 	setup() {
-		const apiBase = ref(localStorage.getItem("geo_api_base") || "https://momswap.produktor.duckdns.org");
+		const apiBase = ref(normalizeInitialApiBase());
 		const state = reactive({
 			publicKey: "",
 			privateKey: "",
Binary files not shown (font glyph ranges: Roboto Black Italic *.pbf).
Some files were not shown because too many files have changed in this diff.