- unit test all the things!
  - Shot.task_template
  - every individual field type
  - comprehensive sorting
  - paging
  - event edge cases
- document missing filters:
  We don't support "name" at all:
    - *.name*
    If it appears that 'name' is always a name, then we can get away with
    just that one field.
  We don't support native date/time SQL functions (a client-side
  approximation is sketched after this item):
    - *in_last
    - *in_next
    - in_calendar_day
    - in_calendar_week
    - in_calendar_month
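  A minimal sketch of approximating in_last by computing a cutoff timestamp
  in Python instead of calling a native SQL date function; the column is
  assumed to be a SQLAlchemy Column, and every name here is hypothetical:

    import datetime

    _UNITS = {
        'HOUR': datetime.timedelta(hours=1),
        'DAY': datetime.timedelta(days=1),
        'WEEK': datetime.timedelta(weeks=1),
    }

    def in_last_clause(column, count, unit):
        # e.g. ('created_at', 'in_last', 2, 'DAY') -> column >= now - 2 days
        cutoff = datetime.datetime.utcnow() - count * _UNITS[unit]
        return column >= cutoff

  Months and years need real calendar math, which is part of why the
  in_calendar_* filters are harder than in_last/in_next.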
- Docs:
  - shotgun_api3_registry use
  - full setup tutorial
- Remove the ping/pong feature after shotgun_api3_registry uses info['sgcache']
- Contain the schema completely within the database itself (sketched below).
  Column fields include:
  - db_name: the database column name.
  - sg_name: the Shotgun field name.
  - do_cache: whether it should be cached by the events and scanner.
  - do_expose: whether it should be exposed by the web API.
  With this, it gets TONS easier to make safe schema migrations.
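  A minimal sketch of such a self-describing schema table, assuming
  SQLAlchemy (which sgcache already uses); the table and column names are
  hypothetical:

    import sqlalchemy as sa

    metadata = sa.MetaData()

    column_fields = sa.Table('sgcache_columns', metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('entity_type', sa.String, nullable=False),  # e.g. 'Shot'
        sa.Column('db_name', sa.String, nullable=False),      # database column name
        sa.Column('sg_name', sa.String, nullable=False),      # Shotgun field name
        sa.Column('do_cache', sa.Boolean, default=True),      # maintained by events/scanner
        sa.Column('do_expose', sa.Boolean, default=True),     # served by the web API
    )

  A migration could then be staged: add the row with do_expose=False,
  backfill the column, and only then flip do_expose on.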
OLDER (and not necessarily still TODO)
======================================
- If we cache all entity and multi_entity fields, then we can infer the
  existence of new entities upon return of a creation. E.g. creating a Shot
  with a task_template results in a few new Tasks. If we cache Shot.tasks,
  then we will immediately see those, and the multi_entity field can be
  responsible for making sure they exist. Likely a _complete column would be
  added to signal incomplete entities which need fetching. A thread can wait
  for a signal that such entities exist, then scan the database looking for
  them, and fully cache them (sketched below).
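  A sketch of that fetcher thread, assuming a _complete column; the
  find_incomplete and cache_fully helpers are hypothetical stand-ins for
  the real database and Shotgun calls:

    import threading

    wake_event = threading.Event()

    def find_incomplete():
        # hypothetical: SELECT ... WHERE _complete = false
        return []

    def cache_fully(entity):
        # hypothetical: fetch every cached field for `entity` from Shotgun
        pass

    def fetcher_loop():
        while True:
            wake_event.wait()   # sleep until a writer signals new stub rows
            wake_event.clear()
            for entity in find_incomplete():
                cache_fully(entity)

    threading.Thread(target=fetcher_loop, daemon=True).start()

  A multi_entity handler would insert stub rows with _complete = false and
  then call wake_event.set().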
- We can cache SOME url fields, depending on their "link_type" (e.g. we can
  cache those of "web" type). Would generally need support for raising an
  exception for a single field of a single entity.
  If we want to immediately deal with partially cached data, start by
  collecting the uncached_entities and uncached_fields on some object. Those
  must then be resolved immediately via an ('id', 'in', set_of_ids) query
  (a simple passthrough would instead require our order_by to match
  Shotgun's); see the sketch after this item.
  Another method is to have another thread listening to a queue for
  encounters with uncached values. When one is encountered, a signal can be
  sent to that thread, and an exception raised in order to pass the request
  through.
  Passthrough find(s) could be augmented to return every field, such that we
  are able to cache those entities immediately.
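  A sketch of that targeted resolution query; sg is a shotgun_api3.Shotgun
  instance, and the surrounding names (results, uncached_ids,
  uncached_fields) are hypothetical:

    def resolve_uncached(sg, entity_type, results, uncached_ids, uncached_fields):
        # One targeted query instead of a full passthrough, so our own
        # order_by does not need to match Shotgun's.
        fresh = sg.find(entity_type, [['id', 'in', list(uncached_ids)]],
                        list(uncached_fields))
        by_id = {e['id']: e for e in fresh}
        for row in results:
            if row['id'] in by_id:
                row.update(by_id[row['id']])  # fill in what the cache lacked
        return results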
- Skip change events that follow a create event.
- return accurate paging info; currently we are cheating a bit
- how to perform rolling schema upgrades?
  1. move event watcher to new schema
  2. perform full scan
  3. restart interval scanner
  4. restart cache with new schema
  - this generally needs services to be independent
  - would also be great to have a system for sending messages into the
    running services, having them reload the schema or add fields
    live on the fly
    - unix socket in a thread with json messages (sketched below)
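  A minimal sketch of that control channel: a thread listening on a unix
  socket for newline-delimited JSON messages. The socket path and the
  handle_message dispatcher are hypothetical:

    import json
    import os
    import socket
    import threading

    SOCKET_PATH = '/tmp/sgcache.sock'

    def handle_message(msg):
        # hypothetical dispatcher, e.g. {"cmd": "reload_schema"}
        print('control message:', msg)

    def control_loop():
        if os.path.exists(SOCKET_PATH):
            os.unlink(SOCKET_PATH)
        server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server.bind(SOCKET_PATH)
        server.listen(1)
        while True:
            conn, _ = server.accept()
            with conn, conn.makefile('r') as fh:
                for line in fh:
                    handle_message(json.loads(line))

    threading.Thread(target=control_loop, daemon=True).start()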
- get and parse private schema:
    import requests
    s = requests.Session()
    s.cookies['_session_id'] = sg._get_session_token()  # reuse the API session
    r = s.get('https://keystone.shotgunstudio.com/page/schema')
- return a fake "name" (as determined by the identifier_column in the
  private schema) with entities/multi-entities
- sgcache-reprocess --retirement -> reprocess retirement events
- table with log of data updates
  - the actual event logs?
- docs:
    ./bin/grep-schema -f schema/basic-filters.txt schema/keystone-full.yaml > schema/keystone-basic.yml

    dev ./bin/dump-data -p 66 -s schema/keystone-basic.yml | tee data.json
    PYTHONPATH=.:$PYTHONPATH bin/update-data -u sqlite:///test.db -s schema/keystone-basic.yml data.json
    SGCACHE_SQLA_URL=sqlite:///test.db PYTHONPATH="$PYTHONPATH:." python -m sgcache.web

    ./bin/dump-data -p 115 -s schema/keystone-basic.yml | tee all.json
    PYTHONPATH=.:$PYTHONPATH bin/update-data -u sqlite:///test.db -s schema/keystone-basic.yml all.json
    SGCACHE_SQLA_URL=sqlite:///test.db PYTHONPATH="$PYTHONPATH:." python -m sgcache.web

    SGCACHE_SQLA_URL=postgres:///sgcache python -m sgcache.web
    ./bin/dump-data -p 66 -s schema/keystone-basic.yml | tee all.json
    ./bin/dump-data -p 115 -s schema/keystone-basic.yml | tee all.json
    PYTHONPATH=.:$PYTHONPATH ./bin/update-data -u postgres:///sgcache -s schema/keystone-basic.yml all.json
- what must we do for `s3_uploads_enabled` in server info?
- URLs could include the real Shotgun server:
    >>> sg = Shotgun('http://localhost:8000/next=keystone.shotgunstudio.com/', 'xxx', 'yyy')
  The trailing slash is required so the host does not get parsed out (a
  parsing sketch follows this item).
  - Can we instead exist as the http_proxy kwarg?
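  A sketch of how the cache could recover the upstream host from such a
  URL; the next= convention comes from the example above, everything else
  is hypothetical:

    from urllib.parse import urlsplit

    def upstream_host(server_url):
        # '/next=keystone.shotgunstudio.com/' -> 'keystone.shotgunstudio.com'
        for part in urlsplit(server_url).path.strip('/').split('/'):
            if part.startswith('next='):
                return part[len('next='):]
        return None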
- listen for schema changes, and use that to invalidate data