Add batch support to generator framework #14

Closed
46 changes: 42 additions & 4 deletions boundary_layer/builders/templates/generator_operator.j2
@@ -29,23 +29,61 @@ def {{ item_name_builder }}(index, item):
latter would discard any default task args, expecting them to be filled-in
by airflow, while in fact airflow would not fill them in at all. #}
{% set properties = node.resolved_properties.values %}
for (index, item) in enumerate({{ iterable_builder }}(

{% set all_items = (node.name + '_all_items') | sanitize_operator_name %}
{{ all_items }} = {{ iterable_builder }}(
{% for arg in builder_args %}
{% if arg in properties %}
{{ arg }} = {{ properties[arg] | format_value }},
{% endif %}
{% endfor %}
)):
)

{% if node.batching.enabled %}
Member

What if, instead of using a conditional block for this, you were to:

  • use a default batch size of 1
  • create the {{item_name}} variable inside the builder function only if the batch size is 1
  • have the batch-name builder default to just choosing the item_name if the batch size is 1?

I think something along these lines might simplify a lot of the code, because many lines and branches have to go into dealing with differences between item_name and batch_name. I guess batch_name would become standard and item_name would only be a special case.
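For illustration, a rough sketch of how that fallback might look (the function and argument names here are hypothetical, not part of this PR):

    def example_batch_name_builder(index, items, item_name_builder, batch_size=1):
        # Sketch only: with a default batch size of 1, the batch-name builder can
        # fall back to the per-item name, so single-item "batches" keep the
        # existing task names and no separate non-batched branch is needed.
        if batch_size == 1:
            return item_name_builder(index, items[0])
        return 'batch_%d_%d' % (index, len(items))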

Contributor Author

I get what you're saying, but if things technically run in "batch" mode all the time, then all related functions should return a list (even if that list only contains one item). For backwards compatibility, a single-item list could be simplified to its singular element, but IMO, that inconsistency complicates the API. 🤷‍♂️

Member

Alright, yeah, you convinced me 😄

I think I still do prefer the interface described below in the comment about BatchingSchema, where if batch_size is missing or None then we implicitly interpret that as equivalent to disabling batching, so that we don't need to fill in both the enabled field and the batch_size field. Thoughts on that?

Contributor Author

I like it. Makes usage a little bit easier. 👍
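A minimal sketch of that convention (the helper name is hypothetical, not code from this PR):

    def example_is_batching_enabled(batching_config):
        # Sketch only: no batching block, or a missing/None batch_size, means
        # batching is off; providing a batch_size is enough to turn it on.
        if not batching_config:
            return False
        return batching_config.get('batch_size') is not None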

{# Generate code for batched situations #}
{% set batch_name_builder = (node.name + '_batch_name_builder') | sanitize_operator_name %}
def {{ batch_name_builder }}(index, items):
return 'batch_%d_%d' % (index, len(items))

{# TODO: Import this from some util module when such functionality is possible #}
def generator_helper_filter_with_blocklist(items, item_name_builder, blocklist):
def not_in_blocklist(index, item):
item_name = item_name_builder(index, item)
return not any(re.match(i, item_name) for i in blocklist)

filtered = filter(lambda (index, item): not_in_blocklist(index, item), enumerate(items))
Member

Unfortunately, Python 3 does not support the syntax in which function/lambda args are automatically expanded into tuples (though I will admit I have not carefully considered whether other parts of the code-generating templates produce Python 3-compliant code... we may have some work to do in that area).

Maybe it would be better to just map and filter using a comprehension like

return [item for (index, item) in enumerate(items)
    if not_in_blocklist(index, item)]

?


return map(lambda t: t[1], filtered)
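
A Python 3-compatible rewrite of this helper along the lines of the suggestion above might look like the following sketch (assuming re is imported by the generated module's preamble; this is not the code committed in this PR):

    def generator_helper_filter_with_blocklist(items, item_name_builder, blocklist):
        def not_in_blocklist(index, item):
            item_name = item_name_builder(index, item)
            return not any(re.match(pattern, item_name) for pattern in blocklist)

        # Return a concrete list (rather than a lazy map/filter object) so the
        # batching code below can call len() on it and slice it into groups.
        return [item for (index, item) in enumerate(items)
                if not_in_blocklist(index, item)]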

{# TODO: Import this from some util module when such functionality is possible #}
{# Borrowed from: https://stackoverflow.com/a/312464 #}
def generator_helper_grouped_list(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]

{% set filtered = (node.name + '_filtered') | sanitize_operator_name %}
{{ filtered }} = generator_helper_filter_with_blocklist({{ all_items }}, {{ item_name_builder }}, {{ blocklist }})

for (index, items) in enumerate(generator_helper_grouped_list({{ filtered }}, {{ node.batching.batch_size }})):
batch_name = {{ batch_name_builder }}(index, items)

{% set item_input = 'items' %}
{% set name_input = 'batch_name' %}
{% else %}
{# Generate code for non-batched situations #}
for (index, item) in enumerate({{ all_items }}):
item_name = {{ item_name_builder }}(index, item)
blocklist_match = any(re.match(i, item_name) for i in {{ blocklist }})
if blocklist_match:
continue

{% set item_input = 'item' %}
{% set name_input = 'item_name' %}
{% endif %}
{{ node.target | sanitize_operator_name }}_builder(
Contributor Author

I broke this out of the conditional blocks and templated item_input and name_input to avoid repetition, but I'm actually not sure if that's the best idea here... Thoughts?

index = index,
item = item,
item_name = item_name,
{{ item_input }} = {{ item_input }},
{{ name_input }} = {{ name_input }},
dag = dag,
upstream_dependencies = {{ upstream_dependencies | sanitize_operator_name | verbatim | format_value }},
downstream_dependencies = {{ downstream_dependencies | sanitize_operator_name | verbatim | format_value }})
11 changes: 9 additions & 2 deletions boundary_layer/builders/templates/generator_preamble.j2
@@ -13,10 +13,17 @@ You may obtain a copy of the License at
See the License for the specific language governing permissions and
limitations under the License.
#}
{% if referring_node.batching.enabled %}
{%- set item_input = 'items' %}
{%- set name_input = 'batch_name' %}
{% else %}
{%- set item_input = 'item' %}
{%- set name_input = 'item_name' %}
{% endif %}
def {{ generator_operator_name | sanitize_operator_name }}_builder(
index,
item,
item_name,
{{ item_input }},
{{ name_input }},
dag,
upstream_dependencies,
downstream_dependencies):
4 changes: 4 additions & 0 deletions boundary_layer/registry/types/generator.py
@@ -28,6 +28,10 @@ class GeneratorNode(SubdagNode):
def regex_blocklist(self):
return self.item.get('regex_blocklist', ())

@property
def batching(self):
return self.item.get('batching', {'enabled': False, 'batch_size': 1})


class GeneratorRegistry(ConfigFileRegistry):
node_cls = GeneratorNode
14 changes: 12 additions & 2 deletions boundary_layer/registry/types/operator.py
@@ -422,8 +422,18 @@ def _build_task_id(self, execution_context):
return base_name

suffix_mode = execution_context.referrer.item.get('auto_task_id_mode')
if not suffix_mode or suffix_mode == 'item_name':
return base_name + '-<<item_name>>'
batching_config = execution_context.referrer.item.get('batching', {'enabled': False})
Contributor Author

I feel like there's probably a better way to do this... 🤔

# Validate suffix_mode based on batching config
if batching_config['enabled'] and suffix_mode == 'item_name':
raise Exception(
'Cannot use `item_name` for auto_task_id_mode when batching is enabled')
elif not batching_config['enabled'] and suffix_mode == 'batch_name':
raise Exception(
'Cannot use `batch_name` for auto_task_id_mode when batching is disabled')

name_var = 'batch_name' if batching_config['enabled'] else 'item_name'
if not suffix_mode or suffix_mode == name_var:
Contributor Author

Hmm, should probably support either batch_name or item_name when batching is enabled, but only item_name if batching is not enabled. Thoughts?

Member

Oh yup, if I had read this comment first I would have typed a lot less up there ^^ haha

Contributor Author

On second thought, I don't think it makes sense to use item_name in a batching scenario. There's not really a good way to construct the item name in that case, so I think we'd have to use batch_name. Let me know if I'm missing something here... Otherwise, I'll add some validation around this.

Member

👍

return base_name + '-<<' + name_var + '>>'
elif suffix_mode == 'index':
return base_name + '-<<str(index)>>'

28 changes: 27 additions & 1 deletion boundary_layer/schemas/dag.py
@@ -14,7 +14,7 @@
# limitations under the License.

import semver
from marshmallow import fields, validates_schema, ValidationError
from marshmallow import fields, post_load, pre_dump, validates_schema, ValidationError
from boundary_layer import VERSION, MIN_SUPPORTED_VERSION
from boundary_layer.schemas.base import StrictSchema

@@ -35,9 +35,35 @@ class ReferenceSchema(OperatorSchema):
target = fields.String(required=True)


class BatchingSchema(StrictSchema):
Member

Minor, maybe, but you could do this with just a single optional batch_size argument in the generator schema, I think? And if that argument is missing, then you implicitly assume that enabled == False. This would also simplify the part above where you say you think there would be a better way...

Contributor Author

See my comment in generator_operator.j2 for my logic here.
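
For reference, the alternative described above would look roughly like this sketch (illustrative only; it is not what this PR implements, which nests a BatchingSchema instead):

    class GeneratorSchema(ReferenceSchema):
        auto_task_id_mode = fields.String()
        regex_blocklist = fields.List(fields.String())
        # A missing or None batch_size would implicitly mean batching is disabled.
        batch_size = fields.Integer()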

enabled = fields.Boolean()
batch_size = fields.Integer(required=True)
# This is a "transient" field to help with implicit enablement behavior
original_enabled = fields.Boolean(load_only=True)

@post_load
def fix_enabled_pre_load(self, data):
"""
If batching config is set at all, it's assumed to be enabled.
"""
enabled = data.get('enabled', None)
data['original_enabled'] = enabled
if enabled is None:
data['enabled'] = True

@pre_dump
def fix_enabled_pre_dump(self, data):
"""
Don't persist the enabled field if it wasn't explicitly configured.
"""
if data['original_enabled'] is None:
del data['enabled']


class GeneratorSchema(ReferenceSchema):
auto_task_id_mode = fields.String()
regex_blocklist = fields.List(fields.String())
batching = fields.Nested(BatchingSchema)

@validates_schema
def check_task_id_mode(self, data):
Member

Oh, I did not notice this in the original PR, but we'll have to add logic in this check_task_id_mode() method to allow the auto_task_id_mode value to be set to batch_name; otherwise, I think the config parser will reject this setting. Maybe the logic that you already have for checking these values in operator.py could be moved here?
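
One possible shape for that check, shown as a sketch (the existing body of check_task_id_mode is collapsed in this diff, so the allowed-mode list here is an assumption):

    @validates_schema
    def check_task_id_mode(self, data):
        # Sketch only: accept batch_name alongside the previously allowed values.
        allowed_modes = ('item_name', 'batch_name', 'index')
        mode = data.get('auto_task_id_mode')
        if mode and mode not in allowed_modes:
            raise ValidationError('Invalid auto_task_id_mode: {}'.format(mode))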

57 changes: 57 additions & 0 deletions test/test_schemas.py
@@ -0,0 +1,57 @@
from boundary_layer.schemas.dag import BatchingSchema
Contributor Author

Let me know if there's a better place to put this.

Member

Oh, yeah, I wouldn't bet on us having perfect fidelity with this, but we have tried to align the locations of files in test/ with the directory structure that is used in the main package. So could you please put this file into a directory test/schemas?



def test_batching_schema_implicit_enabled():
schema = BatchingSchema()
data = {
'batch_size': 10
}
batching = schema.load(data)[0]

assert batching['enabled'] is True
assert batching['batch_size'] == 10
assert batching['original_enabled'] is None

dumped = schema.dump(batching)[0]

assert 'enabled' not in dumped
assert dumped['batch_size'] == 10
assert 'original_enabled' not in dumped


def test_batching_schema_explicit_enabled():
schema = BatchingSchema()
data = {
'enabled': True,
'batch_size': 10
}
batching = schema.load(data)[0]

assert batching['enabled'] is True
assert batching['batch_size'] == 10
assert batching['original_enabled'] is True

dumped = schema.dump(batching)[0]

assert dumped['enabled'] is True
assert dumped['batch_size'] == 10
assert 'original_enabled' not in dumped


def test_batching_schema_disabled():
schema = BatchingSchema()
data = {
'enabled': False,
'batch_size': 10
}
batching = schema.load(data)[0]

assert batching['enabled'] is False
assert batching['batch_size'] == 10
assert batching['original_enabled'] is False

dumped = schema.dump(batching)[0]

assert dumped['enabled'] is False
assert dumped['batch_size'] == 10
assert 'original_enabled' not in dumped