-
One approach might be to think of themes. For example,
Or,
-
Another possible model: we assign "topics for conversation", e.g. "What are the barriers to adoption of open source in your organisation?", and small groups / tables discuss each topic for 15 minutes, then each table feeds back. This would allow us to engage everyone and draw out common themes and anecdotes for later discussion in the panel.
-
Potential formats available

Assumption is we have more than 40 people, and a single round table will not facilitate full engagement in the discussion. Regardless, the 'shared validation' topic would use up the first 90 minutes.

Panel based - everyone in one stream
- pre-event: We review the topics, and create an agenda from the submissions. ~4 of the advisory board take on each topic in the agenda with a chair, and prepare a couple of slides each per topic.
- at-event: We have one stream; for each topic the panel introduce the topic, and then the chair facilitates a discussion between the panelists and the audience.
- notes: This would be a structured discussion, using a format that is fairly common for panels. Having the panel helps to focus the discussion and introduce key information and areas to cover, and hopefully this transitions to a good interaction between the panel and the remaining people in the room.

Chair based - we break it into streams (need to know number coming for this)
- pre-event: We decide on the agenda but we have multiple streams (e.g. 2-4 topics at a time), and make sure each topic has a chair.
- at-event: We break into streams (could be tables), with a chair at each table to guide the conversation. Chairs would be responsible for summarising their session.

Quasi un-conference (live agenda)
- pre-event: We use the submissions to seed a draft agenda on the kanban, with a few parallel streams (based on room layout and number attending).
- at-event: We dedicate the first 30 minutes (after Doug) to reviewing the agenda, giving people a chance to propose new topics, merge topics, or move topics around. We ask for a volunteer to chair each topic; only topics with a volunteer remain on the agenda. The event then runs, but we stress to people that they should feel free to move around. After the event, chairs summarise.
-
Just capturing a thought during the call on 8/23... I would love to see a tangible output that we can share broadly after the conference. I think it would be really cool if we aimed to have something like a white paper that captures the attendees' thoughts on each of the breakout group questions. I wouldn't want to distract too much from the discussions on the day, so maybe this bridges us nicely into a post-conference chat network where we polish the summaries into something we can share broadly. (Ning raised a similar idea in the chat during the call.)
-
Another idea regarding the breakouts. I've been to other unconferences where they emphasized that participants should feel fully empowered to migrate between breakouts - whether they find that the discussion isn't what they were expecting or they just want to sample other discussions. In practice, I think people stay wherever they first go because it's a bit awkward to just leave and join another group. If we want to keep that self-assembling atmosphere, perhaps having a small announcement a few times throughout each breakout session would help assure people that it's okay and encouraged.
-
We have confirmed an opening 90-minute topic, which will be led by @dgkf and was described to people signing up for the workshop as "a 90 minute workshop on a new validated repository for GxP R use".
After that we need to define how to structure the rest of the day. E.g. will we pre-define the agenda topics? Or will we use a more fluid unconference style, and let people adapt and vote on the day?
Agenda (draft)
9 - 9.30am: Introduction, get to know participants
9.30am - 11am: A shared repository for R package validation (Responsible: Doug Kelkhoff)
11am - 12.30pm: Fostering strong communities and collaboration between companies for corporate-sponsored open-source packages that are business critical
Snowball fight:
NEED to flow this into "what can we do to address this?" (to be defined)
12.30 - 1.30pm: Lunch
1.30 - 5pm: Small group sessions (need to schedule a bio-break?)
Each group has one facilitator, who stays with the group for the entire afternoon. The afternoon is broken into two sections, with participants joining one group for the first half and then moving to a different group for the second half. Each facilitator should aim to establish a common understanding of the problem with each group, and to end the session either with some content that helps address the topic raised, or with a set of actions and recommendations to follow up on.
Group 1: #6 & #10 What is stopping people and companies from contributing to open source? Can we define the case for contributing to OS?
Group 2: #8 Where are we on getting data analysts and data scientists who work with clinical data on board (in particular, those delivering CSRs and submission packages)? What are the challenges, and what has been overcome?
Group 3: #11 What should we be doing to leverage the impact of advances in LLMs/AA/AI? (from the drug development level through to developer efficiency)
Group 4: #3 There is more need than ever to integrate different roles and ways of working, along with different data modalities. What are the barriers to bringing imaging/genomics/digital biomarkers and the CRF closer together, how could we overcome them, and what is our envisioned benefit?
Group 5: #16 What are our goals for a modern clinical reporting workflow on a modern SCE? What are our learnings to date in pursuing that goal, and how can we better prepare ourselves to balance the drive to innovate with the need to evolve people and processes?
Group 6: #9 We have a path to R package validation - but what about shiny apps? In what context would validation become relevant to shiny app code, and how can we get ahead of this topic to pave a way forward for interactive CSRs?
Group 7: #15 How much risk is there in depending on external packages, and can we foster a clearer set of expectations between developers and people/companies that depend on these packages?
Group 8: #17 If we assume it's technically possible to transfer - what would it mean to give an interactive CSR? How would primary, secondary and ad-hoc analysis be viewed? Would views change depending on role (e.g. sponsor, statistical reviewer, clinical reviewer)?
5 - 5.30pm: Closing; each facilitator has 5 minutes to share reflections, and how they wish to follow up on and action the topics discussed in their group