This six-week series will cover causal inference model building and evaluation techniques. In this workshop, we’ll teach the essential elements of answering causal questions in R through causal diagrams and causal modeling techniques such as propensity scores and inverse probability weighting. We’ll also show that by distinguishing predictive models from causal models, we can better take advantage of both tools. You’ll be able to use the tools you already know (the tidyverse, regression models, and more) to answer the questions that are important to your work.
This talk will focus on an application, ConTESSA, along with the accompanying R package, tti, designed to help quantify the impact of contact tracing programs. The talk will walk through the technical aspects of the underlying model as well as highlight how R, and in particular shiny, were used to create this product.
The data revolution has led to increased interest in the practice of data analysis. While much has been written about statistical thinking, a complementary form of thinking that appears in the practice of data analysis is design thinking: the problem-solving process used to understand the people for whom a product is being designed. For a given problem, there can be significant or subtle differences in how a data analyst (or producer of a data analysis) constructs, creates, or designs a data analysis, including differences in the choice of methods, tooling, and workflow. These choices can affect both the data analysis products themselves and the experience of the consumer of the data analysis. Therefore, the role of a producer can be thought of as designing the data analysis with a set of design principles. This talk will introduce six design principles for data analysis and describe how they can be mapped to data analyses in a quantitative and informative manner. It will also present empirical evidence of variation in these principles within and between producers of data analyses, which we hope will provide guidance for future work in characterizing the data analytic process.
In this workshop, we’ll teach the essential elements of answering causal questions in R through causal diagrams and causal modeling techniques such as propensity scores and inverse probability weighting.
The Wake Forest Conference on Analytics Impact is focused on the impactful use of analytics to solve problems in business, non-profits, government agencies, and society. During the pandemic, government officials and healthcare professionals have had to communicate to the public using healthcare data more than ever before. How to communicate these data statistically and visually to influence people’s behavior has proven very challenging. What have we learned about communicating with data during this crisis? What did we get right, and what failed? This year’s Conference on Analytics Impact is focused on communicating with healthcare data and lessons learned from the pandemic.
Without strong communication skills, even the most advanced analysis we have performed might be overlooked. At this event, our expert panelists will share tips and advice on how to clearly and effectively communicate statistics, particularly on social media, and will answer questions from the audience.
The debate over the value and interpretation of the p-value has endured since its inception nearly 100 years ago. The use and interpretation of p-values vary across a host of factors, especially by discipline. These differences have proven to be a barrier when developing and implementing boundary-crossing clinical and translational science. The purpose of this panel is to discuss misconceptions about, debates over, and alternatives to the p-value.
This talk will focus on leveraging social media to communicate statistical concepts. From summarizing others’ content to promoting your own work, we will discuss best practices for statistical communication that is simultaneously clear, engaging, and understandable while remaining rigorous and mathematically correct. It is increasingly important for people to be able to sift through what is important and what is noise, what is evidence and what is anecdote. This talk focuses on techniques to strike an appropriate balance, with specifics on how to communicate complex statistical concepts in an engaging manner without sacrificing truth and content.
Clear statistical communication is both an educational and public health priority. This talk will focus on best practices for statistical communication that is simultaneously clear, engaging, and understandable while remaining rigorous and mathematically correct. It is increasingly important for people to be able to sift through what is important and what is noise, what is evidence and what is anecdote. This talk focuses on techniques to strike an appropriate balance, with specifics on how to communicate complex statistical concepts in an engaging manner without sacrificing truth and content.
Data Science, as a broad and interdisciplinary field, is one of the fastest-growing areas of student interest (and employment opportunity). The traditional introductory statistics courses that typically serve as a gateway to data science need modernized curricula and pedagogy in order to adapt to today’s increasingly large and complex data sources and data science questions. In this session, we share our experiences addressing the following issues:

• What constitutes the fundamentals of good data science practice?
• How can a data science course be taught with innovative pedagogy?
• How can communication skills be improved to bridge data scientists and practitioners?
• How can we take advantage of virtual learning?

Discussant: Linda Zhao. Speakers: Leanna House on Adapting student engagement strategies for a virtual environment, Lucy D’Agostino McGowan on Bringing Data Science Communication into the Classroom, and Nusrat Jahan on Data Science Education in Undergraduate Setting.