Qualitative Analysis: Cluster Randomized Trials (CRTs)
Cluster randomization is a research methodology frequently applied in education, the social sciences, health services and implementation research, and public health. Instead of randomizing individuals, the method takes clusters of individuals (for example, the patients of a group of medical practitioners) and randomly assigns each cluster to a distinct intervention arm. Because the methodological features of CRTs are quite distinct, ethical challenges arise. For instance, identifying who counts as a research subject in large community-based public health CRTs is difficult, because the units of randomization, intervention, and outcome measurement may all differ. Moreover, prospective participants often have "gatekeepers" who help decide whether or not they take part in a CRT.
CRTs pose distinctive ethical issues chiefly because the units of randomization, intervention, and outcome measurement may differ. Informants identified gatekeepers, and the question of whose consent is needed to participate in medical CRTs, as the main challenge they face. Uncertainty about the legitimate scope of gatekeeper authority arises when consent must be sought through bodies such as social groups or municipalities.
In addition to the ethical issues surrounding CRTs, two further topics are listed as main research questions: first, the ethics review processes applied to CRTs, and second, the need for comprehensive ethics guidelines for conducting them. McRae et al. list these three as their main exploratory questions, but give little clarity on them, which makes the research hard to follow.
Informants were concerned about bias arising from the requirement of consent to become a CRT participant, and argued that consent procedures should be tailored to the type of intervention; this could reduce ethics-related problems such as the risk of privacy loss. Informants also stated that ethics review processes have had both good and bad effects on the conduct of CRTs, that the availability of guidelines would help researchers and their ethics committees, and that reducing jurisdictional restrictions would make their work easier.
Potential informants, senior members of the field chosen for their CRT experience, were contacted by email. The email stated the purpose of the study and its design and requested consent to participate. Those willing to take part were interviewed by telephone, with the details of the interview sent to them by mail only after verbal consent had been obtained.
Sample, population, participants
The study was initially to be conducted with twenty-five potential subjects, but four declined and one informant's data was discarded because it was of insufficient quality for transcription and analysis. In the end, twenty experienced CRT researchers took part. They were based in different regions: ten in Europe, six in the USA, and four in Canada; five described themselves as statisticians. Eleven of the informants worked in primary care, three in public health, and six in hospital-based care.
Each informant's interview transcript was imported into qualitative data analysis software, and a content analysis approach was then adopted to categorize the responses. This part of the paper is hard to follow because the description is highly technical.
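As a rough illustration of that categorization step (the paper does not describe its coding scheme, so the categories and keywords below are hypothetical, and real qualitative content analysis is done by human coders rather than keyword matching), a first automated pass over transcript excerpts might look like this:

```python
# Toy keyword-based coding sketch; categories and keywords are assumptions,
# not the scheme used by McRae et al.
CATEGORY_KEYWORDS = {
    "informed consent": ["consent", "permission"],
    "gatekeepers": ["gatekeeper", "authority", "municipality"],
    "ethics review": ["review", "committee", "board"],
}

def code_response(response: str) -> list[str]:
    """Return the categories whose keywords appear in a transcript excerpt."""
    text = response.lower()
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in text for w in words)]

print(code_response("The municipality acted as a gatekeeper for consent."))
```

In practice such a pass would only pre-sort excerpts for human review; the final categories reported in the paper would come from the researchers' own reading of the transcripts.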
Behavioral interventions in CRTs can lead to bias, because participants may change their behavior in response to information given during consent negotiation. The study does not acknowledge this as a limitation, but it is a significant disadvantage. A further limitation is that the study was conducted only with experienced, English-speaking CRT researchers working in developed countries.
The participants responded to the main questions of the study. Their answers are categorized as follows: the need for informed consent, the roles of cluster gatekeepers, the potential benefits and risks involved, experiences with the ethics review process, and the development of CRT ethics guidelines.
The informants highlighted only the difficulties in the ethics review process of CRTs and the consent issues raised by gatekeepers; the relationships among harms, benefits, and distributive justice were left out.
Quantitative Analysis: Understanding the Impact of Video Quality on User Engagement
Video quality has a large impact on user engagement, now that internet video distribution is mainstream and video makes up most internet traffic. Users now choose quality over mere availability, a shift stimulated by the ever-falling cost of internet content delivery and the advent of new subscription models.
The main points of discussion are types of video content, user engagement, and video quality metrics.
The questions under scrutiny are: by how much does poor video quality reduce a user's engagement? Do different quality metrics vary in how strongly they affect engagement? And does the impact of the quality metrics differ across granularities of engagement and across content genres?
Three types of data support the research on how quality affects engagement. First, different types of content are offered to users. Second, engagement is measured at different timescales: "per view" (a single video being watched) and "per viewer" (a user's aggregate across all the videos they watch). Third, the quality metrics capture different features of the observed video quality: the rate at which video is rendered, the rate at which it is encoded (average bitrate), and how much and how often the user experiences buffering events.
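The two buffering metrics mentioned above can be sketched as follows; the function name and the assumed log format (a list of stall durations plus the total session length) are illustrative, not the paper's actual pipeline:

```python
def buffering_metrics(stall_durations, session_seconds):
    """Compute buffering ratio and rate of buffering events for one view.

    stall_durations: durations of buffering interruptions, in seconds
                     (assumed log format, for illustration only).
    session_seconds: total session duration, including buffering time.
    """
    buffer_time = sum(stall_durations)
    buffering_ratio = buffer_time / session_seconds        # fraction of time spent buffering
    rate_of_events = len(stall_durations) / (session_seconds / 60)  # stalls per minute
    return buffering_ratio, rate_of_events

# A 10-minute (600 s) view with two stalls of 6 s and 12 s:
ratio, rate = buffering_metrics([6, 12], 600)
print(f"buffering ratio = {ratio:.0%}, {rate:.1f} events/min")
```

The two metrics are deliberately separate: many short stalls and one long stall can yield the same buffering ratio but very different viewing experiences.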
The quantitative analysis indicated that the interaction between the quality metrics and play time is complex. Within the examined range of buffering ratios (roughly 1-10%), the relationship was treated as linear, and black-box regression models were avoided. Buffering ratio has the strongest quantitative impact, on live content first, then long VoD, then short VoD.
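A minimal sketch of that restricted-range linear fit, using synthetic data (the slope, intercept, and noise level below are made up for illustration and are not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sessions: within the 1-10% buffering-ratio range the
# relationship with play time is treated as roughly linear.
buff_ratio = rng.uniform(0.01, 0.10, 200)                   # fraction of session spent buffering
play_time = 40 - 300 * buff_ratio + rng.normal(0, 2, 200)   # minutes (illustrative model)

# Fit play_time = slope * buff_ratio + intercept by least squares.
slope, intercept = np.polyfit(buff_ratio, play_time, 1)
print(f"estimated slope: {slope:.1f} minutes per unit of buffering ratio")
# A negative slope means each extra percentage point of buffering
# (+0.01 in ratio) costs roughly |slope|/100 minutes of play time.
```

Fitting a simple interpretable line on the restricted range, rather than a black-box model over all buffering ratios, matches the approach described above.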
Sample, population, participants
Data was collected from five affiliated content providers, which appear among the top 500 most-visited sites. These sites serve a large amount of video content, enabling them to provide a representative view of internet video quality and user engagement.
Data collection and data analysis
The collected data was organized into three content categories: long video-on-demand (VoD) content, short VoD content, and live content. The data was then analyzed by calculating play time for each category.
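The per-category aggregation can be sketched as below; the session records and category labels are hypothetical, since the dataset's actual schema is not described:

```python
from collections import defaultdict

# Hypothetical session records: (content category, play time in minutes).
sessions = [
    ("long_vod", 42.0), ("long_vod", 35.5),
    ("short_vod", 3.2), ("short_vod", 4.8),
    ("live", 27.0),
]

# Accumulate total minutes and session counts per category.
totals = defaultdict(lambda: [0.0, 0])
for category, minutes in sessions:
    totals[category][0] += minutes
    totals[category][1] += 1

avg_play_time = {cat: total / count for cat, (total, count) in totals.items()}
print(avg_play_time)
```

Keeping the three categories separate matters because, as noted above, the same quality metric can affect long VoD, short VoD, and live play time to different degrees.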
The findings presented were the result of an iterative process that included many false starts and misinterpreted statistics.
A complementary analysis was needed for the long-VoD case, where the correlation coefficient for average bitrate was weak even though its information gain was high. Context is thus an important factor in how video quality affects a user.
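The gap between a weak correlation coefficient and a high information gain can be reproduced with a toy example: a strong but non-monotonic dependence yields a near-zero Pearson correlation while the binned mutual information (information gain) stays clearly above zero. The data here is synthetic, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 5000)
y = (x - 0.5) ** 2            # strong but non-monotonic dependence on x

# Pearson correlation largely misses the relationship...
r = np.corrcoef(x, y)[0, 1]

# ...while mutual information over binned values captures it.
def mutual_information(a, b, bins=10):
    """Mutual information in bits between two variables, after binning."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

print(f"correlation = {r:.2f}, information gain = {mutual_information(x, y):.2f} bits")
```

This is exactly the situation described for average bitrate and long-VoD play time: a metric can carry substantial information about engagement even when the linear correlation alone looks negligible.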
Backing the statistical analysis with controlled experiments and domain-specific insights is important for replicating the observations. Doing so has made user behavior, and the optimization opportunities available to video players, easier to see than before.
Content providers still worry about whether observations from typical videos can be applied to high-impact events such as the Olympics. The findings also do not clearly explain whether the measured quality impacts reflect genuine user engagement or merely user attention span.
Discussion of findings
The video quality impact (VQI) study and the CRT study share some similarities and some significant differences in their objectives. Both studies aim to answer specific questions, both are conducted with human beings as subjects, and both acquire the subjects' consent before conducting the study. Their methods, however, differ. The CRT researchers conducted their study through telephone interviews, while the VQI researchers collected data from internet video sites. The VQI study worked through many false starts and mistaken statistics and assumptions, while the CRT study relied on the informants' actual statements; this difference has large implications for both studies. There is a large element of randomness in the VQI findings, whereas the CRT findings are more specific and straightforward, even though some of them may also be affected by chance.

As for limitations, the CRT study's limitations stem mostly from bias on the informants' side, while the VQI study attributes its limitations to spurious statistics caused by assumptions. In conclusion, the VQI findings have not yet been shown to hold in general, while the CRT study has found answers to its questions.