Black Hole

Lensing of spacetime around a black hole. At Oxford we study black holes observationally and theoretically on all size and time scales; this is part of our core work.

Credit: Alain Riazuelo, IAP/UPMC/CNRS.

Prof Chris Lintott

Professor of Astrophysics and Citizen Science Lead

Research theme

  • Astronomy and astrophysics

Sub department

  • Astrophysics

Research groups

  • Zooniverse
  • Beecroft Institute for Particle Astrophysics and Cosmology
  • Rubin-LSST
chris.lintott@physics.ox.ac.uk
Telephone: 01865 (2)73638
Denys Wilkinson Building, room 532C
www.zooniverse.org
orcid.org/0000-0001-5578-359X

Zooniverse Lab

Build your own Zooniverse project: the Zooniverse Lab lets anyone build their own citizen science project.

Publications

An introduction to the Zooniverse

AAAI Workshop - Technical Report WS-13-18 (2013) 103

Authors:

AM Smith, S Lynn, CJ Lintott

Abstract:

The Zooniverse (zooniverse.org) began in 2007 with the launch of Galaxy Zoo, a project in which more than 175,000 people provided shape analyses of more than 1 million galaxy images sourced from the Sloan Digital Sky Survey. These galaxy 'classifications', some 60 million in total, have subsequently been used to produce more than 50 peer-reviewed publications, based not only on the original research goals of the project but also on serendipitous discoveries made by the volunteer community. Building on the success of Galaxy Zoo, the team has gone on to develop more than 25 web-based citizen science projects, all with a strong research focus, in subjects ranging from astronomy to zoology where human-based analysis still exceeds that of machine intelligence. Over the past six years Zooniverse projects have collected more than 300 million data analyses from over 1 million volunteers, providing fantastically rich datasets not only for the individuals working to produce research from their projects but also for the machine learning and computer vision research communities.

The Zooniverse platform has always been developed to be the 'simplest thing that works', implementing only the most rudimentary algorithms for functionality such as task allocation and user-performance metrics. These simplifications have been necessary to scale the Zooniverse so that the core team of developers and data scientists can remain small and the cost of running the computing infrastructure relatively modest. To date these simplifications have been acceptable for the data volumes and analysis tasks being addressed. This situation, however, is changing: next-generation telescopes such as the Large Synoptic Survey Telescope (LSST) will produce data volumes dwarfing those previously analyzed. If citizen science is to have a part to play in analyzing these next-generation datasets, the Zooniverse will need to evolve into a smarter system, capable, for example, of modeling the abilities of users and the complexities of the data being classified in real time.

In this session we will outline the current architecture of the Zooniverse platform and introduce new functionality being developed that should be of interest to the HCOMP community. Our platform is evolving into a system capable of integrating human and machine intelligence in a live environment. Data APIs providing real-time access to 'event streams' from the Zooniverse infrastructure are currently being tested, as are API endpoints for making decisions about, for example, which piece of data to show next to a volunteer and when to retire a piece of data from the live system because a consensus has been reached.
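The retirement decision mentioned above can be sketched in a few lines of Python. This is a hypothetical illustration under assumed thresholds, not the Zooniverse API:

from collections import Counter

def should_retire(classifications, min_votes=10, consensus_frac=0.8):
    """Decide whether a subject can leave the live system.

    classifications: the labels volunteers have assigned to one subject.
    Retire once at least min_votes answers exist and the most common label
    accounts for consensus_frac of them; both thresholds are illustrative
    assumptions, not Zooniverse's actual rules.
    """
    if len(classifications) < min_votes:
        return False
    _, top_count = Counter(classifications).most_common(1)[0]
    return top_count / len(classifications) >= consensus_frac

# Nine of ten volunteers agree, so this subject would be retired:
print(should_retire(["spiral"] * 9 + ["elliptical"]))  # True

A real system would also weight each vote by the modeled ability of the volunteer who cast it, which is exactly the real-time user modeling the abstract calls for.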

Crowd-Sourced Assessment of Technical Skills: a novel method to evaluate surgical performance

Journal of Surgical Research (2013)

Authors:

C Chen, D Holst, L White, T Kowalewski, R Aggarwal, C Lintott, B Comstock, K Kuksenok, C Aragon, T Lendvay

Abstract:

Background: Validated methods of objective assessment of surgical skills are resource intensive. We sought to test a web-based grading tool using crowdsourcing, called Crowd-Sourced Assessment of Technical Skill.

Materials and methods: Institutional Review Board approval was granted to test the accuracy of Amazon.com's Mechanical Turk and Facebook crowdworkers, compared with experienced surgical faculty, in grading a recorded dry-laboratory robotic surgical suturing performance using three performance domains from a validated assessment tool. Assessors' free-text comments describing their rating rationale were used to explore a relationship between the language used by the crowd and grading accuracy.

Results: Out of a possible global performance score of 3-15, 10 experienced surgeons graded the suturing video at a mean score of 12.11 (95% confidence interval [CI], 11.11-13.11). Mechanical Turk and Facebook graders rated the video at mean scores of 12.21 (95% CI, 11.98-12.43) and 12.06 (95% CI, 11.57-12.55), respectively. It took 24 hours to obtain responses from 501 Mechanical Turk subjects, whereas it took 24 days for 10 faculty surgeons to complete the 3-minute survey; the 110 Facebook subjects responded within 25 days. Language analysis indicated that crowdworkers who used negation words (e.g., "but" and "although") scored the performance more in line with experienced surgeons than crowdworkers who did not (P < 0.00001).

Conclusions: For a robotic suturing performance, we have shown that surgery-naive crowdworkers using Crowd-Sourced Assessment of Technical Skill can rapidly produce skill assessments equivalent to those of experienced faculty surgeons. It remains to be seen whether crowds can discriminate different levels of skill and can accurately assess human surgery performances.
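The crowd-versus-expert comparison above rests on overlapping means and 95% confidence intervals. A minimal Python sketch of that computation, using made-up ratings rather than the study's data:

from math import sqrt
from statistics import mean, stdev

def mean_ci95(scores):
    """Mean with a normal-approximation 95% confidence interval."""
    m = mean(scores)
    half_width = 1.96 * stdev(scores) / sqrt(len(scores))
    return m, (m - half_width, m + half_width)

# Hypothetical ratings on the study's 3-15 global performance scale:
expert_scores = [12, 13, 11, 12, 13, 12, 11, 13, 12, 12]
crowd_scores = [12, 12, 13, 11, 12, 13, 12, 12, 11, 13, 12, 12]

for name, scores in (("experts", expert_scores), ("crowd", crowd_scores)):
    m, (low, high) = mean_ci95(scores)
    print(f"{name}: mean {m:.2f}, 95% CI ({low:.2f}, {high:.2f})")

Overlapping intervals of this kind, like the reported means of 12.11, 12.21, and 12.06, underpin the claim that the crowd graded the video equivalently to the faculty surgeons.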

Human Computation in Citizen Science

Chapter in Handbook of Human Computation, Springer Nature (2013) 153-162

Authors:

Chris Lintott, Jason Reed

Morphology in the era of large surveys

ASTRONOMY & GEOPHYSICS 54:5 (2013) 16-19

Authors:

Chris Lintott, Karen Masters, Brooke Simmons, Steven Bamford, Sugata Kaviraj

Participating in Online Citizen Science: Motivations as the Basis for User Types and Trajectories

Chapter in Handbook of Human Computation, Springer Nature (2013) 695-702

Authors:

Jason T Reed, Ryan Cook, M Jordan Raddick, Karen Carney, Chris Lintott

