Black Hole

Lensing of space-time around a black hole. At Oxford we study black holes observationally and theoretically on all size and time scales; this is part of our core work.

Credit: Alain Riazuelo, IAP/UPMC/CNRS.

Prof Chris Lintott

Professor of Astrophysics and Citizen Science Lead

Research theme

  • Astronomy and astrophysics

Sub department

  • Astrophysics

Research groups

  • Zooniverse
  • Beecroft Institute for Particle Astrophysics and Cosmology
  • Rubin-LSST
chris.lintott@physics.ox.ac.uk
Telephone: 01865 (2)73638
Denys Wilkinson Building, room 532C
www.zooniverse.org
orcid.org/0000-0001-5578-359X

Zooniverse labs

Build your own Zooniverse project: the Zooniverse lab lets anyone build their own citizen science project.

Galaxy Zoo: Motivations of Citizen Scientists

arXiv:1303.6886 (2013)

Authors:

M Jordan Raddick, Georgia Bracey, Pamela L Gay, Chris J Lintott, Carie Cardamone, Phil Murray, Kevin Schawinski, Alexander S Szalay, Jan Vandenberg

Abstract:

Citizen science, in which volunteers work with professional scientists to conduct research, is expanding due to large online datasets. To plan projects, it is important to understand volunteers' motivations for participating. This paper analyzes results from an online survey of nearly 11,000 volunteers in Galaxy Zoo, an astronomy citizen science project. Results show that volunteers' primary motivation is a desire to contribute to scientific research. We encourage other citizen science projects to study the motivations of their volunteers, to see whether and how these results may be generalized to inform the field of citizen science.

Unproceedings of the Fourth .Astronomy Conference (.Astronomy 4), Heidelberg, Germany, July 9-11 2012

arXiv:1301.5193 (2013)

Authors:

Robert J Simpson, Chris Lintott, Amanda Bauer, Bruce Berriman, Edward Gomez, Sarah Kendrew, Thomas Kitching, August Muench, Demitri Muna, Thomas Robitaille, Megan E Schwamb, Brooke Simmons

Abstract:

The goal of the .Astronomy conference series is to bring together astronomers, educators, developers and others interested in using the Internet as a medium for astronomy. Attendance at the event is limited to approximately 50 participants, and days are split into mornings of scheduled talks, followed by 'unconference' afternoons, where sessions are defined by participants during the course of the event. Participants in unconference sessions are discouraged from formal presentations, with discussion, workshop-style formats or informal practical tutorials encouraged. The conference also designates one day as a 'hack day', in which attendees collaborate in groups on day-long projects for presentation the following morning. These hacks are often a way of concentrating effort, learning new skills, and exploring ideas in a practical fashion. The emphasis on informal, focused interaction makes recording proceedings more difficult than for a normal meeting. While the first .Astronomy conference is preserved formally in a book, more recent iterations are not documented. We therefore, in the spirit of .Astronomy, report 'unproceedings' from .Astronomy 4, which was held in Heidelberg in July 2012.

Planet Hunters. V. A Confirmed Jupiter-Size Planet in the Habitable Zone and 42 Planet Candidates from the Kepler Archive Data

arXiv:1301.0644 (2013)

Authors:

Ji Wang, Debra A Fischer, Thomas Barclay, Tabetha S Boyajian, Justin R Crepp, Megan E Schwamb, Chris Lintott, Kian J Jek, Arfon M Smith, Michael Parrish, Kevin Schawinski, Joseph Schmitt, Matthew J Giguere, John M Brewer, Stuart Lynn, Robert Simpson, Abe J Hoekstra, Thomas Lee Jacobs, Daryll LaCourse, Hans Martin Schwengeler, Mike Chopin

Abstract:

We report the latest Planet Hunter results, including PH2 b, a Jupiter-size (R_PL = 10.12 ± 0.56 R_E) planet orbiting in the habitable zone of a solar-type star. PH2 b was elevated from candidate status when a series of false positive tests yielded a 99.9% confidence level that transit events detected around the star KIC 12735740 had a planetary origin. Planet Hunter volunteers have also discovered 42 new planet candidates in the Kepler public archive data, of which 33 have at least three transits recorded. Most of these transit candidates have orbital periods longer than 100 days and 20 are potentially located in the habitable zones of their host stars. Nine candidates were detected with only two transit events and the prospective periods are longer than 400 days. The photometric models suggest that these objects have radii that range from Neptune-size to Jupiter-size. These detections nearly double the number of gas giant planet candidates orbiting at habitable zone distances. We conducted spectroscopic observations for nine of the brighter targets to improve the stellar parameters and we obtained adaptive optics imaging for four of the stars to search for blended background or foreground stars that could confuse our photometric modeling. We present an iterative analysis method to derive the stellar and planet properties and uncertainties by combining the available spectroscopic parameters, stellar evolution models, and transiting light curve parameters, weighted by the measurement errors. Planet Hunters is a citizen science project that crowd-sources the assessment of NASA Kepler light curves. The discovery of these 43 planet candidates demonstrates the success of citizen scientists at identifying planet candidates, even in longer period orbits with only two or three transit events.
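As a rough, hypothetical sketch of the first step in this kind of light-curve analysis (the numbers and the helper function below are illustrative assumptions, not taken from the paper), the fractional transit depth sets the planet-to-star radius ratio via depth ≈ (R_p/R_*)², so an estimate of the stellar radius converts directly into an approximate planet radius:

import math

# Hypothetical illustration: relate a transit depth to a planet radius.
# depth ~ (R_planet / R_star)^2 for a central transit, ignoring limb darkening;
# real analyses (as described above) fit full light-curve models weighted by
# measurement errors.

R_SUN_IN_EARTH_RADII = 109.2  # approximate solar radius in Earth radii

def planet_radius_earth(depth: float, stellar_radius_solar: float) -> float:
    """Approximate planet radius in Earth radii from a fractional transit
    depth and a stellar radius in solar radii."""
    radius_ratio = math.sqrt(depth)  # R_planet / R_star
    return radius_ratio * stellar_radius_solar * R_SUN_IN_EARTH_RADII

# Example with made-up numbers: a 1% deep transit of a Sun-like star.
print(f"{planet_radius_earth(0.01, 1.0):.1f} Earth radii")

A 1% deep transit of a Sun-like star comes out at roughly 11 Earth radii, i.e. about Jupiter size, which is the scale of the candidates described above.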

An introduction to the Zooniverse

AAAI Workshop - Technical Report WS-13-18 (2013) 103

Authors:

AM Smith, S Lynn, CJ Lintott

Abstract:

The Zooniverse (zooniverse.org) began in 2007 with the launch of Galaxy Zoo, a project in which more than 175,000 people provided shape analyses of more than 1 million galaxy images sourced from the Sloan Digital Sky Survey. These galaxy 'classifications', some 60 million in total, have subsequently been used to produce more than 50 peer-reviewed publications based not only on the original research goals of the project but also on serendipitous discoveries made by the volunteer community. Based upon the success of Galaxy Zoo, the team has gone on to develop more than 25 web-based citizen science projects, all with a strong research focus, in a range of subjects from astronomy to zoology where human-based analysis still exceeds that of machine intelligence. Over the past 6 years Zooniverse projects have collected more than 300 million data analyses from over 1 million volunteers, providing fantastically rich datasets not only for the individuals working to produce research from their projects but also for the machine learning and computer vision research communities. The Zooniverse platform has always been developed to be the 'simplest thing that works', implementing only the most rudimentary algorithms for functionality such as task allocation and user-performance metrics. These simplifications have been necessary to scale the Zooniverse so that the core team of developers and data scientists can remain small and the cost of running the computing infrastructure relatively modest. To date these simplifications have been acceptable for the data volumes and analysis tasks being addressed. This situation, however, is changing: next-generation telescopes such as the Large Synoptic Survey Telescope (LSST) will produce data volumes dwarfing those previously analyzed. If citizen science is to have a part to play in analyzing these next-generation datasets then the Zooniverse will need to evolve into a smarter system, capable, for example, of modeling the abilities of users and the complexities of the data being classified in real time. In this session we will outline the current architecture of the Zooniverse platform and introduce new functionality being developed that should be of interest to the HCOMP community. Our platform is evolving into a system capable of integrating human and machine intelligence in a live environment. Data APIs providing real-time access to 'event streams' from the Zooniverse infrastructure are currently being tested, as are API endpoints for making decisions about, for example, what piece of data to show next to a volunteer and when to retire a piece of data from the live system because a consensus has been reached.
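As a purely illustrative sketch of what a retirement decision of this kind might look like (the function name, thresholds and vote labels below are assumptions, not the Zooniverse API), a subject can be retired once enough volunteers agree on an answer:

from collections import Counter

# Toy consensus rule for retiring a subject from a citizen science project.
# The real platform described above models user ability and data difficulty;
# the thresholds here are invented for illustration.

def should_retire(classifications, min_votes=10, consensus_fraction=0.8):
    """Retire a subject once at least `min_votes` classifications exist and
    the most common answer accounts for `consensus_fraction` of them."""
    if len(classifications) < min_votes:
        return False
    top_count = Counter(classifications).most_common(1)[0][1]
    return top_count / len(classifications) >= consensus_fraction

# Example: 9 of 10 volunteers call this galaxy a spiral, so it retires.
votes = ["spiral"] * 9 + ["elliptical"]
print(should_retire(votes))  # True

A smarter system of the kind the abstract describes would additionally weight each vote by a model of the volunteer's ability and the subject's difficulty, rather than treating all classifications equally as this sketch does.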

Crowd-Sourced Assessment of Technical Skills: a novel method to evaluate surgical performance

Journal of Surgical Research (2013)

Authors:

C Chen, D Holst, L White, T Kowalewski, R Aggarwal, C Lintott, B Comstock, K Kuksenok, C Aragon, T Lendvay

Abstract:

Background: Validated methods of objective assessments of surgical skills are resource intensive. We sought to test a web-based grading tool using crowdsourcing called Crowd-Sourced Assessment of Technical Skill. Materials and methods: Institutional Review Board approval was granted to test the accuracy of Amazon.com's Mechanical Turk and Facebook crowdworkers compared with experienced surgical faculty grading a recorded dry-laboratory robotic surgical suturing performance using three performance domains from a validated assessment tool. Assessor free-text comments describing their rating rationale were used to explore a relationship between the language used by the crowd and grading accuracy. Results: Of a total possible global performance score of 3-15, 10 experienced surgeons graded the suturing video at a mean score of 12.11 (95% confidence interval [CI], 11.11-13.11). Mechanical Turk and Facebook graders rated the video at mean scores of 12.21 (95% CI, 11.98-12.43) and 12.06 (95% CI, 11.57-12.55), respectively. It took 24 h to obtain responses from 501 Mechanical Turk subjects, whereas it took 24 d for 10 faculty surgeons to complete the 3-min survey. Facebook subjects (110) responded within 25 d. Language analysis indicated that crowdworkers who used negation words (i.e., "but," "although," and so forth) scored the performance more equivalently to experienced surgeons than crowdworkers who did not (P < 0.00001). Conclusions: For a robotic suturing performance, we have shown that surgery-naive crowdworkers can rapidly assess skill equivalent to experienced faculty surgeons using Crowd-Sourced Assessment of Technical Skill. It remains to be seen whether crowds can discriminate different levels of skill and can accurately assess human surgery performances. © 2013 Elsevier Inc. All rights reserved.
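For orientation only, the form of the reported figures (a mean score with a normal-approximation 95% confidence interval over graders' ratings) can be reproduced in a few lines of Python; the ratings below are invented and are not the study's data:

import statistics

# Hypothetical ratings from ten graders on a 3-15 global performance scale.
scores = [12, 13, 11, 12, 14, 12, 11, 13, 12, 12]

mean = statistics.mean(scores)
sem = statistics.stdev(scores) / len(scores) ** 0.5    # standard error of the mean
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem  # normal-approximation 95% CI

print(f"mean {mean:.2f}, 95% CI ({ci_low:.2f}-{ci_high:.2f})")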

