FOSS4G Academic Track and Review Process
As the world changes rapidly, especially in science and technology, we must listen to and involve as many communities as possible to create geospatial solutions that work for everyone. The world's problems have evolved, and so should our approach.
There is an urgent need for the open-source community to bring together scientists, researchers, developers, end users, and nearly every other tier of the geospatial paradigm to confront and answer the geo-challenges facing our planet and humanity. The FOSS4G 2017 Conference will be an important bridge between academia and industry, one that leads to new ideas and innovations.
To highlight the top-quality academic work that is motivated by and carried out with FOSS4G data and software, the Conference solicited academic papers in association with oral presentations. The FOSS4G 2017 Academic Committee consisted of 21 individuals with a wide range of expertise in the development and application of FOSS4G technologies, and with academic backgrounds that generally included refereeing papers submitted to journals. They were charged with reviewing the academic abstracts and selecting the best of them for further evaluation as written articles. These papers will be published in the Conference Proceedings, and the best of them will also be promoted for consideration for publication in an internationally known GIS journal (yet to be confirmed).
There were 49 academic abstracts amongst the 416 all-conference submissions, and after double-blind review the Academic Committee selected 30 for acceptance as papers. Another 14 were considered worthy for inclusion as posters (which will also be published in the Proceedings). The remaining abstracts were rejected for not meeting the Committee’s agreed upon standards (one was a duplicate).
The FOSS4G 2017 Academic Committee developed a set of criteria for assessing the academic abstracts, and three committee members were assigned to review each one based on their areas of expertise, with no identifying information included.
The reviewers evaluated their assigned abstracts along five components from 1 (worst) to 5 (best):
- Originality
- Technical Quality
- Relevance to FOSS
- Relevance to Geography
- Academic Interest
The scores for each component were averaged across reviewers for each abstract, serving as a tertiary guide when evaluating abstracts at the margins. These per-component averages were then combined with uniform weights into a single Component Composite, which served as a secondary guide.
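The aggregation above can be sketched as follows. This is a minimal illustration: the component names come from the list above, but the reviewer scores are hypothetical, not actual review data.

```python
# Component names from the review criteria above.
COMPONENTS = ["Originality", "Technical Quality", "Relevance to FOSS",
              "Relevance to Geography", "Academic Interest"]

def component_averages(reviews):
    """Average each component's scores across an abstract's reviewers."""
    return {c: sum(r[c] for r in reviews) / len(reviews) for c in COMPONENTS}

def component_composite(reviews):
    """Uniformly weighted mean of the per-component averages."""
    avgs = component_averages(reviews)
    return sum(avgs.values()) / len(avgs)

# Example: three hypothetical reviewers scoring one abstract on the 1-5 scale.
reviews = [
    {"Originality": 4, "Technical Quality": 5, "Relevance to FOSS": 5,
     "Relevance to Geography": 3, "Academic Interest": 4},
    {"Originality": 3, "Technical Quality": 4, "Relevance to FOSS": 5,
     "Relevance to Geography": 4, "Academic Interest": 4},
    {"Originality": 5, "Technical Quality": 4, "Relevance to FOSS": 4,
     "Relevance to Geography": 3, "Academic Interest": 5},
]
print(round(component_composite(reviews), 2))  # 4.13
```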
The reviewers also provided a summary Final Recommendation, which was averaged across reviewers for each abstract using a linear numeric scale:
- Strong Accept (5)
- Borderline Accept (4)
- Accept as Poster (3)
- Borderline Reject (2)
- Strong Reject (1)
As might be expected, the Final Recommendation and the Component Composite are highly correlated (r = 0.79).
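The correlation figure refers to the standard Pearson correlation coefficient; a minimal sketch of the computation follows, using made-up score pairs rather than the actual review data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related illustrative data correlates at 1.0.
print(round(pearson([1, 2, 3], [2, 4, 6]), 6))  # 1.0
```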
In the Acceptance List, the abstracts were sorted by their Final Recommendation Average and converted back to the text recommendations above; a + or – suffix was added when the average fell one third above or below an integer (indicating non-consensus among the three reviewers), with the following results:
Recommendation | Number | Decision
---|---|---
Strong Accept | 5 | Paper |
Strong Accept - | 10 | Paper |
Borderline Accept + | 5 | Paper |
Borderline Accept | 7 | Paper |
Borderline Accept - | 7 | 3 Paper, 4 Poster |
Accept as Poster + | 8 | Poster |
Accept as Poster | 1 | Poster |
Accept as Poster - | 1 | Reject |
Borderline Reject + | 2 | 1 Poster, 1 Reject |
Borderline Reject | 1 | Reject |
Strong Reject | 2 | Reject |
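The mapping from Final Recommendation averages back to the labels in the table above can be sketched as follows. This is a minimal illustration assuming three reviewers per abstract, so that averages are multiples of one third.

```python
from fractions import Fraction

# Text labels for the 1-5 Final Recommendation scale above.
LABELS = {5: "Strong Accept", 4: "Borderline Accept", 3: "Accept as Poster",
          2: "Borderline Reject", 1: "Strong Reject"}

def recommendation_label(avg):
    """Map an average on the 1-5 scale to its text label, adding +/- when
    the average sits one third above or below an integer (non-consensus)."""
    avg = Fraction(avg).limit_denominator(3)  # three reviewers -> thirds
    nearest = round(avg)
    if avg == nearest + Fraction(1, 3):
        return LABELS[nearest] + " +"
    if avg == nearest - Fraction(1, 3):
        return LABELS[nearest] + " -"
    return LABELS[nearest]

print(recommendation_label(14 / 3))  # Strong Accept -
```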
There were 27 abstracts rated "Borderline Accept" or better, meaning a consensus on acceptance. The Co-Chairs also examined the next level of 7 abstracts rated "Borderline Accept –", and using the component evaluations and reviewer comments identified a distinction that led to three more being accepted for paper submission, for 30 altogether.
The next group of abstracts was considered for posters in the same way, with 9 rated "Accept as Poster" or better. Consideration of the components and reviewer comments led to accepting one "Borderline Reject +" as a poster and rejecting one "Accept as Poster –", along with the remaining four abstracts.
The Academic submissions were also reviewed by the Program Committee and were also voted on by the Community, sometimes with different results reflecting the perspectives and interests of these more diverse groups. When a paper was accepted for a regular oral presentation, the academic submitters were notified of this and were provided with the opportunity to withdraw from the Academic Program with its requirement for an academic paper or poster.
In Conclusion
We deeply appreciate all those who took the time to submit an abstract for consideration by the Academic Committee, and all of the reviewers for their time and effort in arranging and undertaking this review. Of course, no process is free from false negatives, and the process we describe above is no exception. We understand that a few abstract submitters will be disappointed with their result, but we hope the above explains that we did our best to run a selection process that was unbiased, fair, grounded in peer-review expertise, and systematic in its selection. This was no easy task on our end, and we trust that the FOSS4G community will support the decisions at this stage, as well as in later stages when we review the submitted, accepted papers in June.
Contact information
Please feel free to get back to us for any kind of assistance. We are always at your disposal.
Prof. Dr. Charlie M. Schweik Chair of FOSS4G Boston 2017 Academic Committee Email: cschweik@pubpol.umass.edu
Mohammed Zia Co-Chair of FOSS4G Boston 2017 Academic Committee Email: mohammed.zia33@gmail.com
Thanks again, you all, for making FOSS4G 2017 a reality. We hope to see you in Boston, U.S.A. in August! (Register now)