CHI 2024 Registration is now open

We’re excited that registration for CHI 2024 is now open!

You can register here. The early registration deadline is April 12th, 2024, end of day, Anywhere on Earth (AoE).

As in previous years, we have different pricing by geographic region; see the list of countries in each category at the end of this post. We also offer both onsite and online-only participation. An overview of all options is given on the first page of the registration form.

The list of conference hotels and further information about travel, visa applications, and the venue is online.

Categories (country list)

Category C

All countries not listed in Category H or Category I.

Category H

  • Albania
  • Algeria
  • Angola
  • Argentina
  • Armenia
  • Azerbaijan
  • Belarus
  • Belize
  • Bosnia
  • Botswana
  • Brazil
  • Bulgaria
  • Colombia
  • Cook Islands
  • Costa Rica
  • Cuba
  • Dominica
  • Dominican Republic
  • Ecuador
  • Fiji
  • French Guiana
  • Gabon
  • Georgia
  • Grenada
  • Guadeloupe
  • Guatemala
  • Guyana
  • Iran
  • Iraq
  • Jamaica
  • Jordan
  • Kazakhstan
  • Kosovo
  • Lebanon
  • Libya
  • North Macedonia
  • Malaysia
  • Maldives
  • Marshall Islands
  • Mauritius
  • Mexico
  • Montenegro
  • Namibia
  • Paraguay
  • Peru
  • Romania
  • Russian Federation
  • Saint Lucia
  • Samoa
  • Serbia
  • South Africa
  • Sri Lanka
  • St. Vincent
  • Suriname
  • Thailand
  • Tonga
  • Tunisia
  • Turkey
  • Turkmenistan
  • Tuvalu
  • Venezuela

Category I

  • Afghanistan
  • Bangladesh
  • Benin
  • Bhutan
  • Bolivia
  • Burkina Faso
  • Burundi
  • Central African Republic
  • Cambodia
  • Cameroon
  • Cape Verde
  • Chad
  • China
  • Comoros
  • Congo
  • Congo, Democratic Republic
  • Djibouti
  • Egypt
  • El Salvador
  • Eritrea
  • Eswatini
  • Ethiopia
  • Federated States of Micronesia
  • Gambia
  • Ghana
  • Guinea
  • Guinea-Bissau
  • Haiti
  • Honduras
  • India
  • Indonesia
  • Ivory Coast
  • Kenya
  • Kiribati
  • Kyrgyzstan
  • Lesotho
  • Liberia
  • Madagascar
  • Malawi
  • Mali
  • Mauritania
  • Mongolia
  • Morocco
  • Mozambique
  • Myanmar
  • Nepal
  • Nicaragua
  • Niger
  • Nigeria
  • North Korea
  • Pakistan
  • Palestine
  • Papua New Guinea
  • Lao People’s Democratic Republic
  • Philippines
  • Republic of Moldova
  • Rwanda
  • Sao Tome and Principe
  • Senegal
  • Sierra Leone
  • Solomon Islands
  • Somalia
  • South Sudan
  • Sudan
  • Syria
  • Tajikistan
  • Tanzania
  • Timor-Leste
  • Togo
  • Uganda
  • Ukraine
  • Uzbekistan
  • Vanuatu
  • Viet Nam
  • Yemen
  • Zambia
  • Zimbabwe

CHI 2024 – Papers track, post-PC outcomes report

NB — The numbers might not always work out here; some data are missing from the analyses due to conflicts.

This blog post covers outcomes from the CHI 2024 Program Committee (PC) meeting, which took place from 16–18 January 2024.

After the first round of reviews, 1651 submissions were invited to submit revisions. External reviewers and Associate Chairs (ACs) reviewed and discussed these revised submissions asynchronously and made individual binary accept/reject recommendations. Submissions were then discussed in subcommittees at the PC meeting, where final accept/reject decisions were made.

Overall acceptance rates

The Conference Program Committee accepted 1060 submissions, giving an overall acceptance rate of 26.4% for the Papers track. Of those submissions that were revised and resubmitted, 64% were accepted. The overall acceptance rate for “short” submissions (< 5000 words) was 10% (54/551 submissions). The overall acceptance rate for “standard” and “excessive” submissions combined was 29% (1006/3458 submissions). Seventeen submissions of “excessive” length were accepted (17 of 66, 26%).
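As a quick sanity check, these rates can be reproduced from the counts above. Here is a minimal sketch in Python; note that the overall denominator of 4009 submissions (551 short plus 3458 standard/excessive) is inferred from the counts rather than stated in the post.

```python
# Sanity check of the acceptance rates reported above. The overall
# denominator (551 short + 3458 standard/excessive = 4009) is inferred,
# not stated explicitly in the post.
accepted = 1060
rr_invited = 1651
short_acc, short_total = 54, 551
std_exc_acc, std_exc_total = 1006, 3458
exc_acc, exc_total = 17, 66

total = short_total + std_exc_total  # 4009
print(f"overall:   {accepted / total:.1%}")             # 26.4%
print(f"R+R:       {accepted / rr_invited:.1%}")        # 64.2%, reported as 64%
print(f"short:     {short_acc / short_total:.1%}")      # 9.8%, reported as 10%
print(f"std+exc:   {std_exc_acc / std_exc_total:.1%}")  # 29.1%, reported as 29%
print(f"excessive: {exc_acc / exc_total:.1%}")          # 25.8%, reported as 26%
```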

Acceptance rates by subcommittee

Submissions to the CHI Papers track are made to one of eighteen subcommittees. We considered the submission rates to these subcommittees and their respective R+R rates in other blog posts. Final acceptance rates varied between 34% (Accessibility and Aging, Access) and 18% (User Experience and Usability, UX). Figure 1 shows the acceptance rates for each subcommittee.


A bar chart, showing acceptance rates for each CHI subcommittee, ranked from low (UX, on the left) to high (Access, on the right).

Figure 1: Acceptance rates by subcommittee, CHI 2024 Papers

Recommendations

Submissions to the CHI Papers track receive recommendations from reviewers, not scores. While it may not make sense to compute and plot mean scores, we can still have a closer look at the pattern of recommendations. Do all accepted submissions have a ‘clean sweep’ of accept recommendations? What is the mix of recommendations on rejected submissions? Figure 2 shows these patterns, focusing on R+R submissions where the final recommendation from reviewers was either Accept or Reject and where there were four reviews in total (edge cases are shown below).


A bar chart showing the proportion of recommendations after the PC for R+R submissions. The y-axis is a count of submissions, the x-axis enumerates the combinations of reviews (e.g., all reject, all accept, etc.)

Figure 2: How do the recommendations of reviewers look after the PC for papers invited to R+R?

Not all papers finished with four recommendations: two had six, and fifty-nine had three. The following table gives the recommendation counts of all R+R submissions that did not finish the process with four recommendations (a sketch of how pattern counts like these can be tabulated follows the table).

Decision  # Accept recs  # Reject recs  Total recs  n
Reject    2              3              5           52
Accept    3              0              3           43
Reject    1              4              5           41
Accept    4              1              5           20
Accept    5              0              5           16
Reject    0              3              3           15
Accept    3              2              5           8
Reject    0              5              5           8
Reject    3              2              5           4
Accept    2              3              5           2
Accept    2              1              3           1
Accept    3              3              6           1
Reject    2              4              6           1
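For the curious, here is a minimal sketch (in Python/pandas) of how pattern counts like these can be tabulated. The column names and example rows are hypothetical; PCS’s actual export format will differ.

```python
import pandas as pd

# One row per review: the submission it belongs to, the PC's final decision,
# and the reviewer's recommendation (hypothetical example data).
reviews = pd.DataFrame({
    "submission_id":  [101, 101, 101, 101, 102, 102, 102, 102, 102],
    "decision":       ["Accept"] * 4 + ["Reject"] * 5,
    "recommendation": ["Accept", "Accept", "Accept", "Accept",
                       "Reject", "Reject", "Accept", "Reject", "Reject"],
})

# Count accept/reject recommendations (and total reviews) per submission...
per_sub = (reviews.assign(is_accept=reviews["recommendation"].eq("Accept"))
           .groupby(["submission_id", "decision"])
           .agg(accepts=("is_accept", "sum"), total=("is_accept", "size"))
           .reset_index())
per_sub["rejects"] = per_sub["total"] - per_sub["accepts"]

# ...then count how many submissions share each (decision, pattern) combination.
patterns = (per_sub.groupby(["decision", "accepts", "rejects", "total"])
            .size().rename("n").reset_index()
            .sort_values("n", ascending=False))
print(patterns)
```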

Bonus Chartjunk

The Program Committee meeting considered over 1600 R+R submissions, discussing and accepting or rejecting them over the course of three days. It’s busy! Precision Conference (PCS), the tool used to manage the submission process, keeps an ‘action log’: when someone updates their review, makes a new submission, or records a decision, the log gains an entry. Figure 3 shows how activity ramped up during the PC meeting. (We’ve stripped out the “sends email to contact author” and “send eRights to ACM” events – a lot of these happen at the same time, and the chart is less interesting to look at.)


A histogram. Dates for the PC are shown on the x-axis. The y-axis shows how many events were added to the log during a given time period. The peaks associated with the PC meeting are clear.

Figure 3: PCS gets busy during the PC meeting!

Datatables

Figure 1 shows the acceptance rates across the PC’s subcommittees. The underlying data is provided below.

Subcommittee  Accepted submissions  Total submissions  Acceptance rate
Access        101                   300                34%
Critical      68                    218                32%
Devices       39                    142                28%
Privacy       57                    209                28%
Systems       81                    283                28%
Apps          62                    235                26%
Health        78                    318                24%
PeopleQual    61                    260                24%
Viz           42                    173                24%
Design        65                    295                22%
Games         35                    157                22%
Ibti          42                    198                22%
CompInt       56                    283                20%
IntTech       65                    314                20%
Learning      53                    268                20%
PeopleMixed   44                    219                20%
PeopleStat    52                    252                20%
UX            59                    327                18%

Figure 2 shows the variety of recommendation permutations reached by reviewers on R+R submissions. The more esoteric permutations were given in the table above; here is the full dataset:

Decision  # Accept recs  # Reject recs  Total recs  n
Accept    4              0              4           825
Reject    0              4              4           244
Reject    1              3              4           144
Accept    3              1              4           94
Reject    2              3              5           52
Accept    3              0              3           43
Reject    1              4              5           41
Reject    2              2              4           27
Accept    4              1              5           20
Accept    5              0              5           16
Reject    0              3              3           15
Accept    2              2              4           10
Accept    3              2              5           8
Reject    0              5              5           8
Reject    3              2              5           4
Accept    2              3              5           2
Accept    1              3              4           1
Accept    2              1              3           1
Accept    3              3              6           1
Reject    2              4              6           1

A full breakdown of authorships on Papers track submissions by country/territory, covering both rejected and accepted submissions, is given below.

Country/Territory  Authorships on rejected submissions  Authorships on accepted submissions  Total authorships  Proportion of authorships on accepted submissions
United States of America 3094 1958 5052 38%
China 1263 404 1667 24%
Germany 825 316 1141 28%
United Kingdom 716 293 1009 30%
Canada 436 257 693 38%
South Korea 372 192 564 34%
Australia 353 147 500 30%
Japan 352 131 483 28%
Netherlands 224 76 300 26%
Switzerland 145 75 220 34%
Finland 166 44 210 20%
Denmark 108 63 171 36%
Taiwan 128 30 158 18%
France 106 47 153 30%
Singapore 77 52 129 40%
Austria 98 28 126 22%
Sweden 62 41 103 40%
India 68 32 100 32%
Portugal 46 42 88 48%
Italy 63 4 67 6%
Bangladesh 52 6 58 10%
Ireland 38 18 56 32%
Hong Kong S.A.R. 31 24 55 44%
Spain 28 25 53 48%
New Zealand 38 12 50 24%
Israel 38 10 48 20%
Belgium 33 11 44 24%
Brazil 34 8 42 20%
Norway 29 4 33 12%
Luxembourg 21 11 32 34%
Poland 21 5 26 20%
Turkey 17 5 22 22%
Philippines 21 0 21 0%
Cyprus 15 0 15 0%
Czech Republic 15 0 15 0%
Qatar 13 1 14 8%
Kenya 13 0 13 0%
Iran 10 2 12 16%
Saudi Arabia 4 6 10 60%
Ecuador 7 2 9 22%
Estonia 5 2 7 28%
Pakistan 7 0 7 0%
Macedonia 6 0 6 0%
Malawi 0 6 6 100%
Slovenia 4 2 6 34%
South Africa 6 0 6 0%
Uruguay 6 0 6 0%
Romania 2 3 5 60%
Malaysia 3 1 4 24%
United Arab Emirates 4 0 4 0%
Iceland 3 0 3 0%
Mexico 3 0 3 0%
Nigeria 2 1 3 34%
Rwanda 3 0 3 0%
Bahrain 2 0 2 0%
Ghana 1 1 2 50%
Macau S.A.R 1 1 2 50%
Peru 2 0 2 0%
Russia 2 0 2 0%
Republic of Serbia 1 1 2 50%
Thailand 2 0 2 0%
Argentina 1 0 1 0%
Armenia 1 0 1 0%
Chile 1 0 1 0%
Colombia 0 1 1 100%
Costa Rica 1 0 1 0%
Croatia 1 0 1 0%
Egypt 0 1 1 100%
Indonesia 0 1 1 100%
Jordan 1 0 1 0%
Kazakhstan 1 0 1 0%
Namibia 1 0 1 0%
Sri Lanka 1 0 1 0%
United Republic of Tanzania 0 1 1 100%
Ukraine 1 0 1 0%
Vietnam 1 0 1 0%

Special Recognition for Sustainable Practices

The CHI’24 sustainability committee is excited to announce the debut of a Special Recognition for papers that take exceptional measures toward sustainable research practices. This initiative aims to draw attention to sustainable research and celebrate authors’ dedication to sustainability. This honor is open to any project that has taken steps to be more sustainable – not only projects that directly address sustainability topics. There are many creative ways HCI researchers can make their work more sustainable and potentially earn the Special Recognition for Sustainable Practices, including:

  • Offsetting carbon costs (e.g., of training machine learning models)
  • Hosting a no-waste workshop
  • Purchasing recycled materials
  • Minimizing project-related travel (e.g., holding hybrid and virtual meetings)
  • Incorporating community leaders in project funding
  • Advocating for sustainable policy
  • Public outreach or education on sustainable topics or practices
  • Reducing electronic waste (e.g., through sustainable purchasing practices, reusing and recycling parts)

Any of these actions or similar could potentially earn your project a Special Recognition for Sustainable Practices. We hope to hear many other creative ideas as well!

How to Apply

The submission portal in PCS includes a new field where authors can describe the steps they have taken to make their work more sustainable. In 2-3 paragraphs (300 words or less), tell us what actions you’ve taken to make your project more sustainable, your reasoning for taking those actions, and what impact you’ve seen. Note that what you enter there is separate from the review process. This is also a new effort to promote sustainability, so we are trying things out; depending on the feedback we receive, we might extend this initiative to other SIGCHI venues.

Special Recognitions will be announced prior to the first day of the conference on May 11th. Papers receiving Special Recognition will be highlighted on Twitter/X and will be mentioned in the closing keynote during the conference.

Call for expression of interest: funding bodies/agencies

We recognize the increasingly tight landscape for research funding. It is therefore more and more important that the right HCI research project, the right researchers, and the right funding body come together. CHI wants to help, and hence invites expressions of interest from funding bodies/agencies worldwide that are interested in engaging with the HCI community at CHI’24 (https://chi2024.acm.org/), helping to match the right funding body with the right HCI research project and researchers.

Please be aware that CHI cannot provide financial support, unfortunately.

If interested, please contact the general chairs at generalchairs@chi2024.acm.org by 11 Jan 2024 (Anywhere on Earth). If you know a funding body that might be interested, please pass this on. Thank you!

CHI 2024 — Papers track, post-round one outcomes report

NB — The numbers might not always work out here; some data are missing from the analyses due to conflicts.

This blog post covers reviewing activity during the first round of reviewing for the CHI 2024 Papers track. A blog post has already been published on the outcomes of those reviews. This post instead focuses on the reviewing activity itself — the distribution of reviewing load, the relationship between authors and reviewers, and the length of reviews. Those kinds of things.

Reviews, authors, reviewers, and ACs

What is the overlap between authors and reviewers? Where there is overlap, how many reviews do people contribute compared with the load created by their submissions? We discount desk-rejected submissions and focus on submissions that completed the first round in a regular fashion (i.e., with an R+R or reject recommendation).

Submissions had 1–36 authors (M=4.8, SD=2.4). There were 71 single-author submissions (37% to R+R) and 137 submissions with ten or more co-authors (58% to R+R). (A logistic regression shows that author count weakly predicts decision, with more authors increasing the chance of an R+R decision, p < .001, 95% CI [.05, .11].) The load created by a given author on a given submission is four (the number of reviews a submission requires) divided by the number of authors on the submission. A paper with one author generates a load of four for that author. A paper with four authors creates a review load of one per author. A paper with 36 authors generates a load of ~0.1 per author.
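A minimal sketch of a regression like the one mentioned above, using the statsmodels formula API; the data and the column names (n_authors, rr) are illustrative, not the real dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: author count per submission and whether it was
# invited to R+R (1) or rejected (0).
df = pd.DataFrame({
    "n_authors": [1, 2, 3, 3, 4, 5, 6, 7, 8, 10, 12, 4],
    "rr":        [0, 0, 0, 1, 0, 1, 1, 1, 1, 1,  1,  1],
})

model = smf.logit("rr ~ n_authors", data=df).fit()
print(model.params)      # coefficient on n_authors (log-odds per extra author)
print(model.conf_int())  # 95% CI, analogous to the reported [.05, .11]
```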

Computing this load allows us to understand the review load implied by each author, controlling for the fact that their co-authors should also be reviewing. We can use this to produce a histogram of load per author, shown in Figure 1. Load created ranged between 0.11 of a review and 32.7 reviews (M=1.24, SD=1.27).
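Here is a minimal sketch of the load calculation itself; the author IDs and submissions are made up for illustration.

```python
from collections import defaultdict

REVIEWS_PER_SUBMISSION = 4  # each submission requires four reviews

# Each submission is represented by its list of author IDs (illustrative data).
submissions = [
    ["alice"],                          # 1 author  -> load of 4.0
    ["alice", "bob", "carol", "dave"],  # 4 authors -> load of 1.0 each
    ["bob", "carol"],                   # 2 authors -> load of 2.0 each
]

# Sum each author's share of the review load across their submissions.
load = defaultdict(float)
for authors in submissions:
    share = REVIEWS_PER_SUBMISSION / len(authors)
    for author in authors:
        load[author] += share

print(dict(load))  # {'alice': 5.0, 'bob': 3.0, 'carol': 3.0, 'dave': 1.0}
```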


A histogram showing the review load created by each author of a CHI 2024 papers submission. There are a very small number of authors creating a tiny load of ~0.1, and a very small number creating a high load of >10. Most authors create a total load of between 0.5 and 1.2

Figure 1: What is the review load created by each author submitting to CHI 2024 Papers? It depends on how many co-authors they have!

As an author, if your load created is four, you need to complete four reviews to have provided as many reviews to the pool as your submissions have incurred. Is this what happens? No. Of the 14,461 people who participated in the first round of the Papers track (as an author, reviewer, or AC), 9,673 were authors who did not participate in reviewing; 4,098 were external reviewers (of whom 2,051, 50%, were also authors); and 690 were ACs (of whom 472, 68%, were also authors). There were 2,037 reviewers for CHI 2024 who did not make a submission.

Associate Chairs undertake half of the reviews on each paper – one internal review and one metareview to go with the two external ones. This means that an AC produces a mean of 10.7 reviews (SD=2.1), compared to 1.8 (SD=1.2) for an external reviewer. Figure 2 illustrates the review ‘balance’ for ACs, reviewers, and non-reviewing authors.


A frequency polygon showing the reviewing balance (review load created vs reviews completed) of authors across roles. The plot shows four roles, AC, Non-Reviewer Author, Reviewer and Reviewer-Author. The chart peaks at slightly less than zero because most authors do not complete a review. ACs take a significant load, and the chart shows a peak for ACs at +10.

Figure 2: What is the ‘balance’ of review load created and reviews completed? This breaks it down by role. Unsurprisingly, ACs (who complete 50% of reviews) have strongly positive reviewing balances.

Associate Chairs have a collective balance of +5,900: they produced 5,900 more reviews than they consumed. Authors who also reviewed have a collective balance of +3,633. Authors who did not review have a collective balance of -9,932. Ultimately, we rely on ACs producing a large surplus of reviews in order to have a conference programme.

One final thing to consider is whether the individual authors on a given submission balance out at the submission level. While one author might be in ‘debit’, a co-author might be in ‘credit’, such that many submissions nonetheless ‘cover their costs’ with reviews. This does not really work out, though. Without treating this as a constraint optimisation problem (which we’re not going to do), a rough-and-ready indication of whether a given submission was ‘net zero’ on reviews is the sum of the balances of its individual authors. If there was one author with a significant deficit (say, the leader of a lab) but the other authors had picked up that slack, then we’d expect to see that in the data. Figure 3 shows the distribution of these per-submission sums. The aggregate balances for a submission range between -48.3 and +40.2, with a mean submission balance of -3.5 (SD=8.5). In other words, most submissions do not cover their own reviewing ‘costs’. (Though given that half of all reviews have to be written by ACs, this deficit is effectively ‘designed in’ to the process.)
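Extending the earlier load sketch, here is how this per-submission sum can be computed; again, all names and numbers are illustrative.

```python
from collections import defaultdict

REVIEWS_PER_SUBMISSION = 4

submissions = [["alice"], ["alice", "bob", "carol", "dave"], ["bob", "carol"]]
reviews_completed = {"alice": 0.0, "bob": 2.0, "carol": 10.0, "dave": 0.0}

# Per-author load created (as in the earlier sketch)...
load = defaultdict(float)
for authors in submissions:
    for author in authors:
        load[author] += REVIEWS_PER_SUBMISSION / len(authors)

# ...an author's balance is reviews completed minus load created...
balance = {a: reviews_completed.get(a, 0.0) - load[a] for a in load}

# ...and a submission is 'net zero' when its authors' balances sum to >= 0.
for authors in submissions:
    print(authors, round(sum(balance[a] for a in authors), 1))
# ['alice'] -5.0 ; ['alice', 'bob', 'carol', 'dave'] 0.0 ; ['bob', 'carol'] 6.0
```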


A histogram showing the net reviewing balance of a submission. There are extremes at -50 and +30 (i.e., high debt, high surplus), but there's a big spike between -10 and 0; most submissions are falling in this range of reviewing debt.

Figure 3: What are the sums of author balances for each submission? This histogram shows that the mean is around -3.5, which is not a surprise; half of all reviewing is done by ACs, and most authors are not ACs. It therefore makes sense that most submissions have a negative review balance when looking across their authors.

If you’ve done a lot of reviewing as an AC or an external reviewer, looking across these plots, it might feel like there’s a bit of a tragedy of the commons happening – nearly 10,000 authors who didn’t contribute any reviews. Recent analysis has shown that individual authors are submitting more and more work to CHI, and everyone involved in peer review will have noticed that it has become more difficult to find willing reviewers over the last few years. But caveats abound here. Calculating a “deficit” in this manner ignores the many other types of contribution that have to be made for the conference to happen. No such calculus can incorporate the efforts of the conference organizing committee, the SIGCHI Executive Committee, or the CHI Steering Committee; these contributions are all essential, and often leave colleagues with less capacity to commit to reviewing service. There is no satisfactory way to capture this in our analysis. Similarly, authors are encouraged to ‘pay back’ their contributions across SIGCHI conferences and HCI journals: it might be that authors reviewed for CSCW, or served as an AC there, and ran up ‘surpluses’. These data are likewise too difficult to capture and bring to bear in an analysis of this kind.

It’s also worth remembering that just because an author didn’t provide a review, it doesn’t mean they weren’t willing to. Reviewing relies on networks, and with so many first-time authors every year, there will always be prospective reviewers who aren’t called on to review. There will also be many authors who wouldn’t make appropriate reviewers: undergraduates, perhaps, or folks from other disciplines who have been brought into multidisciplinary papers. The main takeaway from all of this is that the conference ACs are doing sterling work. Chapeau!

Review lengths and quality

There were 14,883 reviews for submissions that went through the complete Round 1 review process (i.e., not desk rejects, withdrawn papers, etc.). Of these reviews, 4256 were completed by (self-identified) Expert reviewers, 8614 by Knowledgeable reviewers, 1872 by reviewers with Passing Knowledge, and one by a reviewer with No Knowledge. A breakdown of expertise by recommendation is given below for all but “No Knowledge” (which would not tell us much). These data seem to imply that Expert reviewers are more likely to recommend rejection than other reviewers.

[Table: recommendation counts by self-identified reviewer expertise]

Of the reviews, 1575 (11%) were recognised as excellent by ACs. These excellent reviews were produced by 1273 individual reviewers, each producing 1-5 excellent reviews (M=1.2, SD=0.55). Of the 1575 excellent reviews, 1166 (74%) were produced by external reviewers. This probably just represents a difference in propensity to give special recognition to reviews (only seven 1AC reviews were recognised as excellent), rather than a meaningful difference in the rate at which different roles produce excellent reviews.

Reviews comprised 8,733,697 words in total. Twelve reviews had a length of zero – most of these were the result of a reviewer or AC pasting their review into the wrong field (e.g., confidential comments, award nominations, etc.) – and we discarded them. The remaining reviews varied in length between 9 and 6903 words (M=593, SD=378). There were 1731 reviews over 1000 words in length (12%), with 407 of these over 1500 words (3%).

As you might expect, 1AC metareviews are shorter (M=360, SD=206) than the ‘full’ reviews of 2ACs (M=611, SD=334) and external reviewers (M=699, SD=412). Figure 5 shows a stacked histogram of review lengths by reviewer role. There is a long tail! Ignoring 1AC reviews, which are qualitatively different kinds of reviews, Figure 4 shows that reviews recognised as excellent by ACs (M=976, SD=486) tend to be longer than regular reviews (M=620, SD=346).
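A minimal sketch of the length comparison, assuming a DataFrame with hypothetical text and excellent columns (the real review texts are not shareable).

```python
import pandas as pd

# Hypothetical reviews with an 'excellent' flag, for illustration only.
reviews = pd.DataFrame({
    "text": [
        "Solid contribution, but the evaluation section is thin.",
        "Interesting idea; the related work needs attention.",
        "Reject: the study design does not support the claims.",
        "A long, thorough, carefully argued review of the method and results.",
        "Another detailed, constructive review with concrete suggestions throughout.",
    ],
    "excellent": [False, False, False, True, True],
})

# Word count as whitespace-delimited tokens, then compare the two groups.
reviews["words"] = reviews["text"].str.split().str.len()
print(reviews.groupby("excellent")["words"].agg(["count", "mean", "std"]))
```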


A histogram showing the number of words in a review on the x-axis and the number of reviews on the y-axis. The histogram shows the length of reviews split by reviews that were rated as excellent and reviews that were not. There are many fewer excellent reviews than regular reviews, so that distribution is rendered much smaller on the screen. Two lines representing the means of these groups help to make it clear what the group means are.

Figure 4: Excellent reviews tend to be quite a bit longer than regular reviews. (Here we’re only looking at reviews that are 2500 words in length or shorter.)


A histogram showing the number of words in a review on the x-axis and the number of reviews on the y-axis. The histogram shows three roles: 1AC, 2AC and Reviewer. Their data are stacked in this histogram, with a clear and strong peak at 600 words. The count is well into tailing off by 1500 words.

Figure 5: How long are reviews? About 600 words, give or take. This stacked histogram shows how long reviews from 1ACs, 2ACs and external reviewers are. The metareviews of 1ACs are qualitatively different in style, so it perhaps makes sense that these are shorter. There are some reviews as long as papers amongst them all, too.

Bonus Chartjunk

No bonus Chartjunk for this blog, with many apologies. Suggestions gratefully received at analytics@chi2024.acm.org.

Datatables

Figure 1 is a histogram. We can’t share the raw data for it, but we can share binned data:

Author-created review load, range n
(0.1,0.25] 227
(0.25,0.5] 1929
(0.5,0.75] 2431
(0.75,1] 3459
(1,2] 2756
(2,3] 651
(3,4] 383
(4,10] 348
(10,30] 31
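For reference, binned counts like these can be produced with pandas.cut, whose half-open (a,b] intervals match the notation in the ‘range’ column. A minimal sketch on made-up loads:

```python
import pandas as pd

# Hypothetical per-author loads; the real (unshareable) data covers ~12,000 authors.
loads = pd.Series([0.2, 0.4, 0.4, 0.8, 0.9, 1.5, 2.5, 4.0, 12.0])

# The same bin edges as the table above; pandas.cut's default intervals
# are half-open (a, b], matching the 'range' notation.
bins = [0.1, 0.25, 0.5, 0.75, 1, 2, 3, 4, 10, 30]
counts = pd.cut(loads, bins=bins).value_counts().sort_index()
print(counts)
```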

Figure 2, which shows the ‘balance’ of each author, likewise uses individual data, so we can instead offer some binned data:

Author review balance, range n
(-30,-10] 16
(-10,-4] 207
(-4,-3] 187
(-3,-2] 785
(-2,-1] 3085
(-1,0] 6266
(0,1] 2096
(1,2] 673
(2,3] 293
(3,4] 136
(4,10] 499
(10,30] 218

Figure 3’s data looks something like this:

Paper review balance, range n
(-30,-10] 472
(-10,-5] 909
(-5,-1] 1223
(-1,0] 131
(0,1] 121
(1,5] 360
(5,10] 332
(10,30] 184

Figure 4’s data:

Review length, range Excellent review n
(0,300] No 2839
(0,300] Yes 14
(300,600] No 5926
(300,600] Yes 319
(600,1000] No 3278
(600,1000] Yes 624
(1000,2000] No 1048
(1000,2000] Yes 571
(2000,4000] No 64
(2000,4000] Yes 44
(4000,10000] No 1
(4000,10000] Yes 3

Figure 5’s data:

Review length, range Role n
(0,300] 1AC 1656
(0,300] 2AC 482
(0,300] Reviewer 715
(300,600] 1AC 1638
(300,600] 2AC 1657
(300,600] Reviewer 2950
(600,1000] 1AC 328
(600,1000] 2AC 1134
(600,1000] Reviewer 2440
(1000,2000] 1AC 46
(1000,2000] 2AC 392
(1000,2000] Reviewer 1181
(2000,4000] 1AC 4
(2000,4000] 2AC 20
(2000,4000] Reviewer 84
(4000,10000] Reviewer 4