15 experiments to improve the way we convene to solve problems.
To accelerate impact, we must change the way we gather.
Hypothesis: experiment with real-time workshop modalities and AI applications to improve community impact, unlocking new opportunities for the future prosperity of Minnesota.
Quick Stats
15
Experiments (all results below)
1.2K
Registrations
7,545
AI-assisted interactions
23%
Loved their AI Bios
6
Other states have already requested the next experiment!
100%
Custom & unique experience for every attendee.
Experiment #1: New Name Tag Format
Problem
Most name tags are too small to read and use plastic or magnets excessively.
Hypothesis
Make them legible. Make them informative. Make them support networking!
Good
  • Very legible
  • Very informative
  • Very conversation focused
Bad
  • Many people did not fill them out well
  • Large format is slightly cumbersome
Recommendation
  • Include creating the name tags as part of the first exercise
  • ½ letter size is probably big enough for most events
Applicable for most events? 4/5
Rating system:
1 = don't try to implement this.
5 = it will provide value to any type of gathering; start trying it immediately.
Experiment #2: Event Signage

Problem
Conference signage is expensive, relies on large-format (ink-heavy) printers, must be produced weeks in advance, and is environmentally taxing and often not recyclable.

Hypothesis
Use reusable markers on recycled cardboard for all signage.

Good
It was clear, simple, and incredibly cost-effective and environmentally friendly.

Bad
A small percentage of people found it too simple or even childish. Creating the signs requires at least one hour from one person with good handwriting (still much, much less than the graphic design team the traditional approach requires).
Recommendations
Simplifying signage is a no-brainer, and it's an environmental win.
Encourage Art
Incorporate art into your work. Hire artists to help and give them credit. Support local communities.
Use All Caps Block Lettering
Ensure readability by using all caps block lettering for your text.
Experiment #3: Refundable Registration
Problem
Free events without famous people have a >50% no-show rate.
Hypothesis
Charge enough to make sure everyone shows up. Offer full refund if they do!
Good
80%+ showed up.
Bad
This was a net-new idea for most people, which caused some confusion and additional questions for organizers.
Facts:
  • 1,245 registered
  • Refunds were clearly offered up to 72 hours in advance of the event. We also emailed everyone 2 days before reminding them to cancel if they couldn't make it.
  • 37 individuals canceled in advance and were refunded May 18-20 (the event was May 21)
  • ~200 people did not show up to the workshop
Recommendation: Do it again with amendments.
  • Share the timeline for credit card processing and refunds
  • Integrate directly with bank to avoid manual processes
Applicable for most events? 1/5
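The refund rule above (full refund if you attend, or if you cancel at least 72 hours out) reduces to a simple eligibility check. This is a minimal sketch, not our production code; the function name, field names, and dates are hypothetical:

```python
from datetime import datetime, timedelta

def refund_eligible(cancelled_at, event_start, attended):
    """Full refund if the attendee showed up, or cancelled
    at least 72 hours before the event started."""
    if attended:
        return True
    if cancelled_at is None:
        return False  # no-show with no cancellation: deposit is kept
    return event_start - cancelled_at >= timedelta(hours=72)

event = datetime(2024, 5, 21, 9, 0)  # hypothetical event start
print(refund_eligible(datetime(2024, 5, 17, 9, 0), event, attended=False))  # True
print(refund_eligible(None, event, attended=False))                         # False
```

Publishing a rule like this up front, alongside the card-processing timeline, would address most of the confusion we saw.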
Experiment #4: No Registration Table (in-person)
Problem
Every event starts the same. Wait in line to get your name tag.
Hypothesis
Don't have a registration desk.
Engage people right away and make it feel different immediately.
Good
  • Participants scanned a QR code that directed them to go to one of eight sections where their name tags were ready.
  • Greeters (extroverts and smilers) set the tone, and theirs is the most important job of the day. The warmth and conversations that follow make all the difference.
Bad
  • The temporary staff we hired didn't grasp the importance of radiating kindness and appreciation for attendance.
  • The "universal" symbols [ 🔽 ▶️ ] for clicking to see more details weren't as universal as we had expected. Many people needed guidance to find their name!
Recommendations: Simply improve the bad.
Applicable for most events? 5/5 all larger events should adopt this!
This is a real photo of a >2 hour wait at SXSW 2023 for name badges.
Experiment #5: Assignments

Problem
When networking or working in small groups, how might we ensure faster-to-impact collaboration?

Hypothesis
Share an "assignment" to guide participant collaboration towards a better outcome.

Good
  • Kept most people working collaboratively on the same thing
  • Enabled specific outputs for follow up

Bad
  • Some people required more guidance and context (and therefore struggled), while others did their best with what they had (and therefore thrived)
  • The pace and sequencing of assignments were too fast for some participants
Applicable for most events? 3/5
Experiment #6: Swag

Problem
The vast majority of swag ends up in landfills.

Hypothesis
What if you don't give out any swag?

Good
It forces the conversation about the value of swag at events

Bad
People really do like swag
Applicable for most events? 5/5… let's start forcing the conversation for the planet at events!
Note that small "mirror" stickers in the shape of MN were offered to highlight this experiment. The decision making process and offsetting the environmental impact of this workshop will be covered in future articles.
Experiment #7: Panel Discussions
Problem
The panel discussion format hasn't changed in hundreds (if not thousands) of years. How might we begin the shift?
Hypothesis
Experiment with alternative formats so that more information and more connections can help drive impact from those learnings.
Good
  • Net new format for 99% of the participants
  • Positive learning, feedback, and connections from >70% of participants during "expert" round
  • 55% of participants connected with one or more people in each round, with an estimated 37% likelihood of future collaboration
  • Research is evolving in this area. We will track data and publish results on why individuals continued to collaborate after this next-generation panel
Bad
  • Confusion experienced by >30% of attendees with this new format
More details:
This experiment requires more description of how we did it.
  • Round 1: Experts that would normally be on a panel were all asked to gather together and share their latest learnings, developments, or expertise. This became a panel for and by experts, with active participation and balanced conversation in a small group!
  • Round 2: Participants had meaningful conversations with experts in small groups. They could choose the topics they were interested in and meet the experts and others interested in the same topic.
Recommendations:
True to our belief that we need to experiment and challenge the format by which most of our global issues were created, panel discussions are ripe for disruption. What other ways can we share our experiments to improve outcomes?
Applicable for most events? 4/5
Experiment #8: Speeches

Problem
Most talks don't inspire lasting change and 99% of info shared is forgotten.

Hypothesis
Shorter talks are more effective when combined with actionable items directly afterwards. So we invited three successful leaders to each speak for <7 minutes about a shared experience, aligned with what the audience would then work on with fellow participants.

Good
Excellent speeches.
Provided time away from talking with others.

Bad
TED Talks may be the latest and greatest in the world of speeches, but unfortunately, they didn't make a significant impact in this experiment.
When our speakers took charge and gave instructions for the next round, it didn't quite hit the mark.
We were honored to have Minneapolis Mayor Jacob Frey, AFWERX founder Dr. Brian "Beam" Maue and Congresswoman Betty McCollum speak at this workshop.
Applicable for most events? 2/5
Experiment #9: Artificial Intelligence Bios

Problem
Event bio pages don't tell you enough about a person to be helpful, and they aren't quickly actionable.

2

Hypothesis
Enlist AI to analyze a person's digital trail and public data to provide a bio to engage connections and outcomes.

3

Good
23% of attendees reported loving their AI bio.
This new format generated countless discussions on AI use cases.

Bad
While only a small fraction (4.78%) disliked their AI bios, it was enough chatter to impact the workshop and/or distract individuals. We have a long way to go before people see themselves in their digital footprint and AI as a partner!
Recommendations: We will do a complete write-up about the specifics of this learning.
Applicable for most events? 2/5 for most events, but 5/5 for AI related events
Experiment #10: Topics/Event Content
Problem
Most event topics, content, themes, and speeches are decided by an organizing committee.
Hypothesis
What if AI (which knows all participants' interests and can be queried in real time) decided the content of an event?
Good
2/3 of participants fully engaged with topics they were interested in and wanted to learn more about.
Bad
⅓ of the topics were not selected, resulting in overcrowding or confusion in other topic areas.
More context: with 1,247 people registered, we knew a lot about what people were posting about, sharing, and working on… heck, what they had built entire careers in. We did the data analysis using traditional methods as well as AI. The results were not what the organizing team thought was best, so we had 5 people spend half a day analyzing, discussing, sharing, and reaching out to participants to try to find the right balance. On the day of the event, over ⅓ of the topic areas were empty. We had our hypothesis about why… and now we have the data to PROVE that the topics the data analysis/AI recommended MATCHED EXACTLY what people actually attended. Did we waste 5 people's time trying to help?
Applicable for most events? 2/5. While there is great potential here, trust in AI (and tolerance for its risks) isn't there yet.
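The post-event check described above, comparing what the data analysis/AI recommended against what people actually attended, reduces to a set comparison. The topic names below are hypothetical placeholders, not the workshop's actual topics:

```python
# Hypothetical topic sets; real names would come from registration-data analysis.
ai_predicted = {"ai ethics", "water tech", "med devices", "agtech", "clean energy"}
committee_extras = {"blockchain", "metaverse"}  # human-added topics (hypothetical)
attended = {"ai ethics", "water tech", "med devices", "agtech", "clean energy"}

print(ai_predicted == attended)     # True: the AI picks matched attendance exactly
print(committee_extras & attended)  # set(): the extra topic areas sat empty
```

Running this comparison after each event is a cheap way to build (or refute) trust in the AI's topic recommendations over time.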
Experiment #11: No rows of chairs, breakout rooms, or fancy stage
Problem
Why does every workshop, summit, event, and conference have the same stage, rows of chairs, and separate rooms for breakouts? Have we been reduced to one physical format for all gatherings?
Hypothesis
  • Don't set up chairs, but make them available.
  • Don't have a stage, but provide a central gathering place.
  • Don't create breakout rooms, but provide numbered areas to signify where breakouts take place.
Good
  • 85% of people stood and talked for 4 hours straight
  • 100% of people engaged in breakout groups by floor-tile number
  • It's impossible to know how many listened to the handful of speakers from the stage, but photos indicate over ¾ were fully engaged (also known as not looking down at their phones)
Bad
  • Many people reported not knowing they could pick up a chair.
  • Standing/talking/sharing fatigue is real, and different people need a variety of options to keep the event accessible
Recommendations: We can't find actual numbers, but significant finances are put towards the physical environment of gatherings. We believe this budget item is ripe for disruption and will keep you up to date on our experiments.
Applicable for most events? 3/5
Experiment #12: "No" Agenda
Problem
Agendas for gatherings of over 25 people are frequently established in advance.
Side note - what makes organizers think they know everyone has to use the rest room at the same time?
Hypothesis
How might an agenda of events shift and adapt as participants learn and deep dive into what they are really seeking solutions for?
Good
  • No minute-by-minute schedule was provided, and no one complained in advance
  • Half the event was determined on the fly from direct participant connections and requests (all processed by AI; otherwise this would be impossible), and connections and learning points only increased
Bad
  • Up to 40% of the participants were confused at times
Recommendations:
As AI advances and event organizers uncover the potential to amplify the experience and impact for participants by a hundredfold, we see an enormous opportunity waiting to be seized.
Applicable for most events? 2/5
Experiment #13: Diversity in Teams

Problem
We spend time with people who are like us at most gatherings.

2

Hypothesis
Place people into groups that they have a <5% likelihood of ever meeting in their lifetime.

3

Good
  • No one resisted or left
  • 90% of participants scanned at least one other person (possibly indicating interest in further connecting)

Bad
We can't see a lot of bad in increasing diversity, but we won't have enough data for ~6 months to know the effectiveness.
Recommendations: There are countless ways to increase diversity of people, content, ideas in any gathering.
Applicable for most events? 5/5
Experiment #14: Re-Inventing Networking
Problem
Purely serendipitous networking time, cocktail, or app based connecting has had its chance. What's next?
Hypothesis
Event learning and long-term impact can be significantly increased by integrating networking into the duration of an event.
Also, provide multiple ways for people to connect intentionally and independently to drive long term success.
Good
  • All 5 sections/agenda items were kicked off with analog and different ways to connect with people next to you or in groups
  • About half (we can't prove this) of connections made were recorded.
  • Another half were documented with why a connection was made and what will happen next (this will take time to record what happens to make any conclusions/recommendations)
  • 7,545 networking interactions were recorded in order to drive "re-invention" of networking to help improve future gatherings of everyone involved.
  • This is the most data collected to drive a positive impact of any event we can find a record of.
Bad
  • Human nature still defaulted to talking to similar people!
  • We recorded feedback on AI generated asynchronous networking recommendations (i.e., people could say if the other connection was helpful or not) and only 48% were positive! Much more research is being done on this!!
  • A website on each person's phone was the primary tracking mechanism. We used OpenAI for this, which had a 500-person simultaneous-usage limit, so many people got spinning screens with no results
  • Many other technical glitches hampered full adoption
Recommendations: We think we have stumbled upon a necessary change in the way events facilitate connections. We will be publishing many, many more experiments in this space!
Applicable for most events? 5/5
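One mitigation for the concurrency limit described above is client-side retry with exponential backoff, so requests over the cap degrade into short waits instead of spinning screens. This is a generic sketch, not the workshop's actual code; the error type is a stand-in for whatever rate-limit error the API client raises:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # stand-in for the API client's rate-limit error
            if attempt == max_retries - 1:
                raise
            # with the default base_delay: ~1s, 2s, 4s, ... plus random jitter
            # so a burst of clients doesn't all retry at the same instant
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Wrapping each request this way (ideally with a small client-side queue as well) would keep 1,200 phones from hammering the limit simultaneously.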
Experiment #15: New Breakout Models
Problem
Consequential work at events usually happens in round tables, breakouts, or other small group interactions
Hypothesis
Re-think how breakouts can be accelerated to help each person in their journey
Good
We had 160 breakouts set up with color-coded floor stickers for easy access. Each participant could get to any topic/breakout in <30 seconds, and could see information about who was working on that topic by scanning an adaptive QR code in the group, allowing for instant journey matching.
Bad
This was net new for most participants and caused initial confusion. The open format made it easy for people to leave a breakout. The assignments provided may have encouraged too much working instead of connecting and learning.
Applicable for most events? 2/5
The low score is mostly because we are in the early days of innovating here; no single format appears significantly better than the others.
Recommendations
Breakout groups are as old as time itself. It will take thousands more experiments to meaningfully improve them.
We have dozens more experiments to share.
Please share any insights, ideas, tests, experiments and data with us so we can continue to share. Email: uwt @ collaboration.ai
Over 25 companies helped deliver UWT. They will be highlighted in future releases.
Workshop facilitation & design by theDifference. Software & ecosystem by Collaboration.Ai