Humans are natural pattern finders. We can’t help but search for similarities and differences in behaviour, events, data, so that we feel we can understand the world a little better.
We are also notoriously bad at it. The patterns we find are often biased towards our own worldview, and even knowing which data to consider valid is (and always has been) political.
2019 for us has been a year of attempts at pattern recognition. This is Paper Giant’s third full year in business, and we figured three times through the loop is enough to start connecting some dots.
In that time, we’ve noticed the work get more complex. We’ve noticed that rote approaches to problem solving have stopped working, if they ever did. We’ve also noticed that people are growing more comfortable with complexity. Our clients still want and need answers, but they also want to learn how to listen more carefully to their customers and communities, so they can change with them. They expect the answers to their questions to be different to the answers they’ve always gotten.
We’re incredibly proud of the year we've had. In 2019, we opened a Canberra studio so that we can more easily service the complex problems of federal government. Through the year, we delivered over 50 projects across government at all levels, in financial services, and for the energy, disability, software and legal sectors. We delivered numerous training courses both privately and publicly, and hosted hundreds of people in our Melbourne studio to hear talks on accessibility, diversity and inclusion, and innovation in the justice sector.
Through all this, one pattern is clearest. Things are changing, rapidly. In order to continue to deliver on our purpose of helping organisations understand and solve complex problems, Paper Giant is going to continue adapting to the needs of modern problems.
2020 is the start of a vital decade, and we're looking forward to taking a rest, rolling up our sleeves, and staying with the trouble.
As we reach the end of 2019, I remember back to five years ago when all the organisations we worked with were developing their ‘2020 visions’. 2020 sounded sufficiently futuristic to open up creative thinking, and close enough to be just achievable.
I doubt many of us would have predicted the level of political, social and ecological change that we’ve faced in those five years.
Knowing now what was unpredictable then, you can ask yourself: how useful was your vision? How did you measure and learn? How did your organisation adapt and respond?
Paper Giant is leaving 2019 with some questions – questions about how design works within organisations, and how it helps them make decisions that lead to positive outcomes. If you work in design, or your job is to make change at the organisation where you work, chances are you’ve asked some similar questions this year:
What good are post-it notes, if they don’t help you to make your research and ideas tangible?
What good are personas, if they don’t communicate the intricacies of real lives, and the impact your services have on real people?
Why narrow your focus to customers, when we all live as part of complex communities?
What good are journey maps, when people don’t experience your service in a linear way?
What good are service blueprints, if they are too big to know where to start?
Why make grand claims about ‘transformation’, without thinking about who will be transformed?
Why make recommendations, without clear pathways or tools to put them to use?
Over the course of this year, we’ve been evolving and changing our work, because to create a world that is more just, more equal, and more sustainable, we need to think about design in a bigger way.
An interview with Stan Winford, RMIT Centre for Innovative Justice
People with mental ill health and cognitive impairments are significantly overrepresented in the Victorian criminal justice system. Corrections Victoria research in 2011 estimated that 42% of male and 33% of female prisoners had an Acquired Brain Injury (ABI). Lack of appropriate supports for people with disability, coupled with stigma and discrimination, perpetuate cycles of disadvantage and lead to increased contact with the criminal justice system.
As just one example, a person with ABI may have memory troubles that make it harder to remember and comply with the requirements of a community corrections order, potentially leading to breach of the order which can ultimately result in imprisonment.
The Supporting Justice project, by RMIT’s Centre for Innovative Justice (CIJ), aims to change how people with disability are treated and supported in the criminal justice system.
In 2019, CIJ partnered with Paper Giant to create SupportingJustice.net, a website that provides practical resources for legal and court professionals and people with disability. These resources help connect people with disability in the criminal justice system with support, creating the opportunity for fairer outcomes.
To create these resources, the team brought together 12 representatives from the legal profession, courts, the judiciary, the disability support and advocacy sector, and 4 people with lived experience of disability and justice system involvement, to collectively generate ideas for change. Many times, there were people sitting side by side who would typically be separated by a bar, a bench, and centuries of hierarchical protocol.
Paper Giant’s Kate Goodwin sat down with Stan Winford, the Associate Director of Research, Innovation & Reform at CIJ, to discuss the project.
Kate Why was it important to you that the project team included people with lived experience of ABI and justice involvement?
Stan First, because we want the justice system to be effective, and at the moment it’s very ineffective. In Victoria, 44% of people return to prison within two years, nationally the figure is up to 55%, and every year those numbers have been going up. In the last Victorian budget we spent 1.9 billion dollars on prisons.
If we want the justice system to work better, then something needs to change. And to us, the obvious missing element was the perspective of the people closest to the problem.
We can speculate about possible solutions, but they are the people who know what’s not working for them, who know what it’s like to be in prison and not understand what people are telling you to do, or not be able to remember when and where to go to get your medication, and all those sorts of things.
Kate What was the other reason?
Stan The other reason is that it can be transformative. The process itself actually improves the lives of the people involved. Thinking about how we undertook this project, it really changed me, it changed the people from the legal system who were involved, and it also helped give dignity to participants who had disability. It wasn’t just the outcome and the product that meant that we had a better response.
Kate You said the project changed you – can you talk about that?
Stan I always felt that perhaps it would be challenging, and hard, and yes, in some ways it was. But I think the practice also turned out to be even more important and valuable than I thought it would be.
That just comes from moving from theory into applied practice. A lot of the issues, and insights, and problems – you can’t think about them in the abstract. You actually have to test them and do them.
Lawyers think that we know all the answers because we know what the law is, but actually we don’t, and having different perspectives brought to bear on solving problems is definitely better than just having one lens to look at things through.
It can take a bit longer and involve a bit more work than you might anticipate. But you can get a lot more from it than you might think. And some of those things are not the things that you might expect.
Kate What were you expecting?
Stan I had a preconception about what a simple response to the problem would be, which would be a kind of services directory. And we do have that now, it’s one of several elements of the online resource. But what we found during the project is that people use information in funny ways, and in frustrating ways, and in human ways. You can design something that’s really good and accurate and useful, and people won’t necessarily read it because it’s not how they like to consume information.
So what I like is that, as well as the website, we have these physical documents, physical objects that a person can take with them, that even if it doesn’t have all the details on it, is a signal that something’s going on here that requires further investigation, and prompts that process. I think that sort of stuff is very clever, and what’s new and unexpected.
Kate I like that, because if you told someone straight out of the gate that we were going to set about designing a PDF on purpose, they’d be like, “It’s not 1995, no one does that anymore.”
But it’s a literal signal – if I turn up to, say, a duty lawyer, and I have this Preparing for Court client form, it’s a signal to that lawyer. Even if they’re not familiar with it, it says on the front, “This has been filled out by someone on behalf of someone who is cognitively impaired”, or whatever.
Stan We’re not doing co-design because we want to try this cool new way of working with sticky notes and great visuals. We actually want to make sure that we’re achieving something for people – that there will be fewer people with ABI in prison.
To me this comes back to broader questions about the purpose and value of innovation – just because you do something differently, that doesn’t mean that it’s better. It’s still got to actually lead to an outcome that’s an improvement on what’s come before.
Kate Has the approach taken in developing the SupportingJustice.net resources carried through to any of your other work?
Stan The experience of working with people with lived experience in designing responses has continued, to the extent where we’ve now got a large group of people who have been trained by Dorothy, a woman with lived experience of ABI and justice system involvement. We’re now able to provide this kind of advice more broadly, not just in relation to developing a resource.
That’s continued, and it’s having huge effects on the system, as far as we can tell.
Kate Who comes to you for that sort of advice?
Stan We get approached by the police, or by people involved in court services who are looking at design questions, or people in corrections who are undertaking a review of the treatment of disability within corrections, and they say, “Can we have those guys come and talk to our people about what’s happening, and what we should be doing?”
Another spin-off has been successfully getting funding to partner with SARU, the Self Advocacy Resource Unit, to develop a group of peer advocates called Voices For Justice. There’s a training program, and they’ve now formed their own group, and they’re now having an influence on people in the system.
We’ve also started work on a couple of other projects around housing design, and on providing alternative responses for resolving conflict for people with disability living in group homes, so that where it’s not criminal offending, they don’t end up being inappropriately responded to by police. Lots of really interesting things have been happening.
Kate What would you say to someone who is interested in including people with disability in their design process, but is nervous about getting it wrong or being insensitive?
Stan One of the really interesting things that we learned right from the beginning is that people with disability, like ABI, are generally okay about disclosing what their issues are, and what support needs they have, if they trust you and they know that this information is going to be used in ways which are beneficial to them, rather than be used against them.
I think people can be a little bit too frightened of offending, or saying the wrong thing, or appearing to be insensitive. If your heart’s in the right place, I don’t think you can go too far wrong.
But the advice I’d give is to be aware that people in the justice system have been treated very badly, and had experiences of their disability being used against them. They might be quite cagey about disclosure, but if they understand why their information is wanted, and how it could be helpful, then they’ll probably be okay.
But again, here we are talking about them without them. You can ask them. That’s the key to it.
One of the very many profound things Dorothy said was, “When I went through the criminal justice system, no one asked me, ‘How did you get here?’ Nobody asked me, ‘What can I do to help you?’”
Just asking the question is a good start.
Read the full story of the project in our case study
Good design is often about asking the right questions. So when you hear ‘human-centred design’, you need to get more specific: Which humans? What human action, or relationship, or outcome?
I’ve been thinking about this a bit lately because of some work that Paper Giant is involved in around the Consumer Data Right (CDR) legislation in Australia. The goal of the CDR is laudable – to give consumers of banking and energy services the right of consent over who has access to their data (such as their account information and smart meter data). The aim of legislation like the CDR is to make it possible for individuals to know who has access to information about them, to know what those people or organisations are using it for, and, crucially, to revoke that access.
In this example of human-centred design, we can ask:
Which humans? Consumers of banking products (i.e. just about everyone)
What action? Informed consent over who has access to my data, and how they use it.
Which leads us to a reframe: ‘What if we designed explicitly for consent?’
Real consent is informed consent – meaning it is not enough for people to click ‘Yes’. They have to understand what they are saying yes to.
Have you ever actually read the terms and conditions on a website or digital service you’ve signed up to use? What exactly are you agreeing to when you tick the ‘I agree’ box?
Informed consent is wilfully ignored by most software and service providers, because it benefits them to ignore it. Websites are designed to make it actively harder to know what you’re agreeing to. Often, the person ‘consenting’ is in the less powerful position. They may not actually have a choice at all, if they rely on the service, and it’s a case of ‘consent or be denied access’. It’s a classic case of ‘corporate-centred design’.
Fitness app Strava is an interesting example of a ‘consent first’ interface design: treating privacy like a nutrition label. Closer to home, Paper Giant recently had to think carefully about research consent processes while working with people with cognitive disability. We produced consent forms in Easy English and explained them in person to ensure that participants understood what we were asking of them.
I’ve only really talked about consent over data usage here, but you can quickly see how a simple reframe – ‘what if we designed explicitly for consent?’ – opens up new possibilities in how we think about design’s role in giving people the power to make decisions about their own lives and communities.
What’s your relationship to sovereignty and what’s your understanding of power?
These are the two hardest questions I am pondering as a facilitator at the moment. They are also the two questions that have most influenced my work since I first heard them.
To answer them, I’m currently playing with six questions.
My starting point is considering Who am I? and Where am I from? From there, I can consider How am I being in relation to...? To humans, to non-humans, to history and place, to the topic I’m currently contending with, and so on.
I say I am ‘considering’ these questions, rather than ‘knowing’ because my answer – to who I am, where I’m from, how I am – is part fact and part contextual.
How I answer the first three questions depends on three further questions: Where am I? (in time and place), Who am I working with? (human and non-human beings) and What are we about to do? (for and with whom).
As you read this, imagine I am sitting in front of you asking you these six questions. How would you answer me as a colleague? as a client? as someone from the community you most identify with?
When considering my relationship to sovereignty, here is my current and incomplete answer: I am a first-generation migrant. I was born in Kenya and raised on the land of the Wadi Wadi and Gadigal people. I live in Naarm and work as a facilitator on the land of the Wurundjeri and Boon Wurrung people of the Kulin Nation. This always was and always will be Aboriginal land.
Artificial Intelligence (AI) and Machine Learning (ML) seem to be the latest quick-fix, silver-bullet technologies – with businesses rushing to apply them to everything from online shopping and self-driving cars to medical diagnosis and criminal justice and sentencing.
Because AI and ML contain the words ‘intelligence’ and ‘learning’, it’s natural to attribute a science-fiction level of agency to this technology, but at its core, each AI is still a computer program, and its intelligence is strictly limited to what it has been programmed to do – run through a set of instructions (an algorithm) when it encounters a specific situation.
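To make that concrete, here is a deliberately crude, hypothetical sketch (not any real product’s code) of the kind of rule-bound ‘intelligence’ being described – a keyword filter of the sort used in automated message monitoring. Its entire ‘understanding’ is a fixed list chosen in advance by a programmer:

```python
# A toy, hypothetical "AI" content monitor. Its whole intelligence is a
# fixed set of rules chosen in advance by a programmer; anything outside
# those rules is invisible to it.
def classify_message(text: str) -> str:
    """Flag a message if it contains a listed keyword; otherwise pass it."""
    keywords = {"hate", "hurt", "kill"}
    words = set(text.lower().split())
    return "flagged" if words & keywords else "ok"

# It has no grasp of idiom or context:
print(classify_message("I could just kill for a coffee"))  # flagged – a harmless idiom
print(classify_message("I'm feeling really low today"))    # ok – misses real distress
```

Real systems are far more sophisticated than this, but they share the same property: behaviour bounded by what was built in, applied blindly to situations the builders didn’t anticipate.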
These algorithms are already being used to make decisions that have real impacts on people’s lives. Dan’s link below speaks to one of the newer algorithms in use today (emotion recognition), but there are many others – and many of them have flaws with either the potential for, or already-documented, negative human consequences:
Or – and as a father with a son about to enter primary school, this is the most terrifying one for me – have your child’s emails, web history and social media use intensively surveilled by an AI that doesn’t understand teen sarcasm and has the authority to notify police?
If we step back from the AI angle for a moment, these are all examples of technological solutions to social problems. Which would be bad enough. But these companies are worse than just misguided – they barely even seem to care that their solutions don’t actually fix the problems. They are just exploiting problems for profit, using technology that is already known to have serious flaws and ethical concerns.
For example: companies that push school surveillance technology make the argument that “the technology is part of educating today’s students in how to be good ‘digital citizens’, and that monitoring in school helps train students for constant surveillance after they graduate.” Now, this might be true, as far as it goes. But statements like this just point out that our current workforce trajectory is one of privacy intrusion and corporate surveillance, something we should be fighting against.
Now – I’m not saying that AI is always or will always be bad, or that ML can never be used for anything positive. All I’m suggesting is that relying on companies that sell AI to fix things for us is a risk – one we need to be aware of, take an ethical stance on, and approach with caution and care.
Conversations around AI are extremely useful and enlightening, because they bring to light societal and political problems that need social solutions, not technological ones. Whenever you see AI put forward as a ‘solution’, ask yourself: What is the real problem that’s been identified here, and how might we fix that?
— Reuben Stanton & the PG Team
For this week’s newsletter, I stole most of my links from @hypervisible – please follow them on Twitter if that’s your thing.
The reason for this disaster? Managerial, design, and engineering decisions – made within a system of constraints, and a drive for profit above all else – that meant that, in certain circumstances, the plane would automatically dive to the ground.
The Virilio quote is often read to mean that ‘any technology comes with unintended negative consequences’, but when thinking about things like what happened with Boeing, I like to read it as ‘decisions made about technology can have unintended consequences’, or, more broadly, ‘design decisions have consequences’.
Now, saying ‘decisions have consequences’ may come across as a bit glib, but in actual fact, making decisions that lead to good consequences isn’t always easy.
The information we base our decisions on, and the systemic factors at play, can all make it really hard to make a good decision, or even just to avoid a disastrous one. The recent Boeing example is sadly not unique – when I teach design research I regularly use the examples of the Space Shuttles Challenger and Columbia, and the failures of data analysis, communication design, and management that led directly to those disasters. (The information designer Edward Tufte has written extensively about this topic.)
Making good decisions is of course possible. None of the disasters I’ve mentioned were deliberate, so much as they were avoidable. What’s really positive is that we can learn from the mistakes of others, so that we don’t make disastrous mistakes ourselves – there is a huge wealth of methods available, to both create success and avoid failure. Use the tools and methods at your disposal, and make your decisions count.
Did you notice that the distribution model for media has changed A Whole Lot Very Quickly?
Streaming music and on-demand video have changed how we think of such things as ‘an album’ and ‘not watching all six episodes right before bed’. The latest disruptions include Netflix-like subscription models for content that used to be single-serve, like video games and books. A personalised, endless smorgasbord of ways to spend our attention.
But our attention is not endless. And that makes corporate competition for it cutthroat. The CEO of Netflix once said that their biggest competitor is sleep. We know what a lack of sleep does to our own health, but at a societal level, it’s less an individual hardship and more a public health crisis. And this is all less about sleep than it is about informed consent.
In complex systems, whether they’re economic or political, it’s interesting to think how this works for the person designing the system. Why is the policy maker pulling that specific policy lever to get that outcome? What economic incentives push a creator to make that specific thing instead of another?
So, I think it’s worth asking the question: what happens when platforms pay content creators based on how well they monetise attention?
This is a system that incentivises long-term stickiness and mechanics that grab attention. Algorithms reveal preference gaps, and providers rush to fill them with an exclusive. Anyone who’s opened Instagram recently has a sense of what this feels like.
Product design often aims to reduce a user’s ‘friction’. This might help a user make decisions (the next episode is playing in five seconds) but when we play that decision out to the level of the system (people are sleeping an hour less) we realise that a little friction can be a good thing.
This isn’t a call to quit Insta, cancel your Netflix subscription, and install blackout blinds. But, as people who influence the design of systems, we should be striving to ensure these systems give people the agency to make decisions that are in their best interests.
That means not misleading people in ways that exploit human psychology and behaviour. It means being upfront about what people are getting out of a system, and not hiding information from them when they need it to make a decision.
Thinking about the bigger picture is non-negotiable, whether we’re building entertainment platforms or urban infrastructure policies.
“If more and more girls ride a bike to school, it means it’s safer and safer to cycle in traffic.
If more and more girls ride bikes to school, it means that bikes are increasingly accepted as a means of transport [...]
If more and more girls cycle to school, it means that more and more girls are actually going to school…”
You get the idea...
The many risks of using economic growth as the only proxy measure for progress are already well understood: a rapidly warming world being the clearest negative outcome. GDP is still widely used as if it were the only way to know whether a country is doing well, even though we know that, to transition to a world that sustains us, we need to rapidly come up with new ways of measuring and evaluating progress.
Of course, any one measure of change is dangerous, but I like the ‘girls on bikes’ thought experiment because it shifts our mindset from “what do we usually measure and evaluate?” to “what might we measure and evaluate, and what might that tell us?”
Measurement and evaluation have been on my mind lately for several reasons. Yesterday, I spoke on a panel about digital disruption at the Australian Evaluation Society conference. Due to the magic of the internet, I wrote this piece before I spoke on the panel, so I actually have no idea how the discussion went; perhaps someone who was there can let me know?
Apart from being at the conference, evaluation has been on my mind because we’re having an ongoing discussion at Paper Giant about the best way to measure the impact of design, as well as how to effectively baseline and measure design capability.
At Paper Giant we are frequently trying to help organisations make better decisions – and by ‘better’, we mean decisions that lead to greater justice, equality and sustainability. One thing that helps with decision making: knowing the impact of your decisions!
In this issue we have examples of the power of listening and reporting personal stories, and the importance of paying careful attention to personal interactions rather than just metrics that can be easily captured in digital interactions.
What do you measure and why? Is it telling you what you think it’s telling you?
A number of us here at Paper Giant used to be educators – lecturers or tutors at universities, mostly – and so it’s no surprise that teaching design (or capability building, as it is regularly labelled) has become a core part of our work.
This year, we’ve delivered both public and private training around co-design, design research and service design. We’ve helped multiple government departments establish design education programs, and created resources, guides, toolkits and other support materials that clients use to deliver better projects. On top of that, almost all of our recent project work has had structured elements of mentorship and coaching within it, through which we work alongside client teams to deliver human-centred outcomes.
Most of this education work is about helping people use design. We’re often introducing design to people for the first time, so that they can work with problems in new ways. We also try to help design teams do better, by delivering training around specialties like qualitative research, research ethics, or storytelling and organisational engagement.
What we’ve learned from this work is that design is being widely embraced, and it’s being asked to solve increasingly complex problems. We’ve also learned that practitioners don’t necessarily feel like they have the skills and structures to do what they see as necessary. Imposter syndrome is common, and it appears that, once you’ve learned enough to get your first job, education is largely experience based and self-guided.
A two-day course can only teach so much, and three-year degrees are impractical for many professionals. Which leaves the third option: reflection on experience. We try to bring that to each of our projects, but we understand how difficult it can be to protect the time and space to do it properly.
Like all habits, learning takes work to sustain. From our perspective, what is the biggest lesson? What makes the biggest difference? The commitment to keep trying.
Paper Giant acknowledges the Wurundjeri and Boonwurrung people of the Kulin nation, and the Ngunnawal people as the Traditional Owners of the lands on which our offices are located, and the Traditional Owners of Country on which we meet and work throughout Australia. We recognise that sovereignty over the land has never been ceded, and pay our respects to Elders past, present and emerging.