January 10, 2020

PG #49: Navigate A Noisy World By Mapping Your Values

Illustration of 2 ibis birds facing one another at the top of a waterfall
Bin Chickens – Once We Were Sacred by Sally Browne

Humans are natural pattern finders. We can’t help but search for similarities and differences in behaviour, events and data, so that we feel we can understand the world a little better.

We are also notoriously bad at it. The patterns we find are often biased towards our own worldview, and even knowing which data to consider valid is (and always has been) political.

For us, 2019 has been a year of attempted pattern recognition. This is Paper Giant’s third full year in business, and we figured three times through the loop is enough to start connecting some dots.

In that time, we’ve noticed the work get more complex. We’ve noticed that rote approaches to problem solving have stopped working, if they ever did. We’ve also noticed that people are growing more comfortable with complexity. Our clients still want and need answers, but they also want to learn how to listen more carefully to their customers and communities, so they can change with them. They expect the answers to their questions to be different to the answers they’ve always gotten.

We’re incredibly proud of the year we've had. In 2019, we opened a Canberra studio so that we can more easily service the complex problems of federal government. Through the year, we delivered over 50 projects across government at all levels, in financial services, and for the energy, disability, software and legal sectors. We delivered numerous training courses both privately and publicly, and hosted hundreds of people in our Melbourne studio to hear talks on accessibility, diversity and inclusion, and innovation in the justice sector.

Through all this, one pattern is clearest. Things are changing, rapidly. In order to continue to deliver on our purpose of helping organisations understand and solve complex problems, Paper Giant is going to continue adapting to the needs of modern problems. 

2020 is the start of a vital decade, and we're looking forward to taking a rest, rolling up our sleeves, and staying with the trouble.

Happy new year. 

 — Chris Marmo & the PG Team

Read the rest of Issue #49 here.

December 10, 2019

PG #48: Disability & The Criminal Justice System

A system map illustrating the complex dynamics of working with a person with a disability in the criminal justice system in Victoria
Mapping complex systems dynamics

As we reach the end of 2019, I remember back to five years ago when all the organisations we worked with were developing their ‘2020 visions’. 2020 sounded sufficiently futuristic to open up creative thinking, and close enough to be just achievable. 

I doubt many of us would have predicted the level of political, social and ecological change that we’ve faced in those five years. 

Knowing now what was unpredictable then, you can ask yourself: how useful was your vision? How did you measure and learn? How did your organisation adapt and respond?

Paper Giant is leaving 2019 with some questions – questions about how design works within organisations, and how it helps them make decisions that lead to positive outcomes. If you work in design, or your job is to make change at the org where you work, chances are you’ve asked some similar questions this year:

  • What good are post-it notes, if they don’t help you to make your research and ideas tangible?
  • What good are personas, if they don’t communicate the intricacies of real lives, and the impact your services have on real people?
  • Why narrow your focus to customers, when we all live as part of complex communities?
  • What good are journey maps, when people don’t experience your service in a linear way? 
  • What good are service blueprints if they are too big to start?
  • Why make grand claims about ‘transformation’, without thinking about who will be transformed?
  • Why make recommendations, without clear pathways or tools to put them to use?

Over the course of this year, we’ve been evolving and changing our work, because to create a world that is more just, more equal, and more sustainable, we need to think about design in a bigger way.

We think these questions are a reasonable start.

 — Reuben & the PG Team

Read the rest of Issue #48 here.

November 26, 2019

PG #47: Data And Consent – Are Designers Dropping The Ball?

An aboriginal artwork depicting the wet season using a range of blues, aqua and white paints arranged in circular patterns
Image: Wet Season, Renee Clifton

Good design is often about asking the right questions. So when you hear ‘human-centred design’, you need to get more specific: Which humans? What human action, or relationship, or outcome?

I’ve been thinking about this a bit lately because of some work that Paper Giant is involved in around the Consumer Data Right (CDR) legislation in Australia. The goal of the CDR is laudable – to give consumers of banking and energy services the right of consent over who has access to their data (such as their account information and smart meter data). The aim of legislation like the CDR is to make it possible for individuals to know who has access to information about them, to know what those people or organisations are using it for, and, crucially, to revoke that access.

In this example of human-centred design, we can ask:

  • Which humans? Consumers of banking products (i.e. just about everyone).
  • What action? Informed consent over who has access to my data, and how they use it.

Which leads us to a reframe: ‘What if we designed explicitly for consent?’

Real consent is informed consent – meaning it is not enough for people to click ‘Yes’. They have to understand what they are saying yes to.

Have you ever actually read the terms and conditions on a website or digital service you’ve signed up to use? What exactly are you agreeing to when you tick the ‘I agree’ box?

Informed consent is wilfully ignored by most software and service providers, because it benefits them to ignore it. Websites are designed to make it actively harder to know what you’re agreeing to. Often, the person ‘consenting’ is in the less powerful position. They may not actually have a choice at all, if they rely on the service, and it’s a case of ‘consent or be denied access’. It’s a classic case of ‘corporate-centred design’.

Fitness app Strava is an interesting example of a ‘consent first’ interface design: treating privacy like a nutrition label. Closer to home, Paper Giant recently had to think carefully about research consent processes while working with people with cognitive disability. We produced consent forms in Easy English and explained them in person to ensure that participants understood what we were asking of them.

I’ve only really talked about consent over data usage here, but you can quickly see how a simple reframe – ‘what if we designed explicitly for consent?’ – opens up new possibilities in how we think about design’s role in giving people the power to make decisions about their own lives and communities.

 — Reuben Stanton, Managing Director

Read the rest of Issue #47 here.

November 18, 2019

PG #46: Who is allowed to be controversial?

Painting of a tree in the countryside with shrubbery around it and a blue sky behind
Image: Shade Tree, Sue McCutcheon

What’s your relationship to sovereignty and what’s your understanding of power?

These are the two hardest questions I am pondering as a facilitator at the moment. They are also the two questions that have most influenced my work since I first heard them.

To answer them, I’m currently playing with six questions.

My starting point is considering Who am I? and Where am I from? From there, I can consider How am I being in relation to... to humans, to non-humans, to history and place, to the topic I’m currently contending with, and so on.

I say I am ‘considering’ these questions, rather than ‘knowing’ because my answer – to who I am, where I’m from, how I am – is part fact and part contextual. 

How I answer the first three questions depends on three further questions: Where am I? (in time and place), Who am I working with? (human and non-human beings) and What are we about to do? (for and with whom).

As you read this, imagine I am sitting in front of you asking you these six questions. How would you answer me as a colleague? as a client? as someone from the community you most identify with?

When considering my relationship to sovereignty, here is my current and incomplete answer: I am a first generation migrant. I was born in Kenya and raised on the land of the Wadi Wadi and Gadigal people. I live in Naarm and work as a facilitator on the land of the Wurundjeri and Boon Wurrung people of the Kulin Nation. This always was and always will be Aboriginal land.

— Lina Patel, Facilitator and Collaboration Designer
     Part of the team bringing you the Ethical Practice stream at SDNOW4

Read the rest of issue #46 here.

October 29, 2019

PG #45: We Have A Responsibility For What We Make And How It Affects The World

An illustration of a pile of papers that are surrounded by a cup of coffee and a pen to the left and a cup of tea and another pen on the right of the paper.
Illustration by Hope Lumsden-Barry, Communications Designer

Artificial Intelligence (AI) and Machine Learning (ML) seem to be the latest in quick-fix silver-bullet technology – with businesses rushing to apply them to everything from online shopping and self-driving cars to medical diagnosis and criminal justice and sentencing.

Because AI and ML contain the words ‘intelligence’ and ‘learning’, it’s natural to attribute a science-fiction level of agency to this technology, but at its core, each AI is still a computer program, and its intelligence is strictly limited to what it has been programmed to do – run through a set of instructions (an algorithm) when it encounters a specific situation.

These algorithms are already being used to make decisions that have real impact on people’s lives. Dan’s link below speaks to one of the newer algorithms in use today (emotion recognition), but there are many others – and many of them have flaws whose negative human consequences are either possible or already documented:

How would you like to be ‘diagnosed’ by medical technology flawed in a way that favours white patients over black patients – even when the white patients are less sick?

Or get a ‘job interview’ from a face-scanning algorithm that decides whether your “facial expressions, eye-movements, body movements, details of clothes, and nuances of voice” are right for the job?

Or be wrongly identified by a “permanent line-up” law-enforcement algorithm capable of scanning the biometric data of almost every citizen in Australia?

Or – and as a father with a son about to enter primary school, this is the most terrifying one for me – have your child’s emails, web history and social media use intensively surveilled by an AI that doesn’t understand teen sarcasm and has the authority to notify police?

If we step back from the AI angle for a moment, these are all examples of technological solutions to social problems. Which would be bad enough. But these companies are worse than just misguided – they barely even seem to care that their solutions don’t actually fix the problems. They are just exploiting problems for profit, using technology that is already known to have serious flaws and ethical concerns.

For example: companies that push school surveillance technology make the argument that “the technology is part of educating today’s students in how to be good ‘digital citizens’, and that monitoring in school helps train students for constant surveillance after they graduate.” Now, this might be true, as far as it goes. But statements like this just point out that our current workforce trajectory is one of privacy intrusion and corporate surveillance, something we should be fighting against. 

Now – I’m not saying that AI is always or will always be bad, or that ML can never be used for anything positive. All I’m suggesting is that relying on companies that sell AI to fix things for us is a risk – one we need to be aware of, take an ethical stance on, and approach with caution and care.

Conversations around AI are extremely useful and enlightening, because they bring to light societal and political problems that need social solutions, not technological ones. Whenever you see AI put forward as a ‘solution’, ask yourself: What is the real problem that’s been identified here, and how might we fix that?

— Reuben Stanton & the PG Team

For this week’s newsletter, I stole most of my links from @hypervisible – please follow them on Twitter if that’s your thing.

Read the rest of Issue #45 here.

October 15, 2019

PG #44: How Do Good Companies Turn Bad?

An aboriginal artwork depicting bush medicine leaves using a range of blues, greens and whites
Image: Bush Medicine Leaves Dreaming by Louise Numina

Iain’s contribution below opens with a famous quote from Paul Virilio – “When you invent the ship, you also invent the shipwreck.” 

The second part of that quote is “when you invent the plane you also invent the plane crash”, which I’m calling out here because I just read this incredible (long) article about the Boeing 737 MAX disaster, in which 346 passengers and crew were killed in two separate incidents. 

The reason for this disaster? Managerial, design, and engineering decisions, within a system of constraints and a drive for profit above all else, that meant that in certain circumstances, the plane would automatically dive to the ground. 

The Virilio quote is often read to mean that ‘any technology comes with unintended negative consequences’, but when thinking about things like what happened with Boeing, I like to read it as ‘decisions made about technology can have unintended consequences’, or, more broadly, ‘design decisions have consequences’.

Now, saying ‘decisions have consequences’ may come across as a bit glib, but in actual fact, making decisions that lead to good consequences isn’t always easy.

The information we base our decisions on and the systemic factors at play can make it really hard to make a good decision, or even just avoid a disastrous one. The recent Boeing example is sadly not a unique case – when I teach design research I regularly use the examples of both the Space Shuttle Challenger and Space Shuttle Columbia, and the failures of data analysis, communication design, and management in those instances that led directly to their destruction. (The information designer Edward Tufte has written extensively about this topic.)

Making good decisions is of course possible. None of the disasters I’ve mentioned were deliberate, so much as they were avoidable. What’s really positive is that we can learn from the mistakes of others, so that we don’t make disastrous mistakes ourselves – there is a huge wealth of methods available to both create success and avoid failure. Use the tools and methods at your disposal, and make your decisions count.

 — Reuben Stanton & the PG Team

Read the rest of Issue #44 here.

October 1, 2019

PG #43: The Attention Economy

Painted city landscape depicting a bridge over a river in front of a number of sky scrapers in the distance
Image: My River City Sky by Laing Rahner

Did you notice that the distribution model for media has changed A Whole Lot Very Quickly? 

Streaming music and on-demand video have changed how we think of such things as ‘an album’ and ‘not watching all six episodes right before bed’. The latest disruptions include Netflix-like subscription models for content that used to be single-serve, like video games and books. A personalised, endless smorgasbord of ways to spend our attention.

But our attention is not endless. And that makes corporate competition for it cutthroat. The CEO of Netflix once said that their biggest competitor is sleep. We know what a lack of sleep does to our own health, but at a societal level, it’s less an individual hardship and more a public health crisis. And this is all less about sleep than it is about informed consent.

In complex systems, whether they’re economic or political, it’s interesting to think about how this works for the person designing the system. Why is the policy maker pulling that specific policy lever to get that outcome? What economic incentives push a creator to make that specific thing instead of another?

So, I think it’s worth asking the question: what happens when platforms pay content creators based on how well they monetise attention? 

This is a system that incentivises long-term stickiness and attention-grabbing mechanics. Algorithms reveal gaps in preferences, and providers rush to fill them with exclusives. Anyone who’s opened Instagram recently has a sense of what this feels like.

Product design often aims to reduce a user’s ‘friction’. This might help a user make decisions (the next episode is playing in five seconds) but when we play that decision out to the level of the system (people are sleeping an hour less) we realise that a little friction can be a good thing.

This isn’t a call to quit Insta, cancel your Netflix subscription, and install blackout blinds. But, as people who influence the design of systems, we should be striving to ensure these systems give people the agency to make decisions that are in their best interests.

That means not misleading people in ways that are exploitative of human psychology and behaviour. It means being upfront about what people are getting out of a system and not hiding information from them where it’s necessary to make a decision.

Thinking about the bigger picture is non-negotiable, whether we’re building entertainment platforms or urban infrastructure policies.

— Dan Woods, General Manager, Canberra

Read the rest of Issue #43 here.

September 17, 2019

PG #42: The Power Of Personal Stories

An aboriginal artwork depicting the Bush Plum using multiple little purple, gold and white brush strokes
Image: Bush Plum by Polly Ngale

The other day, I came across this interesting thought experiment: what if, instead of using ‘economic growth’ as our measure of progress, we measured the number of girls who ride a bike to school?

“If more and more girls ride a bike to school, it means it’s safer and safer to cycle in traffic.

If more and more girls ride bikes to school, it means that bikes are increasingly accepted as a means of transport [...]

If more and more girls cycle to school, it means that more and more girls are actually going to school…”

You get the idea...*

The many risks of using only economic growth as a proxy measure for progress are already well understood – a rapidly warming world being the clearest negative outcome. GDP is still widely used as if it’s the only way to know if a country is doing well, even though we know that, to transition to a world that sustains us, we need to rapidly come up with new ways of measuring and evaluating progress.

Of course, any one measure of change is dangerous, but I like the ‘girls on bikes’ thought experiment because it shifts our mindset from “what do we usually measure and evaluate?” to “what might we measure and evaluate, and what might that tell us?” 

Measurement and evaluation have been on my mind lately for several reasons. Yesterday, I spoke on a panel about digital disruption at the Australian Evaluation Society conference. Due to the magic of the internet, I wrote this piece before I spoke on the panel, so actually I have no idea how the discussion went; perhaps someone that was there can let me know?

Apart from being at the conference, evaluation has been on my mind because we’re having an ongoing discussion at Paper Giant about the best way to measure the impact of design, as well as how to effectively baseline and measure design capability.  

At Paper Giant we are frequently trying to help organisations make better decisions – and by ‘better’, we mean decisions that lead to greater justice, equality and sustainability. One thing that helps with decision making: knowing the impact of your decisions! 

In this issue we have examples of the power of listening and reporting personal stories, and the importance of paying careful attention to personal interactions rather than just metrics that can be easily captured in digital interactions.

What do you measure and why? Is it telling you what you think it’s telling you?

— Reuben Stanton & the PG Team

*Thanks Jussi for introducing me to this idea.

Read the rest of Issue #42 here.

September 3, 2019

PG #41: Design Education’s Big Gap

A large number of brown and blue jars hung in a circular pattern from the steel beams supporting a windowed ceiling.
Image by Stephen Bennett

A number of us here at Paper Giant used to be educators – lecturers or tutors at universities, mostly – and so it’s no surprise that teaching design (or capability building, as it is regularly labelled) has become a core part of our work.

This year, we’ve delivered both public and private training around co-design, design research and service design. We’ve helped multiple government departments establish design education programs, and created resources, guides, toolkits and other support materials that clients use to deliver better projects. On top of that, almost all of our recent project work has had structured elements of mentorship and coaching within it, through which we work alongside client teams to deliver human-centred outcomes. 

Most of this education work is about helping people use design. We’re often introducing design to people for the first time, so that they can work with problems in new ways. We also try to help design teams do better, by delivering training around specialties like qualitative research, research ethics, or storytelling and organisational engagement.  

What we’ve learned from this work is that design is being widely embraced, and it’s being asked to solve increasingly complex problems. We’ve also learned that practitioners don’t necessarily feel like they have the skills and structures to do what they see as necessary. Imposter syndrome is common, and it appears that, once you’ve learned enough to get your first job, education is largely experience based and self-guided. 

A two-day course can only teach so much, and three-year degrees are impractical for many professionals. Which leaves the third option: reflection on experience. We try to bring that to each of our projects, but we understand how difficult it can be to protect the time and space to do it properly. 

Like all habits, learning takes work to sustain. From our perspective, what is the biggest lesson? What makes the biggest difference? The commitment to keep trying.

— Chris Marmo & the PG Team

Read the rest of Issue #41 here.

August 20, 2019

PG #40: Design As A ‘Model Of Care’

One of the benefits of working in design consulting is that you are constantly exposed to new ways of working, thinking and considering. For example, about six months ago I was exposed to the concept of ‘models of care’ in healthcare systems.

A model of care is a set of steps, practice guidelines, and instructions on how to provide the best possible care for a patient. This is about more than how to practise medicine, it’s about taking into account individual circumstances, medical best practice, legislation, services offered across the system, and using all this to take a holistic approach to ‘care’ for someone. A good model of care (for example, the Cancer Council’s care pathways) asks ‘how can we deliver the best possible outcomes for each patient?’

This is even more complex for people at the end of their lives, where the ‘best possible care’ can mean many different things. Maintaining dignity, making sure patient choices are respected, and ensuring equality of access are all challenges here. The new ‘assisted dying’ laws in Victoria are an example of legislation that has been put into effect to address some of the care needs of people with terminal illness that were not previously being met by our health care system in Victoria.

We were incredibly privileged to have played a small part in the design of this new system.

Something unfortunate about how much of society delivers services is that, even with efforts towards fair access and equity, many of our systems still favour the advantaged. Just staying with health care in Australia – the design of the NDIS, despite its rhetoric around choice, has unfairly advantaged the already privileged.

Maybe approaching the work of design, and thinking about the models we create as ‘models of care’ can help us here?

How do we care for people, society, and the planet, and what models are we following? How can the models and systems we design deliver the best possible outcomes, so that all people can be treated with dignity, equality and respect?

— Reuben Stanton & the PG Team

Read the rest of Issue #40 here.


If you're interested in working with us, just get in touch.

Email: hello@papergiant.net
Call: +61 (03) 9112 0514


Melbourne (HQ)

Level 3
2 Russell Street
VIC, 3000


Canberra

Level 3
17-21 University Avenue
ACT, 2600

Paper Giant acknowledges the Wurundjeri and Boonwurrung people of the Kulin nation, and the Ngunnawal people as the Traditional Owners of the lands on which our offices are located, and the Traditional Owners of Country on which we meet and work throughout Australia. We recognise that sovereignty over the land has never been ceded, and pay our respects to Elders past, present and emerging.