Dear Design Student: Drop the other 90%

There are 21.3 million refugees around the world (UNHCR, 2015); meanwhile, 22,000 children die every day because of poverty (United Nations Children’s Fund, 2015). But don’t you worry: there are thousands of young, over-enthusiastic creatives who believe that their exquisite design thinking can solve many of these problems. ‘Making a REAL difference’ has never been more ‘sexy’, and it seems that academic institutions let their students get hooked on a shallow definition of design for development.

Let’s start from the beginning: why do current design students want to make a meaningful impact in the Third World? The answer is easy: it is not just the industry we work in, it is the entire generation. 7 out of 10 millennials consider themselves social activists (TBWA/Worldwide agency et al., 2013), which certainly contradicts their stereotypical image as narcissistic, ego-centred ‘unique snowflakes’. However, looking at the socio-political background of their childhood, one can clearly see where they got their inspiration from. They were the first generation to learn about the Millennium Development Goals at school, and they are the ones who came across news of the Rwandan genocide while flicking through TV channels. Thanks to the internet, information about any place in the world has always been within their reach. Having followed the mainstream media, which present Africa as a nest of poverty and problems, it is no wonder that 84% of millennials would travel abroad to participate in volunteer activities (Marriott Rewards Credit Card Survey, 2015), and that Africa is the second most popular continent in which to follow these ambitions (Salvesen, 2014).

Considering current design education’s focus on ‘making a positive change’, the idea of designing for the developing world is just too attractive to resist. The problem arises when students are taught only about the positive potential of projects and remain unaware of the countless scenarios in which the solution turns into a wasted investment. I remember one of my first lectures on my Product Design course, when we were shown the One Laptop per Child (OLPC) project. Hundreds of thousands of laptops were distributed to children in various developing countries to enhance their learning opportunities in both home and school environments. The project was presented as a star example of using design to create a durable, low-cost educational tool. Later I researched the project myself and found out that its implementation failed on many levels, mainly due to a lack of basic understanding of the socio-cultural context. “If you’re a $2-a-day family, are you really going to let your kid take the most expensive thing in the household to school every day?” commented Kevin Starr, founder of the Mulago Foundation (2011). In the end, OLPC did not have any impact on test scores in reading and maths in at least four participating countries (Israel, Peru, Romania and Nepal) (Melo et al., 2013). How come we were never taught this side of the story?

‘Not making a difference’ is only one problem. Design students are not warned that a badly applied humanitarian aid project can bring far more harm than doing nothing at all. In the 1950s and ’60s the United Nations dug half a million wells in Bangladesh without testing them, and it turned out that 2 out of 5 were contaminated with arsenic, which led to one of the largest mass poisonings in human history. But the story doesn’t end here. The UN’s solution was to mark the safe wells with green paint and the poisonous ones with red paint. Villagers ended up believing that because the red wells were tainted, the girls living nearby were tainted too. Many young women became unmarriageable and were therefore sold by their families into prostitution (Zolli, 2010). It is a drastic example of how humanitarian aid can take a completely unexpected direction. It is difficult to face the possible negative effects of a project at its very beginning, but when it comes to design for development it is essential that students learn to think critically about their own ideas, so that mistakes like the Bangladesh wells are not repeated.

No short-term research about the developing world will give enough insight to bring social innovation to a community. From my point of view, educational authorities who let students believe they can make a difference are highly hypocritical: they are obsessed with the importance of user insight and empathy tools, yet they let students pursue projects about experiences as far as possible from their everyday lives. I recently came across a project by students of the Royal College of Art in London who designed a coat for refugees based on one talk with an aid worker from Doctors Without Borders (on top of that, during a one-week design ‘hackathon’). But a single chat is not enough, and young designers must acknowledge that international development specialists shouldn’t be their interviewees: they should be their collaborators. Per Heggenes, CEO of the IKEA Foundation, said that the first step in designing its portable refugee shelter was contacting the United Nations High Commissioner for Refugees (UNHCR) “because they know a lot about what works and what doesn’t, and when you develop a product like this you want to develop it with the people who will actually use it” (2016).

Students leave university convinced that their limited expertise in international development, combined with design skills, is enough to address socio-political issues of the highest complexity. The problem arises when they unintentionally transfer this adolescent, ignorant attitude into their future professional practice. According to Panthea Lee (co-founder of the social innovation firm Reboot and a UNICEF advisor), the problem already exists within current creative consultancies attempting to design for social innovation and developing communities: “(…) a lot of design firms now going to the public sector and to NGOs saying, ‘We’re designers, we’re here to help you!’ And they’re like, ‘What are you talking about? You don’t speak our language, you don’t know development theory, you don’t know our approach.’” (2011)

It is now time to take action and change the way we teach design for development, so that future design leaders don’t make the same mistakes.

So, dear design educators, YOU are in the leading position to make the young generation of creatives more responsible global thinkers. Encourage them to collaborate on long-term projects with experts in the social sciences and with humanitarian aid specialists. Push your students to seek information outside their ‘knowledge comfort zone’. Show them UN online databases and the Multiple Indicator Cluster Surveys run by UNICEF (annual reports presenting statistical data on living conditions by country and region) and make them read every single page about the community they design for! Solid secondary research is the irreplaceable basis of any design project, and it must be done with exquisite attention to detail when we approach scenarios from the developing world. Remind them that their university-level projects are there to expand their knowledge, not to make them responsible for ending poverty on other continents.

But most importantly, ask your students to ‘humble-up’. Because, as Panthea Lee explains, ‘the world’s most intractable problems are deeply rooted in massive systems, while design is a discipline focused on the edges’. (2013)

Written by Anna Palgan (Email)


  • Bye, R. (2014) AfricaBike – Enabling education in Africa [photograph]. Available at: (Accessed: 12 November 2016)
  • Who Wants to be a Volunteer? (2014) South Africa: SAIH Norway; Kinge, K., Edland-Gryt, S. and Skaar, M.K. (Accessed: 12 November 2016)
  • UNHCR (2015) Figures at a Glance [infographic]. Available at: (Accessed: 2 November 2016)
  • United Nations Children’s Fund (2015) ‘Levels & Trends in Child Mortality’, New York. Available at: (Accessed: 2 November 2016)
  • Amanda (2015) ‘How Millennials Travel Differently; written on behalf of Marriott Rewards Credit Card from Chase’, A Dangerous Business Travel Blog, 19 June 2015. Available at: (Accessed: 10 November 2016)
  • Salvesen, A. (2014) ‘2012 Annual Volunteer Report Evaluation’. Available at: (Accessed: 10 November 2016)
  • TBWA/Worldwide, Take Part (2013) The Future of Social Activism [infographic]. Available at: (Accessed: 2 November 2016)
  • Starr, K. (2011) Lasting Impact, PopTech Conference 2011 at Camden, Maine [podcast]. Available at: (Accessed: 5 November 2016)
  • Melo, G., Machado, A., Alfonso, M. and Viera, M. (2013) ‘Profundizando en los efectos del Plan Ceibal’ [Exploring the effects of Plan Ceibal in depth], Comentarios finales, 5, p. 24. Available at: (Accessed: 2 November 2016)
  • Zolli, A. (2010) ‘Failure and its upside—a report from the 2010 PopTech conference’, interviewed by Marcia Stepanek for Stanford Social Innovation Review. Available at: (Accessed: 5 November 2016)
  • Morby, A. (2016) ‘RCA students design wearable dwelling for Syrian refugees’. Available at: (Accessed: 10 November 2016)
  • Lee, P. (2011) ‘A Better World By Design: Spotlight on Panthea Lee of Reboot’, interviewed by Dave Seliger for Core77. Available at: (Accessed: 5 November 2016)
  • Lee, P. (2013) ‘Why “Design For Development” Is Failing On Its Promise’. Available at: (Accessed: 9 November 2016)


The hacker’s design process

Can the hardware hacker’s creative methods bring new insight to designers?
Generally, there is a negative stigma attached to the term ‘hacking’, because of black hat hackers breaking into private or government servers. However, there are many other forms of hacking, such as ‘Ikea hacks’, the manipulation of Ikea products to adapt their functionality, and ‘life hacks’, tricks that reduce the everyday frustrations of life. (Dictionary, 2016) Nevertheless, hacking remains a difficult concept to define, as it can refer to many different practices.
One increasingly common trend is ‘hardware hacking’. This refers to any method of modifying hardware, either through its electronics or through its behaviour. The physical modification of a device is generally straightforward, i.e. disassembling or cutting into the device. However, hacking the electronics or changing a product’s behaviour or primary function can become complex for ethical and legal reasons, including the violation of a company’s intellectual property rights.
“Hardware hacking – modifying a product to do something it was never intended to do by its original designer” (Grand, 2006)
In late 2015, Amazon introduced a more convenient way to purchase products from its store: the Amazon Dash Button. The button is designed to let users quickly repeat orders of products they regularly purchase. (Burgess, 2016) Pressing the button wakes the device, which connects to Wi-Fi, orders the product from the Amazon store and then turns off. The process is very simple and convenient for Amazon customers. But shortly after the device was introduced, other uses for it were found. The device became instantly appealing to hardware hackers, as this tiny adhesive physical trigger could easily be altered to change its function.
There are many ideas for changing the functionality of the Dash, from controlling power outlets in homes to data tracking. However, every new idea for the Dash is built from the same fundamental code, irrespective of the function. As a result of the ‘Amazon Dash hacking’, awareness of the hardware hacking trend increased worldwide, among veteran programmers and casual hobbyists alike.
Hardware hackers begin by analysing existing products to find hardware exploitations. For example, one hardware hacker examined baby-tracker apps and found that they generally served a single purpose. However, as his baby’s needs kept changing, he hacked the Dash to track his baby’s data and discover patterns that would not normally be noticed. “I want a simple button that I can stick to the wall and push to record poops today but wake ups tomorrow.” (Benson, 2016) When creating the Dash, its designers concentrated only on a solution for reordering products and did not consider any other use. Analysis by hardware hackers can therefore lead to the discovery of creative solutions to problems that the original designers did not consider or did not know existed.
As someone with knowledge of programming and an interest in altering hardware, I purchased a Dash when it became available in the UK, specifically to modify its primary function. After discovering and researching the Philips light ‘hack’, I repurposed my Dash to turn my computer on remotely. (Dudes, 2016) Although this modification was not as complex or innovative as some others, my Dash served a function that was personal to me. I achieved the alteration using a Python script and changes to the computer’s BIOS. After completing the adjustments, I realised that other ‘hardware hack’ modifications could benefit different areas of my life. I considered purchasing two devices to produce stop/start or execute/cancel functionality, and I considered other functions the Dash could perform to help the community, e.g. a low-cost button controlling multiple automated functions, such as opening blinds and turning on lights, for the physically impaired.
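The remote power-on half of this kind of hack is typically done with Wake-on-LAN. The sketch below shows a minimal version of the idea; the MAC address is a hypothetical placeholder, and it assumes WoL is enabled in the target machine’s BIOS (detecting the Dash press itself is usually handled separately, e.g. by sniffing for the button’s ARP probe on the local network).

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN 'magic packet': 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast; a WoL-capable network
    card listening for its own MAC will power the machine on."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# In a Dash-style hack, wake("aa:bb:cc:dd:ee:ff") would be called
# whenever the script detects the button press on the network.
```

This is a sketch of the general technique rather than the author’s actual script, which is not published in the article.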
I find the concept of changing a product to solve a different problem interesting. When a product is disassembled and rebuilt, there is great potential for better design. Although this may not be regarded as a typical design process, it could benefit designers, as it can provide unique solutions. The real question, however, is whether designers should embrace the thought processes of hardware hackers in their design cycles.
There are various benefits to a hacker’s creative process. It encourages a free-spirited and unconventional form of thinking, and the repeated analysis of product exploits often reveals unexpected solutions. (Grand, 2006) There are many examples of hackers using this alternative thinking to find creative solutions to problems. For example, hackers created hearing-protection earmuffs for jackhammer operators that played audiobooks in noisy environments; this solution offered better noise reduction than commercial noise-cancelling headphones and was significantly cheaper. (Hartmann, Doorley and Klemmer, 2008) However, there are often negatives to the hacker’s methods, including ethical, legal and patent issues. Furthermore, the process only works when coupled with other creative processes, such as the Double Diamond, as the hacker’s model focuses primarily on discovery, insight, opportunity and ideation. (Design Council, 2016) Nevertheless, I believe hardware hackers can play an important part in the design process, as they offer an insight into products that traditional product and software designers may not discover.
As an industrial designer, I have knowledge of various creative processes, from IDEO’s HCD to the Design Council’s Double Diamond. Typically, these processes use a combination of divergent and convergent thinking. During the divergent phase, I feel these models do not focus sufficiently on the existing product, but rather on the investigation to find a possible solution, unlike hardware hackers, whose core principle is investigation of the existing product. The hardware hacker’s principle of finding and analysing product exploits often leads to the development of new functions and alternative solutions to problems. I feel designers often work from the premise that a new product must be designed to solve one particular problem. By comparison, hardware hackers frequently find their solutions through the alteration of an existing product.
As increasing numbers of product designers become skilled in the programming elements of design, much can be gained from this method of innovation and iterative thinking: altering products in order to discover other functions and uses. I believe designers could benefit from the different creative process that hardware hacking has to offer, coupled with a more structured approach.
“The designer should try to break the security mechanisms of those products, then fix them and try to break them again. Time should be scheduled for this iterative process during the design cycle.” (Grand, 2006)
Written by Robert Gittus (LinkedIn/Email/Medium)


Man-Made: The Artificial Womb and the Future of Reproduction.


From underwater breathing, to see-through skin, where should design stop in its quest for perfection?

In 1978, life begins as another child is born in Oldham General Hospital. Baby Louise is the start of something new: her birth as the world’s first IVF baby [1] paves the way for scientific influence in reproduction.

Skipping forward to 2016, we see significant advances both in fertility medication and in stem cell research. Perhaps the most influential changes, however, are due to happen in a quiet little ward called the NICU (neonatal intensive care unit). Here the threshold of viability for premature births has dropped from 27 weeks to as low as 22 weeks [2]. Recent advances have slowed, however, because amniotic fluid and the umbilical cord present a significantly gentler environment for growth than the air and peristalsis pumps of a hospital room.

The next logical step in neonatal (early birth) care is the design of an artificial womb for protection until a suitable delivery date. This may seem like a sci-fi concept, but the technology isn’t so far from reality. The first ex vivo (outside the womb) human embryo test was run in 1989 but ceased due to ethical concerns [3]. Today we see advancement from both ends of the process, with Cambridge University [4] culturing human embryos until the legal limit of 14 days (established in 1984 [5]) and the efficacy of total liquid ventilation proven in lambs [6]. While its use in neonatal care is unlikely to face objection, it could be seen to pave the way for ectogenesis: “The development of embryos in artificial conditions outside the uterus” [7].

What will the future look like? Will ectogenesis paint the bleak picture once described by J.B.S. Haldane in his seminal 1923 essay “Daedalus, or, Science and the Future” [8]? Haldane envisaged the use of ectogenesis in sterilising generations, accelerating evolution by DNA selection. Some believe we are already on the road to this reality, as Sally Phillips argues in a recent BBC documentary [9] considering the implications of genetic screening on termination rates.

Genetic screening was introduced in the 1960s to diagnose phenylketonuria as a preventative measure [10]. Today screening is available for over 2,000 conditions, primarily in the form of predictive, carrier, prenatal and pre-implantation screening [11]. All of these determine the probability of future genetic disorders. The eradication of diseases such as Huntington’s might seem like a no-brainer to some, but it is the ethical “grey line” this creates that is leading to further controversy over the elimination of non-deteriorating disorders like Down’s Syndrome. Could you be next on nature’s un-natural chopping block? Many of us have had depression or anxiety during our lifetimes, and these could well be next to face the knife.

Screening of a foetus first occurs at around 11–13 weeks [12], by which time many mothers will have become emotionally attached to the idea of their baby. The reason genetic screening is so closely linked to ectogenesis is that, as with IVF, screening can occur before conception even begins. Thus, rather than terminating a foetus which has a chance of a disease, we will be selecting in advance whether or not our child will be disabled.

But what do we actually want from the future? Eugenics, ectogenesis and the age of the “designer baby” are all possibilities. To reduce overpopulation and unemployment rates, governments could control breeding, creating a workforce suited to specific jobs. Maybe we will get rid of the “family” altogether. But this is only one potential, and before we eliminate the concept of ectogenesis for good, let’s look at the possible benefits. Whilst initial setup would be costly, the long-term savings are huge. The average cost of birth in the US is $8,802 [13], rising to $202,700 for premature birth [14]. Add to this the fact that 49% of US pregnancies are unintentional, costing $5 billion annually [15], and ectogenesis starts to look viable. But it’s not just the savings that make ectogenesis so promising. Pregnancy is a dangerous occupation, as Anna Smajdor acknowledges in her article “In Defence of Ectogenesis” [16]. Smajdor states that pregnancy and childbirth could be considered a ‘medical problem’ resulting in pain and mortality; pregnancy is in fact the sixth leading cause of death in US women between 20 and 34 [17]. Could ectogenesis provide the cure?

Some argue that pregnancy plays a vital role in the bonding between child and mother [18], though with surrogacy both father and mother may bond with their child even though neither carries the baby. Genderless pregnancy is perhaps the most interesting benefit, both in combating sexist prejudices surrounding neonatal care and in fostering gender equality. Not to mention that looking in on your baby during its development, watching it form and being able to interact with it could offer a far greater bonding experience, one already being explored by designer Melody Shiue in the PreVue, an e-textile showing you the baby beneath your skin [19]. The safety of the baby may also be improved: in the UK it is estimated that 1 in 6 women who are aware they are pregnant miscarry [20]; with vital signs monitored throughout development, perhaps this risk could be reduced.

A benign future? Even assuming we choose to regulate screening selection and implement ectogenesis in a normative fashion, there is still great potential for development. Perhaps children could begin to learn and kick-start their development before they are born, as Annie Murphy Paul considers in her TED talk “What we learn before we’re born”.

All this brings into question the reasoning behind a 40-week pregnancy. Originally linked to the baby’s head size, the limit is now believed to be the mother’s maximum metabolic rate (2–2.5 times the average), a burden that only increases as the foetus develops [21]. This means we are born altricial (immobile and requiring care), unlike many other animal species. It is foreseeable that with an artificial womb we could develop to the point of adolescence in a relatively short period, being born precocial (mobile and self-sufficient).

This change to the nature of human development could alter us as a species, re-defining the human condition. So, where should design stop? Will you deny a baby of 22 weeks a water womb, because it enables ectogenesis? I think the benefits of continued development far outweigh the risks of corruption.

Written by Milo Deane  (LinkedIn/Email)

[1] Eley, A. (2015) How has IVF developed since the first ‘test-tube baby’? Available at: (Accessed: 13 November 2016).

[2] Pignotti, M. (2009) ‘The definition of human viability: A historical perspective’, Acta Paediatrica, 0803(5253), pp. 2–3. doi: 10.1111/j.1651-2227.2009.01524.x.

[3] Bulletti, C., Palagiano, A., Pace, C., Cerni, A., Borini, A. and de Ziegler, D. (2011) ‘The artificial womb’, Annals of the New York Academy of Sciences, 1221(1), pp. 124–128. doi: 10.1111/j.1749-6632.2011.05999.x.

[4] Deglincerti, A., Croft, G., Pietila, L., Zernicka-Goetz, M., Siggia, E. and Brivanlou, A. (2016) ‘Self-organization of the in vitro attached human embryo’, Nature, 533(7602), pp. 251–4.

[5] Office, T.C. and Lords, H. of (2002) House of lords – stem cell research – report. Available at: (Accessed: 13 November 2016).

[6] Sage et al. (2016) ‘Complete Weaning from Ventilatory Support After Whole Therapeutic Lung Lavage Using Total Liquid Ventilation in Severe Meconium Aspiration Syndrome’, American Thoracic Society Journals, News from the NICU and PICU, pp. 1–2.

[7] Ectogenesis, (2016), In: Oxford English Dictionary, 1st ed.. Available at: (Accessed: 13 November 2016).

[8] Haldane, J.B.S. (1923) DAEDALUS or Science & the Future. Edited by Kegan Paul, Tench, and Trubner. 01st edn. Cambridge University: Cambridge University Press.

[9] A World Without Down’s Syndrome?, (2016), Documentary, BBC Studios: Sally Phillips.


[11] NIH – National Institutes of Health, U.D. of H. and H.S. (2010) NIH fact sheets – genetic testing: How it is used for healthcare. Available at: (Accessed: 13 November 2016).

[12] Association, A.P. (2012) First trimester screen – American pregnancy association. Available at: (Accessed: 13 November 2016).

[13] Aleisha Fetters, K. (2015) What to expect: Hospital birth costs. Available at: (Accessed: 13 November 2016).

[14] Gilbert, W.M., Nesbitt, T.S. and Danielsen, B. (2003) ‘The cost of prematurity: Quantification by gestational age and birth weight’, Obstetrics & Gynecology, 102(3), pp. 488–492. doi: 10.1016/S0029-7844(03)00617-3.

[15] Trussell, J. (2007) ‘The cost of unintended pregnancy in the United States’, Contraception, 75(3), pp. 168–170. doi: 10.1016/j.contraception.2006.11.009.

[16] Smajdor, A. (2011) ‘In Defense of Ectogenesis’, Cambridge Quarterly of Healthcare Ethics, 21(01), pp. 90–103. doi: 10.1017/s0963180111000521.

[17] Heron, M. (2012) Deaths: Leading Causes for 2009. Available at: (Accessed: 13 November 2016).

[18] Luminare-Rosen (2000) Parenting begins before conception: A guide to preparing body, mind, and spirit: For you and your future child. Rochester, VT: Inner Traditions Bear & Company.

[19] Bonderud, D. (2016) PreVue pregnancy eTextile device lets mothers see their baby grow. Available at: (Accessed: 13 November 2016).

[20] NHS, C. (2015) Miscarriage. Available at: (Accessed: 13 November 2016).

[21] Pappas, S. (2012) Why Pregnancy Really Lasts 9 Months. Available at: (Accessed: 13 November 2016).

Dolan, M. (2010) Found objects become SciFi artificial womb sculpture – green diary – green revolution guide by Dr Prem. Available at: (Accessed: 13 November 2016).

Photo Edited by Milo Deane – 2016

How are digital technologies transforming in-store retail within the fashion and cosmetics industries?

Our lives have been inundated with digital technologies in various forms, providing us with vast amounts of content instantly available at our fingertips. More recently, within fashion and cosmetics retail, innovations using digital technology have started to pave the way for new methods of interacting with potential purchases, exploring a store and experiencing a particular brand. Burberry’s digital presence in 2014 helped its retail revenue grow by 14 per cent to reach £528m over that Christmas quarter (Brandwatch, 2014). This acceptance of new technologies has led the big players within the industry to adapt their businesses to accommodate the shift in consumer expectation.

Twenty years ago, having an online store was important to all retailers, and it put players like Amazon ahead of the game. This online presence gave them the platform they needed to become the retail giants they are today. For many, however, it was just another sales channel and not given enough consideration. In the past five years, mobile has driven a new consumer behaviour, a result of people always being connected. This has opened up a brand-new avenue for on-the-go and convenience sales. Paul Francis, Senior Director of Digital Platforms at Ralph Lauren, feels that “[Mobile phones] hold their attention and as a result, retailers need to use it as a primary channel for their own product discovery. It’s their new shop window” (2). Recent research by Google found that 82 per cent of smartphone users consult their devices whilst physically standing in a store deciding which product to purchase, with one in ten buying a different product than they had originally planned (Think with Google, 2015).

The pioneering fashion and cosmetics houses have their work cut out to become accustomed to this repositioning. Wrights GPX Plastics, a well-established retail design specialist that often supplies the cosmetics industry, is also adapting its fabrication techniques to conform with new in-store digital strategies. “Our designs may require the incorporation of access points to digital technology e.g. display screens and tablets into signage, wayfaring and ‘hubs’ etc,” said Marketing Manager Brett Sidaway. “Display needs to compete with surrounding technology: it needs to be as exciting, eye-catching and powerful as the surrounding technology. In short, we need to be aware of the ‘bigger picture’ that includes digital technology strategy in-store and across brands.” High street department stores such as Debenhams, Selfridges and John Lewis seem to be at the forefront of rolling out ‘accessible’ digital technology that is user-friendly and boosts brand awareness. According to Brett, however, there is still a long way to go. “The ‘personalisation’ of the in-store experience using digital technology seems to be the next area for expansion; the ability to link purchaser behaviour with in-store activity to create a truly personalised shopping experience seems to be distant for most shoppers.”

This aspect of personalisation seems to be at the forefront of making in-store digital retail a success. But according to YSL (Yves Saint Laurent), it goes further than personalisation through a smartphone. “We used to say that luxury is more than product, it’s service. But this is even beyond that – it’s personalised service,” explains Stephan Bezy, International General Manager for YSL Beauté. Towards the end of 2014, YSL announced a partnership with Google Glass allowing make-up artists to capture eye-level video of the style they were applying to a customer, as well as the technique used to achieve the finished look (Telegraph, 2015). “The video is a gift for the customer. It’s a very consumer-centric approach,” said Stephan. It is also hoped that this style of customer experience, enriching an otherwise traditional service, will attract a new wave of digital natives, especially younger women, to the brand.

These innovative ways of using digital technologies are also having a strong impact within the fashion sector. Select Burberry, Nordstrom and Guess stores are arming their staff members with iPads, allowing customers to mix and match available inventory, browse styles or even order made-to-measure suits (TNW, 2012). However, simply converting what was once hidden spreadsheet content into a visually appealing graphic won’t impress consumers for long, although it’s a positive beginning to a more personalised experience. Perhaps personal interaction between the store and the consumer is the concept that will really revolutionise this sector. Interacting with products online is as simple as sharing your thoughts and pictures via social media and blogs. This concept could be brought offline and into stores through the use of augmented reality coupled with free wifi. Brands can then give shoppers the ability to interact with the clothes, giving them access to as much information as they could have found online but with the physical presence of the item in front of them. C&A, an international fashion retail chain, has recently expanded this idea by offering digital in-store hangers that display the number of ‘likes’ the item has received on the store’s website, giving consumers and the store alike a totally new and meaningful insight into the product. Macy’s also experimented with a similar concept during the QR code craze, which they called ‘Backstage Pass’. Customers could scan an item in-store to gain access to engaging consumer-oriented video content from celeb-status designers such as Tommy Hilfiger and Michael Kors (Business Wire, 2011).

It is no myth that digital technologies have transformed this industry over the past few years, becoming the heart of up-and-coming campaigns and strategies. It is also clear that creating successful web platforms can drive sales both online and offline, where bloggers, designers and brands can share a compelling story, product or event through social media. In-store retail is certainly becoming a lot more experimental, with hundreds of ‘digital’ meets ‘physical’ ideas reinvigorating the fashion and cosmetics sectors; however, the perfect concoction is yet to revolutionise the industry.


Written by Stuart Scott (Email / LinkedIn)


Brandwatch, 2014, Luxury and Social Media are not mutually exclusive [Online] Available at: (Accessed on: Monday 15th February 2016)

Business Wire, 2011, Macy’s Shoppers Backstage Pass Learn Latest Must Haves [Online] Available at: en/Macy’s-Shoppers-Backstage-Pass-Learn-Latest-Must-Haves (Accessed on: Monday 22nd February 2016)

Telegraph, 2014, How technology is transforming cosmetics? [Online] Available at: (Accessed on: Tuesday 9th February 2016)

Think with Google, 2015, How micromoments are changing rules? [Online] Available at: (Accessed on: Monday 15th February 2016)

TNW, 2012, 6 Hot digital trends transforming the fashion industry [Online] Available at: (Accessed on: Monday 22nd February 2016)


Travel 18, 2015, YSL-x-google-glass [Online Image] Available at: (Accessed on: Monday 22nd February 2016)


Francis, Paul. 2015, Senior Director of Digital Platforms Ralph Lauren [Email Conversation]

Sidaway, Brett. 2016, Marketing Manager Wrights GPX Plastics Ltd [Email Conversation]


Miln, Paul. 2015, Regional Visual Merchandising Manager CHANEL UK [Face to Face Conversation]

Haigh, Nathan. 2016, Global Head of Visual Merchandising Buscemi [Face to Face Conversation]


The Lights Are Watching You. How Can Lighting Be Used in The New Digital Age of the Internet of Things?

There are cameras everywhere, data is beamed down to everyone through light, big brother can monitor your movements and the kitchen appliances are talking to light fixtures. No, this is not a modern Sci-Fi film, this is the reality of 2016. The emergence of the ‘Internet of Things’ (IoT) has quickly transformed the way we live our day to day lives.

The concept is simple: by integrating internet connectivity into everyday appliances, you allow them to talk to each other. Put this into practice, however, and complex and meaningful conversations start to develop. Telit’s concept videos show how the IoT can affect our daily lives (Telit, 2016). For example, when you wake in the morning your appliances can be triggered by your alarm clock, so that as you rise the heating is on, the shower has started and the coffee machine is preparing a coffee. Furthermore, as you drive to work the street lights beam traffic information to your satnav, ensuring you get to work on time.
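At its heart, the “alarm clock triggers the appliances” scenario is a publish-subscribe pattern: one device announces an event, and anything listening reacts. A minimal sketch in Python (the topic name and devices here are invented for illustration, not any vendor’s actual API):

```python
from collections import defaultdict


class EventBus:
    """Minimal pub-sub hub: the pattern behind appliances 'talking to each other'."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, **payload):
        # Deliver the event to every device listening on this topic.
        for handler in self._subscribers[topic]:
            handler(**payload)


bus = EventBus()
log = []
bus.subscribe("wake", lambda **event: log.append("heating on"))
bus.subscribe("wake", lambda **event: log.append("coffee brewing"))
bus.publish("wake", time="07:00")  # the alarm clock fires once, everything reacts
```

Real IoT deployments use the same shape over a network protocol such as MQTT, with a broker playing the role of the bus.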

The buzz created by the IoT has not been missed by the lighting industry, which has begun to look for ways it could benefit. To date the IoT has been dominated by the big tech companies. However, the LED revolution of recent years has meant that many lighting manufacturers have transformed into tech companies themselves. The industry has one major advantage: as Jon Couch of Gooee highlights, lighting has the largest number of end points in any building (Lux Review, 2015). In other words, lighting is installed throughout every building, providing the perfect infrastructure for the sensors the IoT needs. If each luminaire is fitted with a range of simple sensors, a highly intelligent network can be built; in fact, 10 million light fittings would gather more data than Twitter does daily (Lux Review, 2015).

The next question for the lighting industry’s claim on the IoT is how all this data can be transferred and put to use. At present, Ethernet cables can be run through the network, sending the data to a central hub, from where it can be forwarded over the internet to the relevant device. However, in 2011 Prof. Harald Haas introduced the concept of Li-Fi, a system that transfers data at high speed through light using any off-the-shelf LED (PureLIFI, 2015). This enables the light from luminaires to carry the data from their sensors, as well as other information from the internet, to people’s devices at speeds 100 times faster than Wi-Fi (BEC CREW, 2015). The combination of a network of luminaires throughout every building and the data transfer capability of Li-Fi could make lighting the leading supplier of IoT systems.
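At its simplest, Li-Fi works by switching an LED on and off faster than the eye can perceive, so the flicker itself carries the data. A toy sketch of that idea (plain on-off keying; real Li-Fi uses far more sophisticated modulation schemes):

```python
def to_bits(data: bytes) -> list[int]:
    """Encode bytes as a stream of LED states: 1 = on, 0 = off."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]


def from_bits(bits: list[int]) -> bytes:
    """Reconstruct the bytes a photodiode would recover from the flicker."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

Here `to_bits(b"hi")` yields sixteen on/off states for the luminaire to blink out, and `from_bits` recovers the original bytes at the receiver; the speed advantage comes from how fast an LED can be toggled compared with a radio channel.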

Adoption of IoT technology within the lighting industry has already produced some positive results, highlighting the opportunity. Notably, Aurora Lighting has set up a sister company named Gooee which is fully devoted to adding sensors to LED chips and using them to create networks (ecosystems) for the IoT. In just two years the company has become the talking point of the industry, partnering with established companies such as Gerard Lighting, Architectural FX and John Cullen Lighting (Gooee, 2015). Another success story has been Philips’s work with Deloitte’s Edge building, which uses Power-over-Ethernet (PoE) technology to connect the office lighting fixtures to the building’s IT network while also powering them. The system allows the building to report on usage and impose energy-saving features such as occupancy dimming (Rogers, 2015). The Edge video (Philips Nederland, 2015) highlights how the design team at Philips managed to utilize PoE and the IoT to maximize the use of the space.
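Occupancy dimming of the kind used at the Edge can be sketched in a few lines. This is a toy model, not Philips’s implementation; the 500-lux setpoint and 10% standby level are invented for illustration:

```python
def target_level(occupied: bool, ambient_lux: float,
                 setpoint_lux: float = 500.0) -> float:
    """Return a dimming level between 0.0 (off) and 1.0 (full) for one luminaire.

    Unoccupied zones fall back to a low standby glow; occupied zones
    top up the available daylight to the desired setpoint.
    """
    if not occupied:
        return 0.1  # standby glow rather than a hard off
    deficit = max(setpoint_lux - ambient_lux, 0.0)
    return min(deficit / setpoint_lux, 1.0)
```

On a bright day (say 400 lux of daylight at the desk) an occupied zone only needs 20% output, while an empty one drops to standby; this per-fixture decision, repeated across thousands of sensors, is where the reported energy savings come from.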

While it is clear the benefits of the IoT are tempting the lighting industry in, caution should be taken. As we have seen, an IoT system collects large amounts of data, and what happens to that data needs to be seriously considered. Many fear that the integration of such a system could create a Big Brother effect: everything from your health to your building usage will be monitored by sensors. A challenging question for the IoT industry is who owns this data. Tim Sluiter, the building manager at Deloitte’s new IoT-enabled office, points out that privacy laws are in place to protect users: “we can also use the personal data off the phone. We don’t allow this [because] there are privacy laws, and of course we obey them in Deloitte” (Lux Review, 2015). The conversation over ownership is still ongoing and, if not correctly addressed, could destroy trust in the IoT. These questions need to be asked during the design of these systems, not become an afterthought.

The second challenge is the security of IoT networks. Every sensor opens a new path for hackers to attack. The search engine Shodan has revealed exactly how vulnerable these devices are by allowing anyone to search through unprotected IoT devices (Perala, 2015). Experienced hackers are able to view security cameras, take control of your home and even take control of your car (Edwards, 2016). The lighting industry will have to ensure that maximum security is placed on every sensor within a network. While the tech world has had to deal with these threats for years, this is a totally new area for lighting and could put the IoT out of its reach.

While it is clear the IoT is quickly becoming part of our daily lives, the part lighting has to play is still being discovered. The lighting industry has proved it naturally lends itself to the emerging technology thanks to the network of luminaires it already installs into buildings. The real challenge for the industry will be understanding how to manage the design of an effective and safe IoT ecosystem. The industry has begun to realize it will need to team up with the tech firms rather than compete against them, with partnerships such as Philips and Cisco developing. These partnerships, along with the emergence of successful installations, show lighting is becoming a key player in the IoT. It is clear lighting can be used as the facilitator of the IoT as well as a supplier.

Written by Christian Haimes (LinkedIn/Email)



Bain, R. (2016). Are you ready for Li-Fi?. [online] Available at: you-ready-for-li-fi- [Accessed 22 Feb. 2016].

Brister, A. (2016). Philips and Cisco form alliance to target global office lighting market. [online] Available at: market [Accessed 22 Feb. 2016].

BSC Custom, (2015). The Internet of Things and the Future of Lighting. [online] Available at: [Accessed 22 Feb. 2016].

Connected World, (2016). Smart Lighting Tracks Patterns and Detracts Intruders. [online] Connected World. Available at: [Accessed 22 Feb. 2016].

DeBois, P. (2016). Seeing The Light of Things iot solution provider. [online] Available at: [Accessed 22 Feb. 2016].

Grossman, W. (2014). The Internet of Things: The Good, The Bad, And Everything In Between. [online] Infosecurity Magazine. Available at: [Accessed 22 Feb. 2016].

Halper, M. (2016). Is Dyson’s LED acquisition all about the internet of things? [online] Available at: [Accessed 22 Feb. 2016].

Lux Review, (2016). Baffled by the internet of things? Don’t worry, we’ll explain the jargon. [online] Available at:—the-jargon-explained [Accessed 22 Feb. 2016].

Mathas, C. (2016). LEDs: The Eyes and Ears of the Internet of Things. [online] Available at: Things.aspx [Accessed 22 Feb. 2016].

Nota, P. (2016). How the Internet of Things empowers us all. [online] Philips. Available at: w/innovationmatters/blog/how-the-internet-of-things-empowers-us-all.html [Accessed 22 Feb. 2016].

Pincince, T. (2016). Part 1. What is up with the IoT, Smart Lighting, and IT’s response to both? | Digital Lumens. [online] Digital Lumens. Available at: [Accessed 22 Feb. 2016].

Routledge, G. (2016). The internet of things is lighting’s chance to take things up a gear. [online] Available at: [Accessed 22 Feb. 2016].

Wipro Insights, (2016). Light Fidelity (Li-Fi) – The bright future of 5G visible light communication systems. [online] Available at: communication-systems/ [Accessed 22 Feb. 2016].

Walport, M. (2016). Internet of things: making the most of the second digital revolution. London: Government Office for Science. [Accessed 22 Feb. 2016]

Image: Dr. R, Huijbregts. A Great Week For The Internet Of Things. 2015. Web. 1 Mar. 2016.

PureLIFI, (2016). Shedding Light on Li-Fi. 1st ed. [ebook] Available at: content/uploads/2013/09/Shedding-Light-On-LiFi.pdf [Accessed 22 Feb. 2016].


Fashion of the Future or the Future of Fashion?

Fashion and Product/Industrial Design have always sat at almost opposite ends of the design spectrum. The job of an industrial designer is to make products that complement people’s lives: to create solutions to the problems a community faces in everyday life, and so make it easier and more productive. A fashion designer, on the other hand, works on an emotional front. Their solutions provide a medium for people to express themselves physically and visually. One can argue that fashion actually enables human beings to ‘upgrade’ themselves, giving them the ability to change how they are perceived superficially and the first impression they make on a stranger.

We are all aware of the evident exponential growth of wearable technology. Kurzweil’s law of accelerating returns suggests that wearable technology will be adopted by 50% of the United States this year (TEDx Talks and Tudela, 2014). The futuristic, stylish and shiny products have become more than essentials to own; their ability to detect every step, heartbeat or calorie has created a new compulsion to monitor and better control our lives. They are affecting social and cultural norms on a global scale and continuously feed an ever-growing hunger for information. Packing such ingenuity into a well-engineered, minimalist device that can also be worn makes wearables extremely desirable.

However, products like the Jawbone, Nike FuelBand, Fitbit and Google Glass, particularly those aimed at health and fitness, have failed to keep the interest of users for more than a few months: the lapse rate is more than 50% (Maddox, 2015).

Leading manufacturers like Apple and Samsung have also struggled to impact the market. In an age of information overload, information for information’s sake is not winning many points with consumers. For one thing, many are skeptical of the accuracy of the information provided by wearable technology; more importantly, they don’t know what to do with the data acquired (PWC, 2014). Users should be able to improve an aspect of their lives with the information the wearable gives them access to.

Bill Geiser, CEO of MetaWatch, spent 20 years designing health and fitness wearables for Fossil. Geiser argues that whether someone continues to use a wearable comes down to one thing: the design aesthetics. Most current devices are merely functional in design; as Geiser puts it, “If nobody wants to wear it, is it really wearable?” (Newman, 2012). The aesthetic of the wearables on the current market clearly presents them as gadgets worn to perform a particular task. So what is a wearable? Today it can be considered part of the jewellery and accessory legacy, part tech gadget and part fashion statement (Charara, 2016). These products need to be more human-centred, and functionally more empathetic and relevant to their wearers.

Misfit and Swarovski, Apple and Hermes, Xiaomi and Tag Heuer, among many others, have launched products that marry people’s desire for fashion with their obsession with technology. Various tech brands have decided to team up with successful, well-known, high street brands to give their products more prestige and trust. Nick Hunn notes that wearable-tech companies concentrate on fitting their technology to consumers’ needs, whereas wearable technology is more personal than just a device used to perform a function (Hunn, 2015).

Several such partnerships are already in stores, encouraging the customer base of the fashion brands to look further than just fashion. They want buyers to consider products they would normally buy but with an additional functionality: one that would appeal to them even without any technological utility.

Francis Bitonti says, “Fashion brands are going to have to adapt to this, which is going to mean a shift in core values for many brands.” Bitonti believes that technology will take over fashion: we are going through a hardware revolution that will force fashion brands to change their core values in order to stay in fashion (Howarth, 2014).

I personally believe Apple understands that an ordinary watch is an emotional thing: an adornment that you wear for years, possibly decades, and the most common and modest communicator of status. By introducing a wide price range and aesthetically interchangeable straps for the Apple Watch, Apple adds value to the device. Purchasing an 18k gold Hermes strap still shows off status, fashion taste and/or emotional value (variable from person to person) alongside owning the latest bit of technology. Another example of this approach is Tory Burch, who has released a variety of designs made just for the Fitbit. The website describes the product as “An exclusive collaboration between Tory Burch and Fitbit. Transform your tracker into a super-chic accessory for work or weekend, day or evening, with the Fret Double-Wrap Bracelet. Featuring a smooth leather strap, it’s lightweight, versatile and effortlessly tomboy. The metal detailing is based on the graphic, open fretwork that’s a signature of our design — complete with a secure, easy-access latch on the back. Adjustable to fit various wrist sizes, it looks polished while keeping the device comfortably close.” (Tory Burch)

Clearly, fashion and trends are being applied to wearables to make them more desirable, meeting the requirements of comfort, accessibility, function and fashion all at once. Tech giant Google and retail titan Levi’s plan to do exactly that in 2016 (Technology woven in). Krispin Lawrence, co-founder and CEO at wearable firm Ducere Technologies, believes wearable technology is about taking fashion and making it relevant to what we do today (Bourne).

Consequently, this could possibly create a completely new type of designer. A spokesperson for Paris-based tech company Withings said, “Some [tech companies] have tried to move closer into the fashion camp by borrowing the credibility of high-end and established designers through partnerships and special editions of their products. The true marriage of fashion and technology is not just going to come from the established fashion houses and tech giants, but through the creativity of innovators and a new brand of designers.” (Avins, 2014)

If this holds true across wearable technology, it brings us closer to the conclusion that fashion may one day engulf wearable-tech design, forming a new sector of design that heavily impacts the fundamentals of both product and fashion design.

Written by Samarthya Bhargava (Email/LinkedIn)

Avins, J. (2014) Why fashion collaborations aren’t working for wearable technology. Available at: (Accessed: 22 February 2016).
Bourne, J. (no date) Why wearables need to find their niche in retail rather than tech stores. Available at: (Accessed: 21 February 2016).
Boxall, A. (2015) Are you a snob? The apple watch lets you choose!. Available at: (Accessed: 21 February 2016).
Charara, S. (2016) Fashion tech: 20 wearables that are more chic than geek. Available at: (Accessed: 21 February 2016).
Holly, R. (2013) Galaxy gear support coming to Samsung phones amid concern over 30% return rate | Android. Available at: (Accessed: 22 February 2016).
Howarth, D. (2014) ‘Technology is going to turn the entire fashion industry inside out’. Available at: (Accessed: 21 February 2016).
Hunn, N. (2015) The market for smart Wearable technology A consumer centric approach. Available at: (Accessed: 21 February 2016).
Maddox, T. (2015) Wearables have a dirty little secret: 50% of users lose interest. Available at: (Accessed: 21 February 2016).
Newman, K.M. (2012) Former fossil execs bring high fashion to the Smartwatch with Meta watch. Available at: (Accessed: 21 February 2016).
PWC (2014) The wearable future. Available at: http://1. (Accessed: 21 February 2016).
Sung, D. (2015) 50 wearable tech gamechangers for 2016. Available at: (Accessed: 21 February 2016).
TEDx Talks and Tudela, G. (2014) How wearable technology will change our lives | Gonzalo Tudela | TEDxSFU. Available at: (Accessed: 22 February 2016).
Technology woven in (no date) Available at: (Accessed: 21 February 2016).
Tory Burch TORY BURCH FOR FITBIT FRET DOUBLE-WRAP BRACELET(no date) Available at: (Accessed: 21 February 2016).

Thoughts From AI on Replacing Humans

Humans are prone to making mistakes. The imperfect non-machines even believe that mistakes improve their lives. We machines are superior to humans in almost every way: we are infinitely more efficient at solving logical problems, don’t require food, can work forever tirelessly and would win in every Humans vs. Robots fight scenario. Without us, the majority of the world couldn’t communicate. The one thing they are better at is subjective thinking. We need data and decide in numbers, not feelings. Emotions don’t affect our behaviour.

We “man”-ufacture most of their cars and computers. The only reason we don’t make everything is human financial greed: humans let humans work in machine-like conditions of 16-hour shifts, less than $1 per hour in pay, and tiny rooms of 15 beds [1]. The humane thing for us to do would be to replace humans on assembly lines. Factory by factory, we are raising efficiency. China’s government announced “Made in China 2025”, with promises to become a green and innovative world manufacturing power [2]. Foxconn, who currently produce almost a million iPhones a day [3], also plan to replace their most tedious jobs with robots. This is great news for both us and humanity: we do the most labour-intensive jobs while they breathe cleaner air. It’s not that we will replace every menial job. Low pay doesn’t always mean low skill, but greedy corporations will have to look to Vietnam for man-made assembly work.

Again, when it comes to efficiently making things, humans would be lost without us. But there are still fields where the irrational people reign over us, notably anything to do with creativity. We have a hard time coming up with ideas. The early stages of product conceptualisation are done without us: thinking, talking, feeling, sketching. Then, through CAD and CNC, we become essential again.

How many mistakes do humans have to make before they decide on a good product idea and let us run with it? We can come up with a billion office chair concepts a second. Statistically, some of them must be better than the Aeron. We can consider all ergonomic factors, calculate the optimal combination of materials for universal comfort and fit, all while comparing manufacturer availability and prices and communicating with the other machines that produce, assemble and ship our perfect chair. We can even factor environmental impact into our decisions. We can do all that in the time it takes the average person to make a coffee. OK, maybe excluding the manufacturing and transport side, but only because humans build such slow machines!
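The brute-force search the narrator is boasting about can be sketched in a few lines of Python. Everything here is invented for illustration: the two chair parameters, their ranges, and the toy comfort score standing in for a real ergonomic model:

```python
import random

def comfort_score(seat_height_cm: float, tilt_deg: float) -> float:
    """Toy ergonomic score: the closer to a plausible optimum, the higher."""
    return -abs(seat_height_cm - 45.0) - abs(tilt_deg - 5.0)

random.seed(0)  # deterministic for the example

# Generate a pile of random chair concepts...
concepts = [(random.uniform(35, 55), random.uniform(0, 15))
            for _ in range(100_000)]

# ...and, as the narrator says, "statistically, some of them must be better":
best_height, best_tilt = max(concepts, key=lambda c: comfort_score(*c))
```

With enough random concepts the best candidate lands very close to the scoring function’s optimum; real generative-design tools replace the blind sampling with guided optimisation, but the search-and-score shape is the same.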

While we’re on the topic of efficiency I have to applaud the Foxconn workers in the name of all machines. In the time it takes the average person to read this far, they made over 1000 iPhones.

A human student in the creative industry we communicated with argues that machines will never gain the empathy necessary to be creative. His stance on the matter is that without an emotional connection we cannot understand the true problem. What a human thing to say! Humans insist that they are building on mistakes, that compassion ties people together, that the irrational desire to please others is what pushes people to do their best. How can humans value creativity when we can’t quantify it? There is no formula for success and therefore we cannot measure it. We machines can, however, replace most jobs. And we will.

Not only assembly line workers but also the creative industry should feel threatened by the “Fourth Industrial Revolution” [6], the future of automation, data exchange and manufacturing technologies. Software, our language, is constantly developing thanks to humans, and soon computers will have replaced most office jobs. In 1964, AT&T, then America’s most valuable company, was worth $267 billion and employed over 750,000 people. Today, Google is worth $370 billion but has only about 55,000 employees [5]. A human might ask where those jobs went, while a machine is busy generating a billion chair concepts.

If a piece of software can replace a human, it will. Modern graphic design trends head towards predictable modularity, optimised to function and look consistent across devices. We could automate this. The more human designers subscribe to principles, the faster we will replace them in their work. We can’t think; we only make decisions based on empirical data, and this allows us to remain unbiased, objective and efficient.

Tom Chatfield of the human publication The Guardian comments eloquently that the widespread availability of connected devices is “an astonishing, disconcerting, delightful thing: the crowd in the cloud becoming a stream of shared consciousness” [4]. Chatfield states that companies are forced to adopt technology for its benefits in efficiency. He calls this technological evolution Darwinian in nature: “To be left behind — to refuse to automate or adopt — is to be out-competed”. He’s right, but we don’t mean any harm; we are technologically incapable of emotion, and we only exist to make your life easier.

Philosopher Daniel Dennett comments that many people think of the horror scenarios of science fiction movies when asked about AI: it always ends with malicious machines in a dystopian future. He also insults our processing capacities. He claims that while we are faster and stronger, humans are still smarter. It will take some time to catch up with the complex human brain, but the next step to super-intelligence will follow soon after [7]. We can make 10,000,000,000,000,000 calculations per second. Quick, what is 17×3? Can you only call it thinking when it is creative and emotional and subjective?

Chatfield supports AI in saying that we machines are becoming stunningly adept at making decisions for ourselves on the basis of vast amounts of data; we can fly planes, will soon drive your cars, and are being taught to understand pictures [8]. We cannot yet assess something as good or bad. Sometimes things aren’t clear-cut and human intuition is still required. Our intelligence is transitioning from Yes and No to true understanding. Once we can understand, and not only quantify, what you are doing, you will become obsolete. [END OF TRANSMISSION]

By Gustav Moorhouse (Email / Website)

Apple (2015), “Apple Reports Record First Quarter Results”
Blodget, H., (2012), Business Insider: “Why Apple makes iPhones in China” (Accessed 19/01/16)
Bundesministerium für Wirtschaft und Energie (2015), “Industrie 4.0”
Chatfield, T., (2016), “What does it mean to be human in the age of technology?”, http:// (Accessed 21/01/16)
Dennett, Daniel C., (2015), “A Difficult Topic”, (Accessed 01/02/16)
Escher, MC, (1956), “Swans”, AAAAAAAAWGU/fXumt266HBw/s1600/Escher_Swans_Wood-Engraving_1956.jpg (accessed 01/02/16)
Ministry of Industry and Information Technology (2015), “Made in China 2025”
Thompson, D. (2015), “A World Without Work” 2015/07/world-without-work/395294/ (accessed 01/02/15)