Design Masterclass: AI – be careful what you wish for

Tim Smallwood FFCSI largely agrees with Bill Gates that “generative AI has plateaued”, but warns things change so quickly

So, AI is going to make you more productive and save money. You may be thinking of using Large Language Models (LLMs) such as GPT, or image models such as DALL·E, Midjourney or ARK. It may be a good idea to give it a second thought.

Sure, the first experience is transformative, which in turn encourages further experimentation and working out how the office can make use of generative AI to expand the possibilities of future work. The most immediate benefit will be seen in repetitive tasks such as documenting standard elements or specifications, particularly if you are using BIM. There will be a temptation to apply an AI algorithm to refine and speed up the process, and ultimately to become dependent on the perceived accuracy of the output. Then the human content and experience, for which you have been engaged, become increasingly rare.

That might seem extreme, but it is worth remembering how generative AI is developed: by scraping data from the internet and training models on it. This data will include the mass of existing information with little, if any, discrimination; but it also includes the data you have just created through your prompt engineering to create and refine your AI design, which is then re-applied to your next project using the generative AI model. So the AI model you are using is training itself on your generated output, over and over again, becoming less and less nuanced by human influence.

Recent studies have shown that this situation leads AI models to become a bland echo chamber or, at worst, to enter an irreversible degenerative process unless they are regularly infused with fresh, real-world human data (the “Model Autophagy Disorder”, or MAD, study published via Cornell’s arXiv in July 2023).
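The degenerative loop the MAD study describes can be illustrated with a deliberately simple sketch. This is an illustration of the feedback mechanism only, not of any real AI product: the “model” here is just a Gaussian distribution fitted to data, and each generation is refitted to samples drawn from the previous one. With no fresh human data entering the loop, the variety of its output steadily collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

def retrain_on_own_output(generations=500, sample_size=20):
    """A toy 'model' (a fitted Gaussian) repeatedly retrained on its own samples."""
    mu, sigma = 0.0, 1.0                 # generation 0: fitted to real, human-made data
    spread = [sigma]                     # track how varied each generation's output is
    for _ in range(generations):
        synthetic = rng.normal(mu, sigma, sample_size)       # the model generates output
        mu, sigma = synthetic.mean(), synthetic.std(ddof=1)  # the next model trains on it
        spread.append(sigma)
    return spread

spread = retrain_on_own_output()
print(f"output variety: generation 0 = {spread[0]:.3f}, "
      f"generation 500 = {spread[-1]:.4f}")
```

Run it and the spread of the model’s output shrinks dramatically over the generations: each model sees only the slightly narrower world its predecessor produced. Injecting fresh, real-world data at each generation is what breaks this loop, which is exactly the point the MAD study makes.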

Ethical/moral considerations

But this is only one consideration before embarking on the integration of generative AI modeling into the foodservice design consulting process. The others tend to be ethical, or even moral, considerations. When first using a generative AI model to enhance your productivity you will in effect be using information and data created by others that has been “collected”, generally without permission from the originators; in turn, your IP will be added to the multiple terabytes of data already stored and used by others. The problem then becomes identifying what is authentic content and what is effectively synthetic content generated by an AI algorithm, and establishing the provenance of the information you are applying to your consulting engagement.

The ethical dilemma is clear: the potential for you to be using the IP of others without their knowledge, permission or reward. It goes to the heart of the application of AI to everything that we do: handing over responsibility for making, in this case, creative decisions to a machine. We hand over our autonomy when we outsource our creativity to an AI algorithm, and we are in effect trusting that the information we put into the model will be used ethically.

The question is, can you trust the AI model you are using? Have you established that it has the desirable characteristics of trustworthiness:

Explainability: does the AI model or system provide a clearly understandable explanation of how it works?

Auditability: if something goes wrong, are you able to work out why, and to fix it?

Robustness: the AI model needs to work in predictable ways that can accommodate variables and inconsistencies in its inputs.

Correctness: the outputs of the model may well have an impact on the health and safety of individuals; the system has to be capable of responding to that potential for harm.

Basically, you have to be sure that you can fully trust the Large Language Model that you are proposing to incorporate into the delivery of assignments to your clients. Which raises the last point:

Transparency: do you advise your clients that you, as their consultant, are applying an AI Large Language Model to their project, and how do you assure them that they can trust the output?

Licensing agreements

As a final note of caution, it is probable that the small subscription you pay for using an AI model will increase significantly when LLM developers start having to pay for the information and IP they use. Originators are already taking legal action against AI developers for extracting millions of images from databases without permission, and OpenAI and Google are exploring licensing agreements with publishers. As an organization, FCSI members would agree that it is important that all creatives are fairly compensated for their authentic human content.

These thoughts on AI have not been produced by any LLM but are the result of papers on the subject by Oguz A. Acar, Chair of Marketing at King’s Business School, King’s College London, and others. I would recommend exploring his work on AI, particularly his DARE (Decompose, Analyse, Realise, Evaluate) framework, as an introduction to bringing AI into a business.

The thoughts expressed here were triggered by his Harvard Business Review article “Has Generative AI Peaked?”, a thought with which Bill Gates seems to agree: “generative AI has plateaued”. Other influences for this piece have been Professor Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, including his “The world that AI made” and “Artificial Intelligence in a Human World”; again, he is worth following.

When starting this piece, I was thinking that it had to be my last contribution on the subject of generative AI and LLMs, and that I would return to thoughts on the day-to-day systems and facilities used in the business of foodservice design and operations. However, things change so quickly that I now accept one can never say never.

Tim Smallwood FFCSI
