- Apple researchers started with almost no SwiftUI examples and obtained surprising results
- StarChat-Beta was pushed into unexplored territory without clear guidance
- Nearly a million working SwiftUI programs emerged after repeated iterations
Apple researchers recently described an experiment in which an AI model was trained to generate user interface code in SwiftUI, even though almost no SwiftUI examples were present in the original training data.
The study started with StarChat-Beta, an open-source model designed for coding. Its training sources, including The Stack and other collections, contained almost no SwiftUI code.
This absence meant the model had no existing examples to guide its responses, which made the results all the more surprising when a stronger system eventually emerged.
Creating a self-improvement loop
The team’s solution was to create a feedback cycle. They gave StarChat-Beta a set of interface descriptions and asked it to generate SwiftUI programs from these prompts.
Each generated program was compiled to verify that it actually ran. The interfaces that worked were then compared against the original descriptions by another model, GPT-4V, which judged whether the output matched the request.
Only programs that passed both stages stayed in the dataset. This cycle was repeated five times, and at each round the cleaner dataset was fed back into the next model.
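The loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the helper names (`generate`, `compiles_ok`, `matches_description`, `finetune`) are hypothetical stand-ins for the real components, which are the Swift compiler and a GPT-4V judge.

```python
def build_dataset(generate, descriptions, compiles_ok, matches_description):
    """Keep only programs that pass BOTH filters: they compile,
    and a judge model agrees the output matches the prompt."""
    dataset = []
    for desc in descriptions:
        program = generate(desc)
        if compiles_ok(program) and matches_description(program, desc):
            dataset.append((desc, program))
    return dataset

def self_improve(generate, finetune, descriptions,
                 compiles_ok, matches_description, rounds=5):
    """Repeat the generate -> filter -> retrain cycle; each round's
    model is fine-tuned on the cleaner dataset from the prior round."""
    dataset = []
    for _ in range(rounds):
        dataset = build_dataset(generate, descriptions,
                                compiles_ok, matches_description)
        generate = finetune(generate, dataset)  # next round's generator
    return generate, dataset
```

The key design point is that both filters are automatic, so the loop needs no hand-labeled data: the compiler and the judge model together decide what survives into the next round's training set.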
By the end of the process, the researchers had nearly a million working SwiftUI samples and a model they called UICoder.
The model was then measured against both automated tests and human evaluation, where results showed that it not only outperformed its base model but also achieved a higher compilation success rate than GPT-4.
One striking aspect of the study is that SwiftUI code had been almost entirely excluded from the initial training data.
According to the team, this happened by accident when the base dataset was created, leaving only scattered examples found on web pages.
This oversight rules out the idea that UICoder simply regurgitated code it had already seen; instead, its improvement came from the iterative cycle of generating, filtering, and retraining on its own outputs.
Although the results focus on SwiftUI, the researchers suggested that the approach “would likely generalize to other programming languages and UI toolkits”.
If so, this could open the way for more models to be trained in specialized domains where training data is scarce.
The prospect raises questions about reliability, sustainability, and whether synthetic datasets can keep scaling without introducing hidden flaws.
UICoder was also trained under carefully controlled conditions, and its success in broader environments is not guaranteed.
Via 9to5Mac