Real-time PyTorch GAN Image Generation in Core ML

Matt Davis
2 min read · Feb 19, 2022

Synthetic data may be the future of many digital experiences, and it opens new room for human ingenuity. Synthetic data can take almost any form, and the creative possibilities are nearly boundless when you can control what kinds of data are generated. Imagine a future where your YouTube videos and entertainment are instantly tailored to anyone's exact interests.

Training many different types of models in PyTorch is relatively easy. Getting those models working in Core ML is the hard part.

Core ML Model Setup in Colab:

  • Set up the model from a loaded checkpoint
  • Switch the model to eval mode
  • Trace the model
  • Define any functions not automatically supported by the Python and Apple coremltools conversion
  • Set up input/output typing — ct.converters.mil.input_types
  • Convert the model — ct.convert(traced_model, inputs=[image_input])

Now that the model runs on Core ML, you must make sure it runs performantly. Here are a few steps for ensuring performance:

  1. Validate the speed of the models directly, and confirm they are running in Core ML on the expected compute resources.
  2. Core ML will use the GPU and Neural Engine when available if you use the API directly.
  3. Model architecture — ensure your architecture is built for real-time performance. Limit the number of layers and avoid slow layer types.
  4. Validate that the data transformations into and out of the model don't take up too much CPU. Keep your data on the GPU or other compute resources instead of copying it back to the CPU, which can consume a large amount of processing time.
  5. Validate that performance is within the required frame rate. 30 FPS means an acceptance criterion of under 33 ms per frame.
  6. Remove or cache unnecessary calls.
  7. Skip frames and processing — if you're still processing a previous frame, don't add to the load, as it will just snowball into a backlog.
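The frame-skipping idea in step 7 can be sketched with a small gate (an illustrative structure, not code from this post): a non-blocking lock drops any frame that arrives while inference on a previous frame is still in flight, so a slow pass never snowballs into a backlog.

```python
import threading

class FrameGate:
    """Drops incoming frames while a previous frame is still in flight,
    so slow inference never snowballs into a backlog (step 7 above)."""

    def __init__(self):
        self._busy = threading.Lock()

    def submit(self, frame, process):
        # Non-blocking acquire: if inference is mid-flight, drop this frame
        if not self._busy.acquire(blocking=False):
            return None  # frame dropped
        try:
            return process(frame)
        finally:
            self._busy.release()

gate = FrameGate()
out = gate.submit(10, lambda f: f + 1)  # processed -> 11
```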

Using the above methodology you can convert almost all PyTorch models. The hard part is that some functions are not supported, and rewriting them as core Python functions that operate on tensors tends to take the longest.

Typical Issues:

JIT Trace Conversion — some functions are not directly supported, and a JIT-traced model run through PyTorch doesn't use the Neural Engine even when it is available.

Function Support — some functions, like torch.inverse, are not supported and need to be rewritten in order to convert the model.
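As one illustration of such a rewrite (my own sketch, not from this post), a batched 2x2 inverse can be expressed in closed form using only basic tensor ops that trace and convert cleanly:

```python
import torch

def inverse_2x2(m: torch.Tensor) -> torch.Tensor:
    """Closed-form inverse for (..., 2, 2) matrices using only basic tensor
    ops; a sketch of the kind of rewrite needed when an op such as
    torch.inverse is unsupported by the converter."""
    a, b = m[..., 0, 0], m[..., 0, 1]
    c, d = m[..., 1, 0], m[..., 1, 1]
    det = a * d - b * c  # assumes the matrices are invertible (det != 0)
    adjugate = torch.stack([
        torch.stack([d, -b], dim=-1),
        torch.stack([-c, a], dim=-1),
    ], dim=-2)
    return adjugate / det[..., None, None]
```

Larger matrices need more work (e.g. cofactor expansion or an LU-style elimination written in basic ops), but the pattern is the same: replace the unsupported call with arithmetic the converter understands.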

Speed — a lot of processing time can come from converting data from the GPU into processing formats on the CPU; avoid this as much as possible. Make sure to set up queues for the different stages and pipeline tasks.
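A minimal sketch of that queue idea, assuming a capture → inference → render pipeline (the stage names and the doubling "model" are stand-ins): bounded queues decouple the stages so one slow stage back-pressures instead of stalling everything.

```python
import queue
import threading

# Small maxsize keeps frames fresh instead of building a stale backlog
capture_q = queue.Queue(maxsize=2)
render_q = queue.Queue(maxsize=2)

def inference_worker(predict):
    """Pull frames from capture, run the model, push results to render."""
    while True:
        frame = capture_q.get()
        if frame is None:        # sentinel: shut the stage down cleanly
            render_q.put(None)
            break
        render_q.put(predict(frame))

# Demo with a stand-in "model" that doubles its input
worker = threading.Thread(target=inference_worker, args=(lambda f: f * 2,))
worker.start()
capture_q.put(3)
result = render_q.get()   # -> 6
capture_q.put(None)       # stop the worker
worker.join()
```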

Reach out anytime to discuss cool synthetic data.

mdavis@virtustructure.com
