Skin cancer is one of the most common cancers globally – and early detection significantly improves the chances of successful treatment. Unfortunately, many people lack access to dermatologists or advanced diagnostic tools. This research addresses the problem by bringing AI-based diagnostics to low-cost wearable devices.
What did the authors do?
Used MobileNetV2:
A compact neural network architecture optimized for mobile environments. With transfer learning, the model was fine-tuned to classify skin lesions as cancerous or non-cancerous.
Compressed and optimized the model:
With NVIDIA TensorRT, the authors compressed MobileNetV2 for the Jetson Orin Nano embedded platform – reducing the model size to about 41% of the original, accelerating inference, and dramatically cutting power consumption. In INT8 precision, power usage dropped by up to 93%.
Thoroughly evaluated the performance:
Key metrics reported:
- F1‑Score: 87.2% (balance between precision and recall)
- Precision: 93.2% (how many positive predictions were correct)
- Recall: 81.9% (how many actual cancer cases were detected)
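As a sanity check, the three reported metrics are internally consistent: F1 is the harmonic mean of precision and recall. A minimal sketch of the standard definitions (the confusion-matrix counts passed to `precision` and `recall` below are illustrative, not from the paper):

```python
def precision(tp: int, fp: int) -> float:
    """Of all positive predictions, how many were correct?"""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Of all actual cancer cases, how many were detected?"""
    return tp / (tp + fn)

def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Plugging in the paper's reported precision and recall reproduces its F1:
p, r = 0.932, 0.819
print(f"F1 = {f1(p, r):.3f}")  # prints "F1 = 0.872", matching the reported 87.2%
```

The gap between precision (93.2%) and recall (81.9%) is the clinically relevant trade-off the limitations section returns to: this model misses more true cancers than it raises false alarms.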
What does this contribute?
- Fast, local diagnosis without internet – all processing is done on-device.
- Energy-efficient, making wearable use viable – especially important in underserved areas.
- Scalable – the same method could be applied to other diseases or hardware platforms.
Limitations and future directions
- Needs to be validated on a broader range of skin tones and lesion types.
- Important to address safety concerns – minimize both false negatives (missed cancers) and false positives (unnecessary concern).
- Future research may target other devices (e.g., smartwatches) and benchmark larger models.
Glossary
- Transfer Learning – A technique that leverages pre-trained models to quickly adapt to new tasks.
- MobileNetV2 – A lightweight neural network architecture designed for efficient on-device inference.
- Model Compression – A set of techniques to reduce the size and complexity of a neural network (e.g., quantizing weights to 8-bit).
- INT8 Precision – A low-precision format (8-bit integers) used to make neural networks faster and more energy-efficient.
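The last two glossary entries can be illustrated with a minimal sketch of symmetric per-tensor 8-bit weight quantization in plain NumPy. This is a toy illustration of the idea, not the TensorRT calibration pipeline the authors actually used (TensorRT chooses scales per layer using calibration data):

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0  # map the largest weight to ±127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 values and a scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the round-trip
# error is bounded by half the quantization step (the scale).
print(q.nbytes / w.nbytes)                        # 0.25
print(np.abs(w - restored).max() <= scale / 2 + 1e-6)  # True
```

Integer arithmetic is also cheaper per operation than floating point on embedded hardware, which is where the power savings reported above come from; the ~41% size figure reflects the full TensorRT optimization pipeline, not weight quantization alone.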
Summary
The authors demonstrate that a compressed MobileNetV2 model can deliver accurate skin cancer detection on low-cost wearable hardware. With strong performance (87% F1‑Score, 93% precision) and dramatic reductions in size and power usage, this work opens the door to accessible, AI-driven diagnostics for areas with limited healthcare resources.
📎 References
- Based on the publication 📄 arXiv:2507.17125