Figure 24.1: FGSM adversarial attack on MNIST
How FGSM Works
The Fast Gradient Sign Method computes:
\[x_{adv} = x + \epsilon \cdot \text{sign}(\nabla_x L(\theta, x, y))\]
where:

- \(x\): original image
- \(y\): true label
- \(\theta\): model parameters
- \(\epsilon\): perturbation magnitude, controlling the strength of the attack
- \(\nabla_x L\): gradient of the loss with respect to the input
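As a concrete illustration, here is a minimal NumPy sketch of the FGSM update for a toy logistic-regression model. The model, weights, and values are hypothetical placeholders; a real attack on an MNIST classifier would compute \(\nabla_x L\) through the full network (e.g. with `tf.GradientTape`):

```python
import numpy as np

def fgsm(x, y, w, b, epsilon):
    """FGSM for a toy logistic-regression model (hypothetical setup).

    Implements x_adv = x + epsilon * sign(grad_x L), where L is the
    binary cross-entropy loss of sigmoid(x @ w + b) against label y.
    """
    # Forward pass: predicted probability
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    # For binary cross-entropy, dL/dx = (p - y) * w
    grad_x = (p - y) * w
    # Perturb each input feature by exactly +/- epsilon
    return x + epsilon * np.sign(grad_x)

# Hypothetical example values
x = np.array([0.2, 0.5, 0.1])
w = np.array([1.0, -2.0, 0.5])
x_adv = fgsm(x, y=1.0, w=w, b=0.0, epsilon=0.1)
# Every feature moves by exactly epsilon in the loss-increasing direction
```

Because only the *sign* of the gradient is used, every pixel shifts by the same magnitude \(\epsilon\), which is what makes FGSM a single-step, bounded perturbation.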
Defense Strategies
| Defense | Description | Effectiveness |
|---|---|---|
| Adversarial Training | Train on adversarial examples | High |
| Input Preprocessing | Denoise, quantize inputs | Medium |
| Model Distillation | Train smaller model on soft labels | Medium |
| Ensemble Methods | Combine multiple models | High |
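Adversarial training, the first defense listed above, folds attack generation into the training loop: at each step, craft an adversarial example from the current model and update the weights on it rather than on the clean input. A hedged sketch, reusing the same hypothetical logistic-regression toy (not the book's actual training code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train_step(x, y, w, b, epsilon=0.1, lr=0.5):
    """One adversarial-training step for a toy logistic-regression model.

    1. Craft an FGSM example from the current weights.
    2. Take a gradient-descent step on that example instead of x.
    All values here are hypothetical placeholders.
    """
    # Step 1: FGSM example (dL/dx = (p - y) * w for binary cross-entropy)
    p = sigmoid(x @ w + b)
    x_adv = x + epsilon * np.sign((p - y) * w)
    # Step 2: gradient descent on the adversarial example
    p_adv = sigmoid(x_adv @ w + b)
    w_new = w - lr * (p_adv - y) * x_adv
    b_new = b - lr * (p_adv - y)
    return w_new, b_new
```

The model thereby learns to classify correctly inside the \(\epsilon\)-ball around each training point, at the cost of roughly doubling the per-step compute.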
Edge ML Defense
For edge devices, input validation is often the most practical defense:

- Check input ranges
- Detect unusual patterns
- Reject outliers
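The checks above can be sketched as a small validation gate run before inference. The thresholds, statistics, and function name here are illustrative assumptions, not a prescribed API:

```python
import numpy as np

def validate_input(x, lo=0.0, hi=1.0, mean=None, std=None, max_sigma=4.0):
    """Hypothetical pre-inference gate for an edge classifier.

    Returns True if the input passes basic sanity checks, False otherwise.
    `mean`/`std` are assumed to be precomputed from the training data.
    """
    x = np.asarray(x, dtype=np.float64)
    # 1. Range check: every value must lie in the expected [lo, hi] interval
    if x.min() < lo or x.max() > hi:
        return False
    # 2. Outlier check: reject inputs whose mean deviates far from
    #    the training distribution (a crude unusual-pattern detector)
    if mean is not None and std is not None:
        if abs(x.mean() - mean) > max_sigma * std:
            return False
    return True
```

Such a gate costs a few vector operations per input, which is why it remains attractive on constrained hardware even though it only catches crude perturbations.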