Beginners To Experts


TensorFlow Tutorial

  1. What is TensorFlow? TensorFlow is an open-source machine learning framework developed by Google for building and training machine learning models, especially deep learning applications.
  2. History and Evolution of TensorFlow: TensorFlow was released in 2015 as a successor to DistBelief. Over time, it has evolved into a powerful ecosystem for ML and AI with tools for mobile, web, and production deployment.
  3. TensorFlow 1.x vs 2.x: TensorFlow 1.x required session-based execution and had a steep learning curve. TensorFlow 2.x introduced eager execution by default, better APIs (Keras), and more user-friendly design.
  4. Installing TensorFlow via pip: Install TensorFlow in your environment using:
    pip install tensorflow
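    To verify the installation, a quick check of the installed version from the command line:
    python -c "import tensorflow as tf; print(tf.__version__)"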
  5. TensorFlow Architecture Overview: TensorFlow consists of a computational graph, execution engine, runtime, and APIs in multiple languages. It supports CPU, GPU, and TPU execution.
  6. Tensors and Operations: Tensors are multi-dimensional arrays; operations (like addition, multiplication) are applied to them in a computation graph.
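    As a quick illustration, element-wise and matrix operations on constant tensors:
    import tensorflow as tf

    a = tf.constant([[1, 2], [3, 4]])
    b = tf.constant([[5, 6], [7, 8]])

    print(tf.add(a, b))     # element-wise addition
    print(tf.matmul(a, b))  # matrix multiplication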
  7. Eager Execution vs Graph Execution: Eager Execution runs operations immediately (good for debugging), while Graph Execution builds a computation graph first and executes later (more optimized).
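    A small sketch of the difference: eager code returns results immediately, while `tf.function` (covered later) traces the computation into a reusable graph:
    x = tf.constant([1.0, 2.0])
    print(x * 2)  # eager: result available immediately

    @tf.function
    def double(t):
        return t * 2  # traced into a graph on first call, reused afterwards

    print(double(x))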
  8. TensorFlow vs Other ML Libraries: TensorFlow is widely used in production and supports deployment across platforms. Compared with PyTorch, which many find more flexible during development, TensorFlow offers a broader ecosystem of deployment tools (TensorFlow Lite, TensorFlow Serving, TensorFlow.js).
  9. Using TensorFlow in Jupyter Notebooks: Install TensorFlow inside a notebook with `%pip install tensorflow` if needed, then `import tensorflow as tf` in any cell.
  10. Exploring TensorFlow Documentation: The official docs at https://www.tensorflow.org cover tutorials, API references, and guides.
  11. Understanding TensorFlow Sessions (TF 1.x): In TensorFlow 1.x, sessions are required to evaluate tensors. Example:
    # TensorFlow 1.x code (in TensorFlow 2.x this API is only available via tf.compat.v1):
    import tensorflow as tf
    a = tf.constant(2)
    b = tf.constant(3)
    sess = tf.Session()
    print(sess.run(a + b))  # Output: 5
    sess.close()
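
    For comparison, the same computation in TensorFlow 2.x needs no session, since eager execution is the default:
    # TensorFlow 2.x equivalent: no session required
    import tensorflow as tf
    a = tf.constant(2)
    b = tf.constant(3)
    print((a + b).numpy())  # Output: 5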
              
  12. TensorFlow Use Cases and Applications: Used for image recognition, NLP, recommendation systems, robotics, forecasting, and deploying models to production.
  13. Setting Up TensorFlow in Colab: Google Colab comes with TensorFlow pre-installed. Just open a new notebook and import it:
    import tensorflow as tf
    print(tf.__version__)
              
  14. Managing GPU/CPU with TensorFlow: TensorFlow can list available devices and control placement:
    import tensorflow as tf

    # Public API for listing devices
    print(tf.config.list_physical_devices())        # all devices
    print(tf.config.list_physical_devices('GPU'))   # GPUs only
              
  15. Your First "Hello World" TensorFlow Program: Below is a simple "Hello TensorFlow" program with full comments:
    # Import TensorFlow library
    import tensorflow as tf
    
    # Print TensorFlow version
    print("TensorFlow version:", tf.__version__)
    
    # Create a constant tensor containing a string
    hello = tf.constant("Hello, TensorFlow!")
    
    # In TensorFlow 2.x, eager execution is enabled by default
    # So we can print the value of the tensor directly
    print(hello.numpy().decode())  # Output: Hello, TensorFlow!
              

Working with Tensors

  1. Tensors: Rank, Shape, and Type
    Tensors are the core data structure in TensorFlow. They represent multi-dimensional arrays. The rank refers to the number of dimensions, shape to the size of each dimension, and type to the data format like float32 or int64.
    import tensorflow as tf
    
    # Create a 2D tensor
    tensor = tf.constant([[1, 2], [3, 4]])
    
    # Display tensor properties
    print("Rank (number of dimensions):", tf.rank(tensor).numpy())
    print("Shape (dimensions):", tensor.shape)
    print("Data type:", tensor.dtype)
            

  2. Creating and Manipulating Tensors
    Tensors can be created with constants, zeros, ones, or random values. You can then manipulate them using built-in TensorFlow functions.
    # Create tensors
    a = tf.constant([1, 2, 3])
    b = tf.ones([2, 2])
    c = tf.zeros([2, 3])
    d = tf.random.uniform([2, 2], minval=0, maxval=10)
    
    # Output tensors
    print("Constant:", a)
    print("Ones:", b)
    print("Zeros:", c)
    print("Random Uniform:", d)
            

  3. Variables and Constants
    Constants are immutable, while variables are mutable and used during model training. You can update variables but not constants.
    # Define a constant and a variable
    const_tensor = tf.constant([5, 6])
    var_tensor = tf.Variable([1.0, 2.0])
    
    # Modify the variable
    var_tensor.assign_add([1.0, 1.0])
    
    print("Constant Tensor:", const_tensor)
    print("Modified Variable Tensor:", var_tensor)
            

  4. Mathematical Operations in TensorFlow
    You can perform element-wise math operations on tensors such as addition, multiplication, and exponentiation.
    x = tf.constant([2.0, 3.0])
    y = tf.constant([1.0, 4.0])
    
    print("Add:", tf.add(x, y))
    print("Multiply:", tf.multiply(x, y))
    print("Power:", tf.pow(x, 2))
            

  5. Reshaping and Broadcasting
    Reshaping adjusts a tensor's shape. Broadcasting allows operations on tensors of different shapes by auto-expanding dimensions.
    # Reshape a tensor
    original = tf.constant([[1, 2], [3, 4], [5, 6]])
    reshaped = tf.reshape(original, [2, 3])
    
    # Broadcast
    a = tf.constant([[1], [2], [3]])  # shape (3, 1)
    b = tf.constant([4, 5])           # shape (2,)
    result = a + b  # broadcasting shapes to (3, 2)
    
    print("Reshaped Tensor:\n", reshaped)
    print("Broadcasted Result:\n", result)
            

  6. Tensor Indexing and Slicing
    Access tensor values using indexing or slicing, similar to Python lists or NumPy arrays.
    tensor = tf.constant([[10, 20], [30, 40], [50, 60]])
    
    # Index and slice
    print("First row:", tensor[0])
    print("Second column:", tensor[:, 1])
    print("Last two rows:\n", tensor[1:, :])
            

  7. Random and Zeros/Ones Initialization
    You can initialize tensors with zeros, ones, or random values for simulations and model weights.
    zeros_tensor = tf.zeros([2, 2])
    ones_tensor = tf.ones([2, 2])
    uniform_tensor = tf.random.uniform([2, 2], 0, 1)
    normal_tensor = tf.random.normal([2, 2], mean=0.0, stddev=1.0)
    
    print("Zeros:\n", zeros_tensor)
    print("Ones:\n", ones_tensor)
    print("Uniform Random:\n", uniform_tensor)
    print("Normal Random:\n", normal_tensor)
            

  8. Type Casting and Conversion
    Convert tensors from one data type to another using `tf.cast`. Useful in computations where types must match.
    float_tensor = tf.constant([1.5, 2.3, 3.7], dtype=tf.float32)
    int_tensor = tf.cast(float_tensor, tf.int32)
    
    print("Float Tensor:", float_tensor)
    print("Converted to Int:", int_tensor)
            

  9. Tensor Aggregation and Reduction
    Reduce functions like sum or mean aggregate values across tensors, often used in loss functions and metrics.
    tensor = tf.constant([[1, 2], [3, 4]])
    
    print("Sum of all elements:", tf.reduce_sum(tensor))
    print("Mean of each column:", tf.reduce_mean(tensor, axis=0))
            

  10. Tensor Comparison and Logic Operations
    You can compare tensors element-wise using logical and relational operators like equal, greater, or logical_and.
    a = tf.constant([2, 4, 6])
    b = tf.constant([1, 4, 8])
    
    print("Equal:", tf.equal(a, b))
    print("Greater:", tf.greater(a, b))
    print("Logical AND:", tf.logical_and(a > 2, b > 2))
            

  11. Performance Tips for Tensor Operations
    Avoid Python loops on tensors. Use vectorized operations and `tf.function` to convert code into efficient graphs.
    x = tf.range(1000000, dtype=tf.float32)
    y = x * 2.0  # Vectorized operation
    
    print("First 5 results:", y[:5])
            

  12. Using tf.function for Graphs
    `tf.function` turns Python code into a TensorFlow graph for faster execution by compiling operations.
    @tf.function
    def multiply(a, b):
        return a * b
    
    x = tf.constant([2.0, 3.0])
    y = tf.constant([4.0, 5.0])
    
    print("Graph output:", multiply(x, y))
            

  13. Device Placement and tf.device
    Explicitly run ops on CPU or GPU using `tf.device`. TensorFlow also does this automatically.
    with tf.device('/CPU:0'):
        cpu_tensor = tf.constant([1.0, 2.0])
    
    # If GPU is available:
    # with tf.device('/GPU:0'):
    #     gpu_tensor = tf.constant([3.0, 4.0])
    
    print("Tensor on CPU:", cpu_tensor)
            

  14. TensorFlow Debugging Tools
    Use `tf.print`, `assert_*`, and eager execution to detect errors during model development.
    x = tf.constant([-1, 0, 1])

    # tf.print works eagerly and inside graphs
    tf.print("Tensor values:", x)

    # Assert values are non-negative (fails here because of -1)
    try:
        tf.debugging.assert_non_negative(x)
    except tf.errors.InvalidArgumentError as e:
        print("Debugging Error:", e.message)
            

  15. Best Practices for Efficient Tensor Usage
    - Use `tf.function` for faster graph execution
    - Avoid unnecessary copies
    - Prefer vectorized ops
    - Profile performance with `tf.profiler`
    @tf.function
    def compute_squared_sum(tensor):
        return tf.reduce_sum(tensor ** 2)
    
    data = tf.constant([1.0, 2.0, 3.0])
    print("Squared sum:", compute_squared_sum(data))
            

Data Input Pipelines with tf.data

  1. The tf.data API Overview
    TensorFlow's `tf.data` API provides high-performance input pipelines to load, transform, and feed data efficiently.
    import tensorflow as tf
    
    # Create a basic dataset from a list
    dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5])
    for item in dataset:
        print(item.numpy())
            

  2. Creating Datasets from Arrays and Files
    You can build datasets from NumPy arrays, lists, or read directly from files like CSV or images.
    import numpy as np
    
    array = np.array([10, 20, 30])
    ds = tf.data.Dataset.from_tensor_slices(array)
    
    for val in ds:
        print(val.numpy())
            

  3. Data Pipelines: map(), batch(), shuffle()
    These functions transform data: `map()` applies a function, `shuffle()` randomizes data, `batch()` groups elements.
    dataset = tf.data.Dataset.range(10)
    dataset = dataset.shuffle(5).map(lambda x: x * 2).batch(3)
    
    for batch in dataset:
        print(batch.numpy())
            

  4. Reading CSV and JSON with TensorFlow
    Use `tf.data.experimental.make_csv_dataset` or `tf.io.decode_json_example` to parse structured text files.
    csv_ds = tf.data.experimental.make_csv_dataset(
        file_pattern='sample.csv',
        batch_size=2,
        num_epochs=1,
        ignore_errors=True
    )
    
    for row in csv_ds:
        print(row)
            

  5. Image Loading with tf.image
    Load and preprocess images using `tf.io.read_file()` and `tf.image.decode_*()` functions.
    image_path = 'example.jpg'
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [128, 128])
    
    print("Image shape:", image.shape)
            

  6. Text Loading and Tokenization
    Use `tf.data.TextLineDataset` and TensorFlow Text/Tokenizer API for NLP preprocessing.
    text_ds = tf.data.TextLineDataset("sample.txt")
    
    for line in text_ds.take(3):
        print(line.numpy().decode())
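
    A minimal tokenization sketch using whitespace splitting with `tf.strings.split`; fuller NLP pipelines typically use `tf.keras.layers.TextVectorization` or the TensorFlow Text library:
    line = tf.constant("TensorFlow makes text preprocessing easy")
    tokens = tf.strings.split(line)  # splits on whitespace by default
    print(tokens.numpy())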
            

  7. TFRecord Format and Serialization
    TFRecord is a binary format for storing datasets efficiently for large-scale training.
    # Writing an example
    def _bytes_feature(value):
        # Wrap raw bytes in a tf.train.Feature
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    example = tf.train.Example(features=tf.train.Features(feature={
        'feature_name': _bytes_feature(b"example")
    }))
    
    # Serialize to string
    serialized = example.SerializeToString()
    print(serialized)
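
    A minimal round trip, writing the serialized example to a file and parsing it back (the filename 'data.tfrecord' is just an example):
    # Write the serialized example to a TFRecord file
    with tf.io.TFRecordWriter("data.tfrecord") as writer:
        writer.write(serialized)

    # Read it back and parse the stored feature
    feature_spec = {'feature_name': tf.io.FixedLenFeature([], tf.string)}
    for record in tf.data.TFRecordDataset("data.tfrecord"):
        parsed = tf.io.parse_single_example(record, feature_spec)
        print(parsed['feature_name'].numpy())  # b'example'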
            

  8. Using Datasets with Batching and Prefetching
    `batch()` groups samples, and `prefetch()` loads next batches in background for better performance.
    ds = tf.data.Dataset.range(100).batch(10).prefetch(1)
    
    for batch in ds.take(2):
        print(batch.numpy())
            

  9. Data Augmentation Techniques
    Data augmentation is common in image processing and artificially expands the dataset with transformations such as flipping, rotating, cropping, and brightness changes.
    image = tf.random.uniform([256, 256, 3])
    
    # Apply augmentations
    flipped = tf.image.flip_left_right(image)
    bright = tf.image.random_brightness(image, max_delta=0.5)
    
    print("Augmented image shape:", flipped.shape)
            

  10. Splitting Data into Train, Test, Validation
    Use `take()` and `skip()` to split datasets for training, validation, and testing.
    dataset = tf.data.Dataset.range(100)
    
    train = dataset.take(60)
    val = dataset.skip(60).take(20)
    test = dataset.skip(80)
    
    print("Train sample:", list(train.as_numpy_iterator())[:5])
    print("Validation sample:", list(val.as_numpy_iterator())[:5])
    print("Test sample:", list(test.as_numpy_iterator())[:5])
            

  11. Handling Large Datasets Efficiently
    For large datasets, use generators, streaming reads, compression, and `AUTOTUNE` for optimal performance.
    large_ds = tf.data.TFRecordDataset("large_data.tfrecord.gz", compression_type="GZIP")
    large_ds = large_ds.batch(32).prefetch(tf.data.AUTOTUNE)
    
    print("Dataset ready for large input pipelines.")
            

  12. Creating Custom Data Loaders
    Use Python generators wrapped in `from_generator()` for complex or nonstandard inputs.
    def custom_gen():
        for i in range(5):
            yield i * 2
    
    custom_ds = tf.data.Dataset.from_generator(
        custom_gen,
        output_signature=tf.TensorSpec(shape=(), dtype=tf.int32))
    
    for item in custom_ds:
        print(item.numpy())
            

  13. Integrating Pandas with TensorFlow
    You can convert pandas DataFrames into TensorFlow datasets using `from_tensor_slices()`.
    import pandas as pd
    
    df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
    ds = tf.data.Dataset.from_tensor_slices(dict(df))
    
    for item in ds:
        print(item)
            

  14. Streaming and Real-Time Data Handling
    For real-time streams, integrate with socket inputs or use `from_generator()` with live feeds.
    import time

    def stream():
        for i in range(3):
            yield i
            time.sleep(1)

    stream_ds = tf.data.Dataset.from_generator(
        stream,
        output_signature=tf.TensorSpec(shape=(), dtype=tf.int32))
    
    for item in stream_ds:
        print(item.numpy())
            

  15. Performance Optimization in Pipelines
    Combine `cache()`, `prefetch()`, and `AUTOTUNE` to speed up training input pipelines.
    dataset = tf.data.Dataset.range(1000)
    dataset = dataset.cache().shuffle(100).batch(64).prefetch(tf.data.AUTOTUNE)
    
    print("Optimized input pipeline built.")
            

Neural Network Fundamentals

  1. Introduction to Neural Networks
    Neural networks are made up of interconnected layers of neurons. Each neuron receives inputs, applies weights, and passes the result through an activation function.
    import tensorflow as tf
    
    # Define a basic model
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, input_shape=(3,), activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    model.summary()
            

  2. Structure of a Neuron and Layers
    Each neuron computes a weighted sum of its inputs, adds a bias, then applies an activation function. Layers group multiple neurons.
    # One neuron layer example
    layer = tf.keras.layers.Dense(1, activation='relu', input_shape=(2,))
    output = layer(tf.constant([[1.0, 2.0]]))
    print("Output:", output.numpy())
            

  3. Activation Functions (ReLU, Sigmoid, Tanh)
    Activation functions introduce non-linearity. Common ones include ReLU, Sigmoid, and Tanh.
    x = tf.constant([-1.0, 0.0, 1.0])
    
    print("ReLU:", tf.nn.relu(x).numpy())
    print("Sigmoid:", tf.nn.sigmoid(x).numpy())
    print("Tanh:", tf.nn.tanh(x).numpy())
            

  4. Forward and Backward Propagation
    The forward pass calculates outputs; the backward pass computes gradients of the loss with respect to the weights (via tf.GradientTape), which an optimizer then uses to update them.
    x = tf.constant([[1.0]])
    y = tf.constant([[0.0]])
    
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    loss_fn = tf.keras.losses.MSE
    
    with tf.GradientTape() as tape:
        prediction = model(x)
        loss = loss_fn(y, prediction)
    
    grads = tape.gradient(loss, model.trainable_variables)
    print("Gradients:", grads)
            

  5. Loss Functions Overview
    Loss functions measure prediction error. Examples: MSE for regression, binary crossentropy for binary classification.
    y_true = tf.constant([1.0, 0.0])
    y_pred = tf.constant([0.8, 0.2])
    
    mse = tf.keras.losses.MSE(y_true, y_pred)
    bce = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)
    
    print("MSE:", mse.numpy())
    print("Binary Crossentropy:", bce.numpy())
            

  6. Optimizers: SGD, Adam, RMSProp
    Optimizers adjust weights during training. Common ones include SGD, Adam, and RMSProp.
    opt1 = tf.keras.optimizers.SGD()
    opt2 = tf.keras.optimizers.Adam()
    opt3 = tf.keras.optimizers.RMSprop()
    
    print("Available optimizers created.")
            

  7. Overfitting and Underfitting
    Overfitting means the model memorizes the training data and fails to generalize (high training accuracy, low validation accuracy). Underfitting means the model is too simple to capture the underlying patterns (both stay low).
    # Compare training vs. validation accuracy trends to diagnose which one you have
    print("Detect via validation metrics; use dropout or regularization to curb overfitting.")
            

  8. Regularization: L1, L2, Dropout
    Regularization reduces overfitting. L1/L2 add penalty to weights; Dropout randomly disables neurons during training.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, kernel_regularizer=tf.keras.regularizers.l2(0.01)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1)
    ])
    print("Model with L2 regularization and dropout.")
            

  9. Batch Normalization Basics
    BatchNorm normalizes layer outputs, speeding up training and improving stability.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Activation('relu')
    ])
    print("Model with batch normalization.")
            

  10. Epochs, Batches, and Iterations
    An epoch is one full pass over the training data, a batch is a subset of samples processed together, and an iteration is one weight update on a single batch.
    # Train with a given batch size and number of epochs
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
    model.compile(optimizer='adam', loss='mse')
    # 100 samples with batch_size=10 -> 10 iterations per epoch
    model.fit(tf.random.normal([100, 3]), tf.random.normal([100, 1]), epochs=2, batch_size=10)
            

  11. Gradient Descent Visualization
    Gradient descent minimizes loss using calculated gradients. Visualize by plotting loss over time.
    import matplotlib.pyplot as plt

    # Minimize f(x) = x^2 with gradient descent and record the loss
    x = tf.Variable(3.0)
    learning_rate = 0.1
    loss_history = []

    for step in range(10):
        with tf.GradientTape() as tape:
            loss = x ** 2
        grad = tape.gradient(loss, x)
        x.assign_sub(learning_rate * grad)
        loss_history.append(loss.numpy())

    plt.plot(loss_history)
    plt.title("Gradient Descent Loss Curve")
    plt.xlabel("Step")
    plt.ylabel("Loss")
    plt.show()
            

  12. Accuracy, Precision, and Recall Metrics
    Accuracy = correct predictions / total predictions. Precision = true positives / predicted positives. Recall = true positives / actual positives.
    y_true = [1, 0, 1, 1]
    y_pred = [1, 0, 0, 1]

    acc = tf.keras.metrics.BinaryAccuracy(); acc.update_state(y_true, y_pred)
    prec = tf.keras.metrics.Precision(); prec.update_state(y_true, y_pred)
    rec = tf.keras.metrics.Recall(); rec.update_state(y_true, y_pred)

    print("Accuracy:", acc.result().numpy())    # 0.75
    print("Precision:", prec.result().numpy())  # 1.0
    print("Recall:", rec.result().numpy())      # ~0.667
            

  13. Learning Rate Schedules
    Use schedules to adjust learning rate during training. Examples: exponential decay, step decay.
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.1,
        decay_steps=100,
        decay_rate=0.96
    )
    
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
    print("Learning rate schedule created.")
            

  14. Binary vs Multiclass Classification
    Binary uses sigmoid output + binary crossentropy; Multiclass uses softmax output + categorical crossentropy.
    # Binary classification model: sigmoid output + binary crossentropy
    binary_model = tf.keras.Sequential([
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    binary_model.compile(optimizer='adam', loss='binary_crossentropy')

    # Multiclass model: softmax output + categorical crossentropy
    multi_model = tf.keras.Sequential([
        tf.keras.layers.Dense(3, activation='softmax')
    ])
    multi_model.compile(optimizer='adam', loss='categorical_crossentropy')
    print("Defined binary and multiclass models.")
            

  15. Multi-Layer Perceptrons in TensorFlow
    MLPs have multiple dense layers and can approximate complex functions.
    mlp = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    mlp.compile(optimizer='adam', loss='binary_crossentropy')
    print("MLP model defined.")
            

Building Models with Keras

  1. What is Keras?
    Keras is a high-level neural network API written in Python. It ships with TensorFlow as `tf.keras` and simplifies building deep learning models.
    import tensorflow as tf
    print("Keras version:", tf.keras.__version__)
            

  2. Sequential API Basics
    The Sequential API lets you build models layer-by-layer in a linear stack.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    model.summary()
            

  3. Functional API Introduction
    The Functional API lets you build complex models with non-linear topology, multiple inputs or outputs.
    inputs = tf.keras.Input(shape=(10,))
    x = tf.keras.layers.Dense(64, activation='relu')(inputs)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.summary()
            

  4. Defining Input and Output Layers
    Inputs specify the shape of data, outputs define the prediction layer.
    input_layer = tf.keras.Input(shape=(20,))
    hidden_layer = tf.keras.layers.Dense(10, activation='relu')(input_layer)
    output_layer = tf.keras.layers.Dense(3, activation='softmax')(hidden_layer)
    model = tf.keras.Model(inputs=input_layer, outputs=output_layer)
    model.summary()
            

  5. Compiling a Model
    Compilation configures the model with optimizer, loss, and metrics.
    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
            

  6. Training with model.fit()
    Train the model using `.fit()` which accepts input data, labels, epochs, and batch size.
    import numpy as np
    
    x_train = np.random.random((100, 20))
    y_train = np.random.randint(3, size=(100,))
    
    model.fit(x_train, y_train, epochs=5, batch_size=16)
            

  7. Evaluating with model.evaluate()
    Evaluate the trained model on test data to get loss and metrics.
    x_test = np.random.random((20, 20))
    y_test = np.random.randint(3, size=(20,))
    
    loss, accuracy = model.evaluate(x_test, y_test)
    print("Test loss:", loss)
    print("Test accuracy:", accuracy)
            

  8. Making Predictions with model.predict()
    Use `.predict()` to generate output predictions from input samples.
    new_data = np.random.random((3, 20))
    predictions = model.predict(new_data)
    print("Predictions:\n", predictions)
    print("Predicted classes:", predictions.argmax(axis=1))
            

  9. Saving and Loading Models
    Save entire model or weights for later use.
    model.save('my_model.h5')  # Save entire model
    
    # Load model later
    loaded_model = tf.keras.models.load_model('my_model.h5')
    print("Model loaded successfully.")
            

  10. Using Callbacks and Checkpoints
    Callbacks let you customize training. Checkpoints save model weights during training.
    checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True)
    earlystop_cb = tf.keras.callbacks.EarlyStopping(patience=3)
    
    model.fit(
        x_train, y_train,
        epochs=20,
        validation_split=0.2,
        callbacks=[checkpoint_cb, earlystop_cb]
    )
            

  11. Early Stopping and Model Monitoring
    Early stopping stops training when validation loss stops improving to prevent overfitting.
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor='val_loss',
        patience=2,
        restore_best_weights=True
    )
    
    model.fit(x_train, y_train, epochs=50, validation_split=0.2, callbacks=[early_stop])
            

  12. TensorBoard Integration
    TensorBoard visualizes training metrics, graphs, and more.
    tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir='./logs')
    
    model.fit(x_train, y_train, epochs=5, callbacks=[tensorboard_cb])
    # Run `tensorboard --logdir=./logs` in terminal to view
            

  13. Custom Metrics and Loss Functions
    You can define custom functions for metrics or losses.
    def custom_mse(y_true, y_pred):
        return tf.reduce_mean(tf.square(y_true - y_pred))
    
    model.compile(optimizer='adam', loss=custom_mse)
    print("Model compiled with custom loss function.")
            

  14. Transfer Learning in Keras
    Reuse pre-trained models and fine-tune for your task.
    base_model = tf.keras.applications.MobileNetV2(input_shape=(224,224,3),
                                                   include_top=False,
                                                   weights='imagenet')
    base_model.trainable = False
    
    model = tf.keras.Sequential([
        base_model,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    print("Transfer learning model created.")
            

  15. Best Practices for Keras Models
    Use proper data preprocessing, callbacks, clear model summaries, and monitor training closely.
    # Always start with:
    model.summary()
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    
    # Use validation data and callbacks
    model.fit(x_train, y_train, validation_split=0.2, epochs=10, callbacks=[earlystop_cb])
            

Computer Vision with TensorFlow

  1. Intro to Computer Vision Tasks
    Computer vision involves teaching machines to interpret and understand visual information from images or videos, such as classification, detection, and segmentation.
    # TensorFlow supports many computer vision tasks like classification, detection, and segmentation.
    print("TensorFlow Computer Vision Capabilities Ready")
            

  2. Image Classification Basics
    Image classification assigns a label to an entire image, e.g., classifying photos as cats or dogs.
    # Example: Simple image classification model architecture
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.summary()
            

  3. Preprocessing Images
    Preprocessing includes resizing, normalization, and converting images into tensors for model input.
    import tensorflow as tf
    
    def preprocess(image):
        image = tf.image.resize(image, [28, 28])
        image = image / 255.0  # normalize pixel values
        return image
    
    sample_image = tf.random.uniform([100, 100, 3])
    processed_image = preprocess(sample_image)
    print("Processed image shape:", processed_image.shape)
            

  4. Convolutional Neural Networks (CNNs)
    CNNs use convolutional layers to detect local patterns like edges and shapes in images.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.summary()
            

  5. Pooling Layers: Max, Average
    Pooling layers reduce spatial dimensions: max pooling keeps the maximum value in each region, average pooling the mean.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
        # tf.keras.layers.AveragePooling2D(pool_size=(2, 2)),  # average-pooling alternative
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.summary()
            

  6. Building CNNs in Keras
    Stack convolutional and pooling layers followed by dense layers to build CNN architectures.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.summary()
            

  7. Data Augmentation for Images
    Augmentation artificially expands the dataset by applying transformations like flip, rotate, zoom.
    data_augmentation = tf.keras.Sequential([
        tf.keras.layers.RandomFlip('horizontal'),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1)
    ])
    
    sample_image = tf.expand_dims(processed_image, 0)  # batch dimension
    augmented_image = data_augmentation(sample_image)
    print("Augmented image shape:", augmented_image.shape)
            

  8. Using Pretrained Models (VGG, ResNet)
    Transfer learning uses pretrained CNNs like VGG16 or ResNet as feature extractors.
    base_model = tf.keras.applications.VGG16(
        input_shape=(224, 224, 3),
        include_top=False,
        weights='imagenet'
    )
    base_model.trainable = False
    print("Loaded VGG16 base model")
            

  9. Fine-Tuning CNN Models
    After transfer learning, unfreeze some layers for fine-tuning on your dataset.
    base_model.trainable = True
    for layer in base_model.layers[:-5]:
        layer.trainable = False
    print("Fine-tuning last 5 layers")
            

  10. Object Detection Overview
    Object detection identifies and localizes objects within an image using bounding boxes.
    # TensorFlow supports models like SSD, Faster R-CNN for object detection.
    print("Object detection models available via TensorFlow Model Zoo")
            

  11. Using TensorFlow Hub for Vision Models
    TensorFlow Hub hosts pretrained models which can be loaded easily for vision tasks.
    import tensorflow_hub as hub  # pip install tensorflow-hub
    
    model = hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/4")
    print("Loaded MobileNetV2 from TensorFlow Hub")
            

  12. Image Segmentation Basics
    Segmentation divides an image into meaningful parts or regions, e.g., semantic segmentation.
    # Segmentation uses models like U-Net or DeepLab
    print("Use segmentation models for pixel-level classification")
            

  13. Custom CNN Architectures
    You can design your own CNN layers and connections depending on the task.
    inputs = tf.keras.Input(shape=(64, 64, 3))
    x = tf.keras.layers.Conv2D(32, 3, activation='relu')(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation='relu')(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
    custom_cnn = tf.keras.Model(inputs, outputs)
    custom_cnn.summary()
            

  14. Visualizing Activations and Filters
    Visualizing what filters detect can give insights into the CNN's learning process.
    import matplotlib.pyplot as plt

    # Model exposing the first Conv layer's activations
    activation_model = tf.keras.Model(custom_cnn.input, custom_cnn.layers[1].output)
    activations = activation_model(tf.random.uniform([1, 64, 64, 3]))
    print("First conv activation shape:", activations.shape)  # view channels with plt.imshow or TensorBoard
            

  15. Case Study: Classifying Handwritten Digits (MNIST)
    MNIST is a standard dataset for handwritten digit classification.
    from tensorflow.keras.datasets import mnist
    
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train[..., tf.newaxis]/255.0
    x_test = x_test[..., tf.newaxis]/255.0
    
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28,28,1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))