VM-LEARNING /class.x ·track.ai ·ch-b7 session: 2026_27
PART B ▪ UNIT 7
Advanced Python
Jupyter · Virtual Env · NumPy · Pandas · Matplotlib · OpenCV
Advanced Python builds on the basics learnt in Class IX. This unit covers the professional workflow used by data scientists and AI developers — working in Jupyter Notebook, creating virtual environments, installing Python packages with pip, and using powerful libraries such as NumPy, Pandas, Matplotlib and OpenCV to solve real problems.
This unit is practical-based — assessed through a hands-on examination. You must be able to write and run the 8 mandatory CBSE programs covering list operations, statistics (NumPy), charts (Matplotlib), CSV files (Pandas), and image handling (OpenCV).
Learning Outcome 1: Recap Python basics from Class IX

7.1 Python Recap — Essentials from Class IX

Before diving into Advanced Python, here is a quick refresher of the essentials you learnt last year:

🐍 What is Python? — A popular, easy-to-learn programming language used worldwide. Created by Guido van Rossum in 1991. The #1 language for AI, Data Science and ML.
✨ Key Features — Simple · Readable · Interpreted · Free & Open-source · Huge library support · Cross-platform.
📝 Comments — # single line comment · """ multi-line comment """
🔢 Data Types — int (whole numbers) · float (decimals) · str (text) · bool (True/False).
🎯 Variables — x = 10 · name = "Ravi" · No need to declare the type — Python figures it out.
➕ Operators — Arithmetic: + − * / // % ** · Comparison: == != < > <= >= · Logical: and · or · not.
📥 Input & Output — print() displays output · input() takes user input (always returns str).
🔀 Conditionals — if · elif · else — use 4-space indentation.
🔁 Loops — for (fixed number of times) · while (until the condition is false). range(start, stop, step) generates sequences.
📋 Lists — Ordered collection in [ ]. Indexing starts at 0. Methods: append · insert · remove · pop · extend · sort · reverse; use the built-in len() for length.
Learning Outcome 2: Work with Jupyter Notebook

7.2 Jupyter Notebook

Jupyter Notebook is an open-source, web-based tool that allows you to create and share documents containing live code, equations, visualisations and narrative text. It is the most popular environment for data-science and AI development.
🔹 Why Use Jupyter Notebook?
🧩 Interactive — Run code cell by cell — see output immediately below.
📊 Rich Output — Charts, tables, images and HTML render inline.
📝 Markdown Support — Mix code with formatted notes, equations (LaTeX), headings.
💾 Save as .ipynb — Share a complete notebook — code + output + notes — in one file.
🌐 Browser-Based — Works in any modern browser — Chrome, Firefox, Edge.
🤝 Shareable — Upload to GitHub / Kaggle / Google Colab for collaboration.
🔹 Installing Jupyter Notebook

Option 1 — Anaconda Distribution (recommended):

  1. Download Anaconda from anaconda.com/download.
  2. Install it following the on-screen instructions.
  3. Open Anaconda Navigator → click Launch under Jupyter Notebook.

Option 2 — Using pip:

# Install Jupyter
pip install jupyter

# Launch Jupyter Notebook
jupyter notebook
🔹 Parts of Jupyter Notebook
Component — Purpose
Code Cell — Write and execute Python code. Press Shift + Enter to run.
Markdown Cell — Write formatted text, headings, lists, equations.
Kernel — The engine that runs your Python code. Can be restarted.
Toolbar — Save · Add cell · Cut/Copy/Paste · Run · Restart Kernel.
Menu Bar — File · Edit · View · Insert · Cell · Kernel · Help.
🔹 Common Keyboard Shortcuts
Shortcut — Action
Shift + Enter — Run cell + move to next
Ctrl + Enter — Run cell, stay put
Alt + Enter — Run cell + insert new below
A — Insert cell above
B — Insert cell below
DD — Delete cell (press D twice)
M — Change to Markdown
Y — Change to Code

7.3 Virtual Environments

A Virtual Environment is an isolated Python environment where you can install packages without affecting your global Python installation. Each project can have its own venv with its own dependencies and versions.
🔹 Why Virtual Environments?
  • Isolation — keep project dependencies separate.
  • Version control — Project A can use NumPy 1.20, Project B can use NumPy 1.24 — no conflict.
  • Clean global — don't clutter your global Python install.
  • Reproducibility — share exact package versions with teammates via requirements.txt.
  • Safe experimentation — test new packages without breaking existing projects.
🔹 Creating a Virtual Environment
# Create a virtual environment named 'myenv'
python -m venv myenv

# Activate it — Windows
myenv\Scripts\activate

# Activate it — Mac / Linux
source myenv/bin/activate

# Your prompt will now show (myenv) — indicating activation
# Install packages inside this env
pip install numpy pandas matplotlib

# Save installed packages to a file
pip freeze > requirements.txt

# Deactivate when done
deactivate
🔹 Using Conda Environments (Anaconda)
# Create conda environment
conda create -n myenv python=3.11

# Activate
conda activate myenv

# Install package
conda install numpy pandas

# List environments
conda env list

# Deactivate
conda deactivate

7.4 Installing Python Packages (pip)

pip is the package installer for Python. It lets you install, upgrade and remove packages from the Python Package Index (PyPI) — a repository of 400,000+ free packages.
🔹 Common pip Commands
Command — Purpose
pip install package_name — Install a package (e.g., pip install numpy)
pip install package==version — Install a specific version (e.g., numpy==1.24.0)
pip install --upgrade package — Upgrade to the latest version
pip uninstall package — Remove a package
pip list — List all installed packages
pip freeze — List installed packages with exact versions (for requirements.txt)
pip install -r requirements.txt — Install all packages from a requirements file
pip show package — Show details about an installed package

7.5 Essential Python Libraries for AI

🔢 NumPy

Numerical Python — high-performance arrays and mathematical operations. Foundation of all data science in Python.

🐼 Pandas

Data analysis library — handles CSV, Excel, SQL data in DataFrames. Reading, cleaning, transforming, summarising.

📊 Matplotlib

Plotting library — create line charts, scatter plots, bar charts, histograms and more.

📷 OpenCV

Computer Vision library — read, edit, transform images and videos. Face detection, object tracking.

🧠 Scikit-learn

Machine Learning library — classification, regression, clustering ready to use.

🔥 TensorFlow / Keras

Deep-Learning libraries — build and train neural networks and CNNs.

Learning Outcome 3: Practical Programs (CBSE Suggested List)

7.6 Suggested Programs — CBSE Practical Examination

The CBSE Class X practical exam requires you to write and execute these 8 programs. Each is covered below with full code and expected output.

📋 Program 1 — Add Elements of Two Lists

Program 1: Add Elements of Two Lists
list1 = [10, 20, 30, 40, 50]
list2 = [1, 2, 3, 4, 5]

# Add element-wise using list comprehension
result = [list1[i] + list2[i] for i in range(len(list1))]

print("List 1:", list1)
print("List 2:", list2)
print("Sum of elements:", result)
List 1: [10, 20, 30, 40, 50]
List 2: [1, 2, 3, 4, 5]
Sum of elements: [11, 22, 33, 44, 55]

Alternative — using NumPy (cleaner):

import numpy as np

list1 = np.array([10, 20, 30, 40, 50])
list2 = np.array([1, 2, 3, 4, 5])
result = list1 + list2

print("Sum:", result)
Sum: [11 22 33 44 55]

🧮 Program 2 — Mean, Median and Mode using NumPy

Program 2: Calculate Mean, Median and Mode using NumPy
import numpy as np
from scipy import stats

data = np.array([5, 2, 9, 4, 7, 2, 8, 2, 3])

mean_value   = np.mean(data)
median_value = np.median(data)
mode_value   = stats.mode(data, keepdims=True).mode[0]

print("Data :", data)
print("Mean  :", mean_value)
print("Median:", median_value)
print("Mode  :", mode_value)
Data : [5 2 9 4 7 2 8 2 3]
Mean  : 4.666666666666667
Median: 4.0
Mode  : 2
Note: NumPy alone has np.mean() and np.median() — for mode, use scipy.stats.mode() or use Python's built-in statistics.mode() from the statistics module.

📈 Program 3 — Line Chart from (2,5) to (9,10)

Program 3: Display a Line Chart
import matplotlib.pyplot as plt

x = [2, 9]
y = [5, 10]

plt.plot(x, y, marker='o', color='blue', linewidth=2)
plt.title("Line Chart from (2,5) to (9,10)")
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.grid(True)
plt.show()
▶ A line chart window opens — a straight blue line going from point (2,5) to point (9,10), with circular markers at both endpoints, grid turned on, and the title "Line Chart from (2,5) to (9,10)".

⚫ Program 4 — Scatter Chart for 5 Points

Program 4: Display Scatter Chart for (2,5), (9,10), (8,3), (5,7), (6,18)
import matplotlib.pyplot as plt

x = [2, 9, 8, 5, 6]
y = [5, 10, 3, 7, 18]

plt.scatter(x, y, color='red', s=100, marker='o')
plt.title("Scatter Chart")
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.grid(True)
plt.show()
▶ A scatter plot opens — 5 red circular dots at the coordinates (2,5), (9,10), (8,3), (5,7), (6,18) on a grid, no line connecting them.

📑 Program 5 — Read CSV and Display 10 Rows

Program 5: Read CSV File and Display First 10 Rows
import pandas as pd

# Read CSV file
df = pd.read_csv('data.csv')

# Display first 10 rows
print("First 10 rows of the dataset:")
print(df.head(10))
First 10 rows of the dataset:
     Name  Age  Marks       City
0   Rahul   15     87      Delhi
1   Priya   16     92     Mumbai
2    Amit   14     78       Pune
3   Sneha   15     85    Chennai
4    Ravi   16     90  Bangalore
5  Anjali   15     81      Delhi
6   Karan   14     88     Mumbai
7    Isha   16     76    Kolkata
8  Vikram   15     95      Delhi
9    Neha   14     82       Pune
Tip: Use df.head() for first 5 rows, df.tail() for last 5, df.head(n) for first n rows.

ℹ️ Program 6 — Read CSV and Display Info

Program 6: Read CSV File and Display Information
import pandas as pd

df = pd.read_csv('data.csv')

print("Shape of dataset:", df.shape)
print("\nColumn names:", df.columns.tolist())
print("\nData types:")
print(df.dtypes)
print("\nStatistical summary:")
print(df.describe())
print("\nFull info:")
df.info()
Shape of dataset: (10, 4)

Column names: ['Name', 'Age', 'Marks', 'City']

Data types:
Name     object
Age       int64
Marks     int64
City     object
dtype: object

Statistical summary:
             Age      Marks
count  10.000000  10.000000
mean   15.000000  85.400000
std     0.816497   6.168017
min    14.000000  76.000000
50%    15.000000  86.000000
max    16.000000  95.000000

Full info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 4 columns):
 0  Name   10 non-null  object
 1  Age    10 non-null  int64
 2  Marks  10 non-null  int64
 3  City   10 non-null  object
dtypes: int64(2), object(2)
memory usage: 448.0+ bytes

🖼️ Program 7 — Read and Display an Image

Program 7: Read and Display an Image using Python
import cv2

# Read the image
img = cv2.imread('photo.jpg')

# Display the image in a window
cv2.imshow('My Image', img)

# Wait for any key press
cv2.waitKey(0)

# Close the window
cv2.destroyAllWindows()
▶ A window opens displaying the image "photo.jpg". The program waits until any key is pressed, then closes.

Alternative — using Matplotlib:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('photo.jpg')
# OpenCV reads in BGR, convert to RGB for correct colours
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

plt.imshow(img_rgb)
plt.axis('off')
plt.show()

📐 Program 8 — Read an Image and Identify its Shape

Program 8: Read an Image and Identify its Shape
import cv2

img = cv2.imread('photo.jpg')

print("Shape of image:", img.shape)
print("Height (rows)  :", img.shape[0])
print("Width  (cols)  :", img.shape[1])
print("Number of Channels :", img.shape[2])
print("Total pixels        :", img.shape[0] * img.shape[1])
print("Image data type     :", img.dtype)
Shape of image: (480, 640, 3)
Height (rows)  : 480
Width  (cols)  : 640
Number of Channels : 3
Total pixels        : 307200
Image data type     : uint8
Understanding img.shape:
  • img.shape[0] = Height (number of rows of pixels)
  • img.shape[1] = Width (number of columns of pixels)
  • img.shape[2] = Channels — 3 for a colour image (stored as BGR in OpenCV). A grayscale image has only two dimensions, so img.shape[2] does not exist for it.
  • img.dtype = data type of pixel values — typically uint8 (0-255)

7.7 Combining Libraries — A Complete Example

Here's how libraries work together in a real AI workflow:

Example: Read CSV → Analyse → Visualise
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# 1. Read data (Pandas)
df = pd.read_csv('students.csv')
print("Dataset Shape:", df.shape)

# 2. Calculate statistics (NumPy)
avg_marks = np.mean(df['Marks'])
print("Average marks:", avg_marks)

# 3. Visualise (Matplotlib)
plt.bar(df['Name'], df['Marks'], color='skyblue')
plt.title("Student Marks")
plt.xlabel("Student Name")
plt.ylabel("Marks")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
Dataset Shape: (10, 4)
Average marks: 85.4
▶ A bar chart opens showing each student's marks with their names on the x-axis.

7.8 Best Practices for Advanced Python

  • Always use virtual environments — keep projects isolated.
  • Use meaningful names — student_marks is better than sm.
  • Comment your code — future you will thank present you.
  • Use libraries instead of reinventing — a vectorised NumPy operation is often orders of magnitude faster than an equivalent Python loop.
  • Handle errors gracefully — use try / except around file reads.
  • Save your work frequently in Jupyter.
  • Keep a requirements.txt — so others can reproduce your setup.
  • Read documentation — NumPy, Pandas, Matplotlib all have excellent docs.
  • Practise on real datasets — Kaggle has thousands of free datasets.

7.9 Common Errors & How to Fix

Error — Cause — Fix
ModuleNotFoundError — Library not installed — pip install <name>
FileNotFoundError — CSV or image not in current folder — Use the full path or check spelling
IndentationError — Mixed tabs and spaces — Use 4 spaces consistently
NameError — Variable used before it is defined — Define the variable above its use
TypeError — Mixing types (string + int) — Convert with str() or int()
KeyError — DataFrame column doesn't exist — Check column names with df.columns
IndexError — Accessing a list/array out of range — Check with len() first

Quick Revision — Key Points to Remember

  • Jupyter Notebook = web-based tool for interactive Python — code + output + notes in one file (.ipynb). Install via Anaconda or pip install jupyter.
  • Jupyter shortcuts: Shift+Enter run, A/B insert above/below, DD delete, M markdown, Y code.
  • Virtual Environment = isolated Python env. Create: python -m venv myenv. Activate on Windows: myenv\Scripts\activate.
  • pip = Python package installer. pip install package, pip freeze > requirements.txt.
  • 6 essential AI libraries: NumPy (arrays) · Pandas (DataFrames) · Matplotlib (charts) · OpenCV (images) · Scikit-learn (ML) · TensorFlow/Keras (DL).
  • Program 1 — Add lists: loop or np.array + np.array.
  • Program 2 — Mean/Median/Mode: np.mean(), np.median(), stats.mode().
  • Program 3 — Line chart: plt.plot(x, y) + plt.show().
  • Program 4 — Scatter chart: plt.scatter(x, y).
  • Program 5 — Read CSV 10 rows: pd.read_csv('file.csv').head(10).
  • Program 6 — CSV info: df.shape, df.columns, df.dtypes, df.describe(), df.info().
  • Program 7 — Read & display image: cv2.imread + cv2.imshow + cv2.waitKey(0) + cv2.destroyAllWindows().
  • Program 8 — Image shape: img.shape → (Height, Width, Channels).
  • Shape interpretation: shape[0]=Height, shape[1]=Width, shape[2]=3 for RGB or 1 for Grayscale.
  • Common errors: ModuleNotFoundError · FileNotFoundError · IndentationError · TypeError · KeyError · IndexError.