Lab 02: Flet GUI basics

Flet is an open-source Python framework for building real, reactive user interfaces—web, desktop, and mobile—using simple Python code. It wraps Google’s Flutter engine under the hood, so you get a native-feeling UI, smooth animations, and a rich set of ready-made controls without touching Dart. You write components, manage state, and handle events in Python, then run the same app in a browser, as a desktop app, or packaged for phones. It’s great for quickly turning scripts and data tools into polished, shareable apps.

Installation

  • Windows/macOS: Download Python from python.org (on Windows, tick “Add Python to PATH”).
  • Linux (Debian/Ubuntu):

sudo apt update && sudo apt install -y python3 python3-pip python3-venv

  • Create a virtual environment (recommended):

# macOS/Linux
python3 -m venv .venv
source .venv/bin/activate
 
# Windows (PowerShell)
python -m venv .venv
.\.venv\Scripts\Activate.ps1

  • Install Flet:

pip install --upgrade pip
pip install flet==0.28.3
Note: Flet is in very active development, so we will pin the 0.28.3 release specifically.
Tip: In VS Code, pick the right interpreter: Cmd/Ctrl+Shift+P → Python: Select Interpreter and choose .venv.

Your system may need a restart (or at least a new terminal session) before flet becomes a globally recognized command.
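To verify the install inside the active environment, you can query the package metadata (a quick sanity check using only the standard library; the distribution name flet is the only assumption):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg: str):
    """Return the installed version of a package, or None if it is missing."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

print(installed_version("flet"))  # expect "0.28.3" in this lab's venv
```

If this prints None, check that the correct virtual environment is activated.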

Hello, Flet!

Create a new Flet project by running the following command in your workspace directory:

flet create --project-name MLE-flet-demo --description "becm33mle app demo"

You may encounter an error if you do not have Git installed.

Directory structure created by flet create (with a local Python virtual environment .venv at the project root).

├── README.md
├── pyproject.toml
├── .venv
├── src
│   ├── assets
│   │   └── icon.png
│   └── main.py
└── storage
    ├── data
    └── temp

Run the template main.py app:

flet run

main.py is the default entry point of any Flet app. Other files can be launched using flet run <path_to_file>. Flet apps can also be launched using python3 <path_to_file>, though this approach is not recommended!

You should see an interactive window with the running template app.

Running the app... Everywhere!

Flet can be run as a desktop app:

flet run

Or a web app

flet run --web

Or on a mobile phone using the official companion app (iOS, Android):

# Android
flet run --android
# iOS
flet run --ios
Note: devices must be on the same local network.

You can use the hot-reload feature for fast iterative GUI development by running flet run -d -r, which watches for changes in the script’s directory and all subdirectories.

Routing in Flet

Depending on the target platform, a Flet app can be multi-user (web). For this purpose, Flet spawns a new Page for each connected user (on desktop and mobile there is usually just a single page). Each Page can have multiple Views, which are stacked in a list acting as a sort of navigation history. Views can be appended (opening a new page) or popped (going back). Each View holds an assortment of Controls that build the GUI of that particular View.
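The stack behavior can be pictured with a plain Python list (an illustration only, not Flet’s API):

```python
# Plain-list sketch of the View stack (illustration only, not Flet's API)
views = ["/"]        # the root View always sits at the bottom
views.append("/1")   # navigating forward appends (pushes) a View
print(views[-1])     # the visible View is the top of the stack: "/1"
views.pop()          # going back pops the top View
print(views[-1])     # back on the root View: "/"
```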

The Page exposes the current route and an on_route_change event handler. These can be used to handle View popping/appending and unknown routes (404 page not found).

The route is the part of the URL from the / symbol onward (inclusive) - e.g. mydomain.cz/route/to/somewhere has the route /route/to/somewhere
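The route can be recovered from a full URL with standard-library parsing (a small illustration, not something Flet requires you to do):

```python
from urllib.parse import urlsplit

def route_of(url: str) -> str:
    """Return the route (path) part of a URL, defaulting to '/'."""
    return urlsplit(url).path or "/"

print(route_of("https://mydomain.cz/route/to/somewhere"))  # → /route/to/somewhere
```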

Each page below is a class inheriting from View. The app builds the view stack in on_route_change and handles back navigation in on_view_pop.

Routing demo

import flet as ft
 
class MainView(ft.View):
    def __init__(self, page: ft.Page):
        super().__init__(
            route="/",
            appbar=ft.AppBar(title=ft.Text("Main")),
            controls=[
                ft.Text("This is the main page"),
                ft.Row([
                    ft.ElevatedButton("Go to 1", on_click=lambda _:page.go("/1")),
                    ft.ElevatedButton("Go to 2", on_click=lambda _:page.go("/2")),
                ])
            ],
        )
 
class PageOne(ft.View):
    def __init__(self, page: ft.Page):
        super().__init__(
            route="/1",
            appbar=ft.AppBar(title=ft.Text("Page 1")),
            controls=[
                ft.Text("Hello from page 1"),
                ft.ElevatedButton("Home", on_click=lambda _:page.go("/")),
            ],
        )
 
class PageTwo(ft.View):
    def __init__(self, page: ft.Page):
        super().__init__(
            route="/2",
            appbar=ft.AppBar(title=ft.Text("Page 2")),
            controls=[
                ft.Text("Hello from page 2"),
                ft.ElevatedButton("Home", on_click=lambda _:page.go("/")),
            ],
        )
 
def main(page: ft.Page):
    page.title = "Routing demo"
 
    def route_change(e: ft.RouteChangeEvent):
        page.views.clear()
        page.views.append(MainView(page))
        if page.route == "/1":
            page.views.append(PageOne(page))
        if page.route == "/2":
            page.views.append(PageTwo(page))
        page.update()
 
    def view_pop(e: ft.ViewPopEvent):
        page.views.pop()
        page.go(page.views[-1].route)
 
    page.on_route_change = route_change
    page.on_view_pop = view_pop
    page.go(page.route)
 
ft.app(target=main)

  • ft.app(target=main) starts the app and creates a Page (one per user)
  • Page, View, and anything placed in the Views are so-called controls (e.g. ft.Text, ft.ElevatedButton, …)
  • For changes made to a control to take effect, we must call its .update() method or any of its parents’ .update() methods.
  • It can be handy to pass the reference to Page to our Views.
  • All controls have a .page property, which is only accessible if the control is currently “visible” on the page (usually not accessible in __init__).

Custom controls

Flet ships with many built-ins, but you can create reusable components by styling a built-in control or composing several controls into one. Pick the closest existing control (e.g., Container, Row, Text, ElevatedButton, or even View) and inherit from it.

You can mirror the Routing demo where each page inherited from ft.View — here you inherit from the most appropriate control and specialize it.

Patterns

  • Styled control (inherit & set defaults): Ideal for consistent buttons, texts, inputs used across the app.
  • Composite control (inherit a container): Build a mini-widget by composing children and exposing a tiny API.
  • State & updates: Keep state in the class and call self.update() (or update child props, then self.update()).

Example 1 — Styled control

A primary button with consistent look & behavior used app-wide.

import flet as ft
 
class PrimaryButton(ft.ElevatedButton):
    def __init__(self, **kwargs):
        super().__init__(
            bgcolor=ft.Colors.BLUE,
            color=ft.Colors.WHITE,
            style=ft.ButtonStyle(shape=ft.RoundedRectangleBorder(radius=8)),
            **kwargs
        )

Example 2 — Composite control (counter)

A small component composed of a label + value + two buttons. We inherit from a Row and manage internal state.

import flet as ft
 
class Counter(ft.Row):
    def __init__(self, value: int = 0):
        super().__init__(alignment=ft.MainAxisAlignment.START, spacing=20)
        self._value = value
 
        self.value_text = ft.Text(str(self._value), weight=ft.FontWeight.BOLD)
        dec_btn = ft.IconButton(ft.Icons.REMOVE, on_click=lambda _:self._value_change(delta=-1))
        inc_btn = ft.IconButton(ft.Icons.ADD, on_click=lambda _:self._value_change(delta=1))
 
        self.controls = [ft.Text("Count:"), self.value_text, dec_btn, inc_btn]
 
    def _value_change(self, delta=0):
        self._value += delta
        self.value_text.value = str(self._value)
        self.update()

Minimal demo of custom controls

A tiny app demo using both custom controls.

import flet as ft
 
# PrimaryButton and Counter are the custom controls defined in the examples above
def main(page: ft.Page):
    page.title = "Custom controls (modern API)"
    out = ft.Text("Click the button or use the counter.")
    page.add(
        PrimaryButton(text="Primary action",
                      on_click=lambda e: (setattr(out, "value", "Clicked!"), page.update())),
        Counter(3),
        out
    )
 
ft.app(target=main)

When to pick the base class

  • Looks like text: inherit Text (e.g., timer, counter, clock,…).
  • Horizontal/vertical layout: inherit Row / Column.
  • Enclosed container with controls: inherit Container (most common).
  • Whole screen: inherit View (as in the routing example).

Local Ollama chatbot demo

Two ways to run the demo:

  • Local machine: install Ollama on your laptop/desktop and run it there.
  • CTU GPU servers: SSH to gpu.fel.cvut.cz and run Ollama remotely, then forward the API port to your machine.
Ollama serves an HTTP API on port 11434 bound to localhost by default. When running remotely, use SSH local port forwarding so your laptop can call the remote API safely.

Option A — Local install (short)

You need a dedicated GPU with ~8 GB of VRAM to run an LLM locally!

  1. Install Ollama for your OS (Linux/macOS/Windows).
  2. Start server: ollama serve
  3. Pull a small model (e.g., Llama 3 8B): ollama pull llama3:8b

Option B — CTU GPU servers

1) SSH with local port forwarding

From your laptop, open the tunnel first:

ssh -L 11434:localhost:11434 <your_ctu_username>@gpu.fel.cvut.cz
Explanation: -L local_port:remote_host:remote_port maps your local 11434 to remote localhost:11434. After connecting, any request you send to http://localhost:11434 on your laptop goes through the SSH tunnel to the server’s Ollama.

Keep this SSH session open while you work. If you close it, the tunnel (and access to the API) closes.

You must be connected to the CTU network or connected via the CTU VPN. More info about the CTU GPU cluster can be found here.

2) Install Ollama (Linux x86_64 tarball)

Download the latest release (at the time of writing, v0.12.4-rc6):

wget https://github.com/ollama/ollama/releases/download/v0.12.4-rc6/ollama-linux-amd64.tgz

Unpack to a folder in your home:

mkdir ollama
tar -xvzf ollama-linux-amd64.tgz -C ollama

3) Start the Ollama server

Run the server (keeps the process in the foreground):

./ollama/bin/ollama serve

4) Download a small Llama 3 model

In a new SSH tab, pull the model:

./ollama/bin/ollama pull llama3:8b
(Adjust the tag if you prefer a different small variant.)

5) Test from your laptop via the tunnel

With the SSH tunnel still open, you can hit the remote API at your localhost:

# list models
curl http://localhost:11434/api/tags

# quick generate call
curl http://localhost:11434/api/generate -d '{"model":"llama3:8b","prompt":"Say hello from CTU GPU."}'
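The same tags check can be done from Python with only the standard library (a sketch; it returns None when the tunnel or server is not reachable):

```python
import json
from urllib.request import urlopen

def list_models(base: str = "http://localhost:11434"):
    """Return the model names known to the Ollama API, or None if unreachable."""
    try:
        with urlopen(f"{base}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m.get("name") for m in data.get("models", [])]
    except (OSError, ValueError):
        return None

print(list_models())
```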

Do not change Ollama’s bind address to 0.0.0.0 on the server. Keep it on localhost + SSH tunneling.

Flet app demo

A Flet UI using the ollama Python package (pip install ollama), which talks to the API on http://localhost:11434.

import threading
from dataclasses import dataclass
from typing import List, Dict, Optional
 
import flet as ft
 
try:
    import ollama
except ImportError:
    ollama = None
 
 
# -----------------------------
# Data structures
# -----------------------------
@dataclass
class ChatMsg:
    is_user: bool
    content: str
 
 
# -----------------------------
# UI Message Bubble
# -----------------------------
class MessageBubble(ft.Container):
    def __init__(self, is_user: bool, text: str):
        super().__init__()
        self.padding = 12
        self.margin = ft.margin.only(
            left=40 if is_user else 0,
            right=0 if is_user else 40,
            top=6,
            bottom=6,
        )
        self.bgcolor = ft.Colors.BLUE_800 if is_user else ft.Colors.GREY_800
        self.border_radius = ft.border_radius.all(14)
        self.content = ft.Text(text, selectable=True, color=ft.Colors.WHITE, size=14)
 
    def set_text(self, text: str):
        if isinstance(self.content, ft.Text):
            self.content.value = text
 
 
# -----------------------------
# Main app
# -----------------------------
class FletOllamaDemo:
    def __init__(self, page: ft.Page):
        self.page = page
        self.page.title = "Ollama Demo"
        self.page.theme_mode = ft.ThemeMode.DARK
        self.page.padding = 0
        self.page.window.width = 480
        self.page.window.height = 640
 
        # chat state
        self.messages: List[ChatMsg] = []
        self.streaming_thread: Optional[threading.Thread] = None
        self.stop_event = threading.Event()
        self.model = "llama3:8b"
 
        # controls
        self.chat_list = ft.ListView(expand=True, spacing=0, padding=16)
 
        self.input_field = ft.TextField(
            hint_text="Type and press Enter…",
            autofocus=True,
            shift_enter=True,
            min_lines=1,
            max_lines=5,
            expand=True,
            on_submit=self._on_send_clicked,
        )
        self.send_btn = ft.FilledButton("Send", icon=ft.Icons.SEND_ROUNDED, on_click=self._on_send_clicked)
        self.stop_btn = ft.OutlinedButton("Stop", icon=ft.Icons.STOP, on_click=self._on_stop_clicked, disabled=True)
 
        bottombar = ft.Container(
            padding=12,
            content=ft.Row([self.input_field, self.send_btn, self.stop_btn], vertical_alignment=ft.CrossAxisAlignment.CENTER),
        )
 
        body = ft.Column([
            ft.Container(content=self.chat_list, expand=True),
            bottombar,
        ], expand=True)
        self.page.add(body)
 
    def _ensure_model(self) -> bool:
        if ollama is None:
            self.page.open(ft.SnackBar(ft.Text("Ollama not available: pip install ollama and run the server."), open=True))
            self.page.update()
            return False
        try:
            have = {m.get("model") for m in (ollama.list() or {}).get("models", [])}
            if self.model not in have:
                ollama.pull(self.model)
            return True
        except Exception as ex:
            try:
                ollama.pull(self.model)
                return True
            except Exception:
                self.page.open(ft.SnackBar(ft.Text(f"Model setup error: {ex}"), open=True))
                self.page.update()
                return False
 
    def _on_send_clicked(self, e: Optional[ft.ControlEvent] = None):
        text = (self.input_field.value or "").strip()
        if not text:
            return
        self.input_field.value = ""
        self._append_user_message(text)
        self._start_streaming()
 
    def _on_stop_clicked(self, e: Optional[ft.ControlEvent] = None):
        self.stop_event.set()
 
    def _append_user_message(self, text: str):
        self.messages.append(ChatMsg(is_user=True, content=text))
        self.chat_list.controls.append(MessageBubble(is_user=True, text=text))
        self.chat_list.controls.append(MessageBubble(is_user=False, text="")) # AI reply
        self.page.update()
 
    def _start_streaming(self):
        if self.streaming_thread and self.streaming_thread.is_alive():
            return
        self.stop_event.clear()
        self.send_btn.disabled = True
        self.stop_btn.disabled = False
        self.page.update()
 
        def run():
            err = None
            try:
                self._stream_from_ollama()
            except Exception as ex:
                err = ex
            finally:
                self.send_btn.disabled = False
                self.stop_btn.disabled = True
                if err:
                    self.page.open(ft.SnackBar(ft.Text(f"Error: {err}"), open=True))
                self.page.update()
 
        self.streaming_thread = threading.Thread(target=run, daemon=True)
        self.streaming_thread.start()
 
    def _collect_messages(self) -> List[Dict[str, str]]:
        msgs: List[Dict[str, str]] = []
        for m in self.messages:
            msgs.append({"role": "user" if m.is_user else "assistant", "content": m.content})
        return msgs
 
    def _stream_from_ollama(self):
        # find the last bubble to stream to
        if not self.chat_list.controls or not isinstance(self.chat_list.controls[-1], MessageBubble):
            return
        assistant_bubble: MessageBubble = self.chat_list.controls[-1]
 
        if not self._ensure_model():
            assistant_bubble.set_text(
                "Model unavailable. Ensure Ollama client/server are installed and running.\n"
                f"- pip install ollama\n- ollama serve\n- ollama pull {self.model}"
            )
            self.page.update()
            return
 
        msgs = self._collect_messages()
 
        full_text = ""
        try:
            stream = ollama.chat(model=self.model, messages=msgs, stream=True)
            for part in stream:
                if self.stop_event.is_set():
                    break
                delta = part.get("message", {}).get("content", "")
                if not delta:
                    continue
                full_text += delta
                assistant_bubble.set_text(full_text)
                self.chat_list.scroll_to(offset=-1, duration=100)
                self.page.update()
        except Exception as ex:
            assistant_bubble.set_text(f"Error contacting model: {ex}")
            self.page.update()
            return
 
        if full_text.strip():
            self.messages.append(ChatMsg(is_user=False, content=full_text))
        else:
            self.chat_list.controls.pop()
        self.page.update()
 
 
def main(page: ft.Page):
    FletOllamaDemo(page)
 
 
if __name__ == "__main__":
    ft.app(target=main)

Run with:

flet run
Try to run the code on your mobile device and/or in the browser.

If you are not using the default port, you can export OLLAMA_HOST=http://127.0.0.1:12345 to change the expected API port.
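If you need the base URL in your own code, it can be resolved from the environment like this (a sketch; the official ollama client already reads OLLAMA_HOST on its own):

```python
import os

def ollama_base_url(default: str = "http://localhost:11434") -> str:
    # Prefer OLLAMA_HOST when it is set, otherwise use the default port
    return os.environ.get("OLLAMA_HOST", default)

os.environ["OLLAMA_HOST"] = "http://127.0.0.1:12345"
print(ollama_base_url())  # → http://127.0.0.1:12345
```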

HW02: Gitlab Readme.md and project setup

Create the project repo structure on the FEE GitLab + a README.md with the required fields (up to 5p). The fields do not have to be finalized, as many features of your project are yet to be added. Focus on drafting the general structure of your project.

Requirements for README.md:

  • Short project description
  • Features
  • Architecture overview (diagram + data flow)
  • Project timeline (1st, 2nd and final milestones highlighted)
  • Install & run (HW/SW reqs, requirements.txt/pyproject.toml, how to run client/server, CLI args/kwargs.)
  • Models used (name/version/source/license/intended use)
  • Datasets used (name/version/source/license/intended use/structure summary)
  • Reproducibility: how to retrain/re-export/regenerate datasets
  • Troubleshooting (optional)
  • Contributing (optional)
  • License

Note about milestones:

  1. Milestone: 5th week
  2. Milestone: 9th week
  3. Milestone: 13th (submission) week