Using BeautifulSoup to scrape a URL and its paginated results and store them in a CSV?

This code does not crash, which is good. However, it generates an empty icao_publications.csv file. I want to populate icao_publications.csv with every record from every page reachable from the URL, following all the pages. The full dataset should be roughly 10,000 rows, and I want all of those rows in the CSV file.

import requests, csv
from bs4 import BeautifulSoup

url = 'https://www.icao.int/publications/DOC8643/Pages/Search.aspx'

with open('Test1_Aircraft_Type_Designators.csv', "w", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Manufacturers", "Model", "Type_Designator", "Description", "Engine_Type", "Engine_Count", "WTC"])

    while True:
        html = requests.get(url)
        soup = BeautifulSoup(html.text, 'html.parser')
        for row in soup.select('table tbody tr'):
            writer.writerow([c.text if c.text else '' for c in row.select('td')])

        if soup.select_one('li.paginate_button.active + li a'):
            url = soup.select_one('li.paginate_button.active + li a')['href']
        else:
            break

The search page builds its results table with JavaScript, so the HTML that requests fetches contains no table rows for BeautifulSoup to parse, which is why your CSV comes out empty. The underlying data is served by a JSON endpoint that you can POST to directly and dump straight to CSV:

import requests
import pandas as pd

# The DOC8643 search page loads its data from this endpoint;
# a plain POST returns the full list of aircraft types as JSON.
url = 'https://www4.icao.int/doc8643/External/AircraftTypes'

resp = requests.post(url).json()

# Each JSON record becomes one row in the DataFrame / CSV.
df = pd.DataFrame(resp)
df.to_csv('aircraft.csv', encoding='utf-8', index=False)
print('Saved to aircraft.csv')
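
If you would rather avoid the pandas dependency, the same response can be written out with the standard-library csv module. This is a minimal sketch assuming the endpoint keeps returning a flat JSON array of records; the header row is taken from the keys of the first record, so no field names are hard-coded.

import csv
import requests

url = 'https://www4.icao.int/doc8643/External/AircraftTypes'

resp = requests.post(url, timeout=30)
resp.raise_for_status()   # fail loudly if the endpoint is unreachable
records = resp.json()     # expected to be a list of dicts, one per aircraft type

if records:
    with open('aircraft.csv', 'w', encoding='utf-8', newline='') as f:
        # Use the first record's keys as the CSV header row.
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)
    print(f'Saved {len(records)} rows to aircraft.csv')

Either way, the printed row count should be in the neighbourhood of the ~10,000 records you mention.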