How to retrieve the values of dynamic html content using Python

Assuming you are trying to get values from a page that is rendered using JavaScript templates (for instance something like Handlebars), then the raw, unrendered markup is all you will get with any of the standard solutions (e.g. BeautifulSoup or requests).

This is because the browser uses JavaScript to alter what it received and create new DOM elements. urllib does the requesting part like a browser, but not the template-rendering part. A good description of the issues can be found in the linked article, which discusses three main solutions:

  1. parse the AJAX JSON directly
  2. use an offline JavaScript interpreter (e.g. SpiderMonkey or Crowbar) to process the request
  3. use a browser automation tool such as Splinter
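For option 1, the usual approach is to open the browser's developer tools, find the XHR request that returns the data, and parse its JSON body. A minimal sketch (the response body below is a made-up stand-in so the snippet runs on its own; against a real site you would fetch the endpoint with `requests.get(endpoint).json()`):

```python
import json

# A real site exposes a JSON endpoint you can spot in the browser's
# network tab; here the body is simulated so the sketch is standalone.
ajax_body = '{"items": [{"name": "Tritanium", "price": 4.25}]}'
data = json.loads(ajax_body)
for item in data["items"]:
    print(item["name"], item["price"])  # Tritanium 4.25
```

This avoids running a browser entirely, which is the fastest route when the site's AJAX endpoint is stable and easy to find.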

This answer provides a few more suggestions for option 3, such as Selenium or Watir. I've used Selenium for automated web testing and it's pretty handy.


EDIT

From your comments it looks like it is a Handlebars-driven site. I'd recommend Selenium and BeautifulSoup. This answer gives a good code example which may be useful:

from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('http://eve-central.com/home/quicklook.html?typeid=34')

html = driver.page_source
soup = BeautifulSoup(html, "html.parser")  # name a parser explicitly to avoid the bs4 warning

# check out the docs for the kinds of things you can do with 'find_all'
# this (untested) snippet should find tags with a specific class ID
# see: http://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class
for tag in soup.find_all("a", class_="my_class"):
    print(tag.text)

Basically Selenium gets the rendered HTML from your browser, and then you can parse it with BeautifulSoup via the page_source property. Good luck :)
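The parsing step is independent of Selenium, so you can check your `find_all` logic against a plain HTML string before pointing it at a live page. A small sketch (the markup below is made up for illustration):

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for driver.page_source
html = """
<div>
  <a class="my_class" href="/a">First</a>
  <a class="other" href="/b">Second</a>
  <a class="my_class" href="/c">Third</a>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
# class_ (with the trailing underscore) filters by CSS class
texts = [tag.text for tag in soup.find_all("a", class_="my_class")]
print(texts)  # ['First', 'Third']
```

Once this does what you expect, swap the literal string for `driver.page_source`.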


I used Selenium + Chrome:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

url = "https://www.sitetotarget.com"
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')

driver = webdriver.Chrome(options=options)
driver.get(url)
html = driver.page_source
driver.quit()

Building off another answer. I had a similar issue. wget and curl no longer work well for retrieving the content of a web page; they are particularly broken with dynamic and lazily loaded content. Using Chrome (or Firefox, or the Chromium version of Edge) allows you to deal with redirects and scripting.

Below will launch an instance of Chrome, increase the timeout to 5 sec, and navigate this browser instance to a url. I ran this from Jupyter.

import time
from tqdm.notebook import trange, tqdm
from PIL import Image, ImageFont, ImageDraw, ImageEnhance
from selenium import webdriver
driver = webdriver.Chrome('/usr/bin/chromedriver')  # Selenium 3 style; Selenium 4 uses Service('/usr/bin/chromedriver')
driver.set_page_load_timeout(5)
time.sleep(1)
driver.set_window_size(2100, 9000)
time.sleep(1)
driver.set_window_size(2100, 9000)
## You can manually adjust the browser, but don't move it after this.
## Do stuff ...
driver.quit()

Example of grabbing dynamic content and taking screenshots of anchor elements (the "a" tag, i.e. hyperlinks):

url = 'http://www.example.org' ## Any website
driver.get(url)

pageSource = driver.page_source
print(driver.get_window_size())

locations = []

# note: Selenium 4 spells this driver.find_elements(By.TAG_NAME, "a")
for element in driver.find_elements_by_tag_name("a"):
    location = element.location
    size = element.size
    # Collect coordinates of the object: left/right, top/bottom
    x1 = location['x']
    y1 = location['y']
    x2 = location['x'] + size['width']
    y2 = location['y'] + size['height']
    locations.append([element, x1, y1, x2, y2, x2-x1, y2-y1])
locations.sort(key=lambda x: -x[-2] - x[-1])  # largest elements first
locations = [ (el,x1,y1,x2,y2, width,height)
    for el,x1,y1,x2,y2,width,height in locations
    if not (        
            ## First, filter links that are not visible (located offscreen or zero pixels in any dimension)
            x2 <= x1 or y2 <= y1 or x2<0 or y2<0
            ## Further restrict if you expect the objects to be around a specific size
            ## or width<200 or height<100
           )
]

for el,x1,y1,x2,y2,width,height in tqdm(locations[:10]):
    try:
        print('-'*100,f'({width},{height})')
        print(el.text[:100])
        element_png = el.screenshot_as_png
        with open('/tmp/_pageImage.png', 'wb') as f:
            f.write(element_png)
        img = Image.open('/tmp/_pageImage.png')
        display(img)
    except Exception as err:
        print(err)
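The sorting and visibility filtering above are plain Python, so they can be sanity-checked with made-up coordinates and no browser. A quick sketch (the boxes below are hypothetical):

```python
# Simulated element boxes: (label, x1, y1, x2, y2, width, height)
boxes = [
    ("big",  0,   0,   300, 100, 300, 100),
    ("off",  -50, -20, -10, -5,  40,  15),   # entirely offscreen
    ("tiny", 10,  10,  10,  10,  0,   0),    # zero pixels wide/tall
    ("mid",  5,   5,   105, 55,  100, 50),
]
boxes.sort(key=lambda b: -b[-2] - b[-1])     # largest first
# Same visibility test as above: drop zero-size or offscreen boxes
visible = [b for b in boxes
           if not (b[3] <= b[1] or b[4] <= b[2] or b[3] < 0 or b[4] < 0)]
print([b[0] for b in visible])  # ['big', 'mid']
```

Testing the filter this way first makes it easier to tune the size thresholds before running it against a live page.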


Installation for Mac + Chrome:

pip install selenium
brew install --cask chromedriver
brew install --cask google-chrome

I used a Mac for the original answer, and later Ubuntu plus the Windows 11 preview via WSL2 after updating. Chrome runs on the Linux side, with an X service on Windows rendering the UI.

Regarding responsibility, please respect robots.txt on each site.
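Python's standard library can check robots.txt rules for you before you crawl. A small sketch using `urllib.robotparser` (the rules are inlined here so the snippet runs offline; against a real site you would call `set_url(".../robots.txt")` followed by `read()`):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Hypothetical robots.txt content, parsed directly for the example
rp.parse("User-agent: *\nDisallow: /private/".splitlines())

print(rp.can_fetch("*", "http://www.example.org/public/page.html"))   # True
print(rp.can_fetch("*", "http://www.example.org/private/page.html"))  # False
```

Calling `can_fetch` before each `driver.get(url)` keeps a scraper within the site's stated rules.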