Mar. 09, 2016

Beautiful Soup 4 Python


This article is an introduction to BeautifulSoup 4 in Python.

If you want to know more, I recommend reading the official documentation.

What is Beautiful Soup?

Beautiful Soup is a Python library for pulling data out of HTML and XML files. 

BeautifulSoup 3 or 4?

Beautiful Soup 3 has been replaced by Beautiful Soup 4.

Beautiful Soup 3 only works on Python 2.x, but Beautiful Soup 4 also works on
Python 3.x. 

Beautiful Soup 4 is faster, has more features, and works with third-party parsers
like lxml and html5lib. 
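The choice of parser can be made explicit when you build the soup. A minimal sketch, using the built-in "html.parser" and a made-up HTML snippet:

```python
from bs4 import BeautifulSoup

html = "<html><body><p>Hello</p></body></html>"

# The second argument names the parser: "html.parser" ships with
# Python, while "lxml" and "html5lib" must be installed separately.
soup = BeautifulSoup(html, "html.parser")

tag_name = soup.p.name
tag_text = soup.p.string
```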

You should use Beautiful Soup 4 for all new projects.

Installing Beautiful Soup

If you run Debian or Ubuntu, you can install Beautiful Soup with the system
package manager:

apt-get install python-bs4
Beautiful Soup 4 is published through PyPI, so if you can’t install it with the
system packager, you can install it with easy_install or pip.

The package name is beautifulsoup4, and the same package works on Python 2 and
Python 3.
easy_install beautifulsoup4

pip install beautifulsoup4
If you don’t have easy_install or pip installed, you can download the Beautiful
Soup 4 source tarball and install it with:

python setup.py install

BeautifulSoup Usage

Right after the installation you can start using BeautifulSoup. 

At the beginning of your Python script, import the library:

from bs4 import BeautifulSoup

Now you have to pass something to BeautifulSoup to create a soup object. 

That could be a document or a URL.

BeautifulSoup does not fetch the web page for you; you have to do that yourself.

That's why I use urllib2 in combination with the BeautifulSoup library.


There are several different filters you can use with the search API.

Below I will show you some examples of how you can pass those filters into
methods such as find_all.

You can use these filters based on a tag’s name, on its attributes, on the text
of a string, or on some combination of these.
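For instance, here is a sketch combining a name filter with an attribute filter (the HTML fragment and the class name "ext" are made up for illustration):

```python
from bs4 import BeautifulSoup

html = '<a href="/one" class="ext">one</a><a href="/two">two</a>'
soup = BeautifulSoup(html, "html.parser")

# Filter on the tag name alone: matches both links.
all_links = soup.find_all("a")

# Filter on the tag name and an attribute: matches only the
# link whose class attribute is "ext".
ext_links = soup.find_all("a", class_="ext")
```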

A string

The simplest filter is a string.

Pass a string to a search method and Beautiful Soup will perform a match against
that exact string. 

This code finds all the 'b' tags in the document (you can replace b with any
tag you want to find):

soup.find_all('b')

If you pass in a byte string, Beautiful Soup will assume the string is encoded
as UTF-8.

You can avoid this by passing in a Unicode string instead.

A regular expression

If you pass in a regular expression object, Beautiful Soup will filter against
that regular expression using its search() method.

This code finds all the tags whose names start with the letter "b",
in this case, the 'body' tag and the 'b' tag:
import re
for tag in soup.find_all(re.compile("^b")):
    print tag.name

This code finds all the tags whose names contain the letter "t":

for tag in soup.find_all(re.compile("t")):
    print tag.name

A list

If you pass in a list, Beautiful Soup will allow a string match against any
item in that list.

This code finds all the 'a' tags and all the 'b' tags:

print soup.find_all(["a", "b"])

The value True matches everything it can.

This code finds all the tags in the document, but none of the text strings:

for tag in soup.find_all(True):
    print tag.name

A function

If none of the other matches work for you, define a function that takes an
element as its only argument. 

Please see the official documentation if you want to do that. 
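As a rough sketch of the idea: a filter function takes a tag and returns True for every element it should match. The function below is an illustration, not a recipe from this article; it keeps tags that have a class attribute but no id attribute:

```python
from bs4 import BeautifulSoup

html = '<p class="a">keep</p><p class="b" id="x">skip</p><p>skip</p>'
soup = BeautifulSoup(html, "html.parser")

def has_class_but_no_id(tag):
    # Match tags that define "class" but not "id".
    return tag.has_attr("class") and not tag.has_attr("id")

matches = soup.find_all(has_class_but_no_id)
```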

BeautifulSoup Object

As an example, we'll use the very website you are currently on.

To parse the data from the content, we simply create a BeautifulSoup object for it.

That will create a soup object of the content of the URL we passed in.

From this point, we can now use the Beautiful Soup methods on that soup object.

We can use the prettify method to turn a Beautiful Soup parse tree into a nicely
formatted Unicode string.
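As a small self-contained illustration (the fragment is made up), prettify lays out one tag per line with children indented:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<div><p>Hi</p></div>", "html.parser")

# prettify() returns a Unicode string with one tag per line,
# indented to show the nesting.
pretty = soup.prettify()
```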

The find_all method

The find_all method is one of the most common methods in BeautifulSoup.

It looks through a tag’s descendants and retrieves all descendants that match
your filters. 

soup.find_all("p", "title")
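In that call the second argument is treated as a CSS class, so it finds p tags whose class is "title". A sketch with a made-up fragment:

```python
from bs4 import BeautifulSoup

html = '<p class="title">The title</p><p class="body">The body</p>'
soup = BeautifulSoup(html, "html.parser")

# The second positional argument is matched against the CSS class,
# so only the first paragraph is returned.
titles = soup.find_all("p", "title")
```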


Let's see some examples of how to use Beautiful Soup 4:

from bs4 import BeautifulSoup
import urllib2

url = ""  # put the URL you want to parse here

content = urllib2.urlopen(url).read()

soup = BeautifulSoup(content, "html.parser")

print soup.prettify()

print soup.title
>> <title>Python For Beginners</title>

print soup.title.string
>> Python For Beginners

print soup.p

print soup.a
>> <a href="...">Python For Beginners</a>

Navigating the Parse Tree

If you want to know how to navigate the tree, please see the official documentation.

There you can read about the following things:

Going down
        Navigating using tag names
        .contents and .children
        .strings and .stripped_strings

Going up
        .parent and .parents

Going sideways
        .next_sibling and .previous_sibling
        .next_siblings and .previous_siblings

Going back and forth
        .next_element and .previous_element
        .next_elements and .previous_elements
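To give a flavour of those attributes, a sketch on a made-up fragment covering one example each of going down, up, and sideways:

```python
from bs4 import BeautifulSoup

html = "<ul><li>one</li><li>two</li></ul>"
soup = BeautifulSoup(html, "html.parser")

first = soup.li                   # going down: the first <li>
parent_name = first.parent.name   # going up: the enclosing <ul>
sibling = first.next_sibling      # going sideways: the second <li>
```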

Extracting all the URLs found within a page's 'a' tags

One common task is extracting all the URLs found within a page's 'a' tags.

Using the find_all method gives us a whole list of elements with the tag "a":

for link in soup.find_all('a'):
    print link.get('href')
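Sketched self-contained, with an inline fragment standing in for a fetched page, the same idea collects the href values into a list:

```python
from bs4 import BeautifulSoup

html = '<a href="/first">First</a><p>text</p><a href="/second">Second</a>'
soup = BeautifulSoup(html, "html.parser")

# get() returns None when the attribute is missing, so anchors
# without an href are skipped.
urls = [link.get("href") for link in soup.find_all("a") if link.get("href")]
```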

Extracting all the text from a page

Another common task is extracting all the text from a page:

print soup.get_text()

The output will contain all of the page's text, for example:

Python For Beginners
Python Basics
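Sketched self-contained on an inline fragment (made up to mirror that sample output), get_text concatenates the text of all descendants:

```python
from bs4 import BeautifulSoup

html = "<div><h1>Python For Beginners</h1><p>Python Basics</p></div>"
soup = BeautifulSoup(html, "html.parser")

# get_text() strips the markup and keeps only the text nodes;
# the separator argument controls how the pieces are joined.
text = soup.get_text(separator="\n")
```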

Get all links from Reddit

As a last example, let's grab all the links from Reddit:

from bs4 import BeautifulSoup
import urllib2

redditFile = urllib2.urlopen("http://www.reddit.com")
redditHtml = redditFile.read()

soup = BeautifulSoup(redditHtml, "html.parser")
for link in soup.find_all('a'):
    print link.get('href')
For more information, please see the official documentation.
