While freelancing or working on a pet project, I have often needed to download files with Python. For small files, it is fine to make a normal request and write the response content to a file. But when we are downloading big files, we don't want to hold the whole file in a variable until the download has completed: that would eat a lot of memory, and anything could go wrong partway through. For small files, we can write simple code like this (the URL in the snippet is just a placeholder):


import requests

file_url = 'https://example.com/file.pdf'  # placeholder URL for the file to download
file_name = 'file.pdf'                     # local path to save it to

res = requests.get(file_url)
with open(file_name, 'wb') as f:
    f.write(res.content)  # the entire response body is held in memory


This way you can download small files with Python. But imagine you are downloading a file that is 300 MB in size: do you think it's a good idea to hold all of that in the res variable? Obviously not, so here is another solution.


What if we download the file in small chunks (say, 8 KB at a time), write each chunk to the file, and then fetch the next one? The file is downloaded in parts and written to the local file chunk by chunk, so only one chunk is ever in memory.


Here is a small function that does exactly that:


import requests

def download_file(url, local_filename):
    # stream=True tells requests not to download the body until we iterate over it
    with requests.get(url, stream=True) as res:
        res.raise_for_status()
        with open(local_filename, 'wb') as f:
            # chunk_size is in bytes; 8192 (8 KB) is a common choice
            for chunk in res.iter_content(chunk_size=8192):
                if chunk:  # skip keep-alive chunks
                    f.write(chunk)
    return local_filename



You can drop this function into your own code and call it with just the URL and the local filename you want the downloaded file saved to.
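For example, here is a minimal usage sketch (the URL and filename below are placeholders, not a real endpoint):


file_url = 'https://example.com/videos/big-video.mp4'  # hypothetical URL
saved_path = download_file(file_url, 'big-video.mp4')
print('Saved to', saved_path)


Because the response is streamed, this works just as well for a multi-gigabyte file as for a small one.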

