Friday, May 31, 2013

How to download files from the Linux command line

# sudo apt-get install wget
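
On Fedora, CentOS and other RPM-based distributions, the equivalent command (assuming yum is your package manager) would be:
# sudo yum install wget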

One of the above should do the trick for you. Otherwise, check with your Linux distribution's manual to see how to get and install packages. wget has also been ported to Windows. Users on Windows can access this website. Download the following packages: ssllibs and wget. Extract and copy the files to a directory such as C:\Program Files\wget and add that directory to your system's path so you can access it with ease. Now you should be able to access wget from your Windows command line.

The most basic operation a download manager needs to perform is to download a file from a URL. Here’s how you would use wget to download a file:
# wget http://www.sevenacross.com/photos.zip
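
By default the file is saved under its original name in the current directory. If you would like a different filename, the -O option (capital O) lets you choose one; for example, to save the same archive as vacation-photos.zip:
# wget -O vacation-photos.zip http://www.sevenacross.com/photos.zip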

Yes, it's that simple. Now let's do something more fun. Let's download an entire website. Here's a taste of the power of wget. If you want to download a website, you can specify the depth to which wget should fetch files. Say you want to download the first-level links of Yahoo!'s home page. Here's how you would do that:
# wget -r -l 1 http://www.yahoo.com/

Here's what each option does. The -r activates the recursive retrieval of files. The -l stands for level, and the number 1 next to it tells wget how many levels deep to go while fetching the files. Try increasing the number of levels to two and see how much longer wget takes.
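For example, fetching two levels deep would look like this:
# wget -r -l 2 http://www.yahoo.com/
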
Now if you want to download all the “jpeg” images from a website, a user familiar with the Linux command line might guess that a command like “wget http://www.sevenacross.com*.jpeg” would work. Well, unfortunately, it won’t. What you need to do is something like this:
# wget -r -l 1 --no-parent -A .jpeg http://www.sevenacross.com
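
Note that -A accepts a comma-separated list of file name suffixes, so you can pick up differently-cased extensions in one pass, for example:
# wget -r -l 1 --no-parent -A .jpeg,.jpg,.JPG http://www.sevenacross.com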

Another very useful option in wget is the ability to resume a download. Say you started downloading a large file and you lost your Internet connection before the download could complete. You can use the -c option to continue the download from where you left off.
# wget -c http://www.sevenacross.com/ubuntu-live.iso

Now let's move on to setting up a daily backup of a website. The following command will create a mirror of a site on your local disk. For this purpose wget has a specific option, --mirror. Try the following command, replacing http://sevenacross.com with your website's address.
# wget --mirror http://www.sevenacross.com/
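
According to wget's man page, --mirror is essentially shorthand for recursion with infinite depth plus timestamping, so the command above is roughly equivalent to:
# wget -r -N -l inf --no-remove-listing http://www.sevenacross.com/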

When the command is done running you should have a local mirror of your website. This makes for a pretty handy backup tool. Let's turn this command into a cool shell script and schedule it to run at midnight every night. Open your favorite text editor and type the following. Remember to adapt the backup path and the website URL to your requirements.
#!/bin/bash
YEAR=`date +"%Y"`
MONTH=`date +"%m"`
DAY=`date +"%d"`
BACKUP_PATH="/home/backup" # replace with your backup directory
WEBSITE_URL="http://www.sevenacross.net" # replace with the address of the website you want to back up
# Create and move to today's backup directory
mkdir -p $BACKUP_PATH/$YEAR/$MONTH/$DAY
cd $BACKUP_PATH/$YEAR/$MONTH/$DAY
wget --mirror ${WEBSITE_URL}

Now save this file as something like website_backup.sh and grant it executable permissions:
# chmod +x website_backup.sh
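
Before scheduling it, you may want to run the script once by hand to make sure the backup path and URL are correct:
# ./website_backup.sh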

Open your cron configuration with the crontab -e command and add the following line at the end:
0 0 * * * /path/to/website_backup.sh
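To double-check that the job was saved, list your crontab:
# crontab -l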
You should have a copy of your website in /home/backup/YEAR/MONTH/DAY every day. For more help using cron and crontab, see this tutorial.
There's a lot more to learn about wget than I've mentioned here. Read up on wget's man page for the full list of options.
