
Saturday, November 19, 2011

Back Up Files With Déjà Dup (Linux Mint 11)

Please note that I selected a local folder only for demonstration purposes. A backup is supposed to save and restore your files in case of an emergency, which is most likely a failure of the hard drive the data is stored on. That is why you should select either an external hard drive or an online server you have access to. The next tab is the Files tab, where you specify which files you want to back up and, if necessary, which files to exclude from the backup.
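If you prefer the command line, many Déjà Dup versions expose the same settings through GSettings under the org.gnome.DejaDup schema (the schema and key names may differ between releases). A rough sketch:

# show the current Déjà Dup configuration
gsettings list-recursively org.gnome.DejaDup

# example only: back up your home directory but skip the Downloads folder
gsettings set org.gnome.DejaDup include-list "['$HOME']"
gsettings set org.gnome.DejaDup exclude-list "['$HOME/Downloads']"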



View the Original article

Thursday, June 2, 2011

Monitor your changed files in real-time in Linux

Everybody knows top or htop. Ever wished there was something similar but to monitor your files instead of CPU usage and processes? Well, there is.
Run this:

watch -d -n 2 'df; ls -FlAt;'

and you’ll get to spy on which files are getting written on your system. Every time a file gets modified it will get highlighted for a second or so. The above command is useful when you grant someone SSH access to your box and wish to know exactly what they’re modifying.
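If you only want to keep an eye on a single directory tree, the same trick works there as well; for example (the path is just an illustration):

watch -d -n 2 'ls -FlAt /var/www'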



View the Original article

Wednesday, April 6, 2011

Make Browsers Cache Static Files With mod_expire On Lighttpd (Debian Squeeze)

This tutorial explains how you can configure Lighttpd to set the Expires HTTP header and the max-age directive of the Cache-Control HTTP header of static files (such as images, CSS and Javascript files) to a date in the future so that these files will be cached by your visitors' browsers. This saves bandwidth and makes your web site appear faster (if a user visits your site for a second time, static files will be fetched from the browser cache). This tutorial was written for Debian Squeeze.


I do not issue any guarantee that this will work for you!


I'm assuming you have a working Lighttpd setup on your Debian Squeeze server, e.g. as shown in this tutorial: Installing Lighttpd With PHP5 And MySQL Support On Debian Squeeze


You *could* enable mod_expire with the command lighty-enable-mod expire; however, this gives you no control over the order in which Lighttpd modules are loaded, and as stated in the Troubleshoot section on http://redmine.lighttpd.net/wiki/1/Docs:ModExpire, it is strongly recommended to load mod_expire before all other modules.


Therefore we open /etc/lighttpd/lighttpd.conf...

vi /etc/lighttpd/lighttpd.conf


... and add mod_expire as the first module in the server.modules stanza:

server.modules = (
        "mod_expire",
        "mod_access",
        "mod_alias",
        "mod_compress",
        "mod_redirect",
#       "mod_rewrite",
)
[...]

Restart Lighttpd afterwards:

/etc/init.d/lighttpd restart


The mod_expire configuration can be placed in the overall Lighttpd server configuration or inside a virtual host container.


In this example, I will place it in the overall server configuration (i.e., this configuration is active for all vhosts):

vi /etc/lighttpd/lighttpd.conf


On Lighttpd, Expires headers are set based on the directory where a file is located, not on the file type (this is different from Apache). For example, a valid mod_expire configuration would be as follows:

[...]
expire.url = (
        "/images/" => "access plus 7 days",
        "/jquery/" => "access plus 2 weeks",
        "/js/" => "access plus 2 months",
        "/misc" => "access plus 1 days",
        "/themes/" => "access plus 7 days",
        "/modules/" => "access plus 24 hours"
)
[...]

In the above example, all files from the /images/ directory (and its subdirectories) get an Expires header with a date 7 days in the future from the browser access time. Therefore, you should make sure that the directories you list in the expire.url directive really only contain static files that can be cached by browsers.


Restart Lighttpd after your changes:

/etc/init.d/lighttpd restart


You can use the following time units in your configuration:

years, months, weeks, days, hours, minutes, seconds

Please note that you must use these time units in plural because otherwise Lighttpd will refuse to start. So you must not use access plus 1 day, but access plus 1 days instead (this is also different from Apache where both singular and plural are allowed).


It is possible to combine multiple time units, e.g. as follows:

"access plus 1 months 15 days 2 hours"


Also note that if you use a far future Expires header you have to change the component's filename whenever the component changes. Therefore it's a good idea to version your files. For example, if you have a file javascript.js and want to modify it, you should add a version number to the file name of the modified file (e.g. javascript-1.1.js) so that browsers have to download it. If you don't change the file name, browsers will load the (old) file from their cache.


Instead of basing the Expires header on the access time of the browser (e.g. "access plus 60 days"), you can also base it on the modification date of a file (please note that this works only for real files that are stored on the hard drive!) by using the modification keyword instead of access:

"modification plus 7 days"


It is also possible to include your mod_expire rules inside a condition, e.g. as follows:

[...]
$HTTP["url"] =~ "^/images/" {
        expire.url = ( "" => "access plus 1 hours" )
}
[...]

This tells Lighttpd to add an Expires header to all files where the URL begins with /images/ (like http://www.example.com/images/subdir/1.png).
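Because Lighttpd matches on the URL rather than on the file type, you can also approximate Apache-style per-type expiry by matching on the file extension. A minimal sketch (the extensions and times are just examples):

[...]
$HTTP["url"] =~ "\.(css|js|png|jpg|gif)$" {
        expire.url = ( "" => "access plus 1 months" )
}
[...]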


To test if your configuration works, you can install the Live HTTP Headers plugin for Firefox and access a static file through Firefox (e.g. an image). In the Live HTTP Headers output, you should now see an Expires header and a Cache-Control header with a max-age directive (max-age contains a value in seconds, for example 604800 is one week in the future):
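Alternatively, you can check the headers straight from the command line with curl (the URL is just an example):

curl -I http://www.example.com/images/subdir/1.png
# look for lines like these in the response (the values depend on your expire.url settings):
# Expires: ...
# Cache-Control: max-age=604800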



View the original article here



Thursday, March 24, 2011

Make Browsers Cache Static Files With mod_expires On Apache2 (Debian Squeeze)

This tutorial explains how you can configure Apache2 to set the Expires HTTP header and the max-age directive of the Cache-Control HTTP header of static files (such as images, CSS and Javascript files) to a date in the future so that these files will be cached by your visitors' browsers. This saves bandwidth and makes your web site appear faster (if a user visits your site for a second time, static files will be fetched from the browser cache). This tutorial was written for Debian Squeeze.
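As a rough illustration only (not the author's exact configuration): after enabling the module with a2enmod expires and restarting Apache2, a per-file-type mod_expires configuration in a vhost or .htaccess typically looks something like this (the MIME types and times are just examples):

<IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png "access plus 1 month"
        ExpiresByType text/css "access plus 1 week"
        ExpiresByType application/javascript "access plus 1 week"
</IfModule>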

For more information, see the full tutorial on HowtoForge.


View the original article here



Tuesday, January 25, 2011

Recovering deleted files and repairing file systems on Linux

Linux is as solid an operating system as you will ever use – but that doesn't mean the hardware you run it on is equally sound. Hard drives and file systems are just as susceptible to errors, and no matter how stable an OS is, it can't prevent the accidental deletion of files and folders. But don't despair: Linux comes with a number of tools that let you repair file system errors and reclaim deleted files.

Which tools? To begin with, e2fsck, scalpel and lsof see the most use. Let's look at how each of these can be used to keep your file systems free of errors and to rescue your files from accidental deletion.

Checking ext2/ext3/ext4 file systems with e2fsck

The e2fsck utility is modeled on the original UNIX fsck utility, but is used to check the ext2/ext3/ext4 family of file systems. It verifies and repairs file systems that were shut down uncleanly or have otherwise developed errors.

One problem most users face is that the e2fsck tool should only be run on unmounted partitions. This causes trouble when the file system you want to check is the one you are currently working on. Many people recommend switching your current system to runlevel 1 with the following command (run as a user with administrative privileges):

init 1

However, I recommend you go a step further and use a live distro such as Puppy Linux, Knoppix, or your distribution's live CD, if it has one. By booting into a live distribution, none of your hard disks will be mounted and you can safely check them for errors. If you don't use a live distribution, you must make sure you switch to runlevel 1 and then unmount the partition you want to check. For example, say you want to check the partition /dev/sdb1. To do so, you would first switch to runlevel 1 (with the command shown above) and then run:

umount /dev/sdb1

With the target partition unmounted, you are ready to start the check. To do this, issue the command:

e2fsck -y /dev/sdb1

The -y option automatically answers "yes" to all questions the command would otherwise present to you. Depending on the size of the disk and the number of errors on your drive, this repair may take quite some time. Once the repair process is complete, you can always run the command again to verify that no errors were missed. If the drive is clean, you can reboot into your normal system (if you ran e2fsck from a live CD, remember to remove the disc when rebooting) or remount the partition you unmounted.

Restore deleted files

Now let's look at the process of restoring deleted files. The reason this is even possible is that a file is really just a link to an inode on disk, and that inode contains all of the file's information. When you delete a file, you only break the link to the inode, so the file simply can't be found any more. The inode itself, however, remains on your drive... but only temporarily: as long as a process still has the deleted file open, its inode stays available. So this method has a deadline, and a fairly short one at that. The key to this recovery is the /proc directory. Every process on your system has a directory inside /proc, listed under its process ID. If you run the command ls /proc, you will see a bunch of directories with numeric names alongside the directories and files whose names look more familiar. The important directories are the numerically named ones: those numbers are the process IDs (PIDs) of your running applications. You can always use the ps command to find the PID of the application you are looking for.

Once you have found the correct process in /proc, you can grab the data from the corresponding directory and save it again – file restored. Let's walk through the entire process with a fairly simple example that you can easily extend.

We create a file (say, a bash script or a configuration file) called test_file. Create this file with the command:

cat > test_file
This is my test document

(Press Ctrl+D on an empty line to finish and save the file.)

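From here, the technique usually continues roughly like this – the PID and file descriptor numbers below are purely hypothetical and will differ on your system:

# keep the file open with some process, e.g. a pager in a second terminal
less test_file

# "accidentally" delete the file
rm test_file

# find the process that still holds the deleted file open
lsof | grep test_file
# hypothetical output:
# less  4120  user  4r  REG  8,1  26  393489  /home/user/test_file (deleted)

# copy the data back out of /proc/<PID>/fd/<fd number>
cp /proc/4120/fd/4 test_file_restored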


View the Original article