<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Linux &#124; egrep</title>
	<atom:link href="https://www.linuxegrep.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.linuxegrep.com</link>
	<description>Extended search for information</description>
	<lastBuildDate>Tue, 08 Jan 2019 06:23:44 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.2</generator>
		<item>
		<title>Easy commandline argument parsing in shell script</title>
		<link>https://www.linuxegrep.com/archives/script/shellscript/easy-commandline-argument-parsing-in-shell-script/</link>
		<comments>https://www.linuxegrep.com/archives/script/shellscript/easy-commandline-argument-parsing-in-shell-script/#comments</comments>
		<pubDate>Tue, 02 Mar 2010 14:34:39 +0000</pubDate>
		<dc:creator>meharo</dc:creator>
				<category><![CDATA[Shell Scripting]]></category>
		<category><![CDATA[argument]]></category>
		<category><![CDATA[commandline]]></category>

		<guid isPermaLink="false">http://www.findlinuxhelp.com/?p=195</guid>
		<description><![CDATA[Here is a fairly easy method to parse all input arguments passed to a shell script on the command line. This block of code can either be included at the top of the script or sourced from an external file. Using this code within a shell script is very simple; just define a global [...]]]></description>
				<content:encoded><![CDATA[<p>Here is a fairly easy method to parse all input arguments passed to a shell script on the command line. This block of code can either be included at the top of the script or sourced from an external file.</p>
<p>Using this code within a shell script is very simple: just define a global variable named <span style="color: #ff6600;"><span style="font-family: courier new,courier;">ARGS_INPUT_FORMAT</span></span> and set appropriate values in it. The argument parsing code reads this variable and sets the corresponding option variables.</p>
<p>Set values in the variable <span style="color: #ff6600;"><span style="font-family: courier new,courier;">ARGS_INPUT_FORMAT</span></span> as shown below.</p>
<p><span id="more-195"></span></p>
<pre lang="text">ARGS_INPUT_FORMAT="OPTION STRING 1|OPTION STRING 2(OPTIONAL)|OPTION STRING N(OPTIONAL):VARIABLE NAME WHICH HOLDS OPTION VALUE:DEFAULT OPTION VALUE(OPTIONAL):DEFAULT OPTION VALUE IF USED AS SWITCH(OPTIONAL)"</pre>
<h2>Example Usage</h2>
<p>These are some examples:</p>
<pre lang="text">ARGS_INPUT_FORMAT="--ignore-case|-i:check_case:NO:YES"
ARGS_INPUT_FORMAT="--user|-u|-U:username:-1:-1,--ignore-case|-i:ignore_case:NO:YES"
ARGS_INPUT_FORMAT="--color:bgcolor"</pre>
<p>Some example script usages are:</p>
<pre lang="text">./main-script.sh --ignore-case="YES"
./main-script.sh -i -u "Find Linux Help"
./main-script.sh --color="Yellow"</pre>
<p>Now access the value of color using variable <span style="color: #ff6600;"><span style="font-family: courier new,courier;">bgcolor</span></span> as shown below.</p>
<pre lang="text">echo "You have chosen $bgcolor color."</pre>
<p>This code has some advantages: it does not rely on an external binary but parses the arguments using the shell itself. It never shifts or alters the original arguments supplied to the main script, so their positions and count are retained and they can be re-used inside the main script.</p>
<h2>Argument Parsing Code</h2>
<pre lang="bash">for arg_opts in $(echo "$ARGS_INPUT_FORMAT" | tr "," " ") ; do
  opt_names=$(echo "$arg_opts" | cut -d':' -f1)
  opt_var=$(echo "$arg_opts" | cut -d':' -f2)
  opt_def_val=$(echo "$arg_opts" | cut -d':' -f3)
  opt_switch_val=$(echo "$arg_opts" | cut -d':' -f4)
  eval "$opt_var=\"$opt_def_val\""
  for opt_name in $(echo "$opt_names" | tr "|" " ") ; do
    arg_pos=1
    while [ $arg_pos -le $# ] ; do
      cur_arg=$(eval "echo \"\${$arg_pos}\"")
      case "$cur_arg" in
        "$opt_name="*)
              # --option=value form
              eval "$opt_var=\"$(echo "$cur_arg" | sed 's/^'"$opt_name"'=//')\""
              ARGS_STAT[$arg_pos]=0
              ;;
        "$opt_name")
              # --option value form, or --option used as a switch
              if [ $(expr $arg_pos + 1) -le $# ] &amp;&amp; [ $(eval "echo \"\${$(expr $arg_pos + 1)}\"" | grep -c '^-') -eq 0 ] ; then
                ARGS_STAT[$arg_pos]=0
                arg_pos=$(expr $arg_pos + 1)
                eval "$opt_var=\"$(eval "echo \"\${$arg_pos}\"")\""
              else
                eval "$opt_var=\"$opt_switch_val\""
              fi
              ARGS_STAT[$arg_pos]=0
              ;;
        "$opt_name"*)
              # --optionvalue form (no separator)
              eval "$opt_var=\"$(echo "$cur_arg" | sed 's/^'"$opt_name"'//')\""
              ARGS_STAT[$arg_pos]=0
              ;;
        *)
              [ -z "${ARGS_STAT[$arg_pos]}" ] &amp;&amp; ARGS_STAT[$arg_pos]=1
              ;;
      esac
      arg_pos=$(expr $arg_pos + 1)
    done
  done
done</pre>
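<p>The central trick the parser relies on is indirect access to positional parameters by number, without shifting <span style="font-family: courier new,courier;">"$@"</span>. A minimal sketch (the <span style="font-family: courier new,courier;">set --</span> line just simulates the arguments of main-script.sh):</p>

```shell
# Demonstrate indirect positional-parameter lookup: fetch argument N
# without shifting or altering the original argument list.
set -- --color=Yellow -i "Find Linux Help"   # sample arguments
arg_pos=2
cur_arg=$(eval "echo \"\${$arg_pos}\"")      # expands to "$2"
echo "$cur_arg"                              # prints -i
```

Because the argument list is never consumed, the same lookup can be repeated for every option name in <span style="font-family: courier new,courier;">ARGS_INPUT_FORMAT</span>.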
<p>Download the script file: [download id="5"]</p>
<h2>Script Usage</h2>
<p>Define a global variable named <span style="color: #ff6600;"><span style="font-family: courier new,courier;">ARGS_INPUT_FORMAT</span></span> and set an appropriate value as shown above, at the top of the main script.</p>
<p>Save the argument parsing code to an external file and source it into the main script. The same file can be sourced from any script that needs argument processing. For better portability, the code block can also be pasted directly below the global variable definition inside your main script.</p>
<h2>About Debugging</h2>
<p>The script initializes an array variable named <span style="color: #ff6600;"><span style="font-family: courier new,courier;">ARGS_STAT</span></span> and sets status values in it. If it finds an invalid argument on the command line, the corresponding array index is flagged with the dirty value &#8220;1&#8221;.</p>
<p>Here is sample code which can be included in the main script to display the invalid arguments supplied and their respective position numbers.</p>
<pre lang="bash">echo "**Invalid args**"
count=1
while [ $count -le $# ] ; do
  if [ "${ARGS_STAT[$count]}" -eq 1 ] ; then
    echo "$(eval "echo \"\${$count}\"") ($count)"
  fi
  count=$(expr $count + 1)
done
</pre>
<p>Download the script file: [download id="6"] </p>
]]></content:encoded>
			<wfw:commentRss>https://www.linuxegrep.com/archives/script/shellscript/easy-commandline-argument-parsing-in-shell-script/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Turn your website into maintenance mode &#8211; SEO friendly solution using .htaccess file</title>
		<link>https://www.linuxegrep.com/archives/howto/webmaster-howto/turn-your-website-into-maintenance-mode-seo-friendly-solution-using-htaccess-file/</link>
		<comments>https://www.linuxegrep.com/archives/howto/webmaster-howto/turn-your-website-into-maintenance-mode-seo-friendly-solution-using-htaccess-file/#comments</comments>
		<pubDate>Tue, 24 Nov 2009 18:57:51 +0000</pubDate>
		<dc:creator>meharo</dc:creator>
				<category><![CDATA[Webmaster HOWTOs]]></category>
		<category><![CDATA[maintenance]]></category>
		<category><![CDATA[seo]]></category>

		<guid isPermaLink="false">http://www.findlinuxhelp.com/?p=170</guid>
		<description><![CDATA[Here is a simple trick to put your website into maintenance mode using the .htaccess file. If you want to give an informative message saying &#8220;We are under maintenance&#8221;, create a custom HTML page containing the message and put it in the website root. For example, we create an HTML page called &#8220;under-maintenance.html&#8221; and [...]]]></description>
				<content:encoded><![CDATA[<p>Here is a simple trick to put your website into maintenance mode using the .htaccess file.</p>
<p>If you want to give an informative message saying &#8220;We are under maintenance&#8221;, create a custom HTML page containing the message and put it in the website root. For example, we create an HTML page called &#8220;under-maintenance.html&#8221; and place it in the website root, so that it can be accessed through the URL http://www.yourdomain.com/under-maintenance.html</p>
<h3>Set up status &#8220;503&#8221; and the &#8220;Retry-After&#8221; header</h3>
<p>Depending on the maintenance duration, we also have to take care of the search engines that may visit the website during maintenance. To do this, we will send HTTP status code 503, &#8220;Service Unavailable&#8221;, along with the HTML page containing the maintenance information, and ask search engines to visit the website again only after the maintenance is completed.</p>
<p><span id="more-170"></span>First, set up an error document for HTTP status 503. Specify your maintenance page as the error document as shown below.</p>
<pre lang="apache">ErrorDocument 503 /under-maintenance.html</pre>
<p>Now add a small mod_rewrite trick to flag all HTTP responses from the website with status 503.</p>
<pre lang="apache">RewriteEngine On
RewriteCond %{REQUEST_URI} !^/under-maintenance.html$
RewriteRule .* - [R=503,L]</pre>
<p>Note: The &#8216;R&#8217; flag redirects to a destination page only when the specified status code is in the range 300 to 399. Outside that range the rewrite engine ignores the destination URL, so we simply give a &#8211; (dash) to indicate that the destination URL will not be changed.</p>
<p>We have used a conditional rewrite here, applied only when the URI is not &#8220;/under-maintenance.html&#8221;. This avoids the redirect loop which would otherwise occur when the rule is enabled.</p>
<p>Additionally, if you are using images or external style sheets inside your maintenance page, don&#8217;t forget to exclude them from being redirected. For example, add one additional condition per external resource below the existing condition.</p>
<pre lang="apache">RewriteCond %{REQUEST_URI} !^/styles/maintenance.css$</pre>
<p>The effective rule becomes: URI is not &#8220;/under-maintenance.html&#8221; AND not &#8220;/styles/maintenance.css&#8221;.</p>
<p>One more thing remains: tell search engines visiting your site during the maintenance period to come back and re-check the site only after a specified time. Add the block of code below to your .htaccess file to set a &#8220;Retry-After&#8221; response header indicating either a time period or a fixed time.</p>
<p>To re-visit the site only after 10 hours,</p>
<pre lang="apache">&lt;IfModule mod_headers.c&gt;
Header set Retry-After: 36000
&lt;/IfModule&gt;</pre>
<p>To re-visit after a fixed time,</p>
<pre lang="apache">&lt;IfModule mod_headers.c&gt;
Header set Retry-After: "Wed, 25 Nov 2009 06:30:00 GMT"
&lt;/IfModule&gt;</pre>
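<p>For the fixed-time form, the value must be an HTTP-date. A small sketch that builds a date 10 hours ahead using GNU <span style="font-family: courier new,courier;">date</span> (GNU syntax assumed; BSD date uses different flags):</p>

```shell
# Build an HTTP-date 10 hours from now, suitable for a fixed-time
# Retry-After header (GNU date syntax assumed)
retry_at=$(date -u -d '+10 hours' '+%a, %d %b %Y %H:%M:%S GMT')
echo "Header set Retry-After: \"$retry_at\""
```

The printed line can then be pasted into the .htaccess block above in place of the hard-coded date.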
<p>Note: &#8220;headers_module&#8221; must be enabled to use the &#8220;Header&#8221; directive inside your .htaccess file. Make sure a line similar to the one below is enabled in your Apache httpd.conf file.</p>
<pre lang="apache">LoadModule headers_module modules/mod_headers.so</pre>
<p>Finally, the complete block of code to be inserted at the end of the .htaccess file is:</p>
<pre lang="apache">&lt;IfModule mod_rewrite.c&gt;
 ErrorDocument 503 /under-maintenance.html
 RewriteEngine On
 RewriteCond %{REQUEST_URI} !^/under-maintenance.html$
 RewriteCond %{REQUEST_URI} !^/styles/maintenance.css$
 RewriteRule .* - [R=503,L]
 &lt;IfModule mod_headers.c&gt;
 Header set Retry-After: 36000
 &lt;/IfModule&gt;
&lt;/IfModule&gt;</pre>
]]></content:encoded>
			<wfw:commentRss>https://www.linuxegrep.com/archives/howto/webmaster-howto/turn-your-website-into-maintenance-mode-seo-friendly-solution-using-htaccess-file/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Understanding Cache control and enabling it for optimal results</title>
		<link>https://www.linuxegrep.com/archives/howto/webmaster-howto/understanding-cache-control-and-enabling-it-for-optimal-results/</link>
		<comments>https://www.linuxegrep.com/archives/howto/webmaster-howto/understanding-cache-control-and-enabling-it-for-optimal-results/#comments</comments>
		<pubDate>Sat, 07 Nov 2009 05:16:20 +0000</pubDate>
		<dc:creator>meharo</dc:creator>
				<category><![CDATA[Webmaster HOWTOs]]></category>
		<category><![CDATA[cache]]></category>

		<guid isPermaLink="false">http://www.findlinuxhelp.com/?p=128</guid>
		<description><![CDATA[Cache control is a mechanism used to control the caching behavior of a web browser. By default, all modern browsers know which content to cache and reuse it when required again. Browsers use their own unique algorithms to decide whether cached content can be safely re-used or not. Normally, browsers cache all [...]]]></description>
				<content:encoded><![CDATA[<p>Cache control is a mechanism used to control the caching behavior of a web browser. By default, all modern browsers know which content to cache and reuse it when required again. Browsers use their own unique algorithms to decide whether cached content can be safely re-used or not. Normally, a browser caches all the images in a web page and just checks their freshness against the server&#8217;s copy on each new request. If the content has been modified on the server, the browser simply re-downloads it.</p>
<p>The mechanism of content caching is fairly simple, but only an effective server configuration allows this feature to be utilized fully and properly.</p>
<p>Mainly, five response headers and their values control the caching behavior in HTTP client-server communication.</p>
<p style="padding-left: 30px;"><span style="color: #000000;"><span style="font-family: courier new,courier;">1. Date<br />
2. Last-Modified<br />
3. Cache-Control<br />
4. ETag<br />
5. Expires</span></span></p>
<h3>Date &amp; Last-Modified</h3>
<p>A web server sets the <span style="font-family: courier new,courier;">Date</span> and <span style="font-family: courier new,courier;">Last-Modified</span> headers in its response by default, unless directed otherwise. The <span style="font-family: courier new,courier;">Last-Modified</span> value helps a browser identify whether the content has been modified on the server. Browsers can send conditional requests and verify the freshness of a resource whenever required. This header is a minimum requirement, at least when no other cache control headers are present.</p>
<h3>Cache-Control</h3>
<p><span style="font-family: courier new,courier;">Cache-Control</span> header values are a set of control instructions which one end can ask the other to obey. A browser can state its cache preferences to the server, and a web server can force the browser to follow certain caching behaviors. Commonly used directives include &#8220;no-cache&#8221;, &#8220;no-store&#8221;, &#8220;max-age&#8221; and &#8220;must-revalidate&#8221;. These control instructions always take precedence over any other cache control header values and browser algorithms.</p>
<h3>ETag &amp; Expires</h3>
<p>These are two different methods to implement improved client-side caching. ETag (Entity Tag) is a checksum of the resource&#8217;s attributes: a unique value dynamically generated by the server or application. For a &#8216;static file&#8217; resource, this can be derived from the size, modification time and inode number. An application can generate this tag value based on specific criteria and set it in the resource&#8217;s response header.</p>
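<p>As a rough illustration only (not Apache&#8217;s exact FileETag format), such a tag for a static file could be derived from the inode, size and modification time reported by GNU <span style="font-family: courier new,courier;">stat</span>:</p>

```shell
# Sketch: build an ETag-like token from a static file's attributes
# (GNU stat; illustrative only, not Apache's real algorithm)
etag_for() {
  stat -c '%i-%s-%Y' "$1"   # inode-size-mtime
}
tmp=$(mktemp)
printf 'hello' > "$tmp"
etag_for "$tmp"              # e.g. something like 1572871-5-1257570980
rm -f "$tmp"
```

Any change to the file's content or timestamp yields a different token, which is exactly what makes the tag usable for freshness validation.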
<p>Expires is an entirely different technique which guarantees a valid cached resource until the expiry time is reached. A combination of Expires and Last-Modified values helps manage the cache control of static files simply and more efficiently.</p>
<h3>How to set up my server to use appropriate cache control headers</h3>
<p>Many people prefer avoiding ETag unless it is required for a specific reason, and many believe removing it can even improve server performance. The concept of ETag is similar to what the Last-Modified header does: a conditional request based on the last known ETag value can be sent to the server to verify the freshness of a resource, and the server answers with a 304 (Not Modified) response if it has not changed. ETag ensures more accuracy than Last-Modified, but for static file resources it is usually unnecessary to generate this extra header. ETag is useful when you need strong validation of dynamically generated content. There is also a known issue with ETag when multiple servers are used to load-balance traffic: the generated ETag differs between servers, which confuses clients.</p>
<p>You can set or unset the ETag header as shown below.<br />
To generate ETags for static files in Apache using all available attributes,</p>
<pre lang="apache">FileETag All</pre>
<p style="text-align: justify;">To unset this behavior explicitly,</p>
<pre lang="apache">&lt;IfModule headers_module&gt;
   Header unset ETag
&lt;/IfModule&gt;
FileETag None</pre>
<p><span style="color: #ff0000;">Note: &#8216;headers_module&#8217; needs to be enabled for this to work.</span></p>
<p>The combination of Expires and Last-Modified headers with appropriate values gives the best result. See below how to set Expires directives in Apache.</p>
<pre lang="apache">&lt;IfModule expires_module&gt;
   ExpiresActive On
   ExpiresByType application/javascript A604800
   ExpiresByType application/x-javascript A604800
   ExpiresByType text/css A604800
   ExpiresByType image/gif A604800
   ExpiresByType image/jpeg A604800
   ExpiresByType image/png A604800
   ExpiresByType image/x-icon A604800
   ExpiresByType application/x-shockwave-flash A604800
&lt;/IfModule&gt;</pre>
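<p>The interval A604800 used above means &#8220;expire 604800 seconds after the client&#8217;s access&#8221; (the &#8216;A&#8217; prefix stands for access time), i.e. one week:</p>

```shell
# 7 days * 24 hours * 3600 seconds = one week in seconds
expr 7 '*' 24 '*' 3600   # prints 604800
```

Pick the interval per MIME type based on how often that kind of content actually changes on your site.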
<p>If &#8216;expires_module&#8217; is enabled and ExpiresActive is set to &#8216;On&#8217;, the Expires functionality is turned on. The ExpiresByType directive allows specifying different &#8216;interval&#8217; values for different MIME types. See <a title="Understanding Cache control and enabling it for optimal results" href="http://httpd.apache.org/docs/2.2/mod/mod_expires.html" target="_blank">http://httpd.apache.org/docs/2.2/mod/mod_expires.html</a> to learn the interval syntax and how to set a default expiry time using ExpiresDefault.</p>
<p>The expires module sets a &#8216;max-age&#8217; value equal to the specified &#8216;interval&#8217; and adds it to the Cache-Control header value.</p>
<pre style="padding-left: 30px;">Cache-Control: max-age=604800</pre>
<p>Note: if you remove or unset the Last-Modified header while Expires is active, browsers are forced to skip conditional checking entirely. This is not recommended, even though it eliminates the overhead of the conditional check. Browsers handle the situation intelligently and only re-validate the cached content when you re-request (refresh) a web page or when max-age is reached. RFC standards recommend that the Last-Modified header be sent with every response.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.linuxegrep.com/archives/howto/webmaster-howto/understanding-cache-control-and-enabling-it-for-optimal-results/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>HTTP / Web server troubleshooting using Wget.</title>
		<link>https://www.linuxegrep.com/archives/howto/webmaster-howto/http-web-server-troubleshooting-using-wget/</link>
		<comments>https://www.linuxegrep.com/archives/howto/webmaster-howto/http-web-server-troubleshooting-using-wget/#comments</comments>
		<pubDate>Tue, 16 Jun 2009 14:34:53 +0000</pubDate>
		<dc:creator>meharo</dc:creator>
				<category><![CDATA[Webmaster HOWTOs]]></category>
		<category><![CDATA[troubleshooting]]></category>
		<category><![CDATA[wget]]></category>

		<guid isPermaLink="false">http://www.findlinuxhelp.com/?p=73</guid>
		<description><![CDATA[There are a few useful options to the powerful wget command, a non-interactive Linux/Unix command line downloader, which help you identify various http server responses, performance related issues and optional feature support. For probing an http server and identifying its response, we can use the --spider option. wget --spider http://www.google.com --06:24:36-- http://www.google.com/ Resolving www.google.com&#8230; 74.125.53.103, 74.125.53.99, [...]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">There are a few useful options to the powerful wget command, a non-interactive Linux/Unix command line downloader, which help you identify various HTTP server responses, performance-related issues and optional feature support.</p>
<p style="text-align: justify;">For probing an HTTP server and identifying its response, we can use the <strong><span style="font-family: 'courier new', courier;">--spider</span></strong> option.</p>
<pre lang="bash">wget --spider http://www.google.com</pre>
<p style="padding-left: 30px; text-align: justify;"><span style="color: #000000;"><span style="color: #c0c0c0;"><span style="color: #999999;"><span style="font-family: 'courier new', courier;">&#8211;06:24:36&#8211; http://www.google.com/<br />
Resolving www.google.com&#8230; 74.125.53.103, 74.125.53.99, 74.125.53.104, &#8230;<br />
Connecting to www.google.com|74.125.53.103|:80&#8230; connected.<br />
HTTP request sent, awaiting response&#8230; 200 OK<br />
Length: unspecified [text/html]<br />
200 OK</span></span></span></span></p>
<p style="text-align: justify;"><span id="more-73"></span></p>
<p style="text-align: justify;">It reports the HTTP response code and returns a corresponding exit code. If the server responded properly with a successful response, the program exits with code 0. When you add the <strong><span style="font-family: 'courier new', courier;">--spider</span></strong> switch, the real file is not downloaded locally.</p>
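<p style="text-align: justify;">This exit status makes the spider mode handy in monitoring scripts. A minimal sketch (hypothetical URL; network access is assumed for a real &#8220;up&#8221; result):</p>

```shell
# Report a URL up/down based on wget's exit status
probe() {
  if wget --spider -q "$1" 2>/dev/null ; then
    echo "up"
  else
    echo "down"
  fi
}
probe "http://www.google.com/"
```

The same pattern can drive a cron job that alerts you when a site stops responding.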
<p style="text-align: justify;">To analyse the HTTP server response header, use the <strong><span style="font-family: 'courier new', courier;">-S</span></strong> (<strong><span style="font-family: 'courier new', courier;">--server-response</span></strong>) switch. It prints the headers sent by the server.</p>
<pre lang="bash">wget --spider -S http://www.google.com</pre>
<p style="padding-left: 30px; text-align: justify;"><span style="color: #000000;"><span style="color: #c0c0c0;"><span style="color: #999999;"><span style="font-family: 'courier new', courier;">&#8211;06:23:14&#8211; http://www.google.com/<br />
Resolving www.google.com&#8230; 74.125.53.103, 74.125.53.99, 74.125.53.104, &#8230;<br />
Connecting to www.google.com|74.125.53.103|:80&#8230; connected.<br />
HTTP request sent, awaiting response&#8230;<br />
HTTP/1.0 200 OK<br />
Cache-Control: private, max-age=0<br />
Date: Tue, 16 Jun 2009 13:23:14 GMT<br />
Expires: -1<br />
Content-Type: text/html; charset=ISO-8859-1<br />
Set-Cookie: PREF=ID=640b4463b5aaaadf:TM=1245158594:LM=1245158594:S=35L2K0_MlEo7Cka5; expires=Thu, 16-Jun-2011 13:23:14 GMT; path=/; domain=.google.com<br />
Server: gws<br />
Length: unspecified [text/html]<br />
200 OK</span></span></span></span></p>
<p style="text-align: justify;">This is useful to check additional parameters like the MIME type sent by the server, the charset and the last modified time of the file.</p>
<p style="text-align: justify;">Wget supports sending custom/altered header fields in its request header.</p>
<pre lang="bash">wget --header="Host: www.mysite.com" --spider http://192.168.0.1</pre>
<p style="text-align: justify;">This command requests the site www.mysite.com hosted on server 192.168.0.1 using name-based virtual hosting.</p>
<pre lang="bash">wget --spider --header="Accept-Encoding: compress, gzip" http://www.mysite.com</pre>
<p style="text-align: justify;">This request tells the server that the client accepts the <em><strong>compress</strong></em> and <em><strong>gzip</strong></em> encoding methods. If the server supports sending compressed HTTP responses, it will respond with a <em><strong>Content-Encoding</strong></em> flag in its response header.</p>
<p style="padding-left: 30px; text-align: justify;"><span style="color: #000000;"><span style="color: #999999;"><span style="font-family: 'courier new', courier;">HTTP/1.1 200 OK<br />
Date: Tue, 16 Jun 2009 13:32:08 GMT<br />
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8b DAV/2 mod_jk/1.2.26<br />
Vary: Accept-Encoding<br />
Content-Encoding: gzip<br />
Keep-Alive: timeout=5<br />
Connection: Keep-Alive<br />
Content-Type: text/html</span></span></span></p>
<p style="text-align: justify;">Also try various timeout and retry values to benchmark your server&#8217;s response and performance.</p>
<pre lang="bash">wget --spider --wait=0 --waitretry=0 -T 0.1 -t 10 http://www.mysite.com</pre>
<p style="text-align: justify;">This checks your server for a fast response; if wget cannot get the page within the specified time, it returns a non-zero exit value.</p>
<p style="padding-left: 30px; text-align: justify;"><span style="color: #000000;"><span style="color: #999999;"><span style="font-family: 'courier new', courier;">-t number<br />
--tries=number<br />
Set number of retries to number.</span></span></span></p>
<p style="padding-left: 30px; text-align: justify;"><span style="color: #000000;"><span style="color: #999999;"><span style="font-family: 'courier new', courier;">-T seconds<br />
--timeout=seconds<br />
Set the network timeout to seconds seconds. This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time.</span></span></span></p>
<p style="padding-left: 30px; text-align: justify;"><span style="color: #000000;"><span style="color: #999999;"><span style="font-family: 'courier new', courier;">-w seconds<br />
--wait=seconds<br />
Wait the specified number of seconds between the retrievals.</span></span></span></p>
<p style="padding-left: 30px; text-align: justify;"><span style="color: #000000;"><span style="font-family: 'courier new', courier;"><span style="color: #999999;"><span style="font-family: 'courier new', courier;">--waitretry=seconds</span></span></span></span></p>
<p style="text-align: justify;">If you don&#8217;t want Wget to wait between every retrieval, but only between retries of failed downloads, you can use this option.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.linuxegrep.com/archives/howto/webmaster-howto/http-web-server-troubleshooting-using-wget/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>kSar &#8211; an easy sar grapher utility</title>
		<link>https://www.linuxegrep.com/archives/review/application/ksar-an-easy-sar-grapher-utility/</link>
		<comments>https://www.linuxegrep.com/archives/review/application/ksar-an-easy-sar-grapher-utility/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 09:59:39 +0000</pubDate>
		<dc:creator>meharo</dc:creator>
				<category><![CDATA[Application Reviews]]></category>
		<category><![CDATA[graph]]></category>
		<category><![CDATA[report]]></category>

		<guid isPermaLink="false">http://www.findlinuxhelp.com/?p=54</guid>
		<description><![CDATA[kSar is an easy to use application for creating graphical reports from your sar daily report. kSar is written in java and it supports generating the report in PDF, HTML, CSV, JPG and PNG formats. Screenshot Sar is a system activity reporting tool which comes with the sysstat rpm package. Sar will collect your system [...]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">kSar is an easy-to-use application for creating graphical reports from your sar daily reports. kSar is written in Java and supports generating reports in PDF, HTML, CSV, JPG and PNG formats.</p>
<h2>Screenshot</h2>
<div id="attachment_56" class="wp-caption alignleft" style="width: 408px"><img class="size-large wp-image-56" title="kSar CPU Statistics" alt="kSar CPU Statistics" src="https://www.linuxegrep.com/wp-content/uploads/2009/06/jobs01_sar_statistics_page_18-1024x723.png" width="398" height="281" /><p class="wp-caption-text">kSar CPU Statistics</p></div>
<p style="text-align: justify;">Sar is a system activity reporting tool which comes with the sysstat rpm package. Sar collects your system activity every 10 minutes and stores the data in /var/log/sa/saXX, where XX is the zero-padded two-digit day of the month; a daily text report is also written to /var/log/sa/sarXX. Once sar is running on your Linux system, kSar can read the daily report file /var/log/sa/sarXX and generate the graphical report.</p>
<p style="text-align: justify;">kSar can work in GUI as well as in CLI modes. GUI mode gives you the visualization of sar generated report with customizable easy to read graphs.</p>
<p style="text-align: justify;">The CLI mode gives you the flexibility of reading sar reports from a specified input file and creating graphical output in various formats like PDF, HTML or images. The command line interface is useful if you want to generate a report on a daily basis and send it via email from a cron job.</p>
<p style="text-align: justify;"><span id="more-54"></span>kSar supports a variety of platforms like Solaris, Linux, HP-UX, AIX and Mac OS X.</p>
<h2>Usage</h2>
<p style="text-align: justify;">GUI only :</p>
<pre lang="bash">java -jar kSar-x.x.x.jar</pre>
<p style="text-align: justify;">GUI and collect :</p>
<pre lang="bash">java -jar kSar-x.x.x.jar -input /var/log/sa/sarXX</pre>
<p style="text-align: justify;">To run kSar on the command line :</p>
<pre lang="bash">java -jar kSar-x.x.x.jar -input /var/log/sa/sarXX -outputPDF today.pdf</pre>
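<p style="text-align: justify;">The CLI mode fits naturally in cron. A hypothetical crontab entry (the paths and schedule are assumptions; note that &#8216;%&#8217; must be escaped as &#8216;\%&#8217; in crontab lines):</p>

```text
# Build today's sar graph as a PDF at 23:55 every day (hypothetical paths)
55 23 * * * java -jar /opt/ksar/kSar-x.x.x.jar -input /var/log/sa/sar$(date +\%d) -outputPDF /tmp/sar-$(date +\%F).pdf
```

The generated PDF can then be attached to a daily status mail by whatever mailer your system provides.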
<p style="text-align: justify;">kSar supports reading input from different sources such as &#8216;ssh://&#8217;, &#8216;file://&#8217; or &#8216;cmd://&#8217;. It can read input from the output of a command (cmd://) or from a remote host (ssh://).</p>
<p style="text-align: justify;">Download kSar at <span style="color: #0000ff;"><strong><a title="http://ksar.atomique.net/" href="http://ksar.atomique.net/" target="_blank">http://ksar.atomique.net/</a></strong></span></p>
]]></content:encoded>
			<wfw:commentRss>https://www.linuxegrep.com/archives/review/application/ksar-an-easy-sar-grapher-utility/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>FTP firewall issues in Passive mode</title>
		<link>https://www.linuxegrep.com/archives/howto/networking-howto/ftp-firewall-issues-in-passive-mode/</link>
		<comments>https://www.linuxegrep.com/archives/howto/networking-howto/ftp-firewall-issues-in-passive-mode/#comments</comments>
		<pubDate>Sat, 18 Apr 2009 20:20:14 +0000</pubDate>
		<dc:creator>meharo</dc:creator>
				<category><![CDATA[Networking HOWTOs]]></category>
		<category><![CDATA[Security HOWTOs]]></category>
		<category><![CDATA[firewall]]></category>
		<category><![CDATA[ftp]]></category>

		<guid isPermaLink="false">http://www.findlinuxhelp.com/?p=39</guid>
		<description><![CDATA[In Linux, the default FTP mode is &#8220;Passive&#8221;, whereas it is &#8220;Active&#8221; in Windows. Passive mode FTP causes the client to connect to a high port on the server. This high port is unpredictable and can range from 1024 to 65535 (high ports). Different client connections use different ports and it is difficult to identify the port [...]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">In Linux, the default FTP mode is &#8220;Passive&#8221;, whereas it is &#8220;Active&#8221; in Windows. Passive mode FTP causes the client to connect to a high port on the server. This high port is unpredictable and can range from 1024 to 65535 (the high ports). Different client connections use different ports, making it difficult to identify which port needs to be opened on the server side to establish the data connection from the client in Passive mode. Normally, if you use a firewall (say iptables) and block all ports except 21 (the FTP control port), data transfer between client and server will be blocked in Passive mode.</p>
<p style="text-align: justify;"><span id="more-39"></span></p>
<p style="text-align: justify;">Here is a simple solution using iptables to overcome this situation by allowing all high ports on the server. (The configuration will be saved in <em>/etc/sysconfig/iptables</em>.)</p>
<pre lang="text">-A INPUT -p tcp -m tcp -s 0/0 -d 0/0 --dport 21 -j ACCEPT
-A INPUT -p tcp -m tcp -s 0/0 -d 0/0 --dport 1024: -j ACCEPT</pre>
<p style="text-align: justify;">The INPUT chain will now accept incoming connections on any high port of the server. However, this is less secure, and leaving all high ports of your server open to everyone is not recommended.</p>
<p style="text-align: justify;">So here is a better solution, implemented using the <em>connection tracking mechanism</em> for FTP in iptables. This is achieved by loading an additional kernel module called &#8220;ip_conntrack_ftp&#8221;. The module keeps track of your established control connections and discovers the data connections that need to be allowed by analysing the <em>PASV</em> replies (and <em>PORT</em> commands in Active mode) sent over the control channel. It then opens only those required high ports on the server and allows the data transfer.</p>
<p style="text-align: justify;">You can enable this module in two ways. To load it on demand, execute</p>
<pre lang="bash">/sbin/modprobe ip_conntrack_ftp</pre>
<p style="text-align: justify;">And to load it whenever iptables starts, modify the iptables configuration file &#8220;<em>/etc/sysconfig/iptables-config</em>&#8221; as shown below:</p>
<pre lang="text">IPTABLES_MODULES="ip_conntrack_netbios_ns ip_conntrack_ftp"</pre>
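<p style="text-align: justify;">Either way, you can check whether the helper actually loaded (on newer kernels the module may be named <em>nf_conntrack_ftp</em> instead):</p>
<pre lang="bash"># list loaded kernel modules and look for the FTP conntrack helper
lsmod | egrep 'ip_conntrack_ftp|nf_conntrack_ftp'</pre>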
<p style="text-align: justify;">Additional modules can be loaded by listing their names separated by spaces.</p>
<p style="text-align: justify;">So the new iptables entry will look like this:</p>
<pre lang="text">-A INPUT -p tcp -m tcp -s 0/0 -d 0/0 --dport 21 -j ACCEPT</pre>
<p style="text-align: justify;">Here you specify only the control connection (port 21) in iptables; the data connections are opened on demand by the ip_conntrack_ftp module.</p>
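<p style="text-align: justify;">Note that the helper only <em>marks</em> FTP data connections as RELATED; a rule must still accept them. A minimal sketch of the relevant entries in <em>/etc/sysconfig/iptables</em> (assuming a default-DROP policy and your other rules unchanged) could look like this:</p>
<pre lang="text">-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -m tcp -s 0/0 -d 0/0 --dport 21 -j ACCEPT</pre>
<p style="text-align: justify;">With these two rules, new connections are allowed only on port 21, while the data connections discovered by ip_conntrack_ftp arrive as RELATED and are accepted by the first rule.</p>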
]]></content:encoded>
			<wfw:commentRss>https://www.linuxegrep.com/archives/howto/networking-howto/ftp-firewall-issues-in-passive-mode/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Rsync: The powerful network copy tool</title>
		<link>https://www.linuxegrep.com/archives/tool/networking/rsync-the-powerful-network-tool/</link>
		<comments>https://www.linuxegrep.com/archives/tool/networking/rsync-the-powerful-network-tool/#comments</comments>
		<pubDate>Sat, 04 Apr 2009 21:23:30 +0000</pubDate>
		<dc:creator>meharo</dc:creator>
				<category><![CDATA[Networking Tools]]></category>
		<category><![CDATA[copy]]></category>
		<category><![CDATA[filesystem]]></category>

		<guid isPermaLink="false">http://www.findlinuxhelp.com/?p=25</guid>
		<description><![CDATA[rsync is a small, light weight, easy to use linux command line tool which can transfer N number of files from source to destination over any kind of network. Especially when copying files over limited bandwidth, rsync is much faster and reliable. rsync is basically used for two filesystem/directory tree synchronization which are placed in [...]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;"><em>rsync</em> is a small, lightweight, easy-to-use Linux command line tool which can transfer any number of files from a source to a destination over any kind of network. Especially when copying files over limited bandwidth, rsync is much faster and more reliable.</p>
<p style="text-align: justify;"><span id="more-25"></span></p>
<p style="text-align: justify;"><em>rsync</em> is typically used to synchronize two filesystems or directory trees located on two different network hosts. It can also be used for website mirroring.</p>
<p style="text-align: justify;">Here are some simple example usages of <em>rsync</em>:</p>
<h4>1. Copying/syncing the contents of one directory called &#8220;/src&#8221; to another directory called &#8220;/dst&#8221;:</h4>
<pre lang="bash">rsync -a /src/ /dst</pre>
<p style="text-align: justify;">Note: the &#8220;-a&#8221; option enables <em>archive mode</em>, which preserves most of the file attributes while copying.</p>
<p style="text-align: justify;">If you omit the trailing slash (/) after the source directory, rsync will create the directory &#8220;/dst/src&#8221; at the destination and transfer the files into it.</p>
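<p style="text-align: justify;">You can verify the trailing slash behaviour locally; the <em>/tmp</em> paths below are just throwaway examples:</p>
<pre lang="bash">mkdir -p /tmp/demo/src /tmp/demo/a /tmp/demo/b
touch /tmp/demo/src/file.txt

# With the trailing slash: the *contents* of src are copied
rsync -a /tmp/demo/src/ /tmp/demo/a    # creates /tmp/demo/a/file.txt

# Without it: the src directory itself is copied
rsync -a /tmp/demo/src /tmp/demo/b     # creates /tmp/demo/b/src/file.txt</pre>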
<h4>2. Copying the contents of a directory called &#8220;/src&#8221; in computer 1 to another directory called &#8220;/dst&#8221; in computer 2:</h4>
<pre lang="bash">rsync -a -e ssh /src/ computer2:/dst</pre>
<p style="text-align: justify;">Assuming you are on computer 1, you can execute the above command to copy files to computer 2 over &#8220;ssh&#8221;. It establishes an SSH connection from computer 1 to computer 2 and transfers the files over that connection. It will also prompt you for the SSH login credentials for computer 2 before the transfer starts.</p>
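<p style="text-align: justify;">For the mirroring use mentioned above, rsync also offers a <em>--delete</em> option that removes files from the destination which no longer exist in the source. A small local sketch (the <em>/tmp</em> paths are only illustrative):</p>
<pre lang="bash">mkdir -p /tmp/mirror/src /tmp/mirror/dst
touch /tmp/mirror/src/keep.txt /tmp/mirror/dst/stale.txt

# Sync src to dst, deleting anything in dst that is not in src
rsync -a --delete /tmp/mirror/src/ /tmp/mirror/dst</pre>
<p style="text-align: justify;">After this run, keep.txt exists in the destination and stale.txt has been removed. Use <em>--delete</em> with care, ideally after a trial run with <em>-n</em> (<em>--dry-run</em>).</p>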
]]></content:encoded>
			<wfw:commentRss>https://www.linuxegrep.com/archives/tool/networking/rsync-the-powerful-network-tool/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
