
Wednesday, July 30, 2008

JavaScript Object Notation

I use JavaScript quite a bit for my i-Doc site and some of the projects I work on. I really like the language: its syntax is comfortable, and you can get a lot done without having to know a lot about it. And getting stuff done quickly is pretty much the whole reason for programming.

But the "looseness" of Javascript can also tempt you to fall into programming habits that don't scale well.

When I first started learning JavaScript I approached it as a purely function-based language, probably because I was already familiar with SCL (Screen Control Language) in SAS/AF and that's what I likened JavaScript to. Anyway, all my JS code looked like this:


function doSomething(someVal) {
    var someLocal;
    // do some stuff
    return rValue;
}

function doSomethingElse(someVal) {
    var someLocal;
    // do some other stuff
    return rValue;
}

And I would store it all in a file and include the file with a script tag in my page header.
This works perfectly well, so I had no incentive to change it. Until I started getting lots of functions in lots of files: it doesn't scale well. But by changing the coding style just a little bit, I can write my JS code so it is much easier to maintain. Using JavaScript Object Notation (or JSON) I can fake namespaces, which gives me more control over the design of my JS code. Using JSON the above would be rewritten:


var myNameSpace = {
    doSomething: function(someVal) {
        var someLocal;
        // do some stuff
        return rValue;
    },
    doSomethingElse: function(someVal) {
        var someLocal;
        // do some other stuff
        return rValue;
    }
};

Then when I want to use one of the functions, I just prefix it with the object name: myNameSpace.doSomething(withThis);
I usually choose the object name to be the same as the name of the JavaScript file. That way I avoid name collisions, and I can quickly find where a function is defined if I need to look at the source code.

Certainly, this is not a great leap forward in web programming. But I still see so much function-style JavaScript online that I thought it would be useful to pass it along.

Thursday, July 24, 2008

SAS Macro Nesting

I'd like to share a nifty SAS option that helps tremendously with debugging SAS macros. The mprintNest system option will show macro nesting information in your log. This is a big improvement over mprint alone, which showed which macro you were in but made it nearly impossible to tell which macros contained the macro call.

With mprintNest you can see exactly where you are in the executing macro stack.

You must use mprintNest together with mprint; it cannot be set on its own.
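
Here is a minimal sketch of it in action (the macro names are just for illustration):

options mprint mprintNest;

%macro inner;
data _null_;
x = 1;
run;
%mend inner;

%macro outer;
%inner
%mend outer;

%outer

With both options set, each MPRINT line in the log carries the full call stack, something like MPRINT(OUTER.INNER):, instead of just the name of the innermost macro.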

Wednesday, June 04, 2008

Data Set Sorted By Information

UPDATE: Please see the comments for a better way to get the sorted-by information from a data set!

The other day a colleague mentioned that it was not possible to get the sort information for a data set from the dictionary tables. Intrigued, I took up the challenge. While it's true that you cannot get the sorted-by information directly, it is possible to get the necessary information and put it together.

First of all, a little test data set:


data myData;
input key1 nonKey keyB key3;
cards;
1 2 6 3
1 5 5 7
2 7 4 4
3 3 5 9
4 5 3 2
1 5 4 6
3 3 9 8
5 5 7 9
6 6 4 3
3 3 3 6
;


I chose the names key1, nonKey, keyB, and key3 to make sure I wasn't getting the variables in alphabetical order and mistaking it for the sort order.

Now sort the data set:

proc sort data = myData;
by key3 descending keyB key1;
run;


It is important to make sure this works with descending sorts. Also notice the variables are in a different order than in the data step so we don't confuse any artifact of creation order with sort order.

And now, finally, the code that will report the sort order. I used SQL and a data _null_ step to get the information and then write it to the log. Originally I was just trying to see if it could be done, and just writing it to the log is not the most useful thing. Now that I've seen it works, I can rewrite it as all macro code using the V tables and %sysfunc() calls. That would allow me to make it a "function" style macro which returns a value to be used within code. Maybe tomorrow or next week...

%macro getSortedByVars(lib=,mem=);
%* This macro will write the sort order of a data set into the log;
%* If there is no sort on the data set it returns a blank;
%* It takes two parameters: the library and the name of the data set;
%* June 2008, Stephen Philp datasteps.blogspot.com/pelicanprogramming.com;
%let lib = %upcase(&lib);
%let mem = %upcase(&mem);

proc sql;
create table keys as
select name,
       sortedBy,
       case
         when (sortedBy < 0) then 'DESCENDING'
         when (sortedBy > 1) then ' '
       end as prefix,
       case
         when (sortedBy < 0) then abs(sortedBy)
         else sortedBy
       end as sortOrder
from dictionary.columns
where libname = "&LIB" and
      memname = "&MEM" and
      sortedBy ne 0
order by sortOrder;
quit;

%* now pack up those values from the keys table;
data _null_;
length value $32767;
if 0 then set keys nobs=n;
do i = 1 to n;
  set keys point=i;
  value = catX(' ', value, prefix, name);
end;
call symput('sortedBy', trim(value));
stop;
run;

%put &sortedBy;
%mend getSortedByVars;

%getSortedByVars(lib=work,mem=mydata);
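
In the spirit of that update, here is my sketch of what the function-style version might look like. It leans on the attrc() data set function through %sysfunc(); the SORTEDBY attribute returns the sorted-by information directly (untested, so consider it a sketch):

%macro getSortedBy(ds);
%local dsid rc;
%* open the data set, emit its SORTEDBY attribute, then close it;
%let dsid = %sysfunc(open(&ds));
%if &dsid %then %do;
%sysfunc(attrc(&dsid, SORTEDBY))
%let rc = %sysfunc(close(&dsid));
%end;
%mend getSortedBy;

%put sorted by: %getSortedBy(work.myData);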

Friday, May 09, 2008

i-Doc Interactive SAS Documentation

After much hard work, I am happy to announce the arrival of i-Doc interactive SAS documentation. The idea behind i-Doc is to gather SAS documentation from users all over the world. I have started with SAS functions and hope to continue with formats, informats, the macro language, system options, etc. Eventually I'd like to provide copies in book format for people to keep on their desks.

Please check it out and tell your friends if you find it useful. Currently i-Doc is in beta and only works with Internet Explorer.

i-Doc Interactive SAS Documentation

Thursday, April 17, 2008

SAS SQL Join

One of the most frequent uses of SQL is to join tables. And since SAS data sets are tables, there is good reason to learn SQL. But a lot of SAS programmers shy away from learning SQL because they are already familiar with merging in the data step. Here are three reasons to use a SQL join instead of a data step merge. But first, two little sample data sets to use for the examples:


data sales;
length var $5;
do id_num = 1 to 10;
var = 'left';
output;
end;
run;

data contracts;
length var $5;
input id_no var $;
cards;
3 right
2 right
5 right
6 right
1 right
;


Okay, now here's the three reasons!

1) No need to sort the data beforehand.
This one is pretty self explanatory. If the tables are not sorted by the variables you are joining on, SQL will take care of it.

2) You can join on different variable names.
In a SAS data step merge, you have to merge by a variable or variables named identically in each data set. With SQL, you can join on variables with different names as long as the values match up. So instead of something like this:

proc sort data = contracts;
by id_no;
run;
data together;
merge sales( in=a )
contracts( in=b rename=(id_no = id_num) );
by id_num;
if a;
run;


You can use sql:

proc sql;
create table together
as select * from sales as a
left join contracts as b
on a.id_num = b.id_no;
quit;


3) You are warned if there are overlapping variables.
In a traditional data step merge, you have to be very mindful of overlapping variables: variables that are shared by the data sets but are not part of the by statement. If there are overlapping variables, the last data set named on the merge statement to contribute to the observation gets to deliver the resulting value, but there is no warning in the log letting you know values have been overwritten. SQL will let you know that the variable already exists, though note that it uses the first value, not the last.

If you are already pretty familiar with merging in the data step then you may find some of the SQL syntax a little strange. Most merges are of the "if a;" and "if a and b;" variety, so those are the best starting points for getting used to the equivalent SQL syntax.
The left join we used in the above example is equivalent to the "if a;" data step syntax. It is called an "outer" join because we are also keeping values that don't belong to both sets. For the more restrictive "if a and b;" merge, use an inner join in SQL:

proc sql;
create table together
as select * from sales as a
inner join contracts as b
on a.id_num = b.id_no;
quit;


It is called an inner join because you only want values that belong to both sets. For a data step merge with no subsetting if at all, where you keep everything from both data sets whether it matches or not, use a full outer join in SQL. It is called a full outer join because you want everything from both sets.

proc sql;
create table together
as select * from sales as a
full join contracts as b
on a.id_num = b.id_no;
quit;
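
And if you actually want the "if not (a and b);" behavior, keeping only the rows that appear in just one of the two tables, one way (a sketch using the same sample data) is to filter the full join with a where clause:

proc sql;
create table mismatches
as select * from sales as a
full join contracts as b
on a.id_num = b.id_no
where a.id_num is missing or b.id_no is missing;
quit;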



Hopefully that is enough to get you started if you are interested in SQL. If I get some time I will put together and post a little cheat sheet of Venn diagrams to illustrate the inner/outer join concepts.

Update: I just got some Venn diagrams up in a new post. Hope you find it useful!
www.sascoders.com/2010/06/how-to-get-what-you-want-out-of-data.html

Wednesday, March 26, 2008

The Cats() Function

One of my favorite new functions is the cats() function, available in SAS v9. Like a trim(left()) pair, it removes leading and trailing blanks from each argument, and then it concatenates the results. CATS() stands for concatenate and strip. Basically, the cats() function takes this type of assignment statement:


key = trim(left(firstName)) || trim(left(lastName)) ||
trim(left(phone)) || trim(left(zip)) ;


and changes it to this:

key = cats(firstName, lastName, phone, zip);


Watch the length of the result: if the variable you are assigning to has no declared length, SAS gives it a default length (200 characters in the data step), and in other contexts the cats() function can return a value up to 32767 characters long. So as always, it's a good idea to use a length statement on the variable you are assigning to. In this example I might use something like:

length key $200;


The cats() function belongs to a family of cat functions, each doing its own version of concatenation: cat(), cats(), catt(), catx().
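
Here is a little sketch that shows the difference between the four (the variable names are mine, just for illustration):

data _null_;
length a b $10;
a = ' one ';
b = ' two ';
cat_result = cat(a, b);        /* concatenates as-is, padding and all */
cats_result = cats(a, b);      /* strips leading and trailing blanks */
catt_result = catt(a, b);      /* trims trailing blanks only */
catx_result = catx('-', a, b); /* strips blanks and inserts a delimiter */
put cat_result= / cats_result= / catt_result= / catx_result=;
run;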

Coming up in version 10, the dog() family of functions! :)

Tuesday, March 18, 2008

Blogging At the SAS Global Forum

So far so good at The SAS Global Forum.

I haven't had too much time to go to a lot of talks, but I attended a really informative one yesterday morning. Judy Loren from Health Dialog Analytic Solutions gave a good talk on using data step hash objects. One little code tidbit that caught my eye was this loop construct:

do i = 1 by 1 until( some criteria or i > 1000 );

It is a nice shorthand that creates the loop sentinel variable and updates it for you, instead of:

i = 0;
do until( some criteria or i > 1000 );
i + 1;
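
And here is a runnable toy version of the shorthand, if you want to see the sentinel in action (it just counts to 5):

data _null_;
do i = 1 by 1 until( i >= 5 );
put i=;
end;
run;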

Nifty! You never know what little gems you can pick up from the Global Forum. Thanks Judy Loren.


Mostly I have been busy preparing for my own talk tomorrow morning (and enjoying St Paddy's Day on the riverwalk :).

I uploaded the paper and some accompanying code files in case you are interested.

More later!

Thursday, March 13, 2008

SAS Global Forum 2008


Howdy Y'all!

I will be travelling to San Antonio, TX for SAS Global Forum 2008. I am not really there to "blog" the conference, but I will do my best to take some pictures and share my thoughts on different things I see there. Mostly I plan on wandering around, getting some free swag, meeting as many people as I can and having a good time. Oh, and I am presenting a paper "SAS Macros: Beyond the Basics" Wednesday morning at 10am :)

If you happen to see me there, please step forward and introduce yourself!

Thursday, February 07, 2008

Excel Text Wrap

I often work with data sets that have large text columns. Invariably, the long text fields are related to customer complaint data. It seems we are at our most eloquent when we have something to complain about!

Anyway, I often end up pushing these data sets with large text columns into Excel. And Excel sizes the text column to show only some of the text. To remedy this, I first set the width of the column to something manageable, then highlight the column in Excel and choose Format->Cells. Next choose the Alignment tab and put a check mark next to Wrap Text. This tells Excel to auto-wrap the text in the column so it can all be shown.


Like most SAS programmers, I end up doing at least some of my data/presentation work in Excel. I know enough to get most things I need done, however I certainly don't know Excel as well as I know SAS. In an effort to increase my productivity in Excel I ask you to send me any tips you have. After a few weeks, I will put them all together and share them on this blog.

Please send your Excel Productivity Tips For SAS Programmers to stephen at pelicanprogramming dot com. Please include your name and the city where you live.

Thanks!

Tuesday, January 29, 2008

New Macro IN Operator

Has anyone gotten the new IN Macro operator to work? I just want to test it out and SAS Macro keeps coughing up an error that a character operand is found where a numeric operand is required.

According to the documentation, you can use the # character or the mnemonic IN. Their example is A#B C D E.

I am trying to test it with:


%macro test;
%if A#B C D E A %then %put it works;
%mend test;
%test;


Pretty straightforward as far as I can tell. Am I overlooking something obvious?
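
One guess at what I might be overlooking: the IN operator may only be recognized when the MINOPERATOR option is in effect, either as a system option or on the %macro statement itself (with the list delimiter controlled by the companion MINDELIMITER option, which defaults to a space). A sketch of what that would look like:

%macro test / minoperator;
%if A in B C D E A %then %put it works;
%mend test;
%test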

Wednesday, November 14, 2007

Using Logical Expressions In SQL

Most of us SAS programmers approach SQL as simply a data extraction and table joining tool. Since most of us have used the data step longer than SQL, we tend to leave the logic programming to the data step with its if/then statements. However, SQL does have a way of assigning values conditionally: with the CASE expression you can test and assign values logically.
The basic syntax for the simple form, where each WHEN value is compared against a single operand, is:


CASE operand
WHEN value THEN result
WHEN value THEN result
ELSE result
END



In the code below I am just assigning a 1 or a 0 to a column/variable named bool_tf.
Using the CASE expression is pretty straightforward and is another great way to use SQL to get more coding done in fewer steps.



data myData;
input answer $;
datalines;
true
false
true
true
false
false
true
;
proc sql;
create table a as
select answer,
case substr(answer,1,1)
when 't' then 1
when 'f' then 0
end as bool_tf
from myData;
quit;
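
For completeness, CASE also has a "searched" form in which each WHEN holds a full logical expression instead of a value to compare against the operand. The same example rewritten in that form:

proc sql;
create table b as
select answer,
case
when answer = 'true' then 1
when answer = 'false' then 0
else .
end as bool_tf
from myData;
quit;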

Tuesday, October 16, 2007

Saving Steps With SQL

Often we need to create some simple statistics for a set of data and then associate those stats with each observation of the original set. As a simple example, consider a table with only three rows:
N
3
6
4

We want to get the mean of the variable N and stripe it down all of the observations:
N Mean
3 4.333
6 4.333
4 4.333

The first way I learned to do this was with a proc summary and a merge. A better way to do it is with proc sql.

Here is a little test data:
data myData;
input x level $;
cards;
11 a
31 a
51 a
2 b
61 a
8 b
21 a
71 a
91 a
4 b
61 a
21 a
5 b
7 b
5 b
31 a
1 b
61 a
8 a
9 b
3 a
2 b
5 a
7 b
7 a
3 b
;
* in that data set we have two variables X and LEVEL. We can get the stats on X for each level by summarizing and merging...;
proc sort data = myData;
by level;
run;

proc summary data = myData;
by level;
var x;
output out= tempStats(drop=_type_ _freq_) mean=mean max=max min=min;
run;

data sumStats;
merge myData tempStats;
by level;
run;

* or better yet, we can collapse the whole thing into one nifty proc sql step!
(SAS will note in the log that it is remerging the summary statistics back with the original data.);
proc sql;
create table stats as select *,
min(x) as min,
max(x) as max,
mean(x) as mean
from myData group by level;
quit;

Wednesday, September 19, 2007

Hooray For Vmware!

I am excited for computers again! Every once in a while something comes along that really changes the way you interact with computers. You know the feeling, it stops you in your tracks and makes you say, wow.

I remember when I was a kid and I first played a game called "Beach Head" on my Commodore 64. There was a level where you controlled a machine gun and the little computer guys would run at you from behind walls and throw grenades at you. Every once in a while when you shot one of the little men he would yell "Medic!" or "I'm hit!". It was such a strain for that little computer to create the digitized speech that the whole game would slow down for a second or two. But my brother and I were seriously impressed. Wow!

I recently installed Vmware's Player on my little Dell laptop. If you are not familiar with Vmware and their virtualization technology, then stop reading this and go to their web site. It is easily the most impressive software I have used in quite a while.

You see, I am going on vacation for two weeks (woohoo!) and will have some time to work on some coding projects during flights. I have been working on a perl/web/mySql project for my website for a while now and am getting close to finishing it. To work on it, I usually log into my remote server using ssh and work away. Works great until you aren't connected to the internet. So I thought, why not create a local server to work on while I am away from the internet?

Usually that would entail downloading a linux distro, partitioning part of my hard drive, making sure the distro has all the drivers it needs for my laptop, setting up and configuring all the tools I need, etc etc. Essentially a lot of wasted, unproductive time.

Last night I downloaded Vmware Player for free. Then I downloaded a free appliance called Grandma's LAMP. An appliance is a full-blown, pre-configured virtual server that is hosted on your machine through the player. Within minutes it was up and running.

All I had to do was go to my web server, tarball all the files for my application and download them to my laptop. Then I just copied them to my virtual Ubuntu server using the pre-configured samba share and voila! A completely usable local copy of my entire development environment in two hours! I am seriously impressed. And all without doing any reconfiguring on my little Windows XP laptop.

And to top it off, I can take the whole virtual server and the player and copy them to a 2 gig thumb drive. Any computer I stick my USB drive into can host my development server. Wow, indeed.

Thursday, September 13, 2007

Bootstrap Resampling

First of all, I should mention here and now at the beginning of this post that I am not a statistician. But I am married to one (Happy Birthday Orla!), and I do understand normal distributions and confidence intervals and standard deviations and such. Suffice it to say, I generally get the concepts but my eyes invariably glaze over once the equations are presented.

Now that I've gotten that out of the way, I will attempt to make this post about... statistics! Hopefully everything I write will make sense, but if anything is outrageously stupid, feel free to forgive me and correct me in the comments.

On one of my travels through the internet I came across something I had never heard of before: bootstrap resampling. I will attempt to describe my understanding of it, but please do check out the links at the bottom because I am sure to over-simplify or exaggerate some parts.

In traditional parametric statistics the data is generally assumed to follow a particular pattern or distribution, with the "normal" bell curve distribution being the ideal. Statisticians use various tests to determine if the sample data is normally distributed (a surprising amount of data is) and then proceed to make statistically sound inferences about the population the data was drawn from (confidence intervals, standard deviation, etc). If it is true for the randomly drawn sample then it is true for any randomly drawn sample from the population, assuming the sample fits the normal distribution.

Now if I understand bootstrap resampling correctly, there is no need to assume the data follows a normal distribution, or any particular statistical distribution. You take a sample from your data and record the mean, then you put your sample back and draw another sample and record its mean. You repeat that many, many, many times and then use the resulting means to pick your intervals. Here is the original description I read, from a wonderful site called the World Question Center. It is an excerpt from the response of Bart Kosko; if you scroll about halfway down the page you will find it. He is way smarter than I am, so his explanation will surely make more sense than mine:

"The hero of data-based reasoning is the bootstrap resample. The bootstrap has produced a revolution of sorts in statistics since statistician Bradley Efron introduced it in 1979 when personal computers were becoming more available. The bootstrap in effect puts the data set in a bingo hopper and lets the user sample from the data set over and over again just so long as the user puts the data back in the hopper after drawing and recording it. Computers easily let one turn an initial set of 100 data points into tens of thousands of resampled sets of 100 points each. Efron and many others showed that these virtual samples contain further information about the original data set. This gives a statistical free lunch except for the extensive computation involved—but that grows a little less expensive each day. A glance at most multi-edition textbook on statistics will show the growing influence of the bootstrap and related resampling techniques in the later editions.
Consider the model-based baggage that goes into the standard 95% confidence interval for a population mean. Such confidence intervals appear expressly in most medical studies and reports and appear implicitly in media poll results as well as appearing throughout science and engineering. The big assumption is that the data come reasonably close to a bell curve even if it has thick tails. A similar assumption occurs when instructors grade on a "curve" even the student grades often deviate substantially from a bell curve (such as clusters of good and poor grades). Sometimes one or more statistical tests will justify the bell-curve assumption to varying degrees — and some of the tests themselves make assumptions about the data. The simplest bootstrap confidence interval makes no such assumption. The user computes a sample mean for each of the thousands of virtual data sets. Then the user rank-orders these thousands of computed sample means from smallest to largest and picks the appropriate percentile estimates. Suppose there were a 1000 virtual sample sets and thus 1000 computed sample means. The bootstrap interval picks the 25th — largest sample mean for the lower bound of the 95% confidence interval and picks the 975th — largest sample mean for the upper bound. Done.
Bootstrap intervals tend to give similar results as model-based intervals for test cases where the user generates the original data from a normal bell curve or the like. The same holds for bootstrap hypothesis tests. But in the real world we do not know the "true" distribution that generated the observed data. So why not avoid the clear potential for modeler bias and just use the bootstrap estimate in the first place?"


So my questions to the statisticians: do you use bootstrap resampling? Is this something you do in SAS? Do you feel it helps to simplify statistics and open it up to us non-statisticians?
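
For the SAS-curious, here is my sketch of the basic idea (consider it untested; it assumes SAS/STAT's proc surveyselect is available, and the toy data and seed are just for illustration). Draw many samples with replacement, take the mean of each, then take percentiles of those means:

data original;
input x @@;
datalines;
3 7 4 9 2 6 5 8 4 6
;

* 1000 resamples, each the same size as the original, drawn with replacement;
proc surveyselect data=original out=boot seed=1234
method=urs samprate=1 outhits reps=1000;
run;

* one mean per resample;
proc means data=boot noprint;
by replicate;
var x;
output out=bootMeans mean=mean;
run;

* the 2.5th and 97.5th percentiles of those means bound the 95% interval;
proc univariate data=bootMeans noprint;
var mean;
output out=ci pctlpts=2.5 97.5 pctlpre=ci;
run;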

Really good explanation of bootstrap resampling:
http://www.uvm.edu/~dhowell/StatPages/Resampling/Bootstrapping.html

Bootstrapping in SAS:
http://support.sas.com/ctx/samples/index.jsp?sid=479

Tuesday, August 21, 2007

Clean Up Clean Up

Clean up.
Clean up.

Everybody everywhere.

Clean up.
Clean up.

Everybody do your share.


This is the song my wife taught our two-year-old daughter in the hopes that it would make clean-up fun and encourage more of it. Sometimes it works really well and sometimes not so well. Every now and then it backfires completely and my toddler makes a big mess just so she can run around in circles singing the Clean Up song, leaving Mommy or Daddy to do the actual cleaning.

As SAS programmers, we are given a lot of freedom to create as many data sets in the workspace as the system will allow. I have met many SAS programmers who do not give a thought to the consequences of keeping all those work data sets hanging around. Some of the trickiest bugs to track down are caused by stale work data sets (especially when running interactive SAS).

I have found it very useful to delete all work data sets when I am running a piece of code repeatedly. That way I make sure previous runs don't taint current runs. A simple proc datasets does the trick:

proc datasets library=work mt=data nodetails nolist KILL;
quit;

So now that you've got the song and the code, you have no excuses for leaving a mess in the work library :)

Clean up! Clean Up! Everybody Everywhere!

Tuesday, August 07, 2007

Multiple By Variables

Here is one little piece of SAS programming that I always have to work out: when using multiple "by variables" in a SAS data step, when does the grouping flip? An example:


data stuff;
set otherStuff;
by var1 var2 var3;
if first.var1 then ...;
if first.var3 then ...;
run;

For some reason, I always have to sit and think through how multiple by variables affect each other. So here, once and for all, is the rule for me to remember:

If the group (value) changes in a variable to the left, it changes the group of all the variables to its right, regardless of their values.

It makes sense if you think it through, but sometimes it's just easier to write the rule down and refer to it (here!).
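
Here is a tiny demonstration with toy data (already sorted by both variables). Notice on the last observation that first.var2 flips to 1 when var1 changes, even though the value of var2 itself does not change:

data demo;
input var1 var2;
datalines;
1 1
1 1
1 2
2 2
;

data _null_;
set demo;
by var1 var2;
put var1= var2= first.var1= first.var2=;
run;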

Wednesday, August 01, 2007

Summer Reading

Currently I am reading a book so good that I thought I would give it a quick recommendation. Against The Gods: The Remarkable Story of Risk is one of those books that I know, before even finishing it, I will read again and again. And I will gain deeper insights into history, humanity, stock markets, statistics and even the decisions that I make in my everyday life.

So if you get the chance, pick up a copy. And if you have any other good reads that you think I or others might be interested in, please share them here.

Wednesday, June 13, 2007

Los Angeles Basin SAS User Group

If you live in the Los Angeles area and have not had the chance to attend a LABSUG, you are missing out. Kimberly Lebouton has worked very hard to bring a user group to Los Angeles and her efforts have been very productive. The speakers have been very good and Kimberly has worked diligently to listen and respond to attendees' feedback.

I was hoping to attend this year, but my wife is going out of town leaving me with babysitting duty. Of course, I could bring my toddler-- she would make a very engaging presenter!

I wonder if Kimberly could carve out 2 hours for nap time this year. . . :)

LABSUG
Friday June 22nd
Sheraton Los Angeles Downtown Hotel
http://www.labsug.org

Tuesday, May 29, 2007

Where Did the Observation Come From?

Here is a little snippet of code I created to address the problem of assigning a value to a variable based on which data set an observation came from in a data step. An example:

Suppose I have a whole bunch of data sets each representing a different country. I want to set a lot of them in one data step and create one resulting data set with a variable called language. In order to create the language variable correctly, we need to know which data set the observation is coming from. Typically we would use the IN= option on the data set to create a flag and then check that flag using if/then logic.


data selectedCountries;
set
chile(in=chile)
china(in=china)
costa_rica(in=costa)
egypt(in=egypt)
fiji(in=fiji)
turkey(in=turkey)
usa(in=usa)
saudi_arabia(in=saudi)
;

if chile then language = 'SPANISH';
else if china then language = 'CHINESE';
else if costa then language = 'SPANISH';
etc etc etc...
run;

One of the major problems with this approach is it does not scale well. The more countries you set, the more problematic your if/then logic becomes.

Here is a slightly more elegant solution that uses arrays and variable information functions. You still use the IN= option on the data set, but you name each in= variable the same as the value you want to assign. Then you create an array of all those in= variables. Finally, you loop through the array and check each variable's boolean value; if it is true, you assign your new variable the value returned by the vname() function.

data selectedCountries;
set
chile(in= SPANISH)
china(in= CHINESE)
costa_rica(in= SPANISH)
egypt(in= ARABIC)
fiji(in= ENGLISH)
turkey(in= TURKISH)
usa(in= ENGLISH)
saudi_arabia(in= ARABIC)
;
array names[*] SPANISH CHINESE ARABIC ENGLISH TURKISH;
do i = 1 to dim(names);
if names[i] eq 1
then language = vname( names[i] );
end;
run;

Wednesday, May 23, 2007

Saving Time

When I was a kid, my brother, sister and I spent a lot of time in my father's dental lab. This gave us a unique opportunity to learn how to get things done in a time-sensitive production environment. The more business he got and the more successful his practice became, the more demanding his labwork. He spent a lot of time working in the lab perfecting techniques and efficiency. We kids would hang out in his dental lab looking for things to do and he would hand out miscellaneous tasks to us (sadly, he locked away the NO2 from us). As we got older and more proficient working the lathe, drill, sand blaster, oven, etc., we would get more critical tasks. Spending time with Dad meant spending time learning how to get things done in a fast-paced, hands-on environment.

One thing Dad would always repeat to us is how important it is to get things done "quickly and correctly."

Just getting things done quickly won't cut it. And believe it or not, just getting things done correctly doesn't cut it either. Not if you have other steps in the process or customers waiting on you to complete your task. In order to have time in this life for things other than work, it helps to learn how to get things done both quickly and correctly.

Generally, most people think of working quickly as producing sloppy work. But actually, you can get things done quickly with FEWER mistakes. The trick is to separate tasks into two categories: things that should be done very quickly, and things that should be done very correctly. When you get good at cutting down the time it takes to do the miscellaneous tasks, you can spend more time getting the critical tasks done correctly. This type of thinking translates very well to programming. It has probably helped my career more than any other single piece of advice I have received.

So as you spend your day programming, think to yourself, "what are the non-critical tasks that I am having to do and how can I minimize them?" Believe it or not, with just a few small changes you can find yourself getting a lot more done.

Here is an example of a change that I have recently incorporated. If you are like me, you probably have a few folders on your hard drive that you are constantly accessing. Throughout my day I am constantly typing something like "c:\my data\reports\ad hoc\" into Save As and Open dialog boxes, Windows Explorer, etc. In Windows you can create an environment variable to substitute for the path. So in my example I might create a Windows environment variable named R (for reports) with the value "c:\my data\reports\ad hoc\". Now I can just type %R% to navigate to that folder. This saves time and frees my mind to focus on more critical tasks than navigating Windows Explorer.

I believe I got that tip from http://www.lifehack.org/. It's a great site full of useful tips for minimizing the clutter so you can focus on getting things done quickly and correctly.