Tuesday, October 30, 2007

Glossy 3D Interface with Corel Draw

Let’s make something clear: I’m NOT a designer, I’m a coder. I learn a bit about design only because I don’t want my apps to end up looking like crap, the way many Open Source applications do when they’re built without the help of a designer.

So here’s my story. One day a client asked me to show them what their website might look like if I built it using a CMS. Sure enough, I didn’t want to show them a crappy sample site with a standard template, even if it was only a preview. I wanted it to look at least close to a nice website, complete with their company name in the header. I searched Google for a tutorial on the first thing that crossed my mind, glossy interface design, and found a nice one with a very promising title, “Principles of Glossy Interface Design” (http://www.partdigital.com/tutorials/glossy). It’s done in Photoshop.

I opened Photoshop, clicked here and there, input this and that, and got stuck at a point where the tutorial didn’t give clear instructions. After a few minutes of trying in vain I cowardly gave up (hey, Photoshop is not one of my things). So I thought I’d give it a shot with Corel Draw, which I’m more experienced with. Here’s what I did:
  1. Create a rounded rectangle: first create an ordinary rectangle by selecting the Rectangle Tool from the Tool Box, then use the Shape Tool to drag a corner and make it round.


  2. Using the Fill Tool, fill the rectangle with an RGB value of 117, 153, 163 (as instructed in the article mentioned above; any color will do, of course), and don’t forget to remove the outline with the Outline Tool.
    NOTE: When the Fill Dialog opens, the default color model is CMYK; change the model to RGB.
  3. Next, create an ellipse crossing the rounded rectangle, fill it with white, and remove its outline. The intersection between this ellipse and the rounded rectangle will be used to create the shine effect by making the intersection a semi-transparent white.
    NOTE: The illustration below shows the ellipse without a fill color and with an outline for easier viewing; it should actually be white and without an outline.


  4. To create the intersection between the ellipse and the rounded rectangle, bring up the Shaping Dialog from the main menu: Arrange -> Shaping -> Shaping. Select the rounded rectangle, choose Intersect as the shaping type, and make sure that in the Leave Original section the Target Object checkbox is cleared. Click the Intersect With button and then click on the ellipse to create the intersection between the two shapes, and remember to remove the outline of the intersection.


  5. Now, to create that shiny effect we talked about a while ago, we must make the intersection nearly transparent. To do so, use the Transparency Tool: click inside the intersection and drag slightly in any direction. That creates a Linear Transparency, which is not what we need; we want a Uniform Transparency, which we get by setting the Transparency Type on the transparency toolbar to Uniform. Next, set the Starting Transparency value on the same toolbar to 70.


  6. Starting to look shiny, are we? No? OK, let’s move to the next step, where we go back to working on the rounded rectangle. To add a more 3D look, we’ll fill the rounded rectangle with a gradient: select the rounded rectangle and open the Fountain Fill Dialog; a dialog with a gradient editor will appear. Set the Angle to 90, the “From” color to RGB 117, 153, 163 and the “To” color to RGB 153, 219, 222.



  7. Now don’t tell me you can’t see something starting to look shiny. But yes, something is still missing: a little shadow, perhaps. Select the rounded rectangle, pick the Shadow Tool, click on the center of the rounded rectangle, and drag it just a little bit in any direction. The shadow settings will appear on the toolbar; set Drop Shadow Opacity and Drop Shadow Feathering to 10, and Drop Shadow Feathering Direction to Outside.


Nice, isn’t it? Well, for a programmer at least. I guess that’s all I can share; next time I’ll probably share some other simple Corel Draw tips for programmers.

Monday, October 29, 2007

Oracle TNS Error : Could not resolve the connect identifier specified

A cliché, yes, but I think the cause of the problem is worth noting, at least for myself. One day I was thinking about bringing home some of my work, and the country where I live and work was (and still is) not one of those high-tech countries where internet speed is generally fast enough to remotely access a database server. Anyway, the problem was that I only had the Oracle 9i Client installed on my laptop and I needed a server, so I installed Oracle 10g Express Edition (the download took quite some time, mind you).

Everything went just fine until I was back at my office trying to run the application with the TNS configuration pointing to the Oracle server at the office, when it said:

Oracle TNS Error : Could not resolve the connect identifier specified


I tried connecting with TOAD using the same TNS configuration and no problem occurred, which only added to my confusion. So I called on Google, ran some searches, and found discussions about the topic here:
http://forums.oracle.com/forums/thread.jspa?messageID=2154989
http://ora-12154.ora-code.com/

The point is that after I installed Oracle XE I had two Oracle Homes, and the one used by my application was Oracle XE’s; that’s why it wouldn’t connect to my office’s server. One of the solutions is to use the TNS_ADMIN variable, but those instructions gave me a headache. I needed a quick solution, my boss was coming. So my solution was to temporarily replace Oracle XE’s TNSNAMES.ORA file with the one from the Oracle 9i Client. It was a quick and certainly dirty solution, but it saved me some accusing questions from my boss.
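
For completeness, the TNS_ADMIN route I chickened out of basically boils down to pointing every Oracle Home at one shared folder containing TNSNAMES.ORA. Roughly, it looks like this (the path, host, and alias below are only examples, not my actual setup): set TNS_ADMIN as a Windows environment variable,

set TNS_ADMIN=C:\oracle\ora92\network\admin

and make sure that folder’s TNSNAMES.ORA has an entry for the office server, something like:

OFFICEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = office-db-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = officedb))
  )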

Actually, if I had thought more carefully (and had more knowledge of Oracle) instead of instantly weeping and running to Google, I would have noticed the Oracle Home option on TOAD’s Login Window, which lists the Oracle Homes to choose from; then I would (probably) have figured out the problem right away. Silly me.

Saturday, October 27, 2007

Optimizing ASP.NET Web Service with DataSet Compression

This is one of those days when I received a disastrous application source (I often call it garbage code). To tell you something about this application: it abuses Web Services by simply, if not brutally, exposing a method called QueryData (returning a DataSet) as a Web Method, so that the Web Service is no more than a dummy proxy serving database queries without a single bit of knowledge of the business logic. When the big surprise came, namely that the application failed to deliver the expected performance due to network limitations and none of those brutish programmers seemed to know what to do (another big surprise), the supervisor came to my desk:

“Can you do something to make this application run faster?”

By this point I had already glanced through the code and found plenty of signs telling me it was garbage, so I calmly said:

“My recommendation is to burn this piece of s**t, piss on it, and never look back”

Yeah, I wish. But hey, he’s my supervisor, so as always I obediently said, “I’ll see what I can do, Sir.”

So here I am on the job, firing up Google on the first thing that crosses my little mind: compression. Some recommendations quickly came up:

  1. Create a pair of Web Service filters (input and output) using WSE (Web Services Enhancements) to compress and decompress the data transmitted between server and client. This is nice in that the application core doesn’t have to be changed, only a simple modification to the Web Method declaration and the generated proxy (usually named Reference.cs), and, even nicer, various open source implementations are already available. The not-so-nice part is that somehow it does not work with my garbage code.
  2. Utilize standard HTTP compression by adding “Accept-Encoding: gzip, deflate” to the Web Service request and then decompressing the data on the client side. This is done by slightly modifying GetWebRequest and GetWebResponse in the proxy (see the sketch right after this list). Sounds nice, but gzip is only “normally” supported on the IIS 6.0 bundled with Windows Server 2003, while IIS 5.0 requires a lot of effort to make it work, and according to the community (and my own frustrating experiments) there is no guarantee that the same steps will work on a different machine. Even if I made it work, it would most probably mean hours of support for the installation guys, so no way I’m going for that.
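
For the record, here is a minimal sketch of how the client side of that second approach can look. This is not taken from anyone’s article; it uses the .NET 2.0 shortcut HttpWebRequest.AutomaticDecompression instead of a hand-written GetWebResponse override, and the proxy class name below is made up for illustration (the real one is generated into Reference.cs):

using System;
using System.Net;

// "MyServiceProxy" is a hypothetical name; this partial class extends the proxy
// that Visual Studio 2005 generates for the Web Reference.
public partial class MyServiceProxy : System.Web.Services.Protocols.SoapHttpClientProtocol
{
    protected override WebRequest GetWebRequest(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
        // Sends "Accept-Encoding: gzip, deflate" and transparently decompresses
        // the response stream, so GetWebResponse does not need to be touched.
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
        return request;
    }
}

Of course the server (IIS) still has to be configured to gzip its responses, which is exactly the painful part described above.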

So I came to the conclusion that maybe I should take a brutal approach to this brutal application: compress the DataSet directly. This certainly defies good programming practice, since it requires major changes to the Web Method, including its return type, not to mention that it’s not scalable and so forth. But thankfully my brutal application centralizes its data access in a single method, which means that’s the only method I have to deal with.

So here we go. The previously mentioned QueryData method originally returns a DataSet, like this:

[WebMethod]
public DataSet QueryData(string UserName, string Password, string Database, string Server, string Query, string TableName)
{
    ...
    return myDataSet;
}

What I’ll do is rename this Web Method to QueryDataWithCompression and have it return a byte array containing the compressed DataSet: inside the method the DataSet is compressed and then returned as a byte array. All we need is the GZipStream class from System.IO.Compression (available in .NET 2.0). You can also use the popular SharpZipLib (available at http://www.icsharpcode.net/OpenSource/SharpZipLib/) if you’re still on .NET 1.1; in that case use its GZipOutputStream to compress (and GZipInputStream to decompress) instead of GZipStream.

[WebMethod]
public byte[] QueryDataWithCompression(string UserName, string Password, string Database, string Server, string Query, string TableName)
{
    ... // generate the DataSet (myDataSet) here, exactly as the original QueryData did

    // Serialize the DataSet as XML (schema included) straight into a gzip stream backed by memory
    MemoryStream memStream = new MemoryStream();
    GZipStream zipStream = new GZipStream(memStream, CompressionMode.Compress);
    myDataSet.WriteXml(zipStream, XmlWriteMode.WriteSchema);

    // Closing the GZipStream flushes the remaining compressed bytes into memStream
    zipStream.Close();

    byte[] data = memStream.ToArray();
    memStream.Close();
    myDataSet.Dispose();
    return data;
}

NOTE: Do remember that when you update the Web Reference from the Visual Studio IDE, the changes you made in Reference.cs (the proxy) will be lost, so be careful.

Then, to avoid changing the existing calls to the original QueryData method, we create a public method called QueryData in the proxy class which is actually a wrapper around the generated QueryDataWithCompression method.

public DataSet QueryData(string UserName, string Password, string Database, string Server, string Query, string TableName)
{
    byte[] data = QueryDataWithCompression(UserName, Password, Database, Server, Query, TableName);

    // Decompress the byte array back into a DataSet. GZipStream with
    // CompressionMode.Decompress is the .NET 2.0 counterpart of the server side;
    // on .NET 1.1 use SharpZipLib's GZipInputStream here instead.
    MemoryStream memStream = new MemoryStream(data);
    GZipStream unzipStream = new GZipStream(memStream, CompressionMode.Decompress);
    DataSet ds = new DataSet();
    ds.ReadXml(unzipStream, XmlReadMode.ReadSchema);
    unzipStream.Close();
    memStream.Close();
    return ds;
}
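
With that wrapper in place, the existing call sites don’t have to change at all. A hypothetical call (the proxy class name below is made up; the real one comes from the Web Reference) still looks exactly like it did before compression was added:

MyWebService service = new MyWebService();
DataSet ds = service.QueryData("user", "secret", "MyDatabase", "dbserver", "SELECT * FROM Orders", "Orders");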

That’s it! Now my DataSet is compressed before it’s transmitted. But what about the performance improvement? I tested a query by calling the service from a web browser: the code managed to shrink a 5.3 MB XML DataSet down to about 300 KB, roughly 1/18 of the uncompressed size. Yes, it works.

Thursday, October 25, 2007

Optimize Web Service Database Connection

“Can we optimize the way our Web Service connects to the Database Server? Every time a call is made to a method that contains a query, a block of code that creates a new database connection gets executed.”

My supervisor asked me this question and (thanks to my lack of knowledge of MS SQL Server) it tickled me, so I fired up Google and found the article “10 Tips for Writing High-Performance Web Applications” by Rob Howard for Microsoft. Tip 3 seemed to answer my curiosity; the full article is at this link:
http://msdn.microsoft.com/msdnmag/issues/05/01/ASPNETPerformance/

Tip 3—Connection Pooling

Setting up the TCP connection between your Web application and SQL Server™ can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.

Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.

To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.

Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.

The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.

So, basically the connection pooling is already done for us by the ADO.NET provider; all we need to do is open and close connections normally and always use the same connection string (and the same thread identity if integrated authentication is used), and an existing connection from the pool will be reused whenever one is available.
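
To make that concrete, here is a minimal sketch of the open-do-the-work-close pattern that plays nicely with the pool. The connection string and query are made up for illustration; the only important part is that the exact same connection string is used on every call:

using System.Data;
using System.Data.SqlClient;

public class OrderData
{
    // One identical connection string means every call draws from the same pool.
    private const string ConnString =
        "Data Source=dbserver;Initial Catalog=Northwind;Integrated Security=SSPI;";

    public DataTable GetOrders()
    {
        using (SqlConnection conn = new SqlConnection(ConnString))
        using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Orders", conn))
        {
            DataTable table = new DataTable();
            // Fill opens the connection and closes it again when done, which simply
            // returns it to the pool rather than tearing down the TCP connection.
            adapter.Fill(table);
            return table;
        }
    }
}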

I found a deeper analysis and further recommendations on connection pooling in the article “Tuning Up ADO.NET Connection Pooling in ASP.NET Applications” by Dmitri Khanine, which can be found at this link:
(http://www.15seconds.com/issue/040830.htm)