tag:blogger.com,1999:blog-59635824059285505182024-03-16T11:52:58.142-07:00Yudong LiYudong Li's Programming LifeYudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.comBlogger43125tag:blogger.com,1999:blog-5963582405928550518.post-47411555603516397162011-06-24T01:05:00.000-07:002011-06-24T01:05:32.791-07:00SQL: Turn multiple rows into one row multiple columnsWe have a table that looks like this:<br />
Name Question Value<br />
Alan 1 5<br />
Alan 2 4<br />
Alan 3 6<br />
Jim 1 4<br />
Jim 2 3<br />
Jim 4 5<br />
<br />
We would like to select out a result that looks like this:<br />
Name Q1 Q2 Q3 Q4<br />
Alan 5 4 6 null<br />
Jim 4 3 null 5<br />
<br />
This problem seems very easy; however, it cost me a lot of time to figure out how to proceed. Finally, after some googling, a nice post introduced a neat way to handle similar issues using the MAX(DECODE()) combination.<br />
<br />
Basically, you first need to utilise the decode() function. decode() acts like if-then-else. For example, decode(name, 'Alan', 'True', 'False') works like: if the name is 'Alan' then return 'True', otherwise return 'False'.<br />
<br />
In our problem here, we use decode() to distinguish the answers to different questions: decode(question, 1, value, NULL) as Q1, decode(question, 2, value, NULL) as Q2, ... In this way we pick out the value for each question.<br />
<br />
After we can get the answer to each question, all we need to do is group the answers by name. That's why we need max() here: it collapses each name's rows in the group by clause. Since there is only one value per question for each name, max(), min() or any other reasonable aggregate function would work equally well here:<br />
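As a quick sanity check, the same pivot can be reproduced outside Oracle. Below is a sketch using Python's sqlite3 module, with a portable CASE expression standing in for the Oracle-only decode(); the table name `answers` is made up to match the example:

```python
import sqlite3

# Hypothetical in-memory copy of the sample table from the post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (name TEXT, question INTEGER, value INTEGER)")
conn.executemany(
    "INSERT INTO answers VALUES (?, ?, ?)",
    [("Alan", 1, 5), ("Alan", 2, 4), ("Alan", 3, 6),
     ("Jim", 1, 4), ("Jim", 2, 3), ("Jim", 4, 5)],
)

# MAX(CASE ...) is the portable equivalent of Oracle's MAX(DECODE(...)):
# the CASE picks out one question's value (NULL otherwise), and MAX()
# collapses each name's rows into a single result row per group.
rows = conn.execute("""
    SELECT name,
           MAX(CASE WHEN question = 1 THEN value END) AS q1,
           MAX(CASE WHEN question = 2 THEN value END) AS q2,
           MAX(CASE WHEN question = 3 THEN value END) AS q3,
           MAX(CASE WHEN question = 4 THEN value END) AS q4
    FROM answers
    GROUP BY name
    ORDER BY name
""").fetchall()

print(rows)  # [('Alan', 5, 4, 6, None), ('Jim', 4, 3, None, 5)]
```

The unanswered questions come back as NULL (None in Python), matching the expected result table.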
<br />
select name AS NAME, max(decode(question, 1, value, NULL)) AS Q1, max(decode(question, 2, value, NULL)) AS Q2, max(decode(question, 3, value, NULL)) AS Q3, max(decode(question, 4, value, NULL)) AS Q4 from table group by name.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com1tag:blogger.com,1999:blog-5963582405928550518.post-83603196817880447202011-05-24T20:01:00.000-07:002011-05-24T20:01:51.958-07:00Cannot restart LDAP server on UbuntuI made some changes to the ldif file yesterday, and today I cannot start my LDAP server any more. I tried different approaches, but all of them unfortunately failed.<br />
<br />
Actually the workaround is quite easy: simply remove the slapd.d/ folder under /etc/ldap and reinstall the LDAP packages, and LDAP will work again. (Note that this throws away your existing server configuration.)Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-80801370086131762452011-05-24T18:38:00.000-07:002011-05-24T18:38:57.076-07:00Intellij IDEA 10.5 stuck in the loading processIntelliJ IDEA is a really smart IDE that attracts more and more developers to its community. However, plenty of bugs are still hanging around that annoy people a lot.<br />
<br />
One of the bugs: when you start IDEA with a pretty large project, there is a high chance that the loading process will get stuck there forever. This has been identified as a bug in <a href="http://youtrack.jetbrains.net/issue/IDEA-67401">IDEA-67401</a>, and hasn't been resolved yet.<br />
<br />
According to that issue page, there is a workaround to load the project: disable the 'Tip of the Day' and 'Productivity Guide' prompts that appear when starting a new project or opening IDEA.<br />
<br />
Besides, there is another possibility: you cannot even get in to adjust your settings before it gets stuck. If that is the case, just remember to be very quick when you open IDEA: as soon as the loading-project prompt pops up, cancel the loading, as well as all subsequent loading processes. Once loading has started, you will not be able to cancel it any more, and will have to kill the IDE and restart it.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com17tag:blogger.com,1999:blog-5963582405928550518.post-86516928210591334132011-05-16T03:30:00.000-07:002011-05-16T03:30:13.482-07:00Oracle: To convert a row into a column (CROSS-JOIN)This afternoon, while doing some Jasper report work, I was puzzled by a SQL query that takes quite a long time to execute. As time goes by, Jasper easily hits a timeout, and a blank screen was all that waited for me after about 120 seconds of data retrieval.<br />
<br />
As it is not appropriate to use the original data table as the example here, I have made up a similar scenario that should explain the cause and the solution sufficiently.<br />
<br />
Given a table A with five columns -- Primary Key (pk), Statistics 1A (s1a), Statistics 1B (s1b), Statistics 2A (s2a), Statistics 2B (s2b) -- we need to select the data out into a form that looks like: Primary Key (pk), Statistics A (s1a or s2a), Statistics B (s1b or s2b). Essentially, it converts each row into two rows with a single pair of statistic columns. Some may prefer to call it a pivot query, by the way.<br />
<br />
Initially, I naively thought the easiest and most intuitive way to do it was two union queries. That is:<br />
select pk, s1a, s1b from table where *** union select pk, s2a, s2b from table where ***.<br />
When the data set is small and it only depends on the single table, that is fine. However, in my case the data set is huge, the unions number not two but eight, and more importantly every union branch consists of another six inner-joined tables. That's the reason Jasper cannot retrieve what it needs in time.<br />
<br />
<a href="http://www.dba-oracle.com/t_converting_rows_columns.htm">In this link</a>, Scott suggests a very smart way to handle this issue. What it uses is called a cross join: basically, we cross join the existing table with the different types/criteria to return the expected result.<br />
<br />
select<br />
pk,<br />
case<br />
when ite = 's1' then s1a<br />
when ite = 's2' then s2a<br />
end as sa,<br />
case<br />
when ite = 's1' then s1b<br />
when ite = 's2' then s2b<br />
end as sb<br />
from<br />
(<br />
select pivoter.ite,<br />
s1a, s1b, s2a, s2b<br />
from<br />
table<br />
cross join (<br />
select 's1' as ite from dual<br />
union all<br />
select 's2' as ite from dual<br />
) pivoter<br />
)<br />
<br />
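For a runnable illustration of the same trick, here is a sketch in Python with sqlite3 (table name and data are made up; SQLite has no dual table, so the label rows come from plain SELECTs):

```python
import sqlite3

# Hypothetical table A from the post: each row carries both statistic pairs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (pk INTEGER, s1a INTEGER, s1b INTEGER, s2a INTEGER, s2b INTEGER)")
conn.execute("INSERT INTO a VALUES (1, 10, 11, 20, 21)")

# The cross join against a two-row label table doubles every source row,
# then CASE picks the right statistic pair for each label:
# one scan of the table instead of a stack of unions.
rows = conn.execute("""
    SELECT pk,
           CASE WHEN ite = 's1' THEN s1a WHEN ite = 's2' THEN s2a END AS sa,
           CASE WHEN ite = 's1' THEN s1b WHEN ite = 's2' THEN s2b END AS sb
    FROM a
    CROSS JOIN (SELECT 's1' AS ite UNION ALL SELECT 's2' AS ite) pivoter
    ORDER BY pk, ite
""").fetchall()

print(rows)  # [(1, 10, 11), (1, 20, 21)]
```

The single source row (pk 1) comes back as two result rows, one per statistic pair, which is exactly the transformation the report needs.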
With this cross join, the problem is solved in a single pass instead of a pile of unions.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com15tag:blogger.com,1999:blog-5963582405928550518.post-75150507742838574892011-05-15T17:58:00.000-07:002011-05-15T17:58:46.757-07:00Miscellaneous Points for Oracle Join QueryAs a software developer, you must have had to deal with all sorts of join queries. And I believe everyone has struggled, or still struggles, with all the different terms: inner, outer, left, right, etc. As I spent roughly two hours this morning summarizing the points that I easily forget and get wrong, it's a good chance to write them down in case I later need to test my memory.<br />
<br />
1) What is the difference between the join...on... syntax and the little (+) sign?<br />
<br />
In the old days Oracle only supported (+), a notation Oracle itself invented. Later, once ANSI formalized the standard join syntax, Oracle adopted both. As a result, it won't be a surprise to see the two styles mixed in one project, coming from different developers.<br />
<br />
2) Any difference between inner join and join?<br />
<br />
No.<br />
<br />
3) Why do we need outer join?<br />
<br />
Generally, an inner join returns only the rows that have a match in both table A and table B. In reality, however, we sometimes also need rows to be returned even when there is no match. That's where the outer join shows its ability. There are plenty of tutorials discussing outer joins; what I want to mention here is that the null padding is applied only to the join's destination table, not the driving table.<br />
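A tiny demonstration of that null padding, sketched with Python's sqlite3 and hypothetical emp/dept tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (name TEXT, dept_id INTEGER);
    CREATE TABLE dept (id INTEGER, dept_name TEXT);
    INSERT INTO emp VALUES ('Alan', 1), ('Jim', NULL);
    INSERT INTO dept VALUES (1, 'IT');
""")

# The inner join drops Jim (no matching dept row); the outer join keeps
# him, padding the destination table's columns with NULL.
inner = conn.execute(
    "SELECT name, dept_name FROM emp JOIN dept ON emp.dept_id = dept.id"
).fetchall()
outer = conn.execute(
    "SELECT name, dept_name FROM emp LEFT JOIN dept ON emp.dept_id = dept.id "
    "ORDER BY name"
).fetchall()

print(inner)  # [('Alan', 'IT')]
print(outer)  # [('Alan', 'IT'), ('Jim', None)]
```

Note that the NULL appears only on the joined-to (dept) side; the driving emp rows always come back whole.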
<br />
4) Any more differences?<br />
<br />
Yes. Actually there are two more I want to emphasize:<br />
<br />
1. The ANSI style supports the full outer join (you can google what it means), but the traditional Oracle syntax doesn't support it directly. (By saying directly, I mean workarounds always exist, and various ones at that.)<br />
2. One of the most important features differentiating the two is that the ANSI style separates the join conditions from the filter criteria, which is much tidier and cleaner (isn't it? at least I think so). You will also avoid a lot of the caveats that come with the traditional style. See <a href="http://www.orafaq.com/node/855">Common errors seen when using OUTER-JOIN</a>.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-67615391340844260782011-04-25T21:29:00.000-07:002011-04-25T21:29:21.507-07:00Australian Permanent Residency and CitizenshipLast Friday, after "illegally" struggling in Australia for more than one year, I was granted a permanent residency visa. Along with the many congratulations coming from different people, a common question is 'what is Australian Permanent Residency?' This question even comes from many Aussies, for whom I thought the answer would be crystal clear. Now I realize that the only people who know it well are those who have had to deal with the evil DIAC (Department of Immigration and Citizenship).<br />
<br />
Anyway, in this post I will briefly introduce what Australian Permanent Residency is and how it works for migrants like me.<br />
<br />
As everyone knows, only a few countries in the world publicly welcome immigration: Australia, Canada and New Zealand, to name a few. Most of these countries share the same features: highly developed, but with low population density. In order to keep these countries developing, especially economically, they need immigrants to fill the holes left by skills shortages. The occupations vary from country to country depending on what kind of main industry each country runs. Back to Australia: the immigration policy has changed dramatically in the last ten years. Starting from 2000, given the strong demand from the Australian government, it was extremely easy for an overseas student to get permanent residency after studying a tertiary degree in Australia. Under that policy, many, many students came to Australia and settled, and meanwhile they acted as advertisements or agents, attracting ever more people. Consequently, many changes to the migration policy were put in place. From 2007, a two-year minimum study period and an IELTS 4x7 requirement took effect. Though pretty hard, compared to the huge base number of students, still too many were eligible to stay in Australia after graduation. From 2008, more and more changes to the migration law made most graduates lose the opportunity to apply for PR, and they had to return to their own countries after spending tens of thousands of dollars here.<br />
<br />
Currently, the migration law is still under discussion, and another dramatic overhaul is in progress, due for release on Jul 1, 2011. Australia is no longer a country that is easy to migrate to.<br />
<br />
So, after you get PR, what benefits can you get out of it?<br />
<br />
<ul><li>You are entitled to stay in Australia indefinitely</li>
<li>You can work, study, or nearly do whatever you want in Australia</li>
<li>You are free to leave and re-enter Australia as many times as you want</li>
<li>You have the right to apply for Medicare</li>
<li>You can also apply for Centrelink benefits after two years</li>
<li>You can freely go to New Zealand</li>
</ul><div>Compared to citizenship, the few disadvantages are</div><div><ul><li>You have to renew your PR visa every five years, and within each period you have to stay at least 2 years to show you genuinely would like to be a resident of Australia</li>
<li>You have no political rights in Australia and cannot vote (which I guess most people would not care about)</li>
<li>You cannot apply for an Australian passport, and you have to use your original passport to apply for any third-country visa (except New Zealand) </li>
</ul><div>I would say the biggest benefit of applying for citizenship is that you get an Australian passport, which lets you enter most countries in the world without worrying about visas. However, if your original country doesn't allow dual citizenship (e.g. China), then the moment you become Australian you lose your original citizenship. This may be a big problem if you wish to go back to your original country after 20 years or would like to work there, and it can also be a problem for your property or deposits back in your own country. That's why many people working in Australia hold a PR visa without applying for citizenship.</div></div>Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com5tag:blogger.com,1999:blog-5963582405928550518.post-47034887534005308402011-04-20T21:48:00.000-07:002011-04-20T21:49:10.091-07:00Get rid of JTA warning message for WebLogicThis is the second time I have run into these JTA exceptions. Basically, you will get this message every minute when running your EJB on any version of WebLogic server:<br />
<br />
<i><Warning> <JTA> <BEA-110486> <Transaction BEA1-05576B5644FBD4F5B49F cannot complete commit processing because resource [weblogic.jdbc.wrapper.JTSXAResourceImpl] is unavailable. The transaction will be abandoned after 67,910 seconds unless all resources acknowledge the commit decision.> </i><br />
<br />
Last time I did google it, got rid of it, and eventually forgot about it completely. Now it's a good chance for me to pick it up and record it, so hopefully next time I can handle it in no time instead of spending half an hour wandering around. I tried different things, like restarting the server and uninstalling/reinstalling the application. No luck at all.<br />
<br />
This warning message means something is wrong in your code, specifically around transactions: some transaction is probably left open after an exception or something similar. As a result, the server complains that a transaction is still running which nobody seems to be using at all.<br />
<br />
In order to get rid of this ugly message, go to your domain directory: <your-domain>/servers/AdminServer/data/store/default. There should be a file like "_WLS_ADMINSERVER000000.DAT" there; the number part is random. Delete this file and restart the server: problem solved! (Note that this discards the server's pending transaction records.)Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com2tag:blogger.com,1999:blog-5963582405928550518.post-26939486090299146662011-03-24T04:01:00.000-07:002011-03-24T04:01:20.373-07:00Firefox 4 Two DaysI have been aware of the Firefox release since Day 1, and kept trying numerous beta or trial versions, and even Minefield (does it sound as scary as the name, or as the real experience?), before the official release. However, every time I just switched back to my lovely, sleek Chrome. Don't blame my patience; it just never worked out.<br />
<br />
But this time, I have to say, the FF4 release is a huge success in every aspect I can think of so far. The speed improvement is the topic people have kept discussing these two days: it dramatically speeds up everything, even with ten tabs open at the same time. Memory is another point, I reckon: it used to consume nearly 700-800 MB on my poor machine, and now stays stably around 300-400 MB. This is actually a killer point for my work machine, since this old PC always needs to run IntelliJ IDEA, WebLogic, SQLDeveloper and all sorts of dev tools at once; you know what a hard time I was having.<br />
<br />
Another thing I like better in FF than in Chrome is that it remembers the input you have typed before. Next time, you don't need to repeat those lengthy strings; just double-click the text field. For me it's really helpful, since I do those boring tasks every day to make sure the programs never get screwed up.<br />
<br />
Because it's only Day 2, I don't mind that the add-ons are not ready for use. But please, guys, at least make some of the most common tools available ASAP. I spent nearly half an hour this morning trying to find a good add-on for Twitter. You will know the result if you do the same search now, not two days later; I reckon things may change in less than 48 hours, because the weekend is coming :)<br />
<br />
Anyway, I finally have both Chrome and FF running at the same time (thanks also to the latest memory upgrade), each sitting on its own monitor. Before I make any final decision on which browser to stick with, I will hold off; or maybe I will just get used to the dual-browser environment. Should I?<br />
<br />
P.S. It would be nice if we could have a cross-browser bookmark application. Any recommendations?Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-85014817573936727412011-03-06T20:22:00.000-08:002011-03-19T02:14:27.912-07:00Java MultithreadingI know this is one of the most basic topics, one that even a second-year university student should be familiar with. However, this afternoon I spent nearly the whole afternoon debugging some issues with Java multithreading, and only then did I realize how far I am from fully understanding this seemingly easy problem.<br/><br/>Having read through all kinds of tutorials and discussions available, I believe the best way forward is to summarize what I have found and record it in my own writing.<br/><br/>So here is the beginning: a Java program runs in a single thread unless a new thread is requested to be spawned. In some circumstances, however, a separate thread is needed to perform other tasks, either in parallel with the main thread or as a background job. There are two ways to achieve multithreading in Java, as everyone should be familiar with: <strong>implements Runnable</strong> and <strong>extends Thread</strong>.<br/><br/>Basically, most of the time <strong>implements Runnable</strong> should be preferred over <strong>extends Thread</strong>, unless you specifically need to override the life-cycle methods of the thread (which is rare). The two approaches do nearly the same thing, except that when you subclass Thread you are trapped: no other class can be inherited, as Java only allows single inheritance. If you implement the interface instead, you keep the ability to extend any class you wish.<br/><br/>Next, let's go into the main method of the body -- <strong>run()</strong>. Actually, you have the choice to call <strong>run()</strong> or <strong>start()</strong> to execute the body. 
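Although the post is about Java, Python's threading.Thread exposes the same run()/start() pair, which makes the distinction easy to demonstrate in a few runnable lines (the Worker class and its name_tag attribute are made up for this example):

```python
import threading

results = {}

class Worker(threading.Thread):
    def run(self):
        # Record which thread actually executes the body.
        results[self.name_tag] = threading.current_thread()

a = Worker(); a.name_tag = "ran"
b = Worker(); b.name_tag = "started"

a.run()              # executes immediately, in the calling (main) thread
b.start(); b.join()  # executes in a freshly spawned thread

print(results["ran"] is threading.main_thread())      # True
print(results["started"] is threading.main_thread())  # False
```

Calling run() directly gives no concurrency at all, exactly the trap described for Java: the body runs sequentially in the caller's thread.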
However, there are some slight differences between these two methods. <strong>run()</strong> executes the method <strong>immediately</strong> in the <strong>current thread</strong>, while start() spawns a new thread in which execution is scheduled non-deterministically. As a rule of thumb, we should avoid calling run() and use start() all the time: the reason we have a multithreaded class is to run it in a separate thread, and if we just call run(), everything runs sequentially as if no thread existed. run() should always be invoked by the JVM, not by the application.<br/><br/>Another point worth some attention is Executor, which came with JDK 5. Instead of explicitly calling new MyThread().start() to invoke the new thread, an Executor decouples task submission from the actual execution -> executor.execute(new MyThread()).Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-933788482202248812010-11-15T15:03:00.000-08:002011-03-19T02:14:27.915-07:00WstxIOException in WebLogiccom.ctc.wstx.exc.WstxIOException:<br/>Tried all: '1' addresses, but could not connect over HTTP to server: 'java.sun.com', port: '80'<br/>at com.ctc.wstx.sr.StreamScanner.throwFromIOE(StreamScanner.java:683)<br/>at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1086)<br/>at weblogic.servlet.internal.TldCacheHelper$TldIOHelper.parseXML(TldCacheHelper.java:134)<br/>at weblogic.descriptor.DescriptorCache.parseXML(DescriptorCache.java:380)<br/>at weblogic.servlet.internal.TldCacheHelper.parseTagLibraries(TldCacheHelper.java:65)<br/><br/>Truncated. 
see log file for complete stacktrace<br/><br/>java.net.ConnectException:<br/>Tried all: '1' addresses, but could not connect over HTTP to server: 'java.sun.com', port: '80'<br/>at weblogic.net.http.HttpClient.openServer(HttpClient.java:312)<br/>at weblogic.net.http.HttpClient.openServer(HttpClient.java:388)<br/>at weblogic.net.http.HttpClient.New(HttpClient.java:238)<br/>at weblogic.net.http.HttpURLConnection.connect(HttpURLConnection.java:172)<br/>at weblogic.net.http.HttpURLConnection.getInputStream(HttpURLConnection.java:356)<br/><br/>Truncated. see log file for complete stacktrace<br/><br/>Sometimes, you may encounter problems like this which slow the deployment process.<br/><br/>One option to work around this issue is to add<br/><br/>"-Djavax.xml.stream.XMLInputFactory=weblogic.xml.stax.XMLStreamInputFactory"<br/><br/>to your WebLogic start script. It prevents WebLogic from fetching any remote XML definition files.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com1tag:blogger.com,1999:blog-5963582405928550518.post-62475194785288791862010-10-04T12:25:00.000-07:002011-03-19T02:14:27.918-07:00Exception Handling Principles1. System.out.println is expensive. These calls are synchronized for the duration of the disk I/O, which significantly slows throughput.<br/><br/>2. By default, stack traces are logged to the console. But browsing the console for an exception trace isn't feasible in a production system.<br/><br/>3. In addition, they aren't guaranteed to show up in the production system, because system administrators can map System.out and System.err to '' (>nul on NT, /dev/null on UNIX). Moreover, if you're running the J2EE app server as an NT service, you won't even have a console.<br/><br/>4. Even if you redirect the console log to an output file, chances are that the file will be overwritten when the production J2EE app servers are restarted.<br/><br/>5. 
Using System.out.println during testing and then removing those calls before production isn't an elegant solution either, because doing so means your production code will not function the same as your test code.<br/><div><br/><div>6. If you can't handle an exception, don't catch it.</div><br/><div>7. Catch an exception as close as possible to its source.</div><br/><div>8. If you catch an exception, don't swallow it.</div><br/><div>9. Log an exception where you catch it, unless you plan to re-throw it.</div><br/><div>10. Preserve the stack trace when you re-throw the exception by wrapping the original exception in the new one.</div><br/><div>11. Use as many typed exceptions as you need, particularly for application exceptions. Do not just use java.lang.Exception every time you need to declare a throws clause. By fine-graining the throws clause, it becomes self-documenting and evident to the caller that different exceptions have to be handled.</div><br/><div>12. If you are programming application logic, use unchecked exceptions to indicate an error from which the user cannot recover. If you are creating third-party libraries to be used by other developers, use checked exceptions for unrecoverable errors too.</div><br/><div>13. Never throw unchecked exceptions in your methods just because it clutters the method signature. There are some scenarios where this is good (e.g. EJB interfaces/implementations, where unchecked exceptions alter the bean behavior in terms of transaction commit and rollback), but otherwise it is not a good practice.</div><br/><div>14. Throw application exceptions as checked exceptions and unrecoverable system exceptions as unchecked exceptions.</div><br/><div>15. 
Structure your methods according to how fine-grained your exception handling must be.</div><br/><div></div><br/></div>Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-35454391537267797052010-08-04T19:42:00.000-07:002011-03-19T02:14:27.921-07:00Develop a simple Maven pluginWhile this is a really old topic, the process of developing a Maven plugin still takes some time to sort out, especially for people who are not familiar with Maven, like me. So here is a very short introduction on how to develop a Maven plugin and how to integrate it with your existing application.<br/><br/>Prerequisite: of course you need Maven; everything else totally depends on your preference. I use Eclipse with the m2eclipse plugin, which saves me some time creating the archetype of the Maven plugin project. However, this is a rather simple process and anyone can do it on the command line with only a little typing.<br/><br/>1. Create a new Maven plugin project in Eclipse. Here, we name the project groupId: Featheast, artifactId: maven-test-plugin. Please pay attention to the naming convention of the artifactId, which we will use a little bit later.<br/><br/>2. Under the src directory, create a new class named MyMojo which extends AbstractMojo. AbstractMojo is the class you must inherit from for the plugin to work, and the only method you have to implement is execute(), where the real business happens.<br/><br/>3. In order for other projects to recognize your plugin's function, you have to specify what the Mojo does. In Maven, this is accomplished by adding a @goal annotation in the class comment. This goal name will be used later by other projects to reference this function.<br/><br/>4. You can create any number of variables in the class, which act like parameters for later processing. Consider the whole class as a function; these variables are then the arguments you pass in. 
For each variable, another annotation, @parameter, is used to specify how to link the variable to external configuration.<br/><br/>5. Put the real business logic in the execute() function. You can use the Maven log to print output or debug information for your convenience; the inherited method getLog() is always there for you to do so, and its usage is quite similar to log4j.<br/><br/>/**<br/><br/>*<br/><br/>* @author yudong<br/><br/>*<br/><br/>* @goal realmojo<br/><br/>*/<br/><br/>public class MyRealMojo extends AbstractMojo{<br/><br/>/**<br/><br/> * @parameter expression="${mymojo.username}"<br/><br/> */<br/><br/>private String username;<br/><br/>/**<br/><br/> * @parameter expression="${mymojo.password}"<br/><br/> */<br/><br/>private String password;<br/><br/>public void execute() throws MojoExecutionException, MojoFailureException {<br/><br/>if(password.length() < 10){<br/><br/>getLog().info("Hey " + username + ", your password is too short!");<br/><br/>}else{<br/><br/>getLog().info("Congratulations " + username + ", your password is all right!");<br/><br/>}<br/><br/>Set set = getPluginContext().keySet();<br/><br/>getLog().info("The context includes " + set.size() + " entries");<br/><br/>Iterator iterator = set.iterator();<br/><br/>while(iterator.hasNext()){<br/><br/>Object key = iterator.next();<br/><br/>getLog().info(key.toString() + " : " + getPluginContext().get(key));<br/><br/>}<br/><br/>}<br/><br/>}<br/><br/>6. In the pom.xml, add any dependencies you need, then add the maven-plugin-plugin to build the plugin. 
Specify any goals that you want included in the build output.<br/><br/><build><br/><br/><plugins><br/><br/><plugin><br/><br/><groupId>org.apache.maven.plugins</groupId><br/><br/><artifactId>maven-plugin-plugin</artifactId><br/><br/><version>2.5.1</version><br/><br/><configuration><br/><br/><goalPrefix>Plugin.Test</goalPrefix><br/><br/></configuration><br/><br/><goals><br/><br/><goal><br/><br/>realmojo<br/><br/></goal><br/><br/></goals><br/><br/></plugin><br/><br/></plugins><br/><br/></build><br/><br/>7. Now your Maven plugin is created; build it with the standard command: mvn install.<br/><br/>8. Create another project to use this plugin. Add the plugin configuration in the pom.xml, and specify the goal.<br/><br/><build><br/><br/><plugins><br/><br/><plugin><br/><br/><groupId>Featheast</groupId><br/><br/><artifactId>maven-test-plugin</artifactId><br/><br/><version>0.0.1-SNAPSHOT</version><br/><br/><executions><br/><br/><execution><br/><br/><phase>install</phase><br/><br/><goals><br/><br/><goal>realmojo</goal><br/><br/></goals><br/><br/><configuration><br/><br/><username>This is the username</username><br/><br/></configuration><br/><br/></execution><br/><br/></executions><br/><br/></plugin><br/><br/></plugins><br/><br/></build><br/><br/>9. You can specify the parameters in the configuration tag, or you can add -Dusername=XXX on the command line to pass them in.<br/><br/>10. 
Finally, what you did will be printed on the console once you build the new project.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-57278887858865760902010-08-02T00:33:00.000-07:002011-03-19T02:14:27.926-07:00Pre-EJB 3.0 Enterprise BeanAn enterprise bean is a server-side software component that can be deployed in a distributed multi-tiered environment, and it will remain that way going forward. Anyone who has worked with Enterprise JavaBeans technology before knows that there are three types of beans: session beans, entity beans, and message-driven beans. Historically, an EJB component implementation has never been contained in a single source file; a number of files work together to make up an implementation of an enterprise bean. Let us briefly go through these EJB implementation artifacts:<br/><br/>1) Enterprise bean class<br/><br/>The primary part of the bean used to be the implementation itself, which contained the guts of your logic, called the enterprise bean class. This was simply a Java class that conformed to a well-defined interface and obeyed certain rules. For instance, the EJB specification defined a few standard interfaces that your bean class had to implement. Implementing these interfaces forced your bean class to expose certain methods that all beans must provide, as defined by the EJB component model. The EJB container called these required methods to manage your bean and alert it to significant events. The most basic interface that all session, entity and message-driven bean classes implemented is the javax.ejb.EnterpriseBean interface. This interface served as a marker interface, meaning that implementing it indicated that your class was indeed an enterprise bean class. Session beans, entity beans, and message-driven beans each had more specific interfaces that extended the component interface javax.ejb.EnterpriseBean, viz. 
javax.ejb.SessionBean, javax.ejb.EntityBean, and javax.ejb.MessageDrivenBean.<br/><br/>2) EJB Object<br/><br/>When a client wants to use an instance of an enterprise bean class, the client never invokes the method directly on an actual bean instance. Rather, the invocation is intercepted by the EJB container and then delegated to the bean instance. By intercepting requests, the EJB container can provide middleware services implicitly. Thus, the EJB container acted as a layer of indirection between the client code and the bean. This layer of indirection manifested itself as a single network-aware object called the EJB object. The container would generate the implementation of javax.ejb.EJBObject or javax.ejb.EJBLocalObject at deployment time, depending on whether the bean was remote or local, that is, whether it supported remote or local clients.<br/><br/>3) Remote interface<br/><br/>A remote interface, written by the bean provider, consisted of all the methods that were made available to the remote clients of the bean. These methods would usually be business methods that the bean provider wanted the remote clients of the bean to use. Remote interfaces had to comply with special rules that the EJB specification defined. For example, all remote interfaces had to be derived from the javax.ejb.EJBObject interface. The EJB object interface consisted of a number of methods, and the container would implement them for you.<br/><br/>4) Local interface<br/><br/>The local interface, written by the bean provider, consisted of all the methods that were made available to the local clients of the bean. Akin to the remote interface, the local interface provided business methods that the local bean clients could call. The local interface provided an efficient mechanism to enable use of EJB objects within the Java Virtual Machine, without incurring the overhead of RMI-IIOP. 
An enterprise bean that was expected to be used by remote as well as local clients had to support both local and remote interfaces.<br/><br/>5) Home interface<br/><br/>Home interfaces defined methods for creating, destroying, and finding local or remote EJB objects. They acted as life-cycle interfaces for the EJB objects. Each bean was supposed to have a corresponding home interface. All home interfaces had to extend the standard interface javax.ejb.EJBHome or javax.ejb.EJBLocalHome, depending on whether the enterprise bean was remote or local. The container generated home objects implementing the methods of this interface at the time of deployment. Clients acquired references to the EJB objects via these home objects. Even though the container implemented home interfaces as home objects, an EJB developer was still required to follow certain rules pertaining to the life-cycle methods of a home interface. For instance, for each createXXX() method in the home interface, the enterprise bean class was required to have a corresponding ejbCreateXXX() method.<br/><br/>6) Deployment descriptor<br/><br/>To inform the container about your middleware needs, you as a bean provider were required to declare your component's middleware needs - such as life-cycle management, transaction control, security services, and so on - in an XML-based deployment descriptor file. The container inspected the deployment descriptor and fulfilled the requirements laid out by you. The deployment descriptor thus played the key role in enabling implicit middleware services in the EJB framework.<br/><br/>7) Vendor-specific files<br/><br/>Since all EJB server vendors are different, they each have some proprietary value-added features. The EJB specification did not touch these features, such as how to configure load balancing, clustering, monitoring, and so on. 
Therefore, each EJB server vendor required you to include additional files specific to that vendor, such as a vendor-specific XML or text-based deployment descriptor that the container would inspect to provide vendor-specific middleware services.<br/><br/>8) The Ejb-jar file<br/><br/>The Ejb-jar file, the packaging artifact, consisted of all the other implementation artifacts of your bean. Once you generated your bean classes, your home interfaces, your remote interfaces, and your deployment descriptor, you'd package them into an Ejb-jar file. It is this Ejb-jar file that you, as a bean provider, would pass around for deployment purposes to application assemblers.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-15518678696234872322010-08-01T23:58:00.000-07:002011-03-19T02:14:27.930-07:00Maven tips1) Try to ensure there are no duplicate or conflicting versions of dependencies in a project, as they will lead to errors or conflicts later on.<br/><br/>2) If you only want a dependency to exist during the compile phase and then be removed, its scope should be set to PROVIDED. The PROVIDED scope is not transitive, and such dependencies are supposed to be provided by the JDK or the container.<br/><br/>3) Use mvn dependency:tree to display the dependency structure of the whole project; piping the output to a file makes it easier to inspect.<br/><br/>4) If you are sitting behind a firewall, set proxy configurations in settings.xml under your .m2 directory.<br/><br/>More to be continued.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-4654302108803840902010-07-12T12:34:00.000-07:002011-03-19T02:14:27.932-07:00Pipe or Redirect within Java CommandIt's common knowledge to use Runtime.getRuntime().exec(command) to execute any Unix or Windows command in a Java application. 
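The shell-operator pitfall described next, and its fix, can be shown with a small, self-contained sketch. This assumes a Unix-like system with /bin/sh available; the echo/tr pipeline and the class name are my own illustration, not part of any library.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class ShellExec {
    // Runs a command through /bin/sh so that shell operators like
    // '|' and '>' are interpreted by the shell instead of being
    // passed to the program as literal arguments.
    static String run(String command) throws IOException, InterruptedException {
        String[] commands = { "/bin/sh", "-c", command };
        Process p = Runtime.getRuntime().exec(commands);
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line);
            }
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // The pipe is handled by sh, so this prints HELLO.
        System.out.println(run("echo hello | tr a-z A-Z"));
    }
}
```

The same array form works for redirects, e.g. run("ls > files.txt"), because the whole command string is handed to the shell as a single -c argument.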
However, when you try to include the pipe '|' or redirect '>' in the command to alter the output, most of the time Java will not interpret your command as expected, and the command will eventually fail with an error. For example, when I tried to run an ffmpeg command to encode a video and wanted to capture its output into a log file, an error of "Unable to find a suitable output format for '>'" appeared.<br/><br/>In order to make Java "understand" our purpose, you cannot directly insert the usual command into the exec() parameter. There is a workaround which will solve the issue.<br/><br/>Construct an array:<br/><br/>String[] commands = {<br/>"/bin/sh",<br/>"-c",<br/>"YOUR REAL COMMAND HERE"<br/>};<br/><br/>and pass the commands array as the argument to Runtime.getRuntime().exec(commands). In this way, Java will spawn an sh (you could use bash) shell to execute your command, which will take the pipe and redirect into consideration.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com2tag:blogger.com,1999:blog-5963582405928550518.post-8176082547686655512010-07-08T01:09:00.000-07:002011-03-19T02:14:27.934-07:00Seven Java EE Performance Problems1) Slow-running applications<br/><br/>2) Applications that degrade over time<br/><br/>3) Slow memory leaks that gradually degrade performance<br/><br/>4) Huge memory leaks that crash the application server<br/><br/>5) Periodic CPU spikes and application freezes<br/><br/>6) Applications that behave significantly differently under a heavy load than under normal usage patterns<br/><br/>7) Problems or anomalies that occur in production but cannot be reproduced in a test environmentYudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-30341993172777665062010-07-07T23:35:00.000-07:002011-03-19T02:14:27.937-07:00Long IP AddressMost of the time, people will think of an IP address as a String. 
Especially in Java, developers usually deal with IP addresses through the URL or String class. However, representing an IP address as a String has several disadvantages. First, a String usually takes more memory compared to the "same" value as an int or long, and comparing one String IP address with another is awkward. More importantly, it is not easy to determine whether an IP address falls in the range between two other IP addresses.<br/><br/>Since IPv4 addresses are composed of four integers ranging from 0 to 255, it is easy to convert a String IP address to a numerical form that also uniquely represents the address. That's where the Long IP address comes from.<br/><br/>A simple method to convert the String IP address A.B.C.D to a Long IP address would be:<br/><br/>256*256*256*A + 256*256*B + 256*C + D<br/><br/>Using Long IP addresses is helpful in certain scenarios; one is when dealing with IP-to-location mapping in Google App Engine. A very popular dataset called GeoIP, created by MaxMind, is heavily used in a lot of different projects. However, when parsing IP addresses, what its Java library does is first transform the IP String into an InetAddress, then use getAddress() to get its byte[], and finally compute the Long value. There is no problem when using this library on other platforms. But on Google App Engine things get stuck, because InetAddress is on GAE's blacklist, which means you will not be able to use this class. 
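A rough sketch of that A.B.C.D conversion in plain Java (the class and method names are mine, not from the GeoIP library):

```java
class IpLong {
    // Converts a dotted-quad IPv4 string (A.B.C.D) to its long form:
    // 256*256*256*A + 256*256*B + 256*C + D
    static long ipToLong(String ip) {
        String[] parts = ip.split("\\.");
        if (parts.length != 4) {
            throw new IllegalArgumentException("Not an IPv4 address: " + ip);
        }
        long result = 0;
        for (String part : parts) {
            int octet = Integer.parseInt(part);
            if (octet < 0 || octet > 255) {
                throw new IllegalArgumentException("Octet out of range: " + part);
            }
            // Each octet shifts the accumulated value by one base-256 digit.
            result = result * 256 + octet;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(ipToLong("1.2.3.4")); // prints 16909060
    }
}
```

With the long form, checking whether an address lies between two others becomes a simple numeric comparison.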
The workaround here is to use the conversion method above: you can directly compute the Long value, which is what the library calculates all along anyway.<br/><br/>There might be some other places where Long IP addresses are useful, especially when dealing with range queries over IP addresses.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-6377211332362451262010-06-15T13:16:00.000-07:002011-03-19T02:14:27.940-07:00tmpreaper and ctime, atime, and mtime in UbuntuThere is a package in Ubuntu that can be used to clean directories of files older than a certain period of time. Before we get into that, let's first clarify three terms related to file times in Ubuntu: ctime, atime and mtime.<br/><br/>ctime is often described as the creation time of the file, but it is actually the inode change time: it is updated whenever the file's content or its metadata (permissions, ownership) changes. When a file is newly created, say at Wed Jun 16, 9:45:15, 2010, its ctime starts out at exactly that moment.<br/><br/>atime is the access time of the file. Displaying the file contents or executing the file as a script will update the atime.<br/><br/>mtime is the modification time, which is updated when the actual content of the file is modified.<br/><br/>Back to the tmpreaper command: since it is not installed by default on Ubuntu, you have to run sudo apt-get install tmpreaper to get the latest version.<br/><br/>A simple command <strong>tmpreaper TIME-FORMAT DIRS </strong><span>invokes the cleanup for you. </span><br/><br/><span>TIME-FORMAT is a parameter that specifies how long a file must have gone without being accessed. By default, the time here refers to atime, so even if you modify the content at a later stage but do not access the file, the file might still be deleted. Of course, you can make the command use mtime instead by appending --mtime to it. </span><br/><br/><span>DIRS is the directory on which you would like to run the cleanup, such as /tmp. 
Never try to do such a thing on the root directory or you may encounter a disaster.</span><br/><br/><span>If you had to run the command manually every time, there would be little point in using it. Its real power comes from combining it with another tool: cron.</span><br/><br/><span>Crontab is used to create cron jobs that run specific scripts periodically. All you need to do is write a script that includes the command we discussed previously, then add it to the crontab configuration; the script will then run in the background on the schedule you require.</span><br/><br/><span>To edit the configuration file, simply run sudo crontab -e and add an entry to the file. Each entry has the format m h dom mon dow command; the first five fields are separated by spaces, and you can use an asterisk as a wildcard meaning "any".<br/></span>For example, * * * * * /XXX.bash will run every minute. More usage can be found in the documentation.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-68804546817234244152010-06-13T20:10:00.000-07:002011-03-19T02:14:27.943-07:00Handling Azure Large File UploadIn Azure storage, files smaller than 64MB can be directly stored as a single blob. However, when you want to store a file larger than 64MB, things become a little more complicated. The way to accomplish this task is to use the block list service.<br/><br/>A block, unlike a blob, is a small unit of a file; blocks can be aggregated as a list to form a large file, with each chunk limited to 4MB. For example, if you have a 100MB file that you want to store in Azure, you have to manually split the file into at least 25 pieces, and then use the <a href="http://msdn.microsoft.com/en-us/library/dd135726.aspx">put block</a> & <a href="http://msdn.microsoft.com/en-us/library/dd179467.aspx">put block list</a> operations to upload all 25 items. 
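One detail in the steps that follow deserves a sketch: the block IDs must be Base64-encoded and all the same length. A minimal way to generate such IDs in Java is to zero-pad the block index before encoding. The class name and the padding scheme are my own choices, and java.util.Base64 requires Java 8+.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

class BlockIds {
    // Generates Base64 block IDs that are all the same length by
    // zero-padding the block index to a fixed width before encoding.
    // Equal-length IDs are required by Put Block / Put Block List.
    static List<String> generate(int blockCount) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < blockCount; i++) {
            String raw = String.format("%06d", i); // fixed-width index, 6 bytes
            ids.add(Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8)));
        }
        return ids;
    }

    public static void main(String[] args) {
        // All 25 IDs come out the same length, as the service requires.
        for (String id : generate(25)) {
            System.out.println(id);
        }
    }
}
```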
More details are listed below:<br/><br/>1) Split the large file: this can be done in various ways, with existing tools or simple code of your own. Be sure to record the file names and keep them in the order in which you split them.<br/><br/>2) Put Block: Each of the pieces created in the last step is called a block, and this second step uploads each block one by one into the storage via the Put Block operation. The basic process is no different from other uploads; however, one thing to pay attention to is that blockid is a required parameter and all blockids of the blocks must be the same size. In our example, you can use a Base64 blockid of any length less than 64, but you have to make sure all 25 items have the same length. If not, a 400 exception with the error message The specified blob or block content is invalid will be returned.<br/><br/>3) Put Block List: The last but not least step is to notify the server that all pieces are uploaded, so that it can now combine them altogether.<br/><br/>After these three steps, you will be able to upload files of any size into Azure storage.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-51693030354664763962010-05-25T13:03:00.000-07:002011-03-19T02:14:27.945-07:00How to tell the version of your Mac OSStep 1: Click the apple icon in the upper left corner of the screen<br/><br/>Step 2: Choose About this Mac<br/><br/>Step 3: The label right below the Apple logo indicates the version of this Mac OS, e.g. 
Version 10.6.2<br/><br/>Step 4: To the right of the Processor label is your CPU model, from which we can tell whether the CPU is 32- or 64-bit<br/><br/>Here is a table to clarify the relationships:<br/><table id="kbtable" border="0"><br/><tbody><br/><tr id="header"><br/><td>Processor Name</td><br/><td>32- or 64-bit</td><br/></tr><br/><tr><br/><td>Intel Core Solo</td><br/><td>32 bit</td><br/></tr><br/><tr id="even"><br/><td>Intel Core Duo</td><br/><td>32 bit</td><br/></tr><br/><tr><br/><td>Intel Core 2 Duo</td><br/><td>64 bit</td><br/></tr><br/><tr id="even"><br/><td>Intel Quad-Core Xeon</td><br/><td>64 bit</td><br/></tr><br/><tr id="even"><br/><td>Dual-Core Intel Xeon</td><br/><td>64 bit</td><br/></tr><br/><tr id="even"><br/><td>Quad-Core Intel Xeon</td><br/><td>64 bit</td><br/></tr><br/></tbody></table><br/>Ref: http://support.apple.com/kb/ht3696Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-42952373169069642292010-05-15T16:05:00.000-07:002011-03-19T02:14:27.947-07:00Redirect vs ForwardIn the context of web programming, there is a lot of ambiguity between Redirect and Forward. 
Here is a short summary of the differences between the two.<br/><br/>1) A Forward can only target an internal page, whereas a Redirect can target both internal and external pages.<br/><br/>2) Forward is much faster than Redirect.<br/><br/>3) With a Forward, the browser is unaware of what happens and the address bar keeps the original URL, while a Redirect makes the browser issue a new request and update its address to the new link.<br/><br/>4) Because the browser is unaware of a Forward, refreshing the page re-submits the original request, since the URL has not changed; after a Redirect, refreshing simply repeats the new request.<br/><br/>5) For the same reason, for operations with side effects, say updating the database, a Redirect should be used so that a refresh does not generate duplicate operations.Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-63343876199712619042010-05-12T22:50:00.000-07:002011-03-19T02:00:29.724-07:00Ubuntu Path Setting/etc/profile is loaded once on login for every user<br/>/etc/bash.bashrc is loaded every time any user opens a terminal<br/>~/.bashrc is loaded every time a single user opens a terminal<br/>~/.profile is loaded once when a single user logs onYudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com1tag:blogger.com,1999:blog-5963582405928550518.post-66860852864366600562010-05-10T19:11:00.000-07:002011-03-19T02:00:29.725-07:00Integrating Amazon S3 and CloudFront for video streamingAmazon is a key player in cloud computing, and the services it provides can usually satisfy your requirements if you just dig a little deeper. Here, I'd like to show how to integrate the S3 and CloudFront services to provide video streaming.<br/><br/>First, let's clarify some basic concepts of S3 and CloudFront. Amazon S3 is a storage provider which can store all kinds of data. 
To deliver content more rapidly to users around the globe, CloudFront uses its edge locations to significantly improve response time. When people fetch content via CloudFront, the service automatically routes the user to the closest edge location hosting a replica of the data. However, CloudFront doesn't provide storage itself; that is complemented by S3, which is why the two always appear together.<br/><br/>1. Distribute S3 bucket to CloudFront location<br/><br/>OK, so our first step is to link S3 and CloudFront to allow distribution. This can be done with the S3 Fox plugin for Firefox, or programmatically. But we need to clarify another issue before we step forward. There are two kinds of distributions in CloudFront: static file distribution and streaming distribution. Currently, the latest S3 Fox only supports static distribution, which can be done by simply right-clicking the bucket and choosing 'manage the distribution'. To create a streaming distribution for our bucket, our only real choice is to write code. 
Here is a short snippet of Java code implementing this using the Jets3t library.<br/><br/>StreamingDistribution newStreamingDistribution = null;<br/><br/><span> </span>try {<br/><br/><span> </span>newStreamingDistribution = cloudFrontService.createStreamingDistribution(bucket.getName(), "" + System.currentTimeMillis(),<br/><br/><span> </span>null, "Test streaming distribution", true);<br/><br/><span> </span>} catch (CloudFrontServiceException e1) {<br/><br/><span> </span>log.error(e1.getMessage());<br/><br/><span> </span>}<br/><br/><span> </span>log.info("New Streaming Distribution: " + newStreamingDistribution);<br/><br/><span> </span>StreamingDistributionConfig streamingDistributionConfig;<br/><br/><span> </span>try {<br/><br/><span> </span>streamingDistributionConfig = cloudFrontService.getStreamingDistributionConfig(newStreamingDistribution.getId());<br/><br/><span> </span>log.info("Streaming Distribution Config: " + streamingDistributionConfig);<br/><br/><span> </span>} catch (CloudFrontServiceException e) {<br/><br/><span> </span>log.error(e.getMessage());<br/><br/><span> </span>}<br/><br/>2. After we enable streaming distribution for the bucket, we will get a distribution URL. This is the base URL that we will use throughout the whole process. Now we can use any method to upload a multimedia file into the bucket we just created. 
Next we will use the Flowplayer rtmp plugin to play the file in a browser.<br/><br/>Download the latest version of Flowplayer as well as its rtmp plugin, and write the following HTML page:<br/><br/><html><br/><br/><head><title>Video</title><script src="flowplayer/flowplayer-3.1.4.min.js"></script><br/><br/></head><br/><br/><body><br/><br/><a class="rtmp" href="50f9307fbcdcdcaae65c4bc58857ca19-LOW" style="display:block;width:640px;height:360px;"></a><br/><br/><script type="text/javascript">$f("a.rtmp", "flowplayer/flowplayer-3.1.5.swf",<br/><br/><span> </span>{clip:{provider: 'rtmp',autoPlay: true},plugins: {rtmp: {url: 'flowplayer/flowplayer.rtmp-3.1.3.swf',netConnectionUrl: 'rtmp://s240vvr18v7md1.cloudfront.net/cfx/st'}}});</script><br/><br/></body></html><br/><br/>Several things need attention:<br/><br/>1) You must have flowplayer.js, flowplayer.swf, and flowplayer.rtmp.swf ready to use, with the right paths.<br/><br/>2) In the anchor tag, the href attribute is the path/name of the file you would like to play. Say, for example, your file is XXX.mp3 with the full path http://AAA.s3.amazonaws.com/XXX.mp3; then you should place XXX in the href attribute. DON'T ADD THE EXTENSION!<br/><br/>3) In the JavaScript section, autoPlay indicates whether to play the file automatically, and netConnectionUrl must be set according to the distribution URL you retrieved in part 1. Remember, you must prefix the URL with "rtmp://" and append "cfx/st" to it, and you must not omit the single quotes around the whole URL.<br/><br/>Now, you can have your streaming video playing in your browser! 
Easy and sweet!Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-13881593382435298882010-05-06T13:23:00.000-07:002011-03-19T02:00:29.728-07:00GAE/J deployment transaction conflict error 409Today when I tried to deploy my application onto App Engine, a weird error popped up:<br/><br/><em>Unable to update app: Error posting to URL: https://appengine.google.com/api/appversion/create?app_id=metacdn&version=1&</em><br/><em>409 Conflict</em><br/><em>Another transaction by user featheast.lee is already in progress for this app and major version. That user can undo the transaction with appcfg.py's "rollback" command.</em><br/><em>See the deployment console for more details</em><br/><div style="text-align: left;">I hadn't seen this error before, and had no clue where it came from. After some googling, I found a way to solve the problem.</div><br/><div style="text-align: left;">1. Open your terminal and get into the directory of your project. (Windows users, please do the same in CMD.)</div><br/><div style="text-align: left;">2. Execute the appcfg.sh script with the rollback parameter and your war directory. You should prefix the full path of appcfg.sh, which is usually under the Eclipse plugin directory. (Windows users, use the cmd script instead of the sh script.)</div><br/><div style="text-align: left;">3. After successfully rolling back the deployment, you are free to do whatever you want again.</div><br/><div style="text-align: left;">I guess the same story applies to Python as well: simply use appcfg.py instead, and things should work out.</div><br/><div style="text-align: left;">PS: I got a ZipException during the rollback process, with a warning that it could not find the API version from .svn. 
It still solved my deployment issue, though.</div>Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com0tag:blogger.com,1999:blog-5963582405928550518.post-79647572051333801452010-05-04T17:18:00.000-07:002011-03-19T02:00:29.731-07:00HTTP and HTTPS setup a Restlet environmentRestlet is a handy tool for setting up an environment running RESTful web services. In order to secure endpoint resources, it is sometimes required to use HTTPS for communication, and Restlet, no doubt, supports both ways.<br/><br/>1) HTTP<br/><br/>It's pretty easy to set up the HTTP environment: all you need to do is create a new component and register the HTTP protocol with the component's servers, and things will work out as expected.<br/><br/>Component component = new Component();<br/>component.getServers().add(Protocol.HTTP, 8183);<br/>component.getDefaultHost().attach(new XXXApplication());<br/><br/>Note that 8183 is the port number you have to provide.<br/><br/>2) HTTPS<br/><br/>Unlike HTTP, in HTTPS mode you need to provide three more things: keystore, keystorePassword, and keyPassword.<br/><br/>For those who are not familiar with keystores: a Java container of keys and certificates is called a keystore. There are two usages for keystores: as a keystore and as a truststore. The keystore contains the material of the local entity, that is, the private key and certificate that will be used to connect to the remote entity. Its counterpart, the truststore, contains the certificates that should be used to check the authenticity of the remote entity's certificates.<br/><br/>The steps to construct a keystore are detailed on the page: <a href="http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/46-restlet/213-restlet.html">http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/46-restlet/213-restlet.html</a>. Basically speaking, you first need to use an SSL tool to generate keys, then self-sign the certificate. 
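As an aside, the keystore container itself can also be created programmatically with the JDK's KeyStore API. A minimal sketch is below; the path and password are placeholders of my own, and the keys and self-signed certificates would still be added with the SSL tool as described above.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;

class KeystoreSketch {
    // Creates an empty JKS keystore at the given path, protected by
    // the given password. Keys and certificates are added separately.
    static void createEmptyKeystore(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null); // initialize an empty in-memory store
        try (FileOutputStream out = new FileOutputStream(path)) {
            ks.store(out, password);
        }
    }

    // Reloads the keystore and returns how many entries it holds.
    static int countEntries(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }
        return ks.size();
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("demo", ".jks");
        createEmptyKeystore(f.getAbsolutePath(), "changeit".toCharArray());
        System.out.println(countEntries(f.getAbsolutePath(), "changeit".toCharArray())); // prints 0
    }
}
```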
After all these steps are finished, follow the code below and you will run the HTTPS server successfully.<br/><br/>Component component = new Component();<br/>Server server = component.getServers().add(Protocol.HTTPS, 8183);<br/>server.getContext().getParameters().add("keystorePath", keystorePath);<br/>server.getContext().getParameters().add("keystorePassword", keystorePassword);<br/>server.getContext().getParameters().add("keyPassword", keyPassword);<br/>component.start();<br/><br/>By the same token, if you need to run any client-side code in the same project, simply add the client's supported protocol to the component:<br/><br/>component.getClients().add(Protocol.HTTPS);Yudong Lihttp://www.blogger.com/profile/14441655709399595955noreply@blogger.com1