Internet Explorer caching behavior for RESTful data pulled by AngularJS

Hi, There:

A UI tester told me an AngularJS app didn't work in IE11, though it worked fine in Chrome or Firefox. So I gave it a try and tested it in IE. To see what was going on, I turned on the Developer tools to watch the console and the network traffic, and, you guessed it, everything worked just fine! Soon I realized that the tester would never have the IE Developer tools open while testing, so I turned them off. Then I could replicate the problems she had described to me.

A bit later I realized that, by default, IE caches the JSON data that AngularJS pulls from the server side, but the cache is disabled while the Developer tools are open; hence the different behavior. In Chrome and Firefox, this caching is not on by default. Searching through sites such as StackOverflow turned up similar findings, so forcibly turning off caching in the application looked like the right solution to the problem.

Typically there are two ways to turn off caching: client side and server side. Both should work in general, but since the app is an AngularJS app that asynchronously pulls data from server-side RESTful services, an HTML <meta> tag like the following will not help, because the JSON responses being requested are not affected by it:

<meta http-equiv="Pragma" content="no-cache">   <!-- does not work -->

The solution that does work is to force the response not to be cached, on the server side:

response.setHeader("Cache-Control", "private, no-store, no-cache, must-revalidate");
response.setHeader("Pragma", "no-cache");

 

This ensures that a particular JSON response pulled by AngularJS is not cached on the client side. Certainly, if caching is desired, it should not be implemented this way. After implementing this, the AngularJS application above, whose behavior depends entirely on fresh JSON data, works as expected!

Cheers!

-T. Yan

 

Mitigating the maven-replacer-plugin phase issue

Hi, Happy Friday!

We often want to change certain application properties during the Maven build process. A very common one is the build timestamp. A nice plugin for this is the maven-replacer-plugin. Unfortunately, setting the <phase> tag for this plugin is not always straightforward, especially when you deal with different kinds of packaging, such as bundle for Felix instead of plain jar files. Often the replacer runs too early (before the file being replaced has been copied into the target folder) or too late (after the actual package has already been built).

I found a way to mitigate the situation; the method may not be so orthodox, but it works every time.

Assume your file myapp.properties contains this line, whose value is to be replaced:

build.timestamp=@2017.0324.1420@

Then pom.xml can carry this property and the replacer plugin configuration:

<properties>
  <timestamp>${maven.build.timestamp}</timestamp>
  <maven.build.timestamp.format>yyyy.MMdd.HHmm</maven.build.timestamp.format>
</properties>

....

<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>1.5.3</version>
  <executions>
    <execution>
      <phase>clean</phase>
      <goals>
        <goal>replace</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <file>${project.basedir}/src/main/resources/myapp.properties</file>
    <replacements>
      <replacement>
        <token>@[\d]{4}\.[\d]{4}\.[\d]{4}@</token>
        <value>@${timestamp}@</value>
      </replacement>
    </replacements>
  </configuration>
</plugin>

Then each time the plugin runs in the clean phase, it looks for the regex pattern, in this case:

@[\d]{4}\.[\d]{4}\.[\d]{4}@

and replaces it with the current timestamp, so the resulting line becomes:

build.timestamp=@2017.0324.1428@

This way, the build timestamp is replaced correctly at the earliest phase of every build. When using this property in the application, simply strip the pattern delimiters, in this case the two @'s, before using the value (see the sketch below).
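
For example, reading the property back in Java might look like this. A minimal sketch, where the resource path and the MyApp class name are assumptions:

import java.io.InputStream;
import java.util.Properties;

// Hypothetical loader; adjust the resource path to your application.
Properties props = new Properties();
try (InputStream in = MyApp.class.getResourceAsStream("/myapp.properties")) {
    props.load(in);
}
String raw = props.getProperty("build.timestamp");   // e.g. "@2017.0324.1428@"
String buildTimestamp = raw.replace("@", "");        // "2017.0324.1428"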

Again, it is not super elegant or orthodox. But I have wrestled with the phase issue of this replacer plugin and felt the time spent on it was simply not worth it. The mitigation above gives me the expected result every time Maven runs.

Cheers!

-TY

Integrate Git into ANT targets

Hi, There:

For some of our projects we still use ANT instead of Maven, and we need to build projects directly out of Git using ANT targets to ensure consistency for production deployment. If you don't use Jenkins, then running ANT from Eclipse or the command line is the option you have. The integration of ANT and Git is not all that straightforward. There are some good examples on the web of how to do this, and this one works for us:

<target name="git.get.source">
  <delete dir="${LOCAL_PATH}"/>
  <mkdir dir="${LOCAL_PATH}"/>
  <git command="clone">
    <args>
      <arg value="--progress"/>
      <arg value="--verbose"/>
      <arg value="${GIT_URL}"/>
      <arg value="${LOCAL_PATH}"/>
    </args>
  </git>
</target>

<!-- git macro utils setup -->
<macrodef name="git">
  <attribute name="command"/>
  <attribute name="dir" default=""/>
  <element name="args" optional="true"/>
  <sequential>
    <echo message="git @{command}"/>
    <exec executable="${GIT_EXE_PATH}" dir="@{dir}">
      <arg value="@{command}"/>
      <args/>
    </exec>
  </sequential>
</macrodef>

The ANT script above assumes that you have the Git client installed and that ${GIT_EXE_PATH} points to where git.exe resides. After the source project is cloned into ${LOCAL_PATH}, building it is simply a matter of running the usual ANT targets such as compile and jar.

There is one error I have encountered, however: if ${LOCAL_PATH} is synchronized into any Eclipse project, this annoying error occurs the next time you run git.get.source:

Unable to delete ..
.git\objects\pack\pack-b38c095fff6a391435a492ccf49985ed82dfd245.pack

It turns out that Eclipse keeps an eye on the Git status as well and holds a file lock on this pack file. It is not really a bug per se; it is just multiple processes working on the Git status of the same clone.

The only way around this, if you want to run the ANT script inside Eclipse, is to use a folder for ${LOCAL_PATH} that is not under the control of an Eclipse project. For example, using c:\temp\myproject\ for ${LOCAL_PATH} works just fine. Since we don't plan to modify the code from Git in Eclipse but simply want to build the project out of the Git sources, this trick serves us well.

Cheers!

-Tony

JAX-WS Streaming/MTOM with WSSE UsernameToken WITHOUT using MessageHandler

Hi, Happy Chinese New Year!

A while back I wrote a post, Issue with Streaming/MTOM with DataHandler, about a DataHandler issue when using MTOM for streaming SOAP services. The issue, essentially, was that any message handler in the handler chain causes the whole binary content to be loaded into memory, and hence causes Out of Memory errors on the client side if the binary content is too large. There I described a solution for the CXF implementation. Now I have found and tested a solution for the default JAX-WS implementation, thanks mostly to a post on StackOverflow.

import javax.xml.namespace.QName;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPFactory;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.soap.SOAPBinding;
import com.sun.xml.ws.api.message.Header;
import com.sun.xml.ws.api.message.Headers;
import com.sun.xml.ws.developer.WSBindingProvider;

// Static Strings
private static String SECURITY_NS = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";
private static String PASSWORD_TYPE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText";
private static String AUTH_PREFIX = "wss";

// Prepare service and port
MyService service = getMyService();
MyServicePort port = service.getMyServicePort();
BindingProvider bp = (BindingProvider) port;
SOAPBinding soapBinding = (SOAPBinding) bp.getBinding();
soapBinding.setMTOMEnabled(true);

// Build the WSSE Security header manually
SOAPFactory soapFactory = SOAPFactory.newInstance();
SOAPElement security = soapFactory.createElement("Security", AUTH_PREFIX, SECURITY_NS);
SOAPElement uToken = soapFactory.createElement("UsernameToken", AUTH_PREFIX, SECURITY_NS);
SOAPElement username = soapFactory.createElement("Username", AUTH_PREFIX, SECURITY_NS);
username.addTextNode(this.getUserName().trim());
SOAPElement pass = soapFactory.createElement("Password", AUTH_PREFIX, SECURITY_NS);
pass.addAttribute(new QName("Type"), PASSWORD_TYPE);
pass.addTextNode(this.getPassword().trim());
uToken.addChildElement(username);
uToken.addChildElement(pass);
security.addChildElement(uToken);

// Set the header directly on the RI's binding provider; no MessageHandler involved
Header header = Headers.create(security);
((WSBindingProvider) port).setOutboundHeaders(header);
port.myOperation();

This way, the Security header carries the WSSE UsernameToken without disturbing the MTOM payload, which is streamed during my operation. If the WSSE header were processed in a MessageHandler, any huge binary payload would cause an Out of Memory exception very quickly and fail the whole SOAP invocation right off the bat.

Cheers!

-Tony

log4j AsyncAppender missing class/method/line numbers???

Hi, There:

This post is about Log4j again. We keep getting three ?:?:? placeholders in our log4j log files for the class name, method name, and line number. Basically, we want %F:%M:%L in the ConversionPattern, but get nothing back except ?:?:?.

The reason is that we use org.apache.log4j.AsyncAppender for better performance, and it references an org.apache.log4j.RollingFileAppender. Because the AsyncAppender uses a separate thread to write the logging event to the FileAppender, the location information (class/method/line) is lost by the time the event reaches the FileAppender.

The simple fix is to set the boolean parameter LocationInfo to true on the AsyncAppender; the location information is then captured on the calling thread and passed along to the downstream FileAppender.

<appender name="AsyncAppender" class="org.apache.log4j.AsyncAppender">
  <appender-ref ref="FileAppender" />
  <param name="LocationInfo" value="true"/>
</appender>

The log file now has all the class/method/line numbers we need during debugging.

Cheers!

 

-TY

 

Adding system environment properties to log4j filename

Hi, Happy Friday!

The holiday is just around the corner! I feel like posting something before the holiday is really here, and it happens I do have something worth posting. 🙂

This week we found that in a clustered environment, log4j can run into an issue writing log entries through a FileAppender. The problem is essentially that when multiple nodes of the same cluster try to write to the same log file, the competing writers can cause some events to be dropped without ever being written to the file. The loss rate depends on how heavy the logging traffic is.

We found a way to mitigate this issue by adding the hostname to the log filename. This way, each node writes to its own log file, avoiding the I/O contention.

Achieving this involves two steps. First, set a system property during your application's startup, for example:

System.setProperty("HostName", InetAddress.getLocalHost().getHostName());
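
Note that InetAddress.getLocalHost() can throw UnknownHostException, so in practice the call needs a guard. A minimal sketch, where the class name and fallback value are illustrative:

import java.net.InetAddress;
import java.net.UnknownHostException;

public final class LoggingBootstrap {
    // Call once, before log4j is configured, so ${HostName} resolves in log4j.xml.
    public static void initHostName() {
        try {
            System.setProperty("HostName", InetAddress.getLocalHost().getHostName());
        } catch (UnknownHostException e) {
            // Fall back to a fixed value so the log filename stays valid.
            System.setProperty("HostName", "unknown-host");
        }
    }
}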

Then reference this HostName system property in your log4j.xml configuration file, something like the following:

<appender name="FileAppender" class="org.apache.log4j.RollingFileAppender">
  <param name="file" value="/tmp/logs/app_${HostName}.log" />
  <param name="MaxFileSize" value="1024KB" />
  <param name="MaxBackupIndex" value="30" />
  <param name="Threshold" value="DEBUG" />
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%5p [%d] [%t] (%F:%M:%L) - %m%n" />
  </layout>
</appender>

The application then writes logging entries to a log file with the actual hostname in its filename. A beautiful side effect of this is that one can quickly tell where a logging event originated, which often provides valuable information about the host in the cluster.

Of course, if you want to add more environment variables besides HostName, it can be done in similar fashion. Also, the variables don't need to be in the filename; they can go into the content of the logging messages.

Cheers and have a happy holiday season!

-T. Y.

Custom DataSource for DataHandler: how to get byte[] into DataHandler

Hi there:

Happy Friday!

We use javax.activation.DataHandler routinely in SOAP web service invocations. However, when a DataHandler is constructed with a DataSource, the JDK only provides two concrete implementations, FileDataSource and URLDataSource. A lot of times in production applications, neither is useful; for example, in-memory data held in a byte[] cannot readily be used with either of these DataSource types. This is easily solved by implementing your own DataSource. Here is an example you can use:

 

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.activation.DataSource;

public class MyDataSource implements DataSource {

    private InputStream is;
    private String name, contentType;

    @Override
    public String getContentType() {
        return contentType;
    }

    public void setContentType(String ct) {
        this.contentType = ct;
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return is;
    }

    public void setInputStream(InputStream is) {
        this.is = is;
    }

    @Override
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        // This data source is read-only.
        throw new IOException("getOutputStream() is not supported");
    }
}

 

To use the DataSource, inject the byte[] into it before instantiating the DataHandler with it:

 

MyDataSource ds = new MyDataSource();

byte[] data = ... ;   // your in-memory content

InputStream is = new ByteArrayInputStream(data);
ds.setInputStream(is);
ds.setContentType("application/octet-stream");
ds.setName("some content name");

DataHandler dataHandler = new DataHandler(ds);

 

The DataHandler will then fetch the data from the underlying InputStream of the custom DataSource.
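
A quick way to sanity-check the wiring is to let the DataHandler copy its content back out. A minimal sketch, assuming the data and dataHandler variables from above:

import java.io.ByteArrayOutputStream;
import java.util.Arrays;

ByteArrayOutputStream out = new ByteArrayOutputStream();
dataHandler.writeTo(out);                                     // copies from MyDataSource's InputStream
System.out.println(Arrays.equals(data, out.toByteArray()));   // expect: true

Note that this simple DataSource hands back the same stream instance every time, so the content can only be consumed once; that is fine for a single SOAP invocation.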

Cheers!

 

-Tony