Question

Since initialization of the WS client service and port takes ages, I would like to initialize them once at startup and reuse the same port instance. Initialization would look something like this:

private static RequestContext requestContext = null;
private static MyPort myPort = null;

static
{
    MyService service = new MyService();
    myPort = service.getMyServicePort();

    Map<String, Object> requestContextMap = ((BindingProvider) myPort).getRequestContext();
    requestContextMap.put(BindingProvider.USERNAME_PROPERTY, uName);
    requestContextMap.put(BindingProvider.PASSWORD_PROPERTY, pWord);

    requestContext = new RequestContext();
    requestContext.setApplication("test");
    requestContext.setUserId("test");
}

The call somewhere in my class:

myPort.someFunctionCall(requestContext, "someValue");

My question: Will this call be thread-safe?

The solution

According to the CXF FAQ:

Are JAX-WS client proxies thread safe?

Official JAX-WS answer: No. According to the JAX-WS spec, the client proxies are NOT thread safe. To write portable code, you should treat them as non-thread safe and synchronize access or use a pool of instances or similar.

CXF answer: CXF proxies are thread safe for MANY use cases. The exceptions are:

  • Use of ((BindingProvider)proxy).getRequestContext() - per JAX-WS spec, the request context is PER INSTANCE. Thus, anything set there will affect requests on other threads. With CXF, you can do:

    ((BindingProvider)proxy).getRequestContext().put("thread.local.request.context","true");
    

    and future calls to getRequestContext() will use a thread-local request context. That allows the request context to be thread-safe. (Note: the response context is always thread local in CXF.) A short sketch of this appears below, after the FAQ excerpt.

  • Settings on the conduit - if you use code or configuration to directly manipulate the conduit (like to set TLS settings or similar), those are not thread safe. The conduit is per-instance and thus those settings would be shared. Also, if you use the FailoverFeature and LoadBalanceFeatures, the conduit is replaced on the fly. Thus, settings set on the conduit could get lost before being used on the setting thread.

  • Session support - if you turn on sessions support (see jaxws spec), the session cookie is stored in the conduit. Thus, it would fall into the above rules on conduit settings and thus be shared across threads.
  • WS-Security tokens - If use WS-SecureConversation or WS-Trust, the retrieved token is cached in the Endpoint/Proxy to avoid the extra (and expensive) calls to the STS to obtain tokens. Thus, multiple threads will share the token. If each thread has different security credentials or requirements, you need to use separate proxy instances.

For the conduit issues, you COULD install a new ConduitSelector that uses a thread local or similar. That's a bit complex though.

For most "simple" use cases, you can use CXF proxies on multiple threads. The above outlines the workarounds for the others.
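
To make the first exception above concrete, here is a minimal sketch of the CXF-specific workaround. It assumes the generated MyService/MyPort client from the question, CXF as the JAX-WS runtime, and placeholder credentials:

import java.util.Map;
import javax.xml.ws.BindingProvider;

public class ThreadLocalContextSketch {

    public static void main(String[] args) {
        // Generated JAX-WS artifacts from the question
        final MyPort proxy = new MyService().getMyServicePort();

        // CXF-specific switch: after this, getRequestContext() returns a
        // thread-local map on every calling thread
        ((BindingProvider) proxy).getRequestContext()
                .put("thread.local.request.context", "true");

        Runnable task = () -> {
            // Each thread now works on its own request context
            Map<String, Object> ctx = ((BindingProvider) proxy).getRequestContext();
            ctx.put(BindingProvider.USERNAME_PROPERTY, "user-for-this-thread");
            ctx.put(BindingProvider.PASSWORD_PROPERTY, "password-for-this-thread");
            proxy.someFunctionCall(new RequestContext(), "someValue");
        };

        new Thread(task).start();
        new Thread(task).start();
    }
}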

Other tips

In general, no.

According to the CXF FAQ (http://cxf.apache.org/faq.html#FAQ-AreJAX-WSclientproxiesthreadsafe?):

Official JAX-WS answer: No. According to the JAX-WS spec, the client proxies are NOT thread safe. To write portable code, you should treat them as non-thread safe and synchronize access or use a pool of instances or similar.

CXF answer: CXF proxies are thread safe for MANY use cases.

For a list of exceptions see the FAQ.
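
If you want to stay strictly within the portable advice quoted above, one option is to serialize access to the single shared proxy. A minimal sketch (the class and method names are only illustrative, and this trades throughput for safety):

public class SynchronizedClient {

    // Single shared proxy, created once as in the question
    private final MyPort myPort = new MyService().getMyServicePort();

    public void someCall(RequestContext requestContext) {
        // Only one thread at a time may use the proxy
        synchronized (myPort) {
            myPort.someFunctionCall(requestContext, "someValue");
        }
    }
}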

As you can see from the answers above, JAX-WS client proxies are not thread safe, so I wanted to share my implementation for caching the client proxies. I faced the same issue and decided to create a Spring bean that caches the JAX-WS client proxies. You can find more details at http://programtalk.com/java/using-spring-and-scheduler-to-store/

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.annotation.PostConstruct;

import org.apache.commons.lang3.concurrent.BasicThreadFactory;
import org.apache.logging.log4j.Logger;
import org.springframework.stereotype.Component;

/**
 * This keeps a cache of MAX_CONCURRENT_THREADS appConnections and tries to
 * share them equally amongst the threads. All the connections are created
 * right at the start, and if an error occurs the cache is created again.
 *
 */
/*
 *
 * Are JAX-WS client proxies thread safe? <br/> According to the JAX-WS spec,
 * the client proxies are NOT thread safe. To write portable code, you should
 * treat them as non-thread safe and synchronize access or use a pool of
 * instances or similar.
 *
 */
@Component
public class AppConnectionCache {

 private static final Logger logger = org.apache.logging.log4j.LogManager.getLogger(AppConnectionCache.class);

 private final Map<Integer, MyService> connectionCache = new ConcurrentHashMap<Integer, MyService>();

 private int cachedConnectionId = 1;

 private static final int MAX_CONCURRENT_THREADS = 20;

 private ScheduledExecutorService scheduler;

 private boolean forceRecaching = true; // first time cache

 @PostConstruct
 public void init() {
  logger.info("starting appConnectionCache");
  logger.info("start caching connections"); ;;
  BasicThreadFactory factory = new BasicThreadFactory.Builder()
    .namingPattern("appconnectioncache-scheduler-thread-%d").build();
  scheduler = Executors.newScheduledThreadPool(1, factory);

  scheduler.scheduleAtFixedRate(new Runnable() {
   @Override
   public void run() {
    initializeCache();
   }

  }, 0, 10, TimeUnit.MINUTES);

 }

 public void destroy() {
  scheduler.shutdownNow();
 }

 private void initializeCache() {
  if (!forceRecaching) {
   return;
  }
  try {
   loadCache();
   forceRecaching = false; // this flag is used for initializing
   logger.info("connections creation finished successfully!");
  } catch (MyAppException e) {
   logger.error("error while initializing the cache");
  }
 }

 private void loadCache() throws MyAppException {
  logger.info("create and cache appservice connections");
  for (int i = 0; i < MAX_CONCURRENT_THREADS; i++) {
   tryConnect(i, true);
  }
 }

 public MyPort getMyPort() throws MyAppException {
  if (cachedConnectionId++ == MAX_CONCURRENT_THREADS) {
   cachedConnectionId = 1;
  }
  return tryConnect(cachedConnectionId, forceRecaching);
 }

 private MyPort tryConnect(int threadNum, boolean forceConnect) throws MyAppException {
  boolean connect = true;
  int tryNum = 0;
  MyPort app = null;
  while (connect && !Thread.currentThread().isInterrupted()) {
   try {
    app = doConnect(threadNum, forceConnect);
    connect = false;
   } catch (Exception e) {
    tryNum = tryReconnect(tryNum, e);
   }
  }
  return app;
 }

 private int tryReconnect(int tryNum, Exception e) throws MyAppException {
  logger.warn(Thread.currentThread().getName() + " appservice service not available! : " + e);
  // try up to 10 times before giving up
  if (tryNum++ < 10) {
   try {
    logger.warn(Thread.currentThread().getName() + " wait 1 second");
    Thread.sleep(1000);
   } catch (InterruptedException f) {
    // restore interrupt
    Thread.currentThread().interrupt();
   }
  } else {
   logger.warn(" appservice could not connect, number of times tried: " + (tryNum - 1));
   this.forceRecaching = true;
   throw new MyAppException(e);
  }
  logger.info(" try reconnect number: " + tryNum);
  return tryNum;
 }

 private MyPort doConnect(int threadNum, boolean forceConnect) throws InterruptedException {
  MyService service = connectionCache.get(threadNum);
  if (service == null || forceConnect) {
   logger.info("app service connects : " + (threadNum + 1) );
   service = new MyService();
   connectionCache.put(threadNum, service);
   logger.info("connect done for " + (threadNum + 1));
  }
  return service.getAppPort();
 }
}
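
A minimal usage sketch of the bean above (the client class and its member names are hypothetical; it assumes both beans are picked up by the same Spring context):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class MyServiceClient {

    @Autowired
    private AppConnectionCache connectionCache;

    public void callService(RequestContext requestContext) throws MyAppException {
        // Each call picks the next cached connection in round-robin fashion
        MyPort port = connectionCache.getMyPort();
        port.someFunctionCall(requestContext, "someValue");
    }
}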

A general solution for this is to keep multiple client objects in a pool and expose them through a dynamic proxy that acts as a facade.

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class ServiceObjectPool<T> extends GenericObjectPool<T> {

    public ServiceObjectPool(java.util.function.Supplier<T> factory) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T obj) {
                return new DefaultPooledObject<>(obj);
            }
        });
    }

    public static class PooledServiceProxy<T> implements InvocationHandler {
        private ServiceObjectPool<T> pool;

        public PooledServiceProxy(ServiceObjectPool<T> pool) {
            this.pool = pool;
        }


        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            T t = null;
            try {
                t = this.pool.borrowObject();
                return method.invoke(t, args);
            } finally {
                if (t != null)
                    this.pool.returnObject(t);
            }
        }
    }

    @SuppressWarnings("unchecked")
    public T getProxy(Class<? super T> interfaceType) {
        PooledServiceProxy<T> handler = new PooledServiceProxy<>(this);
        return (T) Proxy.newProxyInstance(interfaceType.getClassLoader(),
                                          new Class<?>[]{interfaceType}, handler);
    }
}

To use the proxy:

ServiceObjectPool<SomeNonThreadSafeService> servicePool =
        new ServiceObjectPool<>(createSomeNonThreadSafeService);
SomeNonThreadSafeService nowSafeService = servicePool.getProxy(SomeNonThreadSafeService.class);
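
Applied to the question's generated client, the pool can be built around the port type itself. A sketch (MyService/MyPort and requestContext come from the question; each pooled object is an independent proxy, so the facade can be shared freely between threads):

// The supplier creates a fresh, independent JAX-WS proxy for each pooled object
ServiceObjectPool<MyPort> portPool =
        new ServiceObjectPool<>(() -> new MyService().getMyServicePort());

// The facade looks like a single MyPort but borrows a pooled proxy per call
MyPort threadSafePort = portPool.getProxy(MyPort.class);
threadSafePort.someFunctionCall(requestContext, "someValue");
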
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow