Monday, 12 March 2012

Software Architectural Styles


About the Post

Software Patterns generally include Architectural Styles, Design Patterns and Language Idioms. This post is part of a 3-part introduction to Architectural Styles. Part-I will introduce the more commonly used styles, Part-II will introduce some of the hybrid styles that are used today, and Part-III will provide some insight into emerging styles.
Note that this post is not intended to be a comprehensive guide or tutorial on all existing architectural styles but only a brief introduction to a sub-set of more commonly used architectural styles.

Architectural Styles - Part I

Definition[1]: An architectural style expresses a fundamental structural organization schema for software systems. It provides a set of predefined element types, specifies their responsibilities, and includes rules and guidelines for organizing the relationships between them.

In short, it is a template that defines how the software components of a system are organized and how they are interconnected.
Several classifications of architectural styles exist. A preliminary classification is presented in Shaw and Clements [1997] and repeated in Bass et al. [1998]. Typically these classifications are based on the kinds of components, and the connectors between them, used in a particular architectural style. However, the primary goal of this post is to introduce the more commonly used styles and to encourage software architects to evaluate existing architectural styles based on their applicability to their own systems.

Pipes & Filters

Description:

In a pipe-and-filter style each component has a set of inputs and a set of outputs. A component reads streams of data on its inputs and produces streams of data on its outputs, delivering a complete instance of the result in a standard order. The filters modify, transform, or in some cases merely examine the data as it passes through them. The computation on the input streams is performed incrementally, so that output begins before the input is fully consumed. The pipes serve as conduits for the streams, transmitting the outputs of one filter to the inputs of another.
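
To make the idea concrete, here is a minimal sketch in Java (the Filter interface and the names below are made up for this post, not part of any framework): each filter transforms data independently, and the "pipe" simply feeds the output of one filter into the next.

import java.util.Arrays;
import java.util.List;

// Minimal sketch: a filter transforms input data into output data.
interface Filter<I, O> {
    O process(I input);
}

public class PipelineDemo {
    public static void main(String[] args) {
        // Two simple filters; neither knows anything about the other (zero coupling).
        Filter<String, String> trim = s -> s.trim();
        Filter<String, Integer> length = s -> s.length();

        // The "pipe": the output of one filter becomes the input of the next.
        List<String> stream = Arrays.asList("  pipes ", " and ", "filters  ");
        for (String item : stream) {
            System.out.println(length.process(trim.process(item))); // prints 5, 3, 7
        }
    }
}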

Constraints:

The constraint is that a filter must be completely independent of other filters (zero coupling): it must not share state, control thread, or identity with the other filters on its upstream and downstream interfaces.

Variants:

  • Pipelines — linear sequences of filters
  • Bounded pipes — limited amount of data on a pipe
  • Typed pipes — data strongly typed
  • Uniform Pipe and Filter — all filters must have the same interface

Examples:

Traditionally compilers have been viewed as a pipeline system (though the phases are often not incremental). The stages in the pipeline include lexical analysis, parsing, semantic analysis, and code generation. Other examples of pipes and filters occur in signal processing domains, functional programming, and distributed systems.

Advantages:

  • System behavior is a succession of component behaviors
  • Filter addition, replacement, and reuse
  • Possible to hook any two filters together
  • Certain analyses such as throughput and deadlock analysis
  • Concurrent execution

Limitations:

  • Often leads to a batch organization of processing
  • Not good at handling interactive applications
  • May force a lowest-common-denominator data format, leading to loss of performance and increased complexity in implementing filters

Event-based Integration

Description:

The idea behind event-based integration (also called implicit invocation or selective broadcast) is that instead of communicating explicitly (e.g. via procedure calls, remote object calls, etc.), software components simply broadcast one or more events. Other components in the system can register an interest in an event by subscribing to it. When the event is announced, the interested components are notified and can invoke the required procedures accordingly. Thus an event announcement "implicitly" causes other components to invoke procedures in their modules.
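
As a minimal, illustrative sketch in Java (the EventBus class below is hand-rolled for this post, not a standard library), note that the announcer broadcasts an event without knowing who, if anyone, is subscribed:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A tiny event bus: components subscribe to named events; announcers do not
// know which components, if any, will react.
class EventBus {
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    void subscribe(String eventType, Consumer<Object> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    void announce(String eventType, Object payload) {
        for (Consumer<Object> handler : subscribers.getOrDefault(eventType, new ArrayList<>())) {
            handler.accept(payload); // "implicit invocation" of the subscriber's procedure
        }
    }
}

class EventDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.subscribe("ORDER_PLACED", payload -> System.out.println("Billing sees: " + payload));
        bus.subscribe("ORDER_PLACED", payload -> System.out.println("Shipping sees: " + payload));
        bus.announce("ORDER_PLACED", "order #42"); // the announcer is unaware of the two subscribers
    }
}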

Constraints:

The constraint is that software components broadcasting events do not know which components will be affected by those events. Hence no assumptions on the order of processing can be made by the components announcing the events.

Variants:

· Message Bus — events are exchanged via an intermediate component, usually called a message bus or message router. Though the primary concept is based on implicit invocation, most message bus architectures provide multiple integration patterns that qualify them to be considered a separate style as well.
o Enterprise Service Bus – An ESB uses services for communication between the bus and the components attached to it. An ESB typically provides services that transform messages from one format to another, enabling components that use incompatible message formats to communicate.
o Internet Service Bus – Similar to an ESB, but the participating components are hosted in the cloud instead of on an enterprise network. An ISB uses URIs (Uniform Resource Identifiers) to control the routing of messages between the participating components.
· Publish-subscribe – Subscribers register/deregister to receive specific messages or specific content. Publishers broadcast messages to subscribers either synchronously or asynchronously.

Examples:

Examples of systems with implicit invocation mechanisms abound. They are used in programming environments to integrate tools, in database management systems to ensure consistency constraints, in user interfaces to separate presentation of data from applications that manage the data, and by syntax-directed editors to support incremental semantic checking.

Advantages:

  •   Component reuse
  •   System evolution
  •   Loose coupling between components.

Limitations:

  • No knowledge of which components will respond to an event
  • Lack of control over the order of processing (message bus architectures mitigate this by externalizing the routing logic and providing some control over it)
  • Counter-intuitive system structure

Layered Systems

Description:

Layered architecture focuses on the grouping of related functionality within a system into distinct layers that are stacked vertically on top of each other, with each layer providing service to the layer above it and serving as a client to the layer below. The layers of an application may reside on the same physical computer (the same tier) or may be distributed over separate computers (n-tier), and the components in each layer communicate with components in other layers through well-defined interfaces.
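
A minimal Java sketch of this constraint (the class names are purely illustrative): each layer calls only the layer directly below it.

// Data (lowest) layer: knows how to fetch raw records.
class DataLayer {
    String findCustomerName(int id) {
        return "Customer-" + id; // stand-in for a real database lookup
    }
}

// Business layer: talks only to the data layer below it.
class BusinessLayer {
    private final DataLayer data = new DataLayer();
    String greetingFor(int customerId) {
        return "Hello, " + data.findCustomerName(customerId);
    }
}

// Presentation (top) layer: talks only to the business layer below it.
class PresentationLayer {
    private final BusinessLayer business = new BusinessLayer();
    void render(int customerId) {
        System.out.println(business.greetingFor(customerId));
    }
    public static void main(String[] args) {
        new PresentationLayer().render(7); // prints "Hello, Customer-7"
    }
}

Because each layer depends only on the interface of the layer beneath it, the DataLayer could later be replaced (say, by a remote service) without touching the presentation layer.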


Constraints:

Components in one layer can interact only with components in the same layer or with components from the layer directly below it.

Variants:

· N-Tier/3-Tier Architecture — Though this qualifies as a style in its own right, given its strong resemblance to layered architecture in separating concerns into different layers, I have taken the liberty of including it as a variant here. Communication between tiers is typically asynchronous in order to support better scalability.

Examples:

Examples of systems with layered architecture abound; most network protocol stacks, such as OSI and TCP/IP, use this style. Other examples include line-of-business (LOB) applications such as accounting and customer-management systems, enterprise Web-based applications and Web sites, and enterprise desktop or smart clients with centralized application servers for business logic.

Advantages:

  • Increasing abstraction levels
  • Easy evolution
  • Changes in a layer affect at most the adjacent two layers
  • Reuse of software assets
  • Different implementations of a layer are allowed as long as the interface is preserved
  • Standardized layer interfaces for libraries and frameworks

Limitations:

  • Can be quite difficult to find the right levels of abstraction
  • Performance overhead, since requests may have to pass through several layers

Repositories

Description:

In a repository style there are two quite distinct kinds of components: a central data structure that represents the current state, and a collection of independent components that operate on the central data store. Interactions between the repository and its external components can vary significantly between systems. The choice of control discipline leads to major subcategories: if the types of transactions in an input stream of transactions trigger the selection of processes to execute, the repository can be a traditional database; if the current state of the central data structure is the main trigger for selecting processes to execute, the repository can be a blackboard.
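
A toy blackboard-style sketch in Java (all names invented for illustration): independent knowledge sources contribute to a shared store, and control is driven purely by the store's state.

import java.util.ArrayList;
import java.util.List;

// The central data store ("blackboard").
class Blackboard {
    final List<String> facts = new ArrayList<>();
}

// Independent components that interact only through the blackboard.
interface KnowledgeSource {
    // Returns true if the current blackboard state allowed this source to contribute.
    boolean tryContribute(Blackboard board);
}

class BlackboardDemo {
    public static void main(String[] args) {
        Blackboard board = new Blackboard();
        board.facts.add("raw-signal");

        List<KnowledgeSource> sources = new ArrayList<>();
        sources.add(b -> b.facts.contains("raw-signal") && !b.facts.contains("phonemes") && b.facts.add("phonemes"));
        sources.add(b -> b.facts.contains("phonemes") && !b.facts.contains("words") && b.facts.add("words"));

        // Control is driven entirely by the blackboard state: keep offering the
        // sources a chance to act until none of them can make further progress.
        boolean progress = true;
        while (progress) {
            progress = false;
            for (KnowledgeSource source : sources) {
                progress |= source.tryContribute(board);
            }
        }
        System.out.println(board.facts); // [raw-signal, phonemes, words]
    }
}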

Constraints:

System control is entirely driven by the state of the central data store (the blackboard).

Variants:

Replicated Repository: the repository is decentralized (either replicated or partitioned), yet as a whole it provides the illusion of a centralized view to its clients.

Examples:

Typically used for AI systems and in applications requiring complex interpretation of signals, such as speech and pattern recognition.

Advantages:

  • Suitable for complex problems
  • Leads incrementally to problem resolution and enables easy troubleshooting

Limitations:

  • Control is completely driven by the repository state

Data Abstraction and Object-Oriented Organization

Description:

Data abstraction architecture focuses on dividing the responsibilities of a system into individual reusable and self-sufficient objects, each containing the data and the behavior relevant to that object. Objects are examples of a sort of component we call a manager, because each is responsible for preserving the integrity of a resource (here, the representation). Objects interact through function and procedure invocations. Each object is responsible for preserving the integrity of its representation, and that representation is hidden from other objects.
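
A small Java sketch of the idea (BoundedStack is a made-up example class): the representation is private, so the object alone is responsible for keeping it consistent.

// The stack's representation (array + index) is hidden; clients can only use
// push/pop, so only this class can maintain, or violate, its invariants.
class BoundedStack {
    private final int[] items = new int[16];
    private int size = 0;

    public void push(int value) {
        if (size == items.length) {
            throw new IllegalStateException("stack is full");
        }
        items[size++] = value;
    }

    public int pop() {
        if (size == 0) {
            throw new IllegalStateException("stack is empty");
        }
        return items[--size];
    }
}

The internal array could later be swapped for a linked list without touching any client, which is exactly the first advantage listed below.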

Constraints:

  • Objects are responsible for the integrity of their internal representation
  • The internal representation is hidden from other objects

Variants:

There are many variations. For example, some systems allow “objects” to be concurrent tasks; others allow objects to have multiple interfaces.

Advantages:

  • Object internals can be altered without affecting the clients
  • System decomposition into sets of interacting agents

Limitations:

  • Objects must know the identities of the other objects they intend to interact with
  • Side effects in object method invocations


Client/Server

Description:

The client-server style is the most frequently encountered of the architectural styles for network-based applications. It segregates the system into two applications, where the client makes requests to the server. A server component, offering a set of services, listens for requests upon those services. In many cases, the server is a database with application logic represented as stored procedures.  A client component, desiring that a service be performed, sends a request to the server via a connector. The server either rejects or performs the request and sends a response back to the client.
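
As a bare-bones Java sketch (the port number and message format are arbitrary), a server that listens for requests and a client that sends one over a socket connector might look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Server: listens for requests and sends back a response.
class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();
                    out.println("SERVED: " + request); // perform the request and reply
                }
            }
        }
    }
}

// Client: sends a request over the connector (a TCP socket) and reads the reply.
class EchoClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9090);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("get customer 7");
            System.out.println(in.readLine()); // prints "SERVED: get customer 7"
        }
    }
}

Run the two classes as separate processes (server first); the client prints the server's response and exits.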


Constraints:

Separation of concerns is the principle behind the client-server constraints: user interface functionality is concentrated on the client side.

Variants:

There are many variants, depending mainly on how the client and server components interact, i.e. on the connectors used between the two components, e.g. client-queue-client systems, peer-to-peer applications, etc.

Examples:

Most network-based applications, such as FTP and DHCP, are client-server style architectures.

Advantages:

  • Separation of concerns allows the two types of components to evolve independently.
  • Centralized and secure data access.
  • Ease of maintenance as the roles of each component are well established.

Limitations:

  • Application and data logic is combined on the server component.
  • Limited extensibility and scalability, since the central server can become a bottleneck




Mobile Code

Description:

This style is based mainly on the philosophy that transferring code (business logic) over the network is cheaper than transferring the application data, hence the name mobile code. Mobile code styles use mobility in order to dynamically change the distance between the processing and the source of data or destination of results. In all of the mobile code styles, a data element is dynamically transformed into a component.


Variants:

Code-on-demand, remote evaluation, and mobile agent.

Examples:

Mainly used in network management applications like Intelligent Mobile Agents.

Advantages:

  • Improves the proximity and quality of the interaction between processing and data
  • Reduces interaction costs, thereby improving efficiency and user-perceived performance

Limitations:

Suitable only for specific application areas.

Summary:
This concludes the introduction to commonly used architectural styles. In the next part of this series we will look at some of the hybrid styles that are popular today.

Thursday, 8 March 2012

Neural Networks - Primer


About the Post
This post is part of a 3-part introduction to Artificial neural networks. Part-I will introduce the basic concepts of neural networks, Part-II will introduce the common types of neural networks and Part-III will provide some programming examples that illustrate implementation of basic neural networks.

Part – I: Introduction to Artificial Neural Networks.

Introduction
Our brains perform sophisticated information processing tasks, using hardware and operation rules which are quite different from the ones on which conventional computers are based. The processors in the brain, the neurons (nerve cells), are rather noisy elements which operate in parallel. They are organized in dense networks and they communicate signals through a huge number of inter-neuron connections (the so-called synapses). These connections represent the 'program' of a network. By continuously updating the strengths of the connections, a network as a whole can modify and optimize its 'program', 'learn' from experience and adapt to changing circumstances.

The term neural network was traditionally used to refer to a network or circuit of biological neurons. However, modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes.
Artificial neural networks represent a type of computing that is based on the way that the brain performs computations. Neural networks are good at fitting non-linear functions and recognizing patterns. Consequently, they are used in the aerospace, automotive, banking, defense, electronics, entertainment, financial, insurance, manufacturing, oil and gas, robotics, telecommunications, and transportation industries.

Non-linearity of Neurons:
The conventional computers we use today are better suited to solving problems that exhibit linearity.

You might recall from high-school math classes that equations are typically classified as linear, polynomial, etc. A linear function is one that exhibits additivity and homogeneity:
  • additivity, f(x+y) = f(x) + f(y);
  • homogeneity, f(αx) = αf(x).
For example, f(x) = 2x is linear, whereas f(x) = x² is not, since f(1+1) = 4 while f(1) + f(1) = 2. (Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity; for example, an anti-linear map is additive but not homogeneous.)


So, conventional computers simply execute a detailed set of instructions, requiring programmers to know exactly which data can be expected and how to respond. Subsequent changes in the actual situation, not foreseen by the programmer, lead to trouble.
Neural networks, on the other hand, can adapt to changing circumstances i.e. can solve non-linear problems. Neural networks are superior to conventional computers in dealing with real-world tasks, such as e.g. communication (vision, speech recognition), movement coordination (robotics) and experience-based decision making (classification, prediction, system control), where data are often messy, uncertain or even inconsistent, where the number of possible situations is infinite and where perfect solutions are for all practical purposes non-existent.

The following comparison summarizes the main differences between conventional computers and biological neural networks (conventional vs. biological):

  • Operation speed: ~10^8 Hz vs. ~10^2 Hz
  • Connections per processing unit: ~10 vs. ~10^4
  • Signal/noise ratio: ~∞ vs. ~1
  • Signal velocity: ~10^8 m/sec vs. ~1 m/sec
  • Execution model: sequential operation vs. parallel operation
  • Programming model: external programming vs. self-programming and adaptation
  • Resilience: almost fatal in case of hardware failure or unforeseen data vs. robust against hardware failure and unforeseen data



Artificial Neural Networks:

An artificial neural network consists of a set of processing units (called neurons or nodes) that communicate with each other by sending signals over a large number of weighted connections. The neurons, along with their interconnections, are referred to as a neural net (network).

Single-layer Neural Networks:

These are the simplest neural networks: they have a single layer of input neurons connected directly to one or more output neurons.

The output is a weighted sum of all the inputs. The output neuron has a threshold (t); if the weighted sum of the inputs is >= t, the output neuron fires (i.e. output = 1).
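
A minimal Java sketch of this firing rule (the weights and threshold below are arbitrary, chosen so that the neuron computes a logical AND of its two inputs):

// Output of a single threshold neuron: fire (1) if the weighted sum of the
// inputs reaches the threshold t, otherwise stay silent (0).
class ThresholdNeuron {
    static int output(double[] inputs, double[] weights, double t) {
        double sum = 0.0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sum >= t ? 1 : 0;
    }

    public static void main(String[] args) {
        double[] weights = {1.0, 1.0};
        double t = 1.5;
        System.out.println(output(new double[]{1, 1}, weights, t)); // 1
        System.out.println(output(new double[]{1, 0}, weights, t)); // 0
    }
}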

Single-layer neural networks have many advantages:
  • Easy to set up and train
  • Explicit link to statistical models
    • Shared covariance Gaussian density function
    • Sigmoid output functions allow a link to posterior probabilities
  • Outputs are weighted sum of inputs: interpretable representation

But some big limitations:
  • Can only represent a limited set of functions
  • Decision boundaries must be hyper-planes
  • Can only perfectly separate linearly separable data


Multi-layer Neural Networks:
Multi-layer networks can model more general problems by adding further layers of processing units. They can solve classification problems for non-linear data sets by employing hidden layers, so that the input neurons are not directly connected to the output neurons. The additional hidden layers can be interpreted geometrically as additional hyper-planes, which enhance the separation capacity of the network.


Training of artificial neural networks

A neural network has to be configured such that the application of a set of inputs produces the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.
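
As an illustration of the second approach, here is a sketch of one simple learning rule (the classic perceptron rule, used here purely as an example): after each teaching pattern, every weight is nudged in proportion to the output error.

// One simple supervised learning rule (perceptron rule): adjust each weight
// in proportion to the difference between the desired and actual output.
class PerceptronTraining {
    static void trainOnPattern(double[] weights, double[] input, int target, double rate) {
        double sum = 0.0;
        for (int i = 0; i < input.length; i++) {
            sum += weights[i] * input[i];
        }
        int output = sum >= 0 ? 1 : 0;
        int error = target - output;               // desired minus actual
        for (int i = 0; i < input.length; i++) {
            weights[i] += rate * error * input[i]; // strengthen or weaken connections
        }
    }

    public static void main(String[] args) {
        double[] weights = {0.0, 0.0, 0.0};        // the last weight acts as a bias (its input is fixed at 1)
        double[][] patterns = {{0, 0, 1}, {0, 1, 1}, {1, 0, 1}, {1, 1, 1}};
        int[] targets = {0, 0, 0, 1};              // teaching outputs for logical AND
        for (int epoch = 0; epoch < 10; epoch++) {
            for (int p = 0; p < patterns.length; p++) {
                trainOnPattern(weights, patterns[p], targets[p], 0.1);
            }
        }
        // After a few passes the weights reproduce the AND patterns.
        System.out.println(java.util.Arrays.toString(weights));
    }
}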

There are multiple learning paradigms that can be employed to train a neural network:

  • Supervised learning or Associative learning in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).
  • Unsupervised learning or Self-organization in which an (output) neuron is trained to respond to clusters of pattern within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather the system must develop its own representation of the input stimuli.
  • Reinforcement learning, which may be considered an intermediate form of the above two types of learning. Here the learning machine performs some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and adjusts its parameters accordingly. Generally, parameter adjustment is continued until an equilibrium state is reached, after which there are no more changes in the parameters. Self-organizing neural learning may be categorized under this type of learning.
Summary:
This concludes the introduction to artificial neural networks. In the next part of this series we will look at the different types of neural networks.









Thursday, 10 November 2011

Garbage First G1 - Garbage Collection in Java

Introduction:
A preliminary version of the G1 (Garbage First) collector was introduced in Java 6 update 14 and has been further enhanced in Java 7. G1 offers a new dimension to garbage collection in Java: it performs whole-heap operations concurrently with the application, which greatly decreases pause times and thereby increases throughput.

Concept:
G1 employs multiple techniques to achieve lower pauses and higher throughput, but the core concept is to partition the heap into smaller, equal-sized regions and then identify which regions will yield the most space. That is, it performs a global mark phase, after which it determines which regions contain the fewest live objects, i.e. the regions whose collection will release the most memory for reuse.

By choosing to collect the least-occupied regions first, it also gives the more heavily occupied regions additional time for their objects to become garbage before they are collected.

Cost Model:
Since the heap is partitioned into equal-sized regions and their occupancy is known from the mark phase, G1 has a fairly accurate estimate of the cost of collecting a region within a given pause limit. This enables it to meet soft real-time goals with fairly good accuracy. The allowed pause time can be configured by the user depending on the application's throughput needs; for example, the user can specify that in every 200 ms no more than 50 ms should be spent on garbage collection. This can be configured using the following options:

-XX:MaxGCPauseMillis=50
-XX:GCPauseIntervalMillis=200

Another technique that G1 employs is popular object handling. A popular object is one that is referenced from many locations in the heap. A small set of heap regions is reserved for storing popular objects, and G1 tries to quickly identify popular objects and move them to these reserved regions. The reserved regions are given the lowest priority for collection.

To use G1 as the garbage collector, the following arguments have to be passed to the JVM:

-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
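
Putting the options together, a complete launch command might look like the following (MyApp.jar is just a placeholder for your own application):

java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -XX:GCPauseIntervalMillis=200 -jar MyApp.jar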

Summary:
G1 seems to offer very good results for applications that run in multi-processor environments with large memories and have soft real-time requirements. The benchmark results available so far are quite promising, and G1 is also the planned long-term replacement for the Concurrent Mark-Sweep (CMS) collector.

Thursday, 15 September 2011

Invoking a remote webservice from BPEL using Apache ODE

Introduction:
In this post we will examine how an existing web service can be invoked as part of a BPEL workflow.

Prerequisites:
Eclipse Helios
Tomcat 6.0.16
Apache ODE 1.3.4
BPEL Visual Designer 0.5.0  (Eclipse plugin).

Installation:
See following links for instructions on how to install these:
Tomcat: http://tomcat.apache.org/tomcat-6.0-doc/setup.html
ODE: Download the ODE WAR and deploy it to the Tomcat webapps folder.
http://ode.apache.org/user-guide.html
Eclipse: Download the Helios build of Eclipse and extract the zip.
http://help.eclipse.org/helios/index.jsp
BPEL Visual Designer: Open Eclipse and go to the menu Help→Install New Software.
Click the Add button and define a new Eclipse update site with the location:
http://download.eclipse.org/technology/bpel/update-site


Invoking the webservice from BPEL

Let's assume that there is a web service already implemented; call it EmployeeService. Now we want to implement a BPEL flow and, as part of its execution, invoke this existing EmployeeService.

1. Add a partnerLinkType for the EmployeeService to its WSDL:

<plnk:partnerLinkType name="EmployeeService">
<plnk:role name="EmployeeServiceProvider" portType="ns:EmployeeServicePortType"/>
</plnk:partnerLinkType>
Note: The portType should be the same as the portType defined in the WSDL.

2. Create BPEL Project in eclipse:
File->New->Other->BPEL2.0-> BPEL Project
Name the project as HelloWorld

3. Right click on the bpelContent folder under the newly created HelloWorld Project and then New->Other->BPEL2.0-> New BPEL Process File.
Name it HelloWorld.bpel

4. Create the BPEL process flow as shown below:



The source for this will look like:

<!-- HelloWorld BPEL Process [Generated by the Eclipse BPEL Designer] -->
<!-- Date: Tue May 10 18:11:54 IST 2011 -->
<bpel:process name="HelloWorld" targetNamespace="http://www.ibm.com/wd2/ode/HelloWorld"
suppressJoinFailure="yes" xmlns:tns="http://www.ibm.com/wd2/ode/HelloWorld"
xmlns:bpel="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
xmlns:emp="http://services.test.com">
<!-- Import the client WSDL -->
<bpel:import namespace="http://services.test.com"
location="EmployeeService.wsdl" importType="http://schemas.xmlsoap.org/wsdl/"></bpel:import>
<bpel:import location="HelloWorldArtifacts.wsdl"
namespace="http://www.ibm.com/wd2/ode/HelloWorld" importType="http://schemas.xmlsoap.org/wsdl/" />
<!-- ================================================================= -->
<!-- PARTNERLINKS -->
<!-- List of services participating in this BPEL process -->
<!-- ================================================================= -->
<bpel:partnerLinks>
<!-- The 'client' role represents the requester of this service. -->
<bpel:partnerLink name="client" partnerLinkType="tns:HelloWorld"
myRole="HelloWorldProvider" />
<bpel:partnerLink name="EmployeeService" partnerLinkType="emp:EmployeeService"
partnerRole="EmployeeServiceProvider" initializePartnerRole="yes"></bpel:partnerLink>
</bpel:partnerLinks>
<!-- ================================================================= -->
<!-- VARIABLES -->
<!-- List of messages and XML documents used within this BPEL process -->
<!-- ================================================================= -->
<bpel:variables>
<!-- Reference to the message passed as input during initiation -->
<bpel:variable name="input" messageType="tns:HelloWorldRequestMessage" />
<!-- Reference to the message that will be returned to the requester -->
<bpel:variable name="output" messageType="tns:HelloWorldResponseMessage" />
<bpel:variable name="id" messageType="emp:getEmployeeRequest" />
<bpel:variable name="employee" messageType="emp:getEmployeeResponse" />
</bpel:variables>
<!-- ================================================================= -->
<!-- ORCHESTRATION LOGIC -->
<!-- Set of activities coordinating the flow of messages across the -->
<!-- services integrated within this business process -->
<!-- ================================================================= -->
<bpel:sequence name="main">
<!-- Receive input from requester. Note: This maps to operation defined in HelloWorld.wsdl -->
<bpel:receive name="receiveInput" partnerLink="client"
portType="tns:HelloWorld" operation="process" variable="input"
createInstance="yes" />
<!-- Generate reply to synchronous request -->
<bpel:assign validate="no" name="Assign">
<bpel:copy>
<bpel:from>
<bpel:literal>
<emp:getEmployeeRequest xmlns:emp="http://services.test.com"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<emp:id>emp:id</emp:id>
</emp:getEmployeeRequest>
</bpel:literal>
</bpel:from>
<bpel:to variable="id" part="parameters"></bpel:to>
</bpel:copy>
<bpel:copy>
<bpel:from>
<bpel:literal>
<emp:getEmployeeResponse xmlns:emp="http://services.test.com"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<emp:return>emp:return</emp:return>
</emp:getEmployeeResponse>
</bpel:literal>
</bpel:from>
<bpel:to variable="employee" part="parameters"></bpel:to>
</bpel:copy>
<bpel:copy>
<bpel:from part="payload" variable="input">
<bpel:query queryLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0"><![CDATA[tns:input]]></bpel:query>
</bpel:from>
<bpel:to part="parameters" variable="id">
<bpel:query queryLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0">
<![CDATA[emp:id]]>
</bpel:query>
</bpel:to>
</bpel:copy>
</bpel:assign>
<bpel:invoke name="Invoke" partnerLink="EmployeeService"
operation="getEmployee" inputVariable="id" outputVariable="employee"
createInstance="yes"></bpel:invoke>
<bpel:assign validate="no" name="Assign1">
<bpel:copy>
<bpel:from>
<bpel:literal>
<tns:HelloWorldResponse xmlns:tns="http://www.ibm.com/wd2/ode/HelloWorld"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<tns:result>tns:result</tns:result>
</tns:HelloWorldResponse>
</bpel:literal>
</bpel:from>
<bpel:to variable="output" part="payload"></bpel:to>
</bpel:copy>
<bpel:copy>
<bpel:from part="parameters" variable="employee">
<bpel:query queryLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0"><![CDATA[emp:return]]></bpel:query>
</bpel:from>
<bpel:to part="payload" variable="output">
<bpel:query queryLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0"><![CDATA[tns:result]]></bpel:query>
</bpel:to>
</bpel:copy>
</bpel:assign>
<bpel:reply name="replyOutput" partnerLink="client"
portType="tns:HelloWorld" operation="process" variable="output" />
</bpel:sequence>
</bpel:process>


Description:
The receiveInput activity takes the input, which in this case is the employee id; this is assigned as the input to the web service that needs to be invoked (i.e. EmployeeService). The output of the service invocation (the getEmployee operation) is then assigned back to the BPEL process flow as the result.

5. Now create the ODE deployment descriptor. File->New->Other->BPEL2.0->Apache ODE Deployment Descriptor
Name it as deploy.xml

It should look like:

<?xml version="1.0" encoding="UTF-8"?>
<deploy xmlns="http://ode.fivesight.com/schemas/2006/06/27/dd"
xmlns:pns="http://www.ibm.com/wd2/ode/HelloWorld"
xmlns:emp="http://services.test.com"
xmlns:wns="http://www.ibm.com/wd2/ode/HelloWorld">
<process name="pns:HelloWorld">
<active>true</active>
<provide partnerLink="client">
<service name="wns:HelloWorldService" port="HelloWorldPort" />
</provide>
<invoke partnerLink="EmployeeService">
<service name="emp:EmployeeService" port="EmployeeServiceHttpSoap11Endpoint" />
</invoke>
</process>
</deploy>

Description:

The "process" specifies the BPEL process, its service name and port. The "invoke" defines the partner webservice that needs to be invoked as part the process execution.

6. Deploy the service. Just copy the HelloWorld Eclipse project folder to <TOMCAT_HOME>/webapps/ode/WEB-INF/processes/

7. You can check whether the process is deployed by opening the URL:
http://localhost:8080/ode/processes/