Google App Engine Java and GWT Application Development

Using transactions

As the App Engine documentation states,

 A transaction is a Datastore operation or a set of Datastore operations that either
 succeed completely, or fail completely. If the transaction succeeds, then all of its
 intended effects are applied to the Datastore. If the transaction fails, then none of
 the effects are applied.

The use of transactions can be the key to the stability of a multiprocess application
(such as a web app) whose different processes share the same persistent Datastore.
Without transactional control, the processes can overwrite each other’s data
updates midstream, essentially stomping all over each other’s toes. Many database
implementations support some form of transactions, and you may be familiar with
RDBMS transactions. App Engine Datastore transactions have a different set of
requirements and usage model than you may be used to.

First, it is important to understand that a “regular” Datastore write on a given entity
is atomic—in the sense that if you are updating multiple fields in that entity, they will
either all be updated, or the write will fail and none of the fields will be updated.
Thus, a single update can essentially be considered a (small, implicit) transaction—
one that you as the developer do not explicitly declare. If one single update is
initiated while another update on that entity is in progress, this can generate a
“concurrency failure” exception. In the more recent versions of App Engine, such
failures on single writes are now retried transparently by App Engine, so that you
rarely need to deal with them in application-level code.

However, often your application needs stronger control over the atomicity and
isolation of its operations, as multiple processes may be trying to read and write to
the same objects at the same time. Transactions provide this control.

For example, suppose we are keeping a count of some value in a “counter” field of
an object, which various methods can increment. It is important to ensure that if one
Servlet reads the “counter” field and then updates it based on its current value, no
other request has updated the same field between the time that its value is read and
when it is updated. Transactions let you ensure that this is the case: if a transaction
succeeds, it is as if it were done in isolation, with no other concurrent processes
‘dirtying’ its data.
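To see why this isolation matters, consider a plain-Java sketch (illustrative only, not App Engine code; the names are made up) of two request handlers that both read the counter before either writes it back:

```java
// Illustrative sketch only (plain Java, not App Engine code): two "requests"
// each read the counter, then write back read + 1. Because both reads happen
// before either write, one increment is silently lost -- exactly the lost
// update that running each read-modify-write in a transaction prevents.
class LostUpdateDemo {
    static int counter = 0;

    static int runInterleavedIncrements() {
        int readByA = counter; // request A reads 0
        int readByB = counter; // request B also reads 0
        counter = readByA + 1; // A writes 1
        counter = readByB + 1; // B also writes 1; A's increment is lost
        return counter;
    }
}
```

After two "increments" the counter holds 1, not 2: the second write clobbered the first.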

Another common scenario: you may be making multiple changes to the Datastore,
and you may want to ensure that the changes either all go through atomically, or
none do. For example, when adding a new Friend to a UserAccount, we want to
make sure that if the Friend is created, any related UserAccount object changes are
also performed.

While a Datastore transaction is ongoing, no other transactions or operations
can see the work being done in that transaction; it becomes visible only if the
transaction succeeds.

Additionally, queries inside a transaction see a consistent “snapshot” of the Datastore
as it was when the transaction was initiated. This consistent snapshot is preserved
even after the in-transaction writes are performed. Unlike some other transaction
models, with App Engine, a within-transaction read after a write will still show the
Datastore as it was at the beginning of the transaction.

Datastore transactions can operate only on entities that are in the same entity group.
We discuss entity groups later in this chapter.

Transaction commits and rollbacks

To specify a transaction, we need the concepts of a transaction commit and rollback.

A transaction must make an explicit “commit” call when all of its actions have been
completed. On successful transaction commit, all of the create, update, and delete
operations performed during the transaction are effected atomically.

If a transaction is rolled back, none of its Datastore modifications will be performed.
If you do not commit a transaction, it will be rolled back automatically when its
Servlet exits. However, it is good practice to wrap a transaction in a try/finally
block, and explicitly perform a rollback if the commit was not performed for some
reason. This could occur, for example, if an exception was thrown.

If a transaction commit fails (as would be the case if the objects under its control
had been modified by some other process since the transaction was started), the
transaction is automatically rolled back.

Example—a JDO transaction

With JDO, a transaction is initiated and terminated as follows:

 import javax.jdo.PersistenceManager;
 import javax.jdo.Transaction;

 PersistenceManager pm = PMF.get().getPersistenceManager();
 Transaction tx = pm.currentTransaction();
 try {
   tx.begin();
   // Do the transaction work
   tx.commit();
 } finally {
   if (tx.isActive()) {
     tx.rollback();
   }
 }
A transaction is obtained by calling the currentTransaction() method of the
PersistenceManager. Then, initiate the transaction by calling its begin() method.
To commit the transaction, call its commit() method. The finally clause in the
example above checks to see if the transaction is still active, and does a rollback if
that is the case.

While the preceding code is correct as far as it goes, it does not check to see if the
commit was successful, and retry if it was not. We will add that next.

App Engine transactions use optimistic concurrency

In contrast to some other transactional models, the initiation of an App Engine
transaction is never blocked. However, when the transaction attempts to commit, if
there has been a modification in the meantime (by some other process) of any objects
in the same entity group as the objects involved in the transaction, the transaction
commit will fail. That is, the commit not only fails if the objects in the transaction have
been modified by some other process, but also if any objects in its entity group have
been modified. For example, if one request were to modify a FeedInfo object while
its FeedIndex child was involved in a transaction as part of another request, that
transaction would not successfully commit, as those two objects share an entity group.

App Engine uses an optimistic concurrency model. This means that there is no check
when the transaction initiates, as to whether the transaction’s resources are currently
involved in some other transaction, and no blocking on transaction start. The commit
simply fails if it turns out that these resources have been modified elsewhere after
initiating the transaction. Optimistic concurrency tends to work well in scenarios
where quick response is valuable (as is the case with web apps) but contention is
rare, and thus, transaction failures are relatively rare.
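As a rough illustration of this model, here is a toy simulation (an assumption for teaching purposes, not the Datastore's actual mechanism): each entity group carries a version stamp, begin() merely records it without blocking, and commit() succeeds only if no other writer has bumped it in the meantime.

```java
// Toy simulation of optimistic concurrency (illustrative, not the real
// Datastore implementation): begin() records the entity group's version
// without taking any lock; commit() fails if another writer changed it.
class OptimisticTxn {
    private final int[] groupVersion; // shared version cell for the entity group
    private int seenVersion;

    OptimisticTxn(int[] groupVersion) {
        this.groupVersion = groupVersion;
    }

    void begin() {
        seenVersion = groupVersion[0]; // no lock taken at transaction start
    }

    boolean commit() {
        if (groupVersion[0] != seenVersion) {
            return false; // group modified elsewhere: commit fails
        }
        groupVersion[0]++; // our write bumps the version
        return true;
    }
}
```

A transaction that saw version 0 will fail to commit once a concurrent writer has moved the group to version 1; a fresh transaction begun afterwards commits normally.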

Transaction retries

With optimistic concurrency, a commit can fail simply due to concurrent activity on
the shared resource. In that case, if the transaction is retried, it is likely to succeed.

So, one thing missing from the previous example is that it does not take any action
if the transaction commit did not succeed. Typically, if a commit fails, it is worth
simply retrying the transaction. If there is some contention for the objects in the
transaction, it will probably be resolved when it is retried.

 PersistenceManager pm = PMF.get().getPersistenceManager();
 // ...
 try {
   for (int i = 0; i < NUM_RETRIES; i++) {
     pm.currentTransaction().begin();
     // ... the transaction work ...
     try {
       pm.currentTransaction().commit();
       break;
     } catch (JDOCanRetryException e1) {
       if (i == (NUM_RETRIES - 1)) {
         throw e1;
       }
     }
   }
 } finally {
   if (pm.currentTransaction().isActive()) {
     pm.currentTransaction().rollback();
   }
 }
As shown in the example above, you can wrap a transaction in a retry loop, where
NUM_RETRIES is set to the number of times you want to re-attempt the transaction.
If a commit fails, a JDOCanRetryException will be thrown. If the commit succeeds,
the for loop will be terminated.

If a transaction commit fails, this likely means that the Datastore has changed in the
interim. So, next time through the retry loop, be sure to start over in gathering any
information required to perform the transaction.

Transactions and entity groups

An entity’s entity group is determined by its key. When an entity is created, its key
can be defined as a child of another entity’s key, which becomes its parent. The child
is then in the same entity group as the parent. That child’s key could in turn be used
to define another entity’s key, which becomes its child, and so on. An entity’s key
can be viewed as a path of ancestor relationships, traced back to a root entity with no
parent. Every entity with the same root is in the same entity group. If an entity has
no parent, it is its own root.
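The ancestor-path idea can be sketched in plain Java (a hypothetical model; the real Key class encodes kind, ID, and parent differently):

```java
import java.util.List;

// Hypothetical model of key paths (not the real App Engine Key class): a key
// is the path of "Kind:id" steps from the root entity down to the entity
// itself. Two entities are in the same entity group exactly when their
// paths share the same root element.
class EntityGroupModel {
    static String root(List<String> keyPath) {
        return keyPath.get(0); // first path element is the root entity
    }

    static boolean sameEntityGroup(List<String> a, List<String> b) {
        return root(a).equals(root(b));
    }
}
```

For example, a UserAccount and a Friend keyed under it share a root and thus a group, while a Friend under a different UserAccount does not.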

Because entity group membership is determined by an entity’s key, and the
key cannot be changed after the object is created, this means that entity group
membership cannot be changed.

As introduced earlier, a transaction can only operate on entities from the same entity
group. If you try to access entities from different groups within the same transaction,
an error will occur and the transaction will fail.

You may recall from Chapter 5 that in App Engine, JDO owned relationships
place the parent and child entities in the same entity group. That is why, when
constructing an owned relationship, you cannot explicitly persist the children ahead
of time, but must let the JDO implementation create them for you when the parent
is made persistent. JDO will define the keys of the children in an owned relationship
such that they are the child keys of the parent object key. This means that the parent
and children in a JDO owned relationship can always be safely used in the same
transaction. (The same holds with JPA owned relationships).

So in the Connectr app, for example, you could create a transaction that encompasses
work on a UserAccount object and its list of Friends—they will all be in the same
entity group. But, you could not include a Friend from a different UserAccount in
that same transaction—it will not be in the same entity group.

This App Engine constraint on transactions—that they can only encompass members
of the same entity group—is enforced in order to allow transactions to be handled
in a scalable way across App Engine’s distributed Datastores. Entity group members
are always stored together, not distributed.

Creating entities in the same entity group

As discussed earlier, one way to place entities in the same entity group is to create a
JDO owned relationship between them; JDO will manage the child key creation so
that the parent and children are in the same entity group.

To explicitly create an entity with an entity group parent, you can use the App
Engine KeyFactory.Builder class. This is the approach used in the FeedIndex
constructor example shown previously. Recall that you cannot change an object’s key
after it is created, so you have to make this decision when you are creating the object.

Your “child” entity must use a primary key of type Key or String-encoded Key (as
described in Chapter 5); these key types allow parent path information to be encoded
in them. As you may recall, it is required to use one of these two types of keys for
JDO owned relationship children, for the same reason.

If the data class of the object for which you want to create an entity group parent
uses an app-assigned string ID, you can build its key as follows:

 // you can construct a Builder as follows:
 KeyFactory.Builder keyBuilder =
     new KeyFactory.Builder(Class1.class.getSimpleName(), parentIDString);
 // alternatively, pass the parent Key object:
 // KeyFactory.Builder keyBuilder = new KeyFactory.Builder(pkey);
 // Then construct the child key
 keyBuilder.addChild(Class2.class.getSimpleName(), childIDString);
 Key ckey = keyBuilder.getKey();

Create a new KeyFactory.Builder using the key of the desired parent. You may
specify the parent key as either a Key object or via its entity name (the simple name of
its class) and its app-assigned (String) or system-assigned (numeric) ID, as appropriate.
Then, call the addChild method of the Builder with its arguments—the entity name
and the app-assigned ID string that you want to use. Then, call the getKey() method
of Builder. The generated child key encodes parent path information. Assign the
result to the child entity’s key field. When the entity is persisted, its entity group parent
will be that entity whose key was used as the parent.

This is the approach we showed previously in the constructor of FeedIndex, creating
its key using its parent FeedInfo key.

 See the App Engine documentation for more information on key construction.

If the data class of the object for which you want to create an entity group parent
uses a system-assigned ID, then (because you don’t know this ID ahead of time), you
must go about creating the key in a different way. Create an additional field in your
data class for the parent key, of the appropriate type for the parent key, as shown in
the following code:

 @PrimaryKey
 @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
 private Key key;

 @Persistent
 @Extension(vendorName = "datanucleus", key = "gae.parent-pk",
     value = "true")
 private String parentKey;

Assign the parent key to this field prior to creating the object. When the object is
persisted, the data object’s primary key field will be populated using the parent key
as the entity group parent. You can use this technique with any child key type.

Getting the entity parent key from the child key

Once a “child” key has been created, you can call the getParent() method of Key on
that key to get the key of its entity group parent. You need only the key to make this
determination; it is not necessary to access the actual entity. getParent() returns
a Key object. We used this technique earlier in the Splitting a model by creating an
“index” and a “data” entity section of this chapter.

If the primary key field of the parent data class is an app-assigned string ID or
system-assigned numeric ID, you can extract that value by calling the getName() or
getId() method of the Key, respectively.

You can convert to/from the Key and String-encoded Key formats using the
stringToKey() and keyToString() methods of the KeyFactory.

Entity group design considerations

Entity group design considerations can be very important for the efficient execution
of your application.

Entity groups that get too large can be problematic, as an update to any entity in
a group while a transaction on that group is ongoing will cause the transaction’s
commit to fail. If this happens a lot, it affects the throughput. But, you do want your
entity groups to support transactions useful to your application. In particular, it is
often useful to place objects with parent/child semantics into the same entity group.
Of course, JDO does this for you, for its owned relationships. The same is true for
JPA’s version of owned relationships.

It is often possible to design relatively small entity groups, and then use transactional
tasks to achieve required coordination between groups. Transactional tasks are
initiated from within a transaction, and are enqueued only if the transaction commits.
However, they operate outside the transaction, and thus it is not required that the
objects accessed by the transactional task be in the transaction’s entity group. We
discuss and use transactional tasks in the transactional task section of this chapter.

Also, it can be problematic if too many requests (processes) are trying to access the
same entity at once. This situation is easy to generate if, for example, you have a
counter that is updated each time a web page is accessed. Contention for the shared
object will produce lots of transaction retries and cause significant slowdown. One
solution is to shard the counter (or similar shared object), creating multiple counter
instances that are updated at random and whose values are aggregated to provide
the total when necessary. Of course, this approach can be useful for objects other
than counters.
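The sharding idea can be sketched as follows (plain Java standing in for multiple counter entities; in App Engine each shard would be its own entity in its own small entity group):

```java
import java.util.Random;

// Sketch of a sharded counter (plain Java stand-in for Datastore entities).
// Writers pick one shard at random, so concurrent updates rarely contend on
// the same shard; readers sum all shards to obtain the total.
class ShardedCounter {
    private final int[] shards;
    private final Random random = new Random();

    ShardedCounter(int numShards) {
        this.shards = new int[numShards];
    }

    void increment() {
        shards[random.nextInt(shards.length)]++; // touch only one shard
    }

    int total() {
        int sum = 0;
        for (int shard : shards) {
            sum += shard;
        }
        return sum;
    }
}
```

No matter which shards the increments land on, the aggregated total is exact; only the per-shard contention is reduced.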

 We will not discuss entity sharding further here, but the App Engine
 documentation provides more detail.

What you can do inside a transaction

As discussed earlier, a transaction can operate only over data objects from the same
entity group. Each entity group has a root entity. If you try to include multiple
objects with different root entities within the same transaction, it is an error.

Within a transaction, you can operate only on (include in the transaction) those
entities obtained via retrieval by key, or via a query that includes an ancestor
filter. That is, only objects obtained in those ways will be under transactional
control. Queries that do not include an ancestor filter may be performed within the
transactional block of code without throwing an error, but the retrieved objects will
not be under transactional control.

 Queries with an "ancestor filter" restrict the set of possible hits to objects
 only with the given ancestor, thus restricting the hits to the same
 entity group. JDO allows querying on "entity group parent key fields",
 specified as such via the @Extension(vendorName="datanucleus",
 key="gae.parent-pk", value="true") annotation, as a
 form of ancestor filter. See the JDO documentation for more information.

So, you may often want to do some preparatory work to find the keys of the object(s)
that you wish to place under transactional control, then initiate the transaction and
actually fetch the objects by their IDs within the transactional context. Again, if there
are multiple such objects, they must all be in the same entity group.

As discussed earlier, after you have initiated a transaction, you will be working
with a consistent “snapshot” of the Datastore at the time of transaction initiation. If
you perform any writes during the transaction, the snapshot will not be updated to
reflect them. Any subsequent reads you make within the transaction will still reflect
the initial snapshot and will not show your modifications.

When to use a transaction

There are several scenarios where you should be sure to use a transaction. They
should be employed if:

  • You are changing a data object field relative to its current value. For example,
    this would be the case if you are modifying (say, adding to) a list. It is
    necessary to prevent other processes from updating that same list between
    the time you read it and the time you store it again. In our app, for example,
    this will be necessary when updating a Friend and modifying its list of urls.

    In contrast, if a field update is not based on the field’s current value, then by
    definition, it suggests that you don’t care if/when other updates to the field
    occur. In such a case, you probably don’t need to employ a transaction, as the
    entity update itself is guaranteed to be atomic.

  • You want operations on multiple entities to be treated atomically. That is,
    there are a set of modifications for which you want them all to happen, or
    none to happen. In our app, for example, we will see this when creating a
    Friend in conjunction with updating a UserAccount. Similarly, we want
    to ensure that a FeedInfo object and its corresponding FeedIndex are
    created together.
  • You are using app-assigned IDs and you want to either create a new object
    with that ID, or update the existing object that has that ID. This requires a test
    and then a create/update, and these should be done atomically. In our app,
    for example, we will see this when creating feed objects, whose IDs are URLs.

    If you are using system-assigned IDs, then this situation will not arise.

  • You want a consistent “snapshot” of state in order to do some work. This
    might arise, for example, if you need to generate a report with mutually
    consistent values.

Adding transactional control to the Connectr application

We are now equipped to add transactional control to the Connectr application.

First, we will wrap transactions around the activities of creating, deleting, and
modifying Friend objects. We will also use transactions when creating feed objects:
because FeedInfo objects use an app-assigned ID (the feed URL string), we must
atomically test whether that ID already exists, and if not, create a new object and its
associated FeedIndex object. We will also use transactions when modifying the list
of Friend keys associated with a feed, as this is a case where we are updating a field
(the friendKeys list) relative to its current value.

We will not place the feed content updates under transactional control. The content
updates are effectively idempotent—we don’t really care if one content update
overwrites another—and we want these updates to be as quick as possible.

However, there is a complication with this approach. When we update a Friend’s
data, their list of urls may change. We want to make any accompanying changes to
the relevant feed objects at the same time—this may involve creating or deleting feed
objects or editing the friendKey lists of the existing ones.

We’d like this series of operations to be under control of the same transaction, so that
we don’t update the Friend, but then fail to make the required feed object changes.
But they can’t be—a Friend object is not in the same entity group as the feed objects.
The Friend and feed objects have a many-to-many relationship and we do not want
to place them all in one very large entity group.

App Engine’s transactional tasks will come to our rescue.

Transactional tasks

App Engine supports a feature called transactional tasks. We have already
introduced some use of tasks and Task Queues in Chapter 7 (and will return to the
details of task configuration in Chapter 12). Tasks have specific semantics in the
context of a transaction.

 If you add a task to a Task Queue within the scope of a transaction,
 that task will be enqueued if and only if the transaction is successful.

A transactional task will not be placed in the queue and executed unless the
transaction successfully commits. If the transaction rolls back, the task will not be
run. The specified task is run outside of the transaction and is not restricted to objects
in the same entity group as the original transaction. At the time of writing, you can
enqueue up to five transactional tasks per transaction.

You may recall from Chapter 7 that once a task is enqueued and executed, if it returns
with error status, it is re-enqueued to be retried until it succeeds. So, you can use
transactional tasks to ensure that either all of the actions across multiple entity
groups are eventually performed or none are performed. There will be a (usually
small) period of time when the enqueued task has not yet been performed and thus
not all of the actions are completed, but eventually, they will all (or none) be done.
That is, transactional tasks can be used to provide eventual consistency across more
than one entity group.

We will use transactional tasks to manage the updates, deletes, and adds of feed
URLs when a Friend is updated. We place the changes to the Friend object
under transactional control. As shown in the following code from server.
FriendsServiceImpl, within that transaction we enqueue a transactional task that
performs the related feed object modifications. This task will be executed if and only
if the transaction on the Friend object commits.

 public class FriendsServiceImpl extends RemoteServiceServlet
     implements FriendsService {

   private static final int NUM_RETRIES = 5;
   private static Logger logger =
       Logger.getLogger(FriendsServiceImpl.class.getName());

   public FriendDTO updateFriend(FriendDTO friendDTO) {
     PersistenceManager pm = PMF.getTxnPm();
     if (friendDTO.getId() == null) { // create new
       Friend newFriend = addFriend(friendDTO);
       return newFriend.toDTO();
     }
     Friend friend = null;
     try {
       for (int i = 0; i < NUM_RETRIES; i++) {
         pm.currentTransaction().begin();
         friend = pm.getObjectById(Friend.class, friendDTO.getId());
         Set<String> origurls = new HashSet<String>(friend.getUrls());
         // ... copy the updated fields from friendDTO onto friend ...
         // delete feed information from feedids cache
         // we only need to do this if the URLs set has changed...
         if (!origurls.equals(friendDTO.getUrls())) {
           if (!(origurls.isEmpty() && friendDTO.getUrls().isEmpty())) {
             // build task payload:
             Map<String, Object> hm = new HashMap<String, Object>();
             hm.put("newurls", friendDTO.getUrls());
             hm.put("origurls", origurls);
             hm.put("replace", true);
             hm.put("fid", friendDTO.getId());
             byte[] data = Utils.serialize(hm);
             // add transactional task to update the url information;
             // the task will not be run if the
             // transaction does not commit.
             Queue queue = QueueFactory.getDefaultQueue();
             queue.add(TaskOptions.Builder.url("/updatefeedurls")
                 .payload(data));
           }
         }
         try {
           pm.currentTransaction().commit();
           logger.info("in updateFriend, did successful commit");
           break;
         } catch (JDOCanRetryException e1) {
           if (i == (NUM_RETRIES - 1)) {
             throw e1;
           }
         }
       } // end for
     } catch (Exception e) {
       logger.warning(e.getMessage());
       friendDTO = null;
     } finally {
       if (pm.currentTransaction().isActive()) {
         pm.currentTransaction().rollback();
         logger.warning("did transaction rollback");
         friendDTO = null;
       }
       pm.close();
     }
     return friendDTO;
   }
 }

The previous code shows the updateFriend method of server.
FriendsServiceImpl. Prior to calling the transaction commit, a task (to update
the feed URLs) is added to the default Task Queue. The task won’t be actually
enqueued unless the transaction commits. A similar model, including the use of a
transactional task, is employed for the deleteFriend and addFriend methods of
FriendsServiceImpl.
We configure the task with information about the “new” and “original” Friend
urls lists, which it will use to update the feed objects accordingly. Because this task
requires a more complex set of task parameters than we have used previously, we
convert the task parameters to a Map, and pass the Map in serialized form as a byte[]
task payload. See the Task Parameters: Sending a Payload of byte[] Data as the Request
section of this chapter for more information about payload generation and use.

This task is implemented by a Servlet, server.servlets.
UpdateFeedUrlsServlet, which is accessed via /updatefeedurls. This Servlet
covers the three cases of URL list modification—adds, deletes, and updates.

The following code is from the server.servlets.UpdateFeedUrlsServlet class.

 public class UpdateFeedUrlsServlet extends HttpServlet {

   @SuppressWarnings("unchecked")
   public void doPost(HttpServletRequest req, HttpServletResponse resp)
       throws IOException {
     Set<String> badurls = null;
     // deserialize the request
     Object o = Utils.deserialize(req);
     Map<String, Object> hm = (Map<String, Object>) o;
     Set<String> origurls = (Set<String>) hm.get("origurls");
     Set<String> newurls = (Set<String>) hm.get("newurls");
     Boolean replace = (Boolean) hm.get("replace");
     Boolean delete = (Boolean) hm.get("delete");
     String fid = (String) hm.get("fid");
     if (delete != null && delete) {
       if (origurls != null) {
         FeedIndex.removeFeedsFriend(origurls, fid);
       }
     } else {
       // ... handle the add/update cases ...
     }
   }
 }

The previous code shows the initial portion of the doPost method of
UpdateFeedUrlsServlet. It shows the Servlet deserialization of the request, which
holds the task payload. The resulting Map is used to define the task parameters. If the
delete flag is set, the origurls Set is used to indicate the FeedIndex objects from
which the Friend key (fid) should be removed.

Similarly, the following code shows the latter portion of the doPost method of
server.servlets.UpdateFeedUrlsServlet. If the task is not a deletion request,
then depending upon how the different task parameters are set, either the FeedIndex.
addFeedURLs method or the FeedIndex.updateFeedURLs method is called.

 // latter portion of doPost, inside UpdateFeedUrlsServlet, which defines:
 // private static final int NUM_RETRIES = 5;
 if (origurls == null) {
   // then add only -- no old URLs to deal with
   badurls = FeedIndex.addFeedURLs(newurls, fid);
 } else { // update
   badurls = FeedIndex.updateFeedURLs(newurls, origurls, fid, replace);
 }
 if (!badurls.isEmpty()) {
   // then update the Friend to remove those bad urls from its set.
   // Perform this operation in a transaction
   PersistenceManager pm = PMF.getTxnPm();
   try {
     for (int i = 0; i < NUM_RETRIES; i++) {
       pm.currentTransaction().begin();
       Friend.removeBadURLs(badurls, fid, pm);
       try {
         pm.currentTransaction().commit();
         break;
       } catch (JDOCanRetryException e1) {
         if (i == (NUM_RETRIES - 1)) {
           throw e1;
         }
       }
     }
   } finally {
     if (pm.currentTransaction().isActive()) {
       pm.currentTransaction().rollback();
       logger.warning("did transaction rollback");
     }
     pm.close();
   }
 }

In the case where URLs were added to the urls list of a Friend, some of the new
URLs may have been malformed, or their endpoints unresponsive. In this case,
they are returned as badurls. It is necessary to update the Friend object with this
information—specifically, to remove any bad URLs from its urls list. This operation
is performed in a transaction within the task Servlet. As with previous examples, the
transaction may be retried several times if there are commit problems.

What if something goes wrong during the feed update task?

In UpdateFeedUrlsServlet, in addition to the Friend transaction, the operations on
the feed objects are using transactions under the hood (in the FeedIndex methods).

Because all the transactions initiated by the task may be retried multiple times, it is
quite unlikely that any of them will not go through eventually. However, it is not
impossible that one might fail to eventually commit. So, we want to consider what
happens if a failure were to take place.

The FeedIndex operations can be repeated multiple times without changing their
effect, as we are just adding and removing specified friend keys from the sets of
keys. So, if the task’s Friend update transaction in the example above fails after its
multiple retries, the Servlet will throw an exception and the entire task will be retried
until it succeeds (recall that if a task returns an error, it is automatically retried). It is
okay to redo the task’s feed operations, so this scenario will cause no problems.

If a FeedIndex URL add/update transaction fails after multiple retries, this will
result in that URL being marked as “bad”. So, no information in the system ends up
as inconsistent, though the client user will have to re-enter the “bad” URL.

If a FeedIndex URL delete transaction fails after multiple retries, it will throw an
exception, which will result in the task being retried until it succeeds. Again,
this causes no problems.

So, even if the transactions performed in the task themselves fail to go through on
the first invocation of the task, the system will nevertheless reach an eventually
consistent state.

Task parameters—sending a payload of byte[] data as the request

The previous code, from FriendsServiceImpl, showed a task using a byte[]
“payload” as specification, rather than the individual string params that we had
employed in Chapter 7. The following syntax is used to specify a payload, where
data refers to a byte[]:

 Queue queue = QueueFactory.getDefaultQueue();
 queue.add(TaskOptions.Builder.url("/updatefeedurls")
     .payload(data));

When the task is invoked, the task Servlet’s request parameter, req, will
hold the payload data and may be deserialized from the request byte stream
back into its original object, for example, as shown in the previous code,
UpdateFeedUrlsServlet. This can be a useful technique when the task parameters
are too complex to easily deal with as Strings. In our case, we want to pass Sets as
parameters. So for the UpdateFeedUrlsServlet task, a Map containing the various
task parameters is constructed and used for the payload, then deserialized in the
task Servlet. Methods of the server.utils.Utils class support the serialization and
deserialization. As an alternative approach, you could also pass as task params the
identifier(s) of Datastore objects containing the parameter information. The task
would then load the information from the Datastore when it is executed.

 Due to a GAE issue at the time of writing, base64 encoding and decoding
 is necessary for successful (de)serialization; this is done in server.
 utils.Utils (thanks to a post by Vince Bonfanti for this insight).
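A minimal sketch of such helpers, in plain Java, might look like the following. The class and method names are assumptions for illustration (the real server.utils.Utils signatures may differ): the object graph is run through Java serialization, and the resulting bytes are base64-encoded, mirroring the encoding step the note above describes.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Base64;

// Sketch of payload helpers in the spirit of server.utils.Utils (names and
// signatures are assumptions): serialize an object graph to a base64-encoded
// byte[] task payload, and decode/deserialize it on the task servlet side.
class PayloadCodec {
    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(obj);
        oos.close();
        return Base64.getEncoder().encode(bos.toByteArray()); // base64 for safe transport
    }

    static Object deserialize(byte[] payload)
            throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(payload);
        ObjectInputStream ois =
            new ObjectInputStream(new ByteArrayInputStream(raw));
        return ois.readObject();
    }
}
```

A HashMap of task parameters serialized this way round-trips back to an equal Map on the receiving side, which is all the task Servlet needs.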
