Thursday, December 3, 2015

Consuming Siebel web services with Perl

The first time that I tried to consume a Siebel web service with Perl wasn't a pleasant experience.

Going back to 2009, I saw an opportunity to flex my programming muscles by using one to retrieve contact information. It was not only a chance to put my old book about web services with Perl to use, but also a way to greatly reduce operations time at the company I was working for at the time.

Let's be fair: that book was really outdated. By the time it was published WSDL was already available, but things were much easier since SOAP was RPC-based, not document-based. Guess which style those Siebel web services used? Document.

I exhausted all my options with SOAP::Lite. I even tried manually editing the WSDL exported from Siebel, gave SOAP::WSDL a shot (better, but not good enough) and, after hours of debugging, I decided to have a chat with a fellow .Net programmer who worked in the same company to see if he was able to consume the WSDL from Siebel and do something with the results. To my shame, not only could he, but he had a working POC in 15 minutes.

I had already found references to XML::Compile at that point, but its documentation was poor... and given the amount of time already spent, I went with my colleague's .Net code.

Let's now come back to the present. This year I decided to give SOAP in Perl another try, and the choice was XML::Compile. After reading its documentation (which has improved compared to the past, but still has a lot of room to get better), starting from XML::Compile::WSDL11, after a while I was able to pull together a working piece of code:

    use File::Spec;
    use XML::Compile::WSDL11;
    use XML::Compile::SOAP11;
    use XML::Compile::Transport::SOAPHTTP;

    my $wsdlfile = File::Spec->catfile( 't', 'SWIContactServices.WSDL' );
    my %request = (
        ListOfSwicontactio => {
            Contact =>
                { Id => '0-1', FirstName => 'Siebel', LastName => 'Administrator' }
        }
    );

    my $wsdl = XML::Compile::WSDL11->new($wsdlfile);

    my $call = $wsdl->compileClient(
        operation      => 'SWIContactServicesQueryByExample',
        transport_hook => \&do_auth
    );

    my ( $answer, $trace ) = $call->(%request);

    if ( my $e = $@->wasFatal ) {

        # deal with the error

    } else {

        # do something with the response

    }
The "SWI Contact Services" is a vanilla Siebel inbound web service, very simple indeed. In this case, I'm just providing it some information regarding SADMIN and recovering the contact details from the response payload. Not very exciting, but the SADMIN contact will always be in the database.

Siebel has its own authentication process instead of using one of the standards available out there. Unless the inbound web service in question uses that degenerate "user and password in the URL" method, you will probably want to use Siebel session management. That's exactly what the sub reference do_auth does in this sample code: such a sub is well documented in the XML::Compile::WSDL11 Pod, including the manipulation of the SOAP envelope containing the SOAP header, which is exactly what Siebel expects you to do in order to use session management.

Session management in Siebel has several advantages, including improved performance. By receiving an authentication token, several authentication steps are skipped until that token is no longer valid.

With that in mind, I think that doing this over and over would be quite boring. It was time to cook something and deliver it to CPAN: enter Siebel::SOAP::Auth.

By using a tiny object built with Moo, you can now not only provide the Siebel authentication but also have it handled entirely automatically. Here is the same code using it:

    use File::Spec;
    use XML::Compile::WSDL11;
    use XML::Compile::SOAP11;
    use XML::Compile::Transport::SOAPHTTP;
    use Siebel::SOAP::Auth;

    my $wsdlfile = File::Spec->catfile( 't', 'SWIContactServices.WSDL' );
    my %request = (
        ListOfSwicontactio => {
            Contact =>
                { Id => '0-1', FirstName => 'Siebel', LastName => 'Administrator' }
        }
    );

    my $wsdl = XML::Compile::WSDL11->new($wsdlfile);
    my $auth = Siebel::SOAP::Auth->new(
        user          => 'sadmin',
        password      => 'XXXXXXX',
        token_timeout => MAGIC_NUMBER
    );

    my $call = $wsdl->compileClient(
        operation      => 'SWIContactServicesQueryByExample',
        transport_hook => sub {
            my ( $request, $trace, $transporter ) = @_;
            # the request is modified in place
            my $new_request = $auth->add_auth_header($request);
            return $trace->{user_agent}->request($new_request);
        }
    );

    my ( $answer, $trace ) = $call->(%request);

    if ( my $e = $@->wasFatal ) {

        # deal with the error

    } else {

        # do something with the answer
        $auth->find_token($answer);

    }

The MAGIC_NUMBER above is a constant defining the number of seconds after which the token times out.

The secret is to use the instance inside a sub reference, pass it the $request (which holds the SOAP envelope to be manipulated) and return the result of the request made, exactly the same thing you would do in the do_auth sub. You then need to pass the answer received to the $auth object's find_token method so it can extract the new token.

The good news is that an instance of Siebel::SOAP::Auth, kept alive and used during the whole time you need a Siebel web service, will take care of renewing the token and, hopefully, even avoid an additional round trip to the server by requesting a new token before the current one expires. In case it misses that opportunity, you need to be careful and check the exception thrown by $@->wasFatal and do something with it (probably just repeat the request).

I'm still waiting for some bad news (just because I haven't found a problem doesn't mean it's not there). Please let me know if you find any!

Tuesday, November 10, 2015

eeePC 701 and OpenBSD

The EeePC is a quite old netbook nowadays, but there are a lot of folks around who still have one and keep putting it to good use.

I myself have a tiny 701 model. Some years ago I used it basically for e-mail and Internet browsing, but its limited disk space (4GB SSD) and single core processor make it useless even for those requirements nowadays (especially the websites that still use the bitch called Adobe Flash). And, let's agree, any smartphone can beat it nowadays in terms of processor and memory.

Anyway, I was looking for an excuse to learn a bit of OpenBSD. I was always curious about it but never gave myself a chance to learn it.

The "excuse" I was looking for was setting up a CPAN Reporter Smoker machine with OpenBSD, and my second thought (the first was a VM in VirtualBox) was installing it on my EeePC.

I tried OpenBSD 5.7 (a few weeks before the release of 5.8) and the setup ran smoothly. The OpenBSD installer offered a quite pleasant setup process, with almost no questions and easy partitioning (very different from my first experience with it back in 2000, when it offered disk space in sectors and let me do the math myself!). Keyboard configuration (usually a pain to configure, since I use ABNT2) was pretty simple too.

OpenBSD also does a good job of dealing with the limited disk space available on my EeePC, since its base install is really minimal. I just removed the X Window file sets, since the CPAN Reporter Smoker doesn't need them (and the screen size is really small), and fired away the installer.

Unfortunately, those were the good points of this "marriage": there are two big issues with installing OpenBSD on the EeePC 701:
  1. File system (FFS).
  2. No fan control.
The default OpenBSD file system (FFS) is really slow compared to the options available on Linux. Even after setting up the partitions to use noatime and soft updates, my EeePC took a long time (most of it in I/O) to prepare the CPAN indexes. That's something I could have worked around with mfs, but the 512MB of memory on the EeePC does not allow it. And, to be fair, I'm still working on providing statistics for each module tested by CPAN::Reporter::Smoker, so something else might be making the test execution slower without me being aware of it.
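For reference, those mount options live in /etc/fstab on OpenBSD. A sketch of what such an entry might look like (the device name and mount point here are examples, not my actual layout):

```
/dev/sd0d /usr ffs rw,noatime,softdep 1 2
```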

Also, FFS does not implement TRIM (at least that's what my research found), so support for SSD disks on OpenBSD is basically pretty weak.

The EeePC fan is another problem. The hardware configuration lets the netbook reach high temperatures (and I'm not overclocking the processor) without spinning the fans properly (yes, it really sucks). Since the OpenBSD 5.7 kernel is not capable of detecting the fan controls, there is nothing to be done about it. I searched for patches/userland software, but none seems to be "production ready".


So, my conclusion about the experiment is: although OpenBSD installs well on the limited hardware of the EeePC 701, the limitations explained above (especially number 2) tell me that it is not even that safe to leave the EeePC running with it for long periods, something that a CPAN Reporter Smoker requires. And, just for comparison, I was able to easily install Debian Jessie on the EeePC with the fancontrol and lm-sensors packages, run a quick setup and voilà: the fans worked like a charm.

Thursday, April 30, 2015

Checking all modules configured in OHS

That's something I've been looking for since I started working with OHS (a cousin of the Apache web server, in case you don't know it).

Apache will happily tell you the modules it is running with by usage of apachectl -M (or httpd -M, depending on the distribution you're using).

Well, that's not that easy for OHS, at least not while you're in the shell. Based on a tip that I found about /proc (see details here) I wrote this little Bash script for OHS 11g:



#!/bin/bash
PID=$1
full_path=$2

if [ -z "$PID" -o -z "$full_path" ]; then
    cat <<BLOCK
Usage: $0 <PID> <path to httpd>
BLOCK
    exit 1
fi

temp_file=$(mktemp)
xargs --null --max-args=1 echo export < /proc/${PID}/environ > "${temp_file}"
source "$temp_file"
"${full_path}" -M
rm -v "$temp_file"
Then I call the shell script passing the PID of a running OHS process and the complete pathname of the httpd.worker binary (probably I should try to fetch this information from /proc too). Here is a sample of the output:

bash-3.2$ ./ 28377 /foobar/ias/product/OHS/ohs/bin/httpd.worker
Loaded Modules:
core_module (static)
mpm_worker_module (static)
http_module (static)
so_module (static)
oralog_module (static)
ohs_module (static)
ora_audit_module (static)
file_cache_module (shared)
vhost_alias_module (shared)
env_module (shared)
log_config_module (shared)
mime_magic_module (shared)
mime_module (shared)
negotiation_module (shared)
status_module (shared)
info_module (shared)
include_module (shared)
autoindex_module (shared)
dir_module (shared)
cgi_module (shared)
asis_module (shared)
imagemap_module (shared)
actions_module (shared)
speling_module (shared)
userdir_module (shared)
alias_module (shared)
authz_host_module (shared)
auth_basic_module (shared)
authz_user_module (shared)
authn_file_module (shared)
authn_anon_module (shared)
authn_dbm_module (shared)
proxy_module (shared)
proxy_http_module (shared)
proxy_ftp_module (shared)
proxy_connect_module (shared)
proxy_balancer_module (shared)
cern_meta_module (shared)
expires_module (shared)
headers_module (shared)
usertrack_module (shared)
unique_id_module (shared)
setenvif_module (shared)
context_module (shared)
rewrite_module (shared)
onsint_module (shared)
weblogic_module (shared)
plsql_module (shared)
swe_module (shared)
Syntax OK
«/tmp/tmp.hKZiBm6589» deleted
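About fetching the binary path from /proc instead of passing it as an argument: on Linux, /proc/<PID>/exe is a symbolic link to the running executable. A minimal sketch, using the current shell's own PID as a stand-in for a real httpd.worker PID:

```shell
#!/bin/bash
# Resolve a process's executable path from /proc (Linux only).
# $$ (this shell's own PID) stands in for a real httpd.worker PID.
pid=$$
exe_path=$(readlink -f "/proc/${pid}/exe")
echo "${exe_path}"
```

With that, the script above would only need the PID, deriving full_path by itself.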

Impersonating a process on Linux

That's a short tip, but I wish I had come across it earlier. It would have saved me a lot of effort doing the same thing by hand.

Sometimes, for one reason or another, you want to execute a program on Linux with a different set of environment variables. The program might be using complex and/or hidden configurations that you will need to find and reproduce in order to execute it properly.

The thing is, if you already have an instance of this program running, /proc will do that for you easily. This is one of the possible implementations, with a good help from xargs:

PID=$1
temp_file=$(mktemp)
xargs --null --max-args=1 echo export < /proc/${PID}/environ > "${temp_file}"
source "$temp_file"
rm -v "$temp_file"

In this Bash script, I set up a temporary file to hold the environment configuration.

Then the script reads /proc/${PID}/environ, where ${PID} is the variable holding the process id you're interested in.

This /proc file holds every environment variable set for that process, separated by the NUL character. xargs will happily parse every entry in a loop, generating a single export <variable> line for each item retrieved. And everything gets redirected to the temporary file.
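To see the trick working end to end, here is a self-contained sketch that applies the same xargs call to the current shell's own PID (a stand-in for the process you actually want to impersonate):

```shell
#!/bin/bash
# Turn a process's NUL-separated environ entries into "export" lines.
pid=$$                  # stand-in for the PID you want to impersonate
temp_file=$(mktemp)
xargs --null --max-args=1 echo export < "/proc/${pid}/environ" > "${temp_file}"
first_line=$(head -n 1 "${temp_file}")
echo "${first_line}"    # prints the first generated line
rm "${temp_file}"
```

Note that values containing spaces or shell metacharacters would need quoting before the generated file is safe to source; the sketch keeps the post's simple approach.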

That's it! No need to search around configuration files and/or debug shell scripts to set up those variables. If the configuration works for the program, it will certainly work for you too.

Friday, March 20, 2015

Your eScript loop sucks!

Scripting in Siebel has a bad reputation.

I haven't been around Siebel that long, but I believe this bad reputation may have started because Siebel began its scripting with the VBScript language. It is just that easy to write terrible code with it.

If you don't agree, please leave some comments explaining why Microsoft moved away from it in favor of other programming languages.

Anyway, I keep hearing that scripting is bad and that you should avoid it at all costs.

Yeah, right.

"Because learning how to program is not for everybody" is the rest of the recommendation that nobody will give you.

That's not because programming with eScript requires an "Einstein", but because programming requires attention and dedication. If you want to do it right, you must practice it and study it. There is no way around it; this is not the same thing as drawing squares and dragging arrows around.

And, to finish my rant, let's go through an example. You must have seen this a lot in your life developing with Siebel:

var OfferBO = TheApplication().GetBusObject("Offer");
var OfferBC = OfferBO.GetBusComp("Offer");

OfferBC.SetSearchSpec("Id", TreatmentID);
OfferBC.ExecuteQuery(ForwardOnly);
var rec = OfferBC.FirstRecord();
while (rec) {
    //do something with the data
    rec = OfferBC.NextRecord();
}

OfferBC = null;
OfferBO = null;

It's the same old story to get data from the data layer. Let's ignore for now that this lame code is not using try-catch; just please take a look at the while block.

You did? Right... now, if you're doing loops like that PLEASE STOP for God's sake!

eScript has the proper statement for that kind of thing, and I wonder why people keep writing the same damn loop:

if ( OfferBC.FirstRecord() ) {

    do {
        //do something with the data
    } while ( OfferBC.NextRecord() );

} else {

    TheApplication().RaiseErrorText("What the hell?!? Where is my data???");

}

Now you have it: cleaner code, no extra variable to control the execution flow, and proper validation that you have the data you're looking for (or doing something about it otherwise).

Let's tweak a little further:

    var OfferBO: BusObject = TheApplication().GetBusObject("Offer");
    var OfferBC: BusComp = OfferBO.GetBusComp("Offer");

    try {

        //Get the priority
        OfferBC.SetSearchSpec("Id", TreatmentID);
        OfferBC.ExecuteQuery(ForwardOnly);

        var Priority:String = null;

        if (OfferBC.FirstRecord()) {

            do {

                //a silly example, but anyway...
                Priority = OfferBC.GetFieldValue("Priority");

            } while (OfferBC.NextRecord());

        } else {

            TheApplication().RaiseErrorText("No offer found for treatment id " + TreatmentID);

        }

    } catch (e) {

        throw (e);

    } finally {

        OfferBC = null;
        OfferBO = null;

    }
Let's review the changes:
  1. Use strongly typed variables: you should be doing that nowadays to improve eScript performance.
  2. Hey, try-catch-finally is there for a reason! Use it!
  3. Always declare your cursor mode when calling ExecuteQuery, and use the constant for that, not a number. ForwardOnly is the one you want in most cases.
  4. Always check that you're getting what you expect. If you're not, do something instead of hiding the dirt under the carpet. If you don't know what to do in such a situation, raise an exception and figure it out later during your QA tests with some functional expert.
  5. Clean up your objects; you don't want a memory leak crashing the Siebel component.
That's it. And come on, it wasn't that difficult, was it?

Additionally, let me say that despite Siebel Tools not helping you with that, your code does not have to look like it was indented by your cat sleeping on your keyboard. There are plenty of tools to make your code look nice; I personally use one of them for that.


Friday, March 13, 2015

Resolving HTTP 404 errors for Siebel on Linux

A couple of weeks ago I was checking the log files of an OHS instance that serves a Siebel web application, looking for errors regarding the application, and what I found was a lot of "File Not Found" (HTTP 404) errors.
Double checking those messages, all of them seemed to fall into two different "categories":
  • transparent GIF images used in the Siebel vanilla application
  • favicon.ico
In the first case, I had to look for those GIFs in the Siebel default folder for web content. The files were there. Looking further, I found a bug in the Cascading Style Sheets: those CSS files, having been used under the Microsoft Windows OS, didn't bother to use the correct case for file names or file extensions. For those GIFs, that meant the extension was written as ".GIF" while it should be lowercase (at least the image files themselves had it in lowercase).

File extensions don't mean much to Linux, but the OS is case sensitive regarding file names. That's why OHS wasn't able to find them.

A fix for this issue is simple: change either the CSS or the GIF file names. I preferred the former, since using Sed for that is pretty straightforward. Assuming the Siebel Server is installed in the directory defined by the $HOME environment variable, this is what I did:

tmp_file=$(mktemp)
sed -e 's/\.GIF/.gif/g' $HOME/81/siebsrvr/webmaster/files/esn/main.css > "${tmp_file}"
cat "${tmp_file}" > $HOME/81/siebsrvr/webmaster/files/esn/main.css
rm -v "${tmp_file}"

tmp_file=$(mktemp)
sed -e 's/\.GIF/.gif/g' $HOME/81/sweapp/public/esn/files/main.css > "${tmp_file}"
cat "${tmp_file}" > $HOME/81/sweapp/public/esn/files/main.css
rm -v "${tmp_file}"

That took care of fixing those CSS files in place in the webmaster and sweapp directories (yes, you should do that in both, since they are synchronized by the Siebel application). This bug seems to be present only in Siebel versions or lower.
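If you want to check the effect of that substitution before touching the real files, the same sed expression can be exercised against a throwaway fixture (the CSS content below is made up for the example):

```shell
#!/bin/bash
# Exercise the ".GIF" -> ".gif" substitution on a disposable copy.
work=$(mktemp -d)
printf 'background-image: url(arrow.GIF);\n' > "${work}/main.css"
fixed=$(sed -e 's/\.GIF/.gif/g' "${work}/main.css")
echo "${fixed}"
rm -r "${work}"
```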

The second issue is easy to fix as well. You only need to find the proper favicon.ico file to use (most probably the one used by the company on their institutional web site). Just copy the file to the root folder of OHS and the next time a modern browser hits the page, it will find the icon just fine.


I really don't know if end users will ever notice that those images are now available... if they did notice, we would have known about the errors long before looking at the OHS log files.

The main reason is to avoid thousands of HTTP 404 errors being logged. Anyway, it doesn't hurt to check the OHS logs from time to time to see how things are going.

Wednesday, January 28, 2015

Faster imports of Siebel Repositories

So, it is just another day in your SADMIN life. The development team just finished cooking their configuration and now they want to test their doing in another Siebel Enterprise, which implies you need to import a new Siebel Repository.

Well, all you need to do is launch the Database Server Configuration Utility, go through all those dialog boxes filling in the fields as necessary, and choose the correct options to import a Siebel Repository.

Lame try... if you ever did that on MS Windows, did you ever pay attention to the command line application that is invoked when you finish entering the details? That guy over there executes repimexp, a command line program that you can actually use on your own.

Well, by using this program directly you can select options that the Database Server Configuration Utility will not expose to you. And two of those options will allow you to dramatically improve the speed of importing a new Siebel Repository.

Don't bother looking for documentation that gives you all those details... they are only in the command line help message. Let's review it:

bash-3.2$ repimexp
Siebel Enterprise Applications Repository Import/Export Utility, Version [21238] LANG_INDEPENDENT
Copyright (c) 1990-2008, Oracle. All rights reserved.

The Programs (which include both the software and documentation) contain
proprietary information; they are provided under a license agreement containing
restrictions on use and disclosure and are also protected by copyright, patent,
and other intellectual and industrial property laws. Reverse engineering,
disassembly, or decompilation of the Programs, except to the extent required to
obtain interoperability with other independently created software or as specified
by law, is prohibited.

Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of
Oracle Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.

If you have received this software in error, please notify Oracle Corporation
immediately at 1.800.ORACLE1.


Error: Value missing for argument 'Rows Per Commit'.
-------------- required parameters -----------------

/C <ODBC data source>   ODBC data source.
                          Default: SIEBEL_DATA_SOURCE environment variable.
/U <userName>           User name
/P <password>           Password
/D <table owner>        Siebel database table owner.
                          Default: SIEBEL_TABLE_OWNER environment variable.
/R <repository>         Repository name, default: Siebel Repository
/F <dataFile>           Import/Export/Dump repository data file's name

------------- choose which action you want to take by /A -------------
------------- and select the sub-parameters accordingly --------------

/A <D|I|X|E>            Dump/Import/Import_INTL/Export actions

/A D                    Action: Dump basic data info from the datafile

/A I                    Action: Import base tables and (optional) INTL tables
  /G <languages>       (Optional) Specify what languages to import, e.g. ENU,FRA,ITA
                                  ALL for all languages. Default: no language import
  /X <Codepage#>       (Optional) Verify (not really import) the data file against
                                  a codepage character set, e.g. CP1252,CP874 ...
  /H <number>          (Optional) Number of rows per commit
  /8 <moduleFile>      (Optional) Module list filename
  /K <Y|N>             (Optional) Preserve DB system column values (Default: N)
  /J <string>          (Optional) DB system column source (DB_LAST_UPD_SRC) (Default: repimexp)
  /Z <number>          (Optional) Array Insert Size (Default: 5)

  /A X                    Action: Import INTL tables only
  /G <languages>       Specify what languages to import, e.g. ENU,FRA,ITA
                          ALL for all languages
  /O <Y|N>             (Optional) Abort INTL import if unable to resolve parent row
                                  in server repository, i.e. orphans
                                  Default: N
  /I <Y|N>             (Optional) Abort INTL table import if insert fails
                                  Default: Y
  /A E                    Action: Export
  /1 <exp rep userName>       (Optional) Default: to same as User name
  /2 <exp rep password>       (Optional) Default: to same as Password
  /3 <exp rep ODBC dt src>    (Optional) Default: to same as ODBC data source
  /4 <exp rep table owner>    (Optional) Default: to same as Siebel database table owner
  /5 <exp rep repository>     (Optional) Default: to same as Repository name
  /E <Y|N>                    (Optional) Export prototype data. Default:N
  /S <Signature>              (Optional) Repository signature
  /N <0|1|2>                  (Optional)
                                  0: no change.
                                  1: change CREATED_BY, UPDATED_BY, OWNER_BRANCH
                                  2: change CREATED_BY, UPDATED_BY, dates columns, OWNER_BRANCH
                                  Default: 1

------------ other optional parameters ---------------

/L <logFile>            (Optional) Log output messages to this file as well.
/W <lang code>          (Optional) The language env where this program is running.
                                  Default: SIEBEL_LANGUAGE, if not set ENU
/B <appServer root>     (Optional) Siebel server installation directory to
                                  override SIEBEL_HOME environment variable.
/T <Y|N>                (Optional) Test/Debug use only, do not import into database
/V <Y|N>                (Optional) Verify data.  For import, default: Y
                                  For export, default: N
/M <Y|N>                (Optional) Commit changes even if verification failed.
                                  Default: N

The /H and /G options are the ones that will help you get better performance.

The /H option is the one you will use in most cases, and probably the one that gives you the best speed improvement. It basically allows you to set the number of rows imported before issuing a single commit. If you don't set this option, the program will commit after every inserted row, which is a very safe operation... but also a lot slower.

It is all about trading off data consistency for I/O speed. The point is, the worst thing that can happen with a repository import gone bad is that you will need to erase it and repeat the operation. Not a big deal in most cases.

That said, talk to your DBA about how many rows he considers reasonable to insert before a single commit, to avoid using all your UNDO space (or whatever term is used by the DBMS Siebel is installed on), and set the /H option with that number.

The /G option allows you to select which languages you want to import. If you have a multilingual repository but don't need all those languages, just select the ones you need. That will reduce the number of rows to be imported. This parameter is not as effective as /H, but it helps anyway.
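Putting the two options together, an import invocation could look something like the sketch below. The connection details are placeholders, and 10000 rows per commit is just an illustrative figure to agree upon with your DBA; the sketch only assembles and prints the command, since actually running it requires a Siebel installation:

```shell
#!/bin/bash
# Sketch of a faster repository import: /H batches commits, /G limits languages.
ROWS_PER_COMMIT=10000   # illustrative value; agree on a real one with your DBA
LANGUAGES=ENU           # import only the languages you actually need
cmd="repimexp /A I /C SIEBEL_DSN /U SADMIN /P secret /D SIEBEL \
/R \"Siebel Repository\" /F newrep.dat /H ${ROWS_PER_COMMIT} /G ${LANGUAGES}"
echo "${cmd}"
```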

Another tip to improve Siebel Repository maintenance speed is to keep only the most recently imported repositories (see Doc Id 761894.1). In my experience, keeping the current and the previous one should be enough. You don't need to try doing "big data" with Siebel Repositories. If you are worried about backups, just use the .dat files for that and erase all the older repositories. Your DBA will thank you later.