Friday, November 1, 2013

Ubuntu, Firefox and all those useless dictionaries

So, you have Ubuntu installed on your computer, you've made your personal customizations, and everything seems fine.

Well, that is until you start using spell checking in Firefox and/or Thunderbird and discover that you haven't exactly tamed those animals.

By default, Ubuntu seems to install lots of dictionaries for both programs. If your language settings include, for example, Portuguese and English, you will get spell checking in both languages in Firefox and Thunderbird, but probably not exactly what you want: when you right-click on your text in progress, you will discover that English spell checking offers four different English options, one for each country. If you're coming from an upgrade or restored a previously used backup, you'll probably get repeated options as well.

Fortunately, you can get rid of those country-specific dictionaries, but the configuration for this is not available inside Firefox or Thunderbird.

Open up a terminal session and check the command output below:

youruser@yourhost:~$ ls /usr/lib/firefox/dictionaries
en_AU.aff  en_AU.dic  en_GB.aff  en_GB.dic  en_US.aff  en_US.dic  en_ZA.aff  en_ZA.dic  pt.aff  pt_BR.aff  pt_BR.dic  pt.dic  pt_PT.aff  pt_PT.dic

This list shows all dictionaries installed for Firefox. If you look in the corresponding folder of the Thunderbird e-mail program, you'll see similar output.

Let's remove them and configure dpkg not to install those files again, but to divert them to another location instead; this is a nice way to keep the files from being restored every time you upgrade Firefox and/or Thunderbird.

youruser@yourhost:~$ sudo rm -rf /usr/lib/firefox/dictionaries
[sudo] password for youruser:


youruser@yourhost:~$ sudo dpkg-divert --add /usr/lib/firefox/dictionaries 
Creating 'local diversion from /usr/lib/firefox/dictionaries to /usr/lib/firefox/dictionaries.distrib'

Restart Firefox and/or Thunderbird. If you have no dictionary after that, don't worry: just go to Tools -> Add-ons and install the dictionaries you want. After that, you probably won't need to worry about these settings again until you get a fresh Ubuntu setup.
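If you ever want the stock dictionaries back (say, before filing a bug report), the diversion can be undone. A sketch of the reverse procedure, assuming the same paths as above (needs root, shown here for reference):

```shell
# remove the local diversion created earlier
sudo dpkg-divert --remove /usr/lib/firefox/dictionaries

# reinstall the package so the dictionaries directory is recreated
sudo apt-get install --reinstall firefox
```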

Monday, September 30, 2013

Recreating your customized components the easy way

About 2 years ago, I was given the task of migrating Siebel Enterprises to a new data center. That required checking all the steps needed to reinstall the Siebel Enterprise, since the new hosts would use different names due to a network policy.

One of the tasks was migrating the customized components, most of them workflow monitors. Creating customized components can mean a lot of typing: a boring, repetitive task prone to errors. Wouldn't it be cool if one could just do something like CTRL+C and CTRL+V for all the parameters from one Enterprise to another?

That is possible. Well, at least with two operations.

Around that time the API of Siebel::Srvrmgr was already usable, so I took a shot at building such automation.

The result is a Perl script named Siebel Srvrmgr Exporter. This little command line tool connects to a Siebel Server and exports component information in the form of multiple CREATE COMPONENT commands on standard output. With that, it is just a matter of connecting to the new Siebel Enterprise and executing the same commands for the new servers with the srvrmgr program.

Here is the command line help:



The program also includes a simple ASCII animation so the user doesn't get the feeling that the application has stopped working while it spends some seconds retrieving data from the Siebel Server.




Siebel Srvrmgr Exporter started as a tool to solve a real problem, and later I included it as a POC (proof of concept) for the Siebel::Srvrmgr API. In release 0.9 of Siebel::Srvrmgr, I added more features to it:
  • Exports only component attributes that have a defined value;
  • Exports all components whose names match a given regular expression;
  • Some internal improvements.
Siebel Srvrmgr Exporter is now a standalone download available at http://siebel-monitoring-tools.googlecode.com/files/Siebel-Srvrmgr-Exporter-0.01.tar.gz. Be sure also to check out the project's wiki page, which has instructions for setting up the Perl modules needed to run Siebel Srvrmgr Exporter.
Here are some examples of the output generated:
Components with "Work" string in their names, with all parameters




Components with "Work" string in their names, with only the parameters that have defined values
In those two examples, all components with "Work" in their names are exported; the difference is that the first one exports all parameters of each component, while the second exports only the parameters that actually have a defined value.

How does this thing work?


Recently I was showing the tool to a colleague at work and he asked how I had the patience to type all that information. I found it funny because, in fact, what I had typed was quite a few lines of Perl code to do it for me. By using the classes available in the Siebel::Srvrmgr API, the program:
  1. Connects to a Siebel Server in an Enterprise;
  2. Recovers the information of each component available in the server;
  3. Recovers their parameters;
  4. Enters a loop, printing all that information as "create component" commands for srvrmgr.
Since the program prints this information to standard output, if there are too many commands to just copy and paste you will probably want to redirect the output to a text file and use srvrmgr in batch mode. Maybe I'll add a new command line parameter for that in the future.

Later, using srvrmgr, you can submit those commands to the new Siebel Server, recreating the components with the same parameters defined before.
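The two steps can be chained without any copy and paste. A sketch, where the gateway, enterprise, and credential values are placeholders, and /i is assumed to be the usual srvrmgr option for reading commands from a file:

```shell
# step 1: dump the CREATE COMPONENT commands to a file
export_comps.pl > create_components.txt

# step 2: replay them against the new Enterprise in batch mode
srvrmgr /g mygateway /e MyEnterprise /u sadmin /p mypassword /i create_components.txt
```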

And that's it! Two commands:
  1. export_comps.pl
  2. srvrmgr
And you have "copied" your components from one Siebel Server to another.

Caveat

There is always one: parameters that contain passwords are recovered from the Siebel Server as "****", so you will probably want to change those (a search/replace with your favorite text editor will do) between steps 1 and 2. Maybe I'll implement a feature to ask for the real value before printing the CREATE COMPONENT commands in the future (probably after Christmas lunch, I just don't know of which year).
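That search/replace can also be scripted. A minimal sketch with GNU sed, where the file name, component name, and parameter syntax are all made up for illustration:

```shell
# fake one exported line just to demonstrate the substitution
printf 'create component definition WfMonitor with Password="****"\n' > create_comps.txt

# swap the masked value for the real password before feeding srvrmgr
sed -i 's/Password="\*\*\*\*"/Password="MyRealSecret"/' create_comps.txt

cat create_comps.txt
```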

If anything fails and you don't know why, make sure to check the Troubleshooting wiki linked from the project's main page.

Friday, August 16, 2013

Why does Siebel UCM use PIP instead of Siebel Replication Manager?

Originally this was posted in one of LinkedIn's Siebel groups that I'm subscribed to, but since I got no answer, I decided to post it here too... maybe a good soul can answer it in the future, because to me it still does not make any sense.

Some time ago I started wondering why Siebel UCM 8 does not use Siebel Replication Manager instead of web services from PIP.

While Siebel Replication Manager can update slave-configured systems with only the strictly necessary data, PIP sends the entire "entity" (think about updating the e-mail address of a contact, for example: PIP will send the whole contact in the payload).

I'm not a Siebel UCM professional, but every time I talk about it with a colleague I hear "it is pretty much the same thing as Siebel CRM, using basically the same infrastructure". I even asked Oracle support about it and all I got was:
"I contacted the Product Manager for Siebel Remote and he advised that you ask your Sales Consultant or TAM to get the answer. He is doubtful that Replication Manager can replace PIP integration."
I see no technical reason for it: Siebel CRM and UCM are products from the same vendor and use the same basic infrastructure, so why would it be necessary to use expensive "heavy" SOAP calls between the systems, compared to moving tiny DX files between them? On the other hand, using PIP to integrate with other systems would make much more sense.

Wednesday, May 29, 2013

Siebel eScript and your file system

Everybody knows (including you, right?) that the usage of eScript should be limited to those Siebel customizations where it is not possible (or too hard) to use Siebel Workflows and vanilla Business Services.

That considered, eScript surely has its place: in some cases, applying business logic with eScript is far easier than using a Workflow. eScript also has more generic features than Workflows, like working with the Siebel Server file system.

Clib provides several functions to work with the file system, most of them resembling ANSI C functions. I won't repeat the information that is already available in the corresponding Siebel Bookshelf (Siebel eScript Language Reference), but I will give some tips about what I learned the hard way.

Test the directory before using it

Before trying to use a directory, it is always a good idea to check whether the eScript will have access to it (especially if the Siebel version you have doesn't have Clib.strerror working correctly). One way to test a directory is to use the functions Clib.chdir and Clib.getcwd combined, as shown in the testDir function below:

function testDir(dir)
{
    try
    {
        // remember the current directory so we can restore it afterwards
        var previousDir = Clib.getcwd();
        var testResult = Clib.chdir( dir );

        if ( testResult == -1 )
        {
            TheApplication().RaiseErrorText('The directory ' + dir + ' does not exist or cannot be accessed.');
        }

        // the test passed: restore the previous working directory
        Clib.chdir( previousDir );
    }
    catch(e)
    {
        TheApplication().RaiseErrorText('Error in function testDir: ' + e.toString());
    }
}


This will help you find missing file system permissions (or a missing directory) before trying to use the directory.

Of course, using Clib.strerror should be your first option if that function is returning the error message from the OS as expected!

Check if the file was really opened

It looks like a silly thing, but every now and then I find programs that still do not check the file pointer after trying to open a file with Clib.fopen. It is very simple to test, as you can see below:

    fp = Clib.fopen(myFile,"au");

    if ( fp == null )
    {
        TheApplication().RaiseErrorText('An error occurred when trying to open ' + myFile);
    }


With three lines of code you will avoid having to debug code over file system permissions, for example. Why? Because those errors are not considered exceptions by Clib, and the failure will only surface when you try to use the file pointer...

By the way, the functions Clib.fputs and Clib.fclose follow the same idea, returning null if something goes wrong. If you're feeling especially paranoid you can include tests for those functions as well.

Use the buffer Luke!

The read and write functions of Clib use buffering by default: that means they will keep data in memory before doing an I/O operation. This helps the I/O performance of your program, since reading/writing to disk is slower than to RAM (I'm not even considering disk cache). If you write code like this in the same scope:

    fp = Clib.fopen(myFile,"au");

    if ( fp == null )
    {
        TheApplication().RaiseErrorText('An error occurred when trying to open ' + myFile);
    }

    Clib.fputs(myData,fp);
    Clib.fclose(fp);

    // later you repeat the same process

you're simply throwing the buffering away.

Once a colleague of mine argued that using Clib for logging messages would be "reinventing the wheel", since there are already functions for that, like TheApplication().Trace. The problem with Trace is that you don't have any control over the buffer: the function will (probably) do I/O to disk immediately; in other words:
  1. Open the file.
  2. Write to the file.
  3. Close the file.
Closing a file does an implicit flush of the buffer.

With Clib you have better control over buffer usage, and if you need to write large sets of data to a file that will help with I/O performance.

To accomplish that, just create a global file pointer variable (in the Declarations area) and open the file only once with Clib.fopen. After that, just keep using Clib.fputs and its cousins.

How do you guarantee that the file will be properly closed? This is even more important if the Siebel Server is using a file system with mandatory locking, like NTFS: the file will be kept locked until the process that opened it closes the file or ceases to exist.

To avoid this kind of issue, just use the finally block to close the file:

// Declarations area
var fp;

function foobar()
{
    try
    {
        fp = Clib.fopen(myFile,"au");

        if ( fp == null )
        {
            TheApplication().RaiseErrorText('An error occurred when trying to open ' + myFile);
        }

        Clib.fputs(myData,fp);
    }
    catch(e)
    {
        //do something
    }
    finally
    {
        // close only if the file was actually opened
        if ( fp != null )
        {
            Clib.fclose(fp);
        }
    }
}


Doing this guarantees that the process executing the Business Service will release the file lock and the file will be closed, flushing from the buffer any data not yet written to the file.

And finally, if you need more safety against losing data in the case of a hard failure, execute a Clib.fflush after each Clib.fputs, without closing the file.

Copying files

The article "How Can You Copy Files from One Location to Another Using Siebel Scripting?" (ID 477948.1), available on the Oracle support webpage, describes some solutions for copying files with eScript, since Clib does not have that feature (something I am still trying to understand the reason for).

One of the solutions is to use Clib.rename: if you have ever used any UNIX shell you will notice it looks like the mv command.

The other alternatives in the article (Clib.system and SElib) would, in my opinion, be too "expensive" in terms of computer resources (though I must say I never did a formal test of this).

If you need to copy lots and lots of files, my suggestion is to create a small program (in any language you prefer) and then execute it with Clib.system: this way you create just one additional OS process to do all the necessary work.
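As an illustration of that idea, the snippet below copies a batch of files in one shot; it is the kind of one-process job you would trigger with a single Clib.system call (all paths here are invented for the example):

```shell
# set up some sample files to copy
mkdir -p /tmp/example_src /tmp/example_dst
printf 'one' > /tmp/example_src/a.txt
printf 'two' > /tmp/example_src/b.txt

# one process copies everything, instead of one process per file
cp /tmp/example_src/*.txt /tmp/example_dst/

ls /tmp/example_dst
```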

Listing files in a directory

This tip only works for Siebel setups on a Microsoft Windows OS.

The Oracle technical support website has several articles teaching how to use SElib and kernel32.dll to look for filename patterns in a directory. Unfortunately those articles just tell you that you should use kernel32.dll for the job, but not how to use it. I have already seen code that keeps calling FindFirstFileA recursively when the function FindNextFileA is just there waiting to be used. Reading the kernel32.dll documentation is a painful process if you're not a C++ programmer (and I'm not), but it is there (http://msdn.microsoft.com/en-us/library/windows/desktop/aa364418%28v=vs.85%29.aspx) to be checked, to avoid doing silly things with the DLL.

Another issue is dealing with NUL characters: a predefined amount of memory is allocated to hold the filename that was found, and the functions FindFirstFileA and FindNextFileA fill the unused space with NUL characters whenever the filename is shorter than the buffer. This is quite normal and expected if you're programming in C or C++, but try to concatenate strings (or do anything else, like using split) in eScript and the result will not be what you're expecting.

The function below is a possible solution to the problem (probably not the most elegant one): it looks for a NUL character in a string and returns the "printable" characters read up to that point.


function removeNul(string)
{
    try
    {
        var iEndPos = string.length;
        var lastPrintChar = 0;

        for (var i = 0; i < iEndPos; i++)
        {
            //if the char is printable, go to the next one until the test fails
            if ( Clib.isprint( string.charAt(i) ) )
            {
                lastPrintChar++;
            }
            else
            {
                break;
            }
        }

        return string.substring(0, lastPrintChar);
    }
    catch(e)
    {
        TheApplication().RaiseErrorText('An error occurred in the removeNul function: ' + e.errText);
    }
}


Finally, below is a complete example of listing files, taking into account the points I commented on, just to illustrate the concepts. Note that the names of the files found are stored in an array, so the eScript garbage collector will have a chance to release the memory allocated for the objects and kernel32.dll as soon as possible.

// the declaration below was missing from the original listing; the function
// name and parameters are assumed (filesToProcess being an array declared in
// the Declarations area and filenameToMatch the pattern to search for)
function findFiles(filenameToMatch, filesToProcess)
{
    try {

        var blob = new blobDescriptor();
        var fileData;
        var hFind;

        blob.dwFileAttributes   = SWORD32;
        blob.ftCreationTime     = SWORD32;
        blob.ftCreationTime_    = SWORD32;
        blob.ftLastAccessTime   = SWORD32;
        blob.ftLastAccessTime_  = SWORD32;
        blob.ftLastWriteTime    = SWORD32;
        blob.ftLastWriteTime_   = SWORD32;
        blob.nFileSizeHigh      = SWORD32;
        blob.nFileSizeLow       = SWORD32;
        blob.dwReserved0        = SWORD32;
        blob.dwReserved1        = SWORD32;
        blob.cFileName          = 1000000;
        blob.cAlternateFileName = 1000000;

        var nextIndex = filesToProcess.length;

        // brand new start of filesToProcess
        if (typeof nextIndex == 'undefined') {

            nextIndex = 0;
            filesToProcess[0] = '';

        }

        hFind = SElib.dynamicLink("Kernel32.DLL", "FindFirstFileA", STDCALL, filenameToMatch, blob, fileData);

        // found a file with the requested pattern
        if (hFind != -1) {

            filesToProcess[nextIndex] = removeNul(fileData.cFileName.toString());

            while (SElib.dynamicLink("Kernel32.DLL", "FindNextFileA", STDCALL, hFind, blob, fileData) != 0)
            {
                nextIndex++;
                filesToProcess[nextIndex] = removeNul(fileData.cFileName.toString());
            }

            return true;
        }

        return false;
    }
    catch(e) {

        throw(e);

    }
    finally {

        nextIndex = null;
        fileData = null;

    }
}


Thinking about it a little more, working with a UNIX-like OS would be possible if you had a library with the same functions as kernel32.dll, but this would be highly dependent on the OS being used (just as it is with kernel32.dll). So I wonder: why doesn't Siebel ship such a library? Something written in ANSI C should be portable enough.

Read and write in Unicode

One of the file modes available for Clib.fopen is reading and writing in Unicode, by specifying the "u" character in the mode string.

Working with this mode is a better guarantee that you will be able to export or import data in a portable way between systems, especially if the Siebel database is configured to use Unicode. Working with Unicode will save you a lot of headaches with accents and other special characters.

But be sure to remember this: at the time I'm writing this article, Siebel regular expressions do not support Unicode.

Wednesday, April 24, 2013

The CPAN Testers game

For those who don't know, one of the greatest features of the Perl programming language is the centralized repository of Perl modules known as CPAN (the Comprehensive Perl Archive Network).

CPAN includes a vast number of different solutions for a vast range of applications. For almost everything you will be able to find one (or more) matching Perl modules, possibly using different programming paradigms.

CPAN is not just that: it is highly organized, and this includes proper documentation of the standards used to create useful Perl modules and make them available.

Those standards include how to ship automated tests with your released modules. Those tests will be executed by:
  1. people who want to use your modules;
  2. people who have created smokers to test everything on CPAN automatically.
This post is specifically about the latter.

Anyone can become a tester for CPAN, and I'll try to illustrate the steps to become one.

Configuration to submit reports for distributions installed manually


In fact, this is so simple that anybody who uses CPAN should do it, including system administrators.

The wiki http://wiki.cpantesters.org/wiki/TestDuringInstall has almost all the related details. The tutorial is heavily based on UNIX-like operating systems; you may need to make some adaptations (for example, using something other than chmod to give execution permissions).

I would also add a step to upgrade the CPAN module (which is the default shell for interfacing with the CPAN repository) before installing Task::CPAN::Reporter, following the steps below:

$ perl -MCPAN -e shell
cpan[1]> install CPAN
cpan[2]> reload CPAN
cpan[3]> exit

Usually reloading CPAN is enough, but I have had issues with only reloading it, so exiting CPAN and entering again is more of an additional guarantee than a necessary step.

Installing Perl modules through CPAN is quite easy and it will make your life easier, especially because of the additional features you can add to it (like colorized output and command history).

CPAN can work behind an HTTP/HTTPS proxy if you need to use one, and so can CPAN-Reporter. CPAN-Reporter uses HTTPS to send the test results, but that can be changed to HTTP by editing the configuration file ~/.cpanreporter/config.ini and changing the transport protocol of the URL metabase.cpantesters.org/api/v1/ from HTTPS to HTTP.
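The edit itself can be done with any text editor or scripted. The sketch below creates a sample config.ini (the transport line follows the usual CPAN::Reporter Metabase format, but check your own file before editing) and flips the scheme:

```shell
# sample file standing in for ~/.cpanreporter/config.ini
cat > config.ini <<'EOF'
transport = Metabase uri https://metabase.cpantesters.org/api/v1/ id_file metabase_id.json
EOF

# change the transport protocol from HTTPS to HTTP
sed -i 's|https://metabase.cpantesters.org|http://metabase.cpantesters.org|' config.ini

cat config.ini
```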

Creating a smoker machine

In software testing practice, smoke testing is a term designating the execution of automated tests without the need for human intervention.

CPAN::Reporter::Smoker will let you do that, but its configuration is a lot more involved.

First of all, doing smoke tests can be a dangerous thing, especially because you are downloading code from people you (probably) don't know, without the slightest idea of what the code does. I never heard of anybody uploading malicious code to CPAN, but while that could happen, it is usually ugly bugs that pose a risk of damaging the smoker machine or even your LAN.

So, to avoid that, the best approach is to create a virtual machine (VM). There are several options for creating one. I myself found VirtualBox (from Oracle, acquired with Sun) a good option (easy to set up and with good enough performance). You can use any OS that supports Perl.

Which OS is better? Well, Perl runs better on UNIX-like OSes; that is not exactly a secret. Doing smoke tests is far easier on a UNIX-like box: tests will not usually hang as they do on Microsoft Windows or Cygwin (I've tried both). Of course there are some modules that run exclusively on a given OS, and you can do some good by adding more tests on those "weirdos" instead of a UNIX-like OS.

After that, consider limiting the smoker machine's access to your LAN and to the Internet. Smoke testing is also possible through an HTTP/HTTPS proxy.

The wiki http://wiki.cpantesters.org/wiki/CPANReporterSmokerLong has a detailed description of setting up the smoker but, again, it is heavily based on UNIX-like OSes.

There are some steps that I recommend doing before following the wiki instructions:
  1. Install a local perl to avoid needing root/administrator privileges: you can do it manually or use Perlbrew. I especially like Perlbrew because I get a local, customized perl installation without messing with the perl setup (probably) already available. You can get some performance improvement by compiling perl on your own machine, especially if you build it without some unnecessary features. Here there is a good tip about doing a faster Perlbrew setup.
  2. Install the CPAN::SQLite distribution: this will make CPAN run faster by using a SQLite database. Installing it may require additional libraries, depending on your OS of choice. Be sure to check CPAN::SQLite's environment variables too.
  3. Install CPAN::Mini: be sure to read the related POD for the details, especially regarding the minicpan command line program.
Finally, after following the instructions in the wiki, be sure to check the hints documentation of CPAN::Reporter::Smoker.

What about the game (in the post title)?

Well, there is a little bit of a game about who submits more test reports to CPAN: it is just a fun way to incentivize "competition" between the testers. There is even a website here giving you the details.

The last time I checked I was at position 117 (my user ID is ARFREITAS). To give you an idea of how serious people get about creating smoker machines to submit their tests, I created the two charts below, with the Y axis being the number of submissions and the X axis the ranking position:




Actually I would have created a single chart, but the difference between the top ten submitters and the rest is so big that you wouldn't be able to see it nicely. As you can see, taking first place from BINGOS is a really hard task. I hope you've got a cluster for that. :-)

Conclusion


Becoming one of the greatest CPAN testers can be fun, but the more important thing is helping the Perl community by running all those tests, even if you're not a developer. Also, you can learn a lot about test automation from the experience (I did). And consider that TAP is a big reference in software testing automation, since other languages are implementing the concept.

Anybody who releases a Perl distribution on CPAN will get an e-mail about your test report, and thus have a chance to correct bugs that were totally unexpected when the distribution was initially tested.

After all, in the spirit of open source, everybody wins by getting better code from CPAN.

Acknowledgments 


Thanks to Breno (breno at rio.pm.org) for giving the details about configuring the transport protocol for CPAN::Reporter.

Wednesday, April 17, 2013

Reactivating the My Oracle Support autocomplete

This post is an updated version of the original one here (in Brazilian Portuguese).

I'm a Siebel CRM certified professional and I use My Oracle Support a couple of times a week.

That means I'm required to type my login and password every time I want to enter the website. Well, that is not much fun because, even if you want it to, your browser will not remember those values.

I strongly disagree with such a website configuration: if I want to keep my credentials saved in the browser, that's my problem, not Oracle's. A simple "remember me" check box would be far more polite.

In the meantime, Oracle's website is still just HTML on steroids at the end of your HTTP request. Even if they disable the browser login reminder, it is still possible to re-enable it. And that's the trick I'm posting here.

The restriction applied by Oracle is an HTML attribute called autocomplete. What it does is tell the browser whether it may save values from form fields and automatically fill them in again when you come back. If autocomplete=off, the web browser will not even try to save those values for you.
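For illustration, the markup involved looks something like the fragment below (the field names are hypothetical, not copied from the actual Oracle page):

```html
<!-- autocomplete="off" tells the browser not to remember the values -->
<form action="/login" method="post" autocomplete="off">
  <input type="text" name="username">
  <input type="password" name="password">
</form>
```

Changing that "off" to "on" is all the manual trick amounts to.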

Nowadays several browsers have tools to access the HTML of the HTTP response and let the user change it in memory. Mozilla Firefox, my preferred web browser, has such tools.

There are some bookmarklets that would, theoretically, change those settings automatically, but the ones I tried didn't work. It is still possible to change them manually, though.

First, install the Firebug plug-in into Firefox.

Then, after restarting Firefox, go to the My Oracle Support website login page and click the "sign-in" button.

The login form will be shown to you (mine is in Brazilian Portuguese, but that doesn't matter). Open Firebug (use the F12 keyboard shortcut if you can't find the related menu option). You will see something like the screenshot below:


Just ignore the fact that the form shown already has a value in the login field.

In the lower window the Firebug plug-in will show you the HTML source. Press CTRL+F, type "autocomplete" as shown in the screenshot, and press the next button. The value of the attribute will be "off", preventing Firefox from trying to save the login and password.

After that, select the value of the attribute by left-clicking on it: Firebug will offer to change the value. Go ahead and replace "off" with "on".

Now it is just a matter of typing your login and password and accessing My Oracle Support as you normally would.

Testing time: logout and go back to the login page.

In the form, double-click with the left button in the login field. Firefox will offer you the login typed previously, even though the form from My Oracle Support has autocomplete=off again.



Finally, select the suggested login and Firefox will fill in both form fields for you. Yes, it is that simple.

Friday, April 5, 2013

Get Siebel Tools 8.1 working with Windows 7

Alright boys and girls... you all know that Microsoft is always innovating: they keep creating new annoyances with each new MS Windows version they release. Today is the day to be annoyed by Windows 7's default file system permissions.

A couple of days ago I had my old notebook replaced by a shiny new one. The downside is that I had to reinstall and reconfigure a lot of stuff, including Siebel Tools.

I know Siebel Tools 8.1 does not officially support Windows 7, but there is no way I'm going back to Windows XP (unfortunately, in this case).

After running the Siebel Tools setup program as administrator, I got a non-working Siebel Tools: I could not even find out what was going on, because I didn't have any permission to create log files under the \Log directory.

After fixing that, it was also necessary to change permissions on the sse_data.dbf file, since my user had no write rights on it. Obviously I had some permissions issue (maybe because I executed the setup program as Administrator?), even with my user being part of my notebook's Administrators group. I tried the not-so-fancy solution of giving myself full permissions on all files/directories under the Tools directory.

It didn't work... don't ask me why. It looks like the permissions dialog does not work as expected. I tried to grab permissions several times, even tried to make my user the owner of all objects... no results.

I was able to open Tools, but take a look at what I got after trying to "Edit Web Layout" on an applet:



Not cool. Probably more permissions issues. This is what I got in the corresponding log file:

2021 2013-03-25 17:07:28 0000-00-00 00:00:00 -0300 00000000 001 003f 0001 09 siebdev 9796 4412 C:\Siebel\8.1\Tools\log\siebdev.log 8.1.1.6 [21233] ENU
ObjMgrLog    Error    1    0000000251502644:0    2013-03-25 17:07:28    (filetransport.cpp (495)) SBL-GEN-10116: Error acquiring file lock
 

ObjMgrBusServiceLog    Error    1    0000000251502644:0    2013-03-25 17:07:28    (eaixmlprtsvc.cpp (348)) SBL-EAI-04261: Error getting XML from file 'standard.xml'.
 

ObjMgrLog    Error    1    0000000251502644:0    2013-03-25 17:08:45    (filetransport.cpp (495)) SBL-GEN-10116: Error acquiring file lock
ObjMgrBusServiceLog    Error    1    0000000251502644:0    2013-03-25 17:08:45    (eaixmlprtsvc.cpp (348)) SBL-EAI-04261: Error getting XML from file 'C:\Siebel\8.1\Tools\bin\expression_builder.xml'.


ObjMgrBusServiceLog    Error    1    0000000251502644:0    2013-03-25 17:08:45    (ebservice.cpp (3326)) SBL-DEV-61159: No valid Expression Builder could be found for the field Name on business component Repository Applet.


As always, going down to a terminal gave me some more powerful options.

First I used the Takeown program, from Microsoft itself: it simply makes my user the owner of everything, using its recursive argument.

takeown /f C:\Siebel\8.1\Tools /r

After that, I went to a Windows PowerShell prompt and did the following:

$newACL = Get-Acl C:\Siebel\8.1\Tools
Get-ChildItem C:\Siebel\8.1\Tools -Recurse -Force | Set-Acl -AclObject $newACL

If you know a bit of PowerShell, you will notice that I grabbed an "ACL model" from the Tools directory itself: on that directory, at least, I could set the permissions I wanted; the problem was replicating that configuration to its child objects. That's why I used its ACL to replicate the configuration with the Get-ChildItem and Set-Acl cmdlets.

That's it: Siebel Tools was back to work. Of course, I probably granted far more permissions than necessary, but anyway, life is too short to go hunting through a not-available-for-users Oracle document for exactly which permissions are required.

Thursday, April 4, 2013

Improving bulk inserts on SQLite

One of these days I decided that I needed a way to transport data from a Siebel SQL Anywhere local database to something that would let me review the information without having to install any Siebel application. My choice was SQLite as the destination, with Perl to extract the data.

SQLite is far more "transportable" than SQL Anywhere, especially if you don't need all the fancy features SQL Anywhere has over SQLite.

First I checked whether SQL::Translator would do the job, but it seems SQL Anywhere syntax is not supported yet. Then, after playing around with Perl DBI, DBD::ODBC (yes, Siebel local databases still use ODBC, don't ask me why) and DBD::SQLite, I got something working that could read the DDL of all SIEBEL schema tables, create the same tables in a SQLite database and populate them with data. In the end I had something like a "data dumper".
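The create-and-populate idea can be sketched with Python's built-in sqlite3 module (the table and columns below are made up for illustration; the real script read them from the SQL Anywhere catalog):

```python
import sqlite3

# Hypothetical column metadata; the real script extracted this
# from the SQL Anywhere catalog of the SIEBEL schema.
table = "S_CONTACT"
columns = [("ROW_ID", "TEXT"), ("FST_NAME", "TEXT"), ("LAST_NAME", "TEXT")]

# Build the CREATE TABLE statement from the extracted metadata
ddl = "CREATE TABLE %s (%s)" % (
    table, ", ".join("%s %s" % col for col in columns))

conn = sqlite3.connect(":memory:")
conn.execute(ddl)

# Confirm the table was recreated with the expected columns
names = [row[1] for row in conn.execute("PRAGMA table_info(S_CONTACT)")]
print(names)
```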

Unfortunately, performance was not that good. On the computer I used for this (an old notebook with an encrypted hard disk), I/O was the performance hog: the perl interpreter was not using even 20% of the available processor power, yet the insert steps were really slow.

The first thing to check when doing bulk inserts with DBI is that you are using bind parameters, which in fact I was already doing:
my $dest_insert =
    'INSERT INTO '
  . $table . ' ('
  . join( ', ', @quoted_columns )
  . ') VALUES('
  . join( ', ', ( map { '?' } @quoted_columns ) ) . ')';

my $insert_sth = $sqlite_dbh->prepare($dest_insert);
$select_sth->execute();

while ( my $row = $select_sth->fetchrow_arrayref() ) {

    for ( my $i = 0 ; $i < scalar(@columns) ; $i++ ) {
        $insert_sth->bind_param( ( $i + 1 ),
        $row->[$i], $columns[$i]->sql_type() );
    }

    $insert_sth->execute();
    # and code goes on
}
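For comparison, here is the same bind-parameter idea sketched with Python's built-in sqlite3 module (table and data are made up for illustration): the SQL is prepared once with ? placeholders and values are bound per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE S_CONTACT (ROW_ID TEXT, FST_NAME TEXT)")

rows = [("1-ABC", "John"), ("1-DEF", "Jane")]

# One parameterized statement: the engine parses the SQL once instead of
# re-parsing a string with interpolated values for every single row.
insert = "INSERT INTO S_CONTACT (ROW_ID, FST_NAME) VALUES (?, ?)"
for row in rows:
    conn.execute(insert, row)  # rough equivalent of bind_param + execute

# executemany runs the same loop at the C level
conn.executemany(insert, [("1-GHI", "Jim")])
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM S_CONTACT").fetchone()[0]
print(count)
```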

With my homework done, I went to another basic step: checking the storage back-end.

The thing is, the notebook was "plagued" with disk encryption: everything read from or written to that disk has to go through decryption or encryption. While quite secure, doing I/O under these conditions is really bad for bulk inserts.

I then decided to go with a RAM disk: such solutions are uncommon in the Microsoft realm, but after some configuration I got a good improvement. The problem is, the notebook didn't have enough RAM for a disk large enough to hold the complete database file while keeping everything else running... I ended up exhausting all the available RAM.

The next step was to take a look at SQLite features that could help with that... and the little database didn't let me down. By enabling a few of those features I was able to get something in between good performance and not exhausting the notebook's resources.
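The exact settings aren't listed here, but the SQLite features usually turned on for bulk loads are the journaling, synchronization and cache PRAGMAs; a sketch of a typical setup, assuming these are the kind of settings meant:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Typical bulk-load settings; each one trades durability or memory for speed.
conn.execute("PRAGMA synchronous = OFF")      # no fsync after each write
conn.execute("PRAGMA journal_mode = MEMORY")  # keep the rollback journal in RAM
conn.execute("PRAGMA cache_size = -64000")    # ~64 MB page cache (negative means KiB)
conn.execute("PRAGMA temp_store = MEMORY")    # temporary tables and indexes in RAM

sync_mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(sync_mode)  # 0 means OFF
```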

One observation: COMMIT control in SQLite is really weird. I ended up using a combination of disabling autocommit when connecting to the database and disabling it again after each COMMIT executed.
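Python's sqlite3 module has a similar quirk around implicit transactions; here is a sketch of keeping explicit control by turning driver-side transaction handling off and issuing BEGIN/COMMIT manually, committing in batches (the batch size is arbitrary):

```python
import sqlite3

# isolation_level=None stops the driver from managing transactions itself
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER)")

BATCH = 100  # arbitrary batch size, tune for your data volume

conn.execute("BEGIN")
for i in range(250):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
    if (i + 1) % BATCH == 0:
        conn.execute("COMMIT")
        conn.execute("BEGIN")  # re-open a transaction, akin to re-disabling autocommit
conn.execute("COMMIT")

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)
```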

Tuesday, February 26, 2013

Accessing Siebel with COM

Introduction

Siebel has a set of different technologies to integrate with other systems, including online and batch interfaces.

Those different technologies have different advantages and drawbacks, and their application depend heavily on the nature of the task to be executed and the environment available.

One of those interfaces available for usage is the COM interface.

COM is an abbreviation for "Component Object Model", a framework developed by Microsoft for building modular software components and supporting integration services (exposing functionality and transferring data).

COM is available for Siebel in Microsoft Windows environments only (since COM is an exclusively Microsoft technology), in three different "flavors":
  • Automation server
  • Data server
  • Data control
In this article I'll talk only about Data Control and Data Server: the programming interface for both is almost the same, as you will see later, although there are important architectural differences between them, including connection details.

Usually Siebel COM Data Control is the preferable choice, since it is faster to load and execute and can multiplex connections to a Siebel Enterprise, but Data Server still has its specific uses:
  1. It provides CRUD operations on the Siebel Client local database.
  2. It avoids object restrictions in the Siebel Repository by using a locally modified SRF.
  3. It avoids security restrictions applied to the Siebel Enterprise, like firewalls and authentication.

Through COM, any language that supports it can connect to Siebel and gain access to its internal objects, more specifically the Business Layer when talking about Siebel COM Data Control or Data Server. Some examples include VBA, Windows PowerShell and Perl.

The best document to start reading about Siebel COM is 1402536.1, "Master Notes For COM and JDB (Java Data Bean) Interface Issues": it has references to other documents about Siebel COM. The "Siebel Object Interfaces Reference" bookshelf is another must-read resource.

Getting to know the API requires checking the proper documentation. When that is not available, you can consider yourself lucky if you have Microsoft Office installed, since you can inspect the API from the DLL itself by opening the Microsoft Visual Basic for Applications editor and checking the loaded references:


Requirements


You must have the Siebel Client installed on a 32-bit Microsoft Windows OS. This includes Windows 7, although Oracle claims it does not support Siebel 8.1 on that OS.

Even after installing Siebel, sometimes the required DLLs are not registered correctly (see "DLL Hell" for details on that). Below is a quick reference for the DLLs you need to have:

Siebel Interface           Programmatic Id                          DLL/TLB
Siebel COM Data Server     SiebelDataServer.ApplicationObject       sobjsrv.tlb
Siebel COM Data Control    SiebelDataControl.SiebelDataControl.1    sstchca.dll

Advantages

It is easy to set up if you are in a Microsoft Windows environment. Generally speaking, if you have a Siebel Client installed, you already have everything you need to use Siebel COM.

With VBA, you can even pull data straight into an Excel spreadsheet, which is a huge advantage if all you need is to make data available to end users. Sometimes that is easier than creating customized reports and publishing them.

Siebel COM is also useful for quick data fixes, for retrieving information that would be much more complicated to get from the Data Layer, or even for "upserting" small amounts of data.

In some specific scenarios, you could provide additional functionality to Siebel Mobile users by connecting to the local database, or even make data changes by using a modified local SRF that bypasses restrictions in the "regular" SRF (that could be a dangerous thing to do, but it is possible).

Caveats


As simple as it looks, COM does have its caveats. Siebel, and in fact even .NET, while still supporting it, are replacing it with other technologies.

For example, PowerShell uses .NET's COM support through an interop assembly, which is a special .NET Framework assembly that connects complex COM components to the .NET Framework. Sometimes a COM component doesn't have its interop counterpart available, so when you try to instantiate it, all you get is a connection to the interop assembly and not the COM component itself, resulting in not having proper access to the component's properties and methods (to catch this situation, use the "-Strict" parameter with the New-Object cmdlet).

Another drawback of COM is that you cannot use it from systems that are not Microsoft-based: you cannot use it from Linux, for example (but you can use a Microsoft Windows workstation to connect to a Siebel Enterprise running on Linux).

Other issues are related to scalability and speed: some resources even suggest that, when all you need is to query Siebel, you should run direct queries against the database instead of using the AOM, for speed. Scaling COM with Siebel should involve connecting to more AOM instances, probably with load balancing (refer to document 478270.1, "Use of Native Load Balancing With Java Data Beans Interface and COM Data Control Interface", for more information).

Since COM uses DLLs, there is still the issue of 32-bit versus 64-bit support: as of this writing, Siebel does not support using COM on 64-bit systems in 64-bit mode, as shown in document 1393381.1, "Can COM Interface Be Used in a 64 Bit Environment in 64 Bit Mode?". It seems just a matter of time before you won't be able to use COM at all, once computers stop supporting 32-bit code. Maybe Oracle will update those DLLs, but I would not bet on that.

There are more issues related to COM; you can check further examples on the Wikipedia page about COM.

Examples

There are several examples available on the internet of using Siebel COM with VBA and even Perl. I'll restrict myself to an example with Windows PowerShell, since I could not find one when writing this article.

set-strictmode -version 2.0
$siebelApp = New-Object -ComObject "SiebelDataControl.SiebelDataControl.1" -Strict

$gateway = "siebdev"
$server = "siebdev"
$langCode = "ENU"
$enterprise = "SIEBEL"
$aom = "eFoobarObjMgr_enu"
$connAddress = "host=""siebel://$gateway/$enterprise/$aom"" Lang=""$langCode"""
$user = "foo"
$password = "bar"

try {

    if ($siebelApp -is 'System.__ComObject') {

        $siebelApp.EnableExceptions($true)
        Write-Host "connecting to" $connAddress
        [void] $siebelApp.Login($connAddress, $user, $password)
      
        if ( $siebelApp.GetLastErrCode() -ne 0 ) {
      
            write-host "Not possible to connect to" $connAddress
            throw $siebelApp.GetLastErrText()
      
        } else {
      
            write-host "Connected to server $server"
          
            try {
          
                $bo = $siebelApp.GetBusObject("Contact")
                $bcContact = $bo.GetBusComp("Contact")
                [void] $bcContact.ClearToQuery()
                [void] $bcContact.SetViewMode(3)
                [void] $bcContact.ActivateField("First Name")
                [void] $bcContact.ActivateField("Last Name")
                [void] $bcContact.ExecuteQuery(257)
                   
                if ($bcContact.FirstRecord()) {
              
                    do {
                  
                        $string = "First Name =" + $bcContact.GetFieldValue("First Name") + ", Last Name = " +  $bcContact.GetFieldValue("Last Name")
                        Write-Host $string                  
                  
                  
                    } while ($bcContact.NextRecord())

                } else {
              
                    Write-Warning "It was not possible to find any contact"
              
               }

            } catch {

                Write-Host $error[0].exception -ForegroundColor red

            } finally {

                remove-variable -name bcContact
                remove-variable -name bo

                [void] $siebelApp.Logoff()

            }

        }

    } else {

        throw "It was not possible to connect to Siebel"

    }

} catch {

    Write-Host $error[0].exception -ForegroundColor red

} finally {

    write-host "Releasing resources" -foregroundcolor green

    remove-variable -name bcContact -ErrorAction SilentlyContinue
    remove-variable -name bo -ErrorAction SilentlyContinue
    remove-variable -name siebelApp -ErrorAction SilentlyContinue

}
Let's comment on some details of this code.

The first thing is the "-Strict" option when creating the COM object: it is necessary so that a problem is raised if, for any reason, the COM object cannot be properly instantiated.

It is also a good idea to check whether the created object is really a COM object, since that avoids calling methods on an invalid object: -is 'System.__ComObject' does exactly that.

Another good idea is to turn on automatic exceptions from the Siebel Data Control: the EnableExceptions method does just that.

You will notice that some method invocations are prefixed with "[void]": those methods return something, but you usually don't want those values sent to standard output (which is what PowerShell does by default). Also, since automatic exceptions are turned on, you don't need to worry too much about checking method returns.

Unfortunately, Siebel COM will not give you constants for free. That's why you need to use their numeric values, as done with the ExecuteQuery() method, or define those constants yourself.
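For illustration, here is what such a self-made constants table can look like, sketched in Python; the numeric values are the ones commonly documented in the "Siebel Object Interfaces Reference", so double-check them against your Siebel version before relying on them:

```python
# Numeric constants from the "Siebel Object Interfaces Reference"; verify
# them against the documentation for your Siebel release.
EXECUTE_QUERY = {
    "ForwardBackward": 256,  # cursor can move both ways
    "ForwardOnly": 257,      # one-way cursor, as used in the example above
}

VIEW_MODE = {
    "SalesRepView": 0,
    "ManagerView": 1,
    "PersonalView": 2,
    "AllView": 3,            # as used in the example's SetViewMode(3)
}

print(EXECUTE_QUERY["ForwardOnly"], VIEW_MODE["AllView"])
```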

Everything else should be straightforward if you have already programmed with eScript. I made some effort to use try/catch/finally to check for exceptions, log off automatically and release the created objects with the Remove-Variable cmdlet, but during my tests, turning on tracing with the Trace method did not show any difference from using such schemes (to release objects), at least in this example.

Get your feet wet

If you know how to program in anything that supports COM, do yourself a favor and stop using EIM for simple things like updating 50 accounts (or, worse, doing it by hand). Automation with Siebel COM is a real option; just don't push it too far by throwing thousands of requests or records at it.

Thursday, January 17, 2013

Getting into Siebel with Perl

Inspired by the post from Jason Brazile (http://jbrazile.blogspot.com.br/2008/03/siebel-com-programming-with-perl.html), I decided to put together a more "Perlish" working example.

This is the first working test that I got: basically the same example given by Jason, with some interesting differences:

use strict;
use warnings;
use Siebel::COM::App::DataServer;
use feature 'say';

my $schema = {
    'Contact' => [
        'Birth Date',
        'Created',
        'Created By Name',
        'Email Address',
        'Fax Phone #',
        'First Name',
        'Id',
        'Job Title',
        'Last Name',
        'Updated',
        'Updated By Name',
        'Work Phone #'
    ]
};

my $sa;

eval {

    $sa = Siebel::COM::App::DataServer->new(
        {
            cfg         => 'C:/Siebel/8.1/Client_1/BIN/ptb/scomm.cfg',
            data_source => 'ServerDataSrcDEV',
            user        => 'foobar',
            password    => 'foobar'
        }
    );

    my ( $bo, $bc, $key, $field, $moreResults );

    $sa->login();

    foreach $key ( keys(%$schema) ) {

        $bo = $sa->get_bus_object($key);
        $bc = $bo->get_bus_comp($key);

        foreach $field ( @{ $schema->{$key} } ) {

            $bc->activate_field($field);

        }

        $bc->clear_query();
        $bc->query();
        $moreResults = $bc->first_record();

        while ($moreResults) {

            printf("-----\n");

            foreach $field ( @{ $schema->{$key} } ) {

                printf( "%40s: %s\n", $field, $bc->get_field_value($field) );

            }

            $moreResults = $bc->next_record();
        }
    }

};

if ($@) {

    warn $@;
    die $sa->get_last_error() if ( defined($sa) );

}


You might be wondering: "where the heck is Win32::OLE?". Well, the beauty of this script is the usage of the Siebel::COM distribution, freshly available at http://code.google.com/p/siebel-com/, which provides syntactic sugar and automatic error checking, built basically on Moose alone.

Of course there is still a lot to implement, and the code available in the SVN repository doesn't have tests or POD documentation yet. But it is a good start, and I'm positive it will be available on CPAN by the end of the month (although implementing mock objects for the tests will be a problem).