OpenText Intelligent Viewing 25.4 Linux install guide

This is a step-by-step guide to install Intelligent Viewing 25.4 in WSL2 using the Rocky Linux 9 image with PostgreSQL 17.

As a prerequisite, you’ll need a valid license; otherwise, the services will fail to start properly.

The first step is to prepare the database. Since I’m using the same VM from the Documentum 25.4 PostgreSQL 17 on Rocky Linux 9.6 (WSL2) Install Guide, you can create an additional “iv” user/database following the steps detailed in the OTDS 24.4 on Rocky Linux 9.5 (WSL2) Install Guide. Just be aware that you also need to create the “pgcrypto” extension.
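
As a reference, a minimal sketch of that preparation via psql (the user/database name “iv” comes from above; the password and the choice to skip a dedicated tablespace are my assumptions):

[postgres@localhost ~]$ psql
postgres=# CREATE USER iv WITH PASSWORD 'dmadmin';
postgres=# CREATE DATABASE iv WITH OWNER = iv ENCODING = 'UTF8' CONNECTION LIMIT = -1;
postgres=# GRANT ALL PRIVILEGES ON DATABASE iv TO iv;
postgres=# \c iv
iv=# CREATE EXTENSION pgcrypto;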

Once the database is ready, we can start installing the required applications and packages such as Node.js, npm, etc. (Java is already present from the Documentum Content Server installation):

[root@localhost ~]# yum -y install nodejs npm mesa-libGL mesa-libGLU Xvfb logrotate
[root@localhost ~]# node -v
[root@localhost ~]# npm -v

Once we’ve checked that node and npm are installed and working, we have to install RabbitMQ, and for that we first need the Erlang dependency, so let’s add the RabbitMQ and Erlang repositories:

[root@localhost ~]# vi /etc/yum.repos.d/rabbitmq.repo
# In /etc/yum.repos.d/rabbitmq.repo

##
## Zero dependency Erlang RPM
##

[modern-erlang]
name=modern-erlang-el9
# Use a set of mirrors maintained by the RabbitMQ core team.
# The mirrors have significantly higher bandwidth quotas.
baseurl=https://yum1.rabbitmq.com/erlang/el/9/$basearch
        https://yum2.rabbitmq.com/erlang/el/9/$basearch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[modern-erlang-noarch]
name=modern-erlang-el9-noarch
# Use a set of mirrors maintained by the RabbitMQ core team.
# The mirrors have significantly higher bandwidth quotas.
baseurl=https://yum1.rabbitmq.com/erlang/el/9/noarch
        https://yum2.rabbitmq.com/erlang/el/9/noarch
repo_gpgcheck=1
enabled=1
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md


##
## RabbitMQ Server
##

[rabbitmq-el9]
name=rabbitmq-el9
baseurl=https://yum2.rabbitmq.com/rabbitmq/el/9/$basearch
        https://yum1.rabbitmq.com/rabbitmq/el/9/$basearch
repo_gpgcheck=1
enabled=1
# Cloudsmith's repository key and RabbitMQ package signing key
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

[rabbitmq-el9-noarch]
name=rabbitmq-el9-noarch
baseurl=https://yum2.rabbitmq.com/rabbitmq/el/9/noarch
        https://yum1.rabbitmq.com/rabbitmq/el/9/noarch
repo_gpgcheck=1
enabled=1
# Cloudsmith's repository key and RabbitMQ package signing key
gpgkey=https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key
       https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
gpgcheck=1
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
pkg_gpgcheck=1
autorefresh=1
type=rpm-md

Now we can install the dependency:

[root@localhost ~]# dnf update -y
[root@localhost ~]# dnf install erlang -y

And install RabbitMQ:

[root@localhost ~]# wget https://github.com/rabbitmq/rabbitmq-server/releases/download/v4.2.2/rabbitmq-server-4.2.2-1.el8.noarch.rpm
[root@localhost ~]# rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc
[root@localhost ~]# dnf install ./rabbitmq-server-4.2.2-1.el8.noarch.rpm -y

Now configure RabbitMQ:

[root@localhost ~]# rabbitmq-plugins enable rabbitmq_management
Enabling plugins on node rabbit@localhost:
rabbitmq_management
The following plugins have been configured:
rabbitmq_management
rabbitmq_management_agent
rabbitmq_web_dispatch
Applying plugin configuration to rabbit@localhost…
The following plugins have been enabled:
rabbitmq_management
rabbitmq_management_agent
rabbitmq_web_dispatch

started 3 plugins.

[root@localhost ~]# rabbitmqctl add_user mqadmin admin
[root@localhost ~]# rabbitmqctl set_user_tags mqadmin administrator
[root@localhost ~]# rabbitmqctl set_permissions -p / mqadmin ".*" ".*" ".*"

[root@localhost ~]# systemctl enable rabbitmq-server
Created symlink /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service → /usr/lib/systemd/system/rabbitmq-server.service.
[root@localhost ~]# systemctl start rabbitmq-server

Now, you should be able to log in to RabbitMQ’s management console at http://localhost:15672 with the user just created.
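
If you want to verify it from the command line as well, a quick call to the management API with the mqadmin user should return a JSON array that includes the default “/” vhost (just a sanity check, not required by the installer):

[root@localhost ~]# curl -s -u mqadmin:admin http://localhost:15672/api/vhosts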

Next, copy your OT IV license over the existing INTELLIGENT_VIEWING.lic file in the installer folder. Once the license is in place, run the OTDSConfig program, which will upload the license and create the required configuration in OTDS:

[dmadmin@localhost /opt/documentum/sw/iv]$ ./OTDSConfig -u http://localhost:8180/otdsws -a <otds admin user> -p <otds admin password>

This will generate a _default_Auth.properties file with the following info:

LICENSE_RESOURCE=<value>
PUBLICATION_AUTH_CLIENT_ID=iv-publication
PUBLICATION_AUTH_CLIENT_SECRET=<value>
PUBLISHER_AUTH_CLIENT_ID=iv-publisher
PUBLISHER_AUTH_CLIENT_SECRET=<value>
SEARCH_AUTH_CLIENT_ID=iv-search
SEARCH_AUTH_CLIENT_SECRET=<value>

Take note of the generated client IDs and secrets, as well as the license resource value. Next, edit the IntelligentViewing_Linux.properties file. The important values to modify are the following:

ALLOW_DEFERRED_CONFIGURATION=false
JAVA_HOME=/opt/documentum/jdk-21.0.9+10
DEFAULT_HOST=localhost
OTDS_ORIGIN=http://localhost:8180/otdsws
OTDS_AUTH_FILE_PATH=/opt/documentum/sw/iv/_default_Auth.properties

LICENSE_RESOURCE=<value>
OT_INTEGRATION=true

DEFAULT_DB_HOST=localhost
DEFAULT_DB_PORT=5432
DEFAULT_DB_NAME=iv
DEFAULT_DB_USER=iv
DEFAULT_DB_PWD=<password>
DEFAULT_DB_USE_SSL=false
DEFAULT_DB_PROVIDER=postgresql

RMQ_HOST=localhost
RMQ_PORT=5672
RMQ_USER=mqadmin
RMQ_PWD=admin

PUBLISHER_AUTH_CLIENT_ID=iv-publisher
PUBLISHER_AUTH_CLIENT_SECRET=<value>
PUBLICATION_AUTH_CLIENT_ID=iv-publication
PUBLICATION_AUTH_CLIENT_SECRET=<value>
SEARCH_AUTH_CLIENT_ID=iv-search
SEARCH_AUTH_CLIENT_SECRET=<value>

Everything else should be fine with the existing values, so now we just have to run the installer (note that the subscription-manager command below only applies to RHEL and can be skipped on Rocky Linux):

[root@localhost ~]# subscription-manager repos --enable rhel-7-server-optional-rpms

[root@localhost ~]# ./IV.Installer --install ./rpm --prereqs --configure ./IntelligentViewing_Linux.properties -l ./log.txt

[root@localhost ~]# chown -R otiv:otiv /home/otiv/.opentext

Hopefully the installer will finish correctly and IV will be installed. Logs can be checked in /var/log/opentext, and we can also use the console application to check the status or adjust the configuration:

[root@localhost ~]# /opt/opentext/ConfigurationTool/config-tool.sh

Now we should check whether this works. In the past, IV included a demo application (metis), but that is no longer the case. However, there’s a publicly available sample application on GitHub: https://github.com/opentext/ivsa

Download the code, and create a .env file on the root of the project with the following values:

OTDS_ORIGIN=http://localhost:8180
OAUTH_CLIENT=test-iv
PUBLICATION_AUTHORITY=http://localhost:3356
MARKUP_AUTHORITY=http://localhost:3352
HIGHLIGHT_AUTHORITY=http://localhost:3357
VIEWER_AUTHORITY=http://localhost:3358
FILESTORE_PLUGIN=httpurl

This configuration allows testing a file provided by URL. To do this, I just copied the sample documents provided with the application into one of the Tomcat ROOT webapps so they can be accessed via a browser.
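
Something along these lines (the ivsa checkout location and sample folder are assumptions; the target is whichever Tomcat answers on port 8180 in this setup):

[root@localhost ~]# cp /opt/documentum/sw/ivsa/samples/* <otds tomcat>/webapps/ROOT/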

Now we need to create an OTDS OAuth client named “test-iv” and add the following values to the permissible scopes and default scopes:

view_publications
create_publications

Now we can build the application:

npm install
npm run build

As usual, the scripts are Windows-encoded, so before running the ivsa command make sure to run dos2unix on all the files in the project, otherwise it won’t run. Finally, try executing the ivsa application against one of the files copied to Tomcat, providing a user/password that exists in OTDS and is allocated to the IV license:

./ivsa -u <user> -p <pass> -V http://localhost:8180/wilhome.dwg

And you should see the document rendered in the sample viewer.

Documentum 25.4 PostgreSQL 17 on Rocky Linux 9.6 (WSL2) Install Guide

This is a step-by-step guide to install Documentum 25.4 in WSL2 using the Rocky Linux 9 (closest to the supported RH9) image with PostgreSQL 17. This version comes with some changes:

  • OT has repackaged the installers into zip files (and rebranded the products for the 100,000th time):
    • Documentum CM runstack:
      • Content Server PostgreSQL and Oracle
      • Documentum Administrator
      • Process Engine
      • Workflow Designer
      • Thumbnail Server
      • iJMS
    • Documentum CM experience:
      • Documentum Clients
        • D2 (now called “Classic”) and SDK
        • D2-Config
        • D2-Smartview and SDK
        • Administration Console
        • D2-REST and SDK
        • D2 Mobile
        • D2 Plugins (C2, O2, etc.)
    • Documentum CM API and Dev tools:
      • Composer
      • DFC
      • DFS and SDK
      • DCTM-REST and SDK
      • CMIS
      • Connector for Core
  • Documentum Search is no longer available (until at least 2027) because of performance and result inconsistency issues
  • Tomcat supported version is now 11 (will OT ever update xPlore’s Tomcat?)
  • The JMS Tomcat no longer has the version in the folder path (it is now $DOCUMENTUM/tomcat)
  • OTDSAuthenticator is no longer part of the JMS; it now runs as an HTTP service listening on port 8400. The binaries/configuration are located in $DM_HOME/OTDSAuthLicenseHttpServerBin and otdsauth.log is now located in $DOCUMENTUM/dba/log
  • We have the new admin console which in theory should replace DA (one of these years :D). As we have seen for many years with D2, everything comes “preconfigured” for Windows (dfc.properties points to “C:\Documentum\config\dfc.properties” in the Linux packages). In a “default” installation you can skip deploying this, as it has no use.
  • Clients (D2) now have an installer similar to the CS one, but OT still can’t properly configure log paths (Engineering must never have heard of the catalina.base/home variables. Will they ever stop making all log files on Linux point to C:\Logs\xxx.log?)

Initial Configuration

I will not go through the basic configuration, as you can follow the steps detailed for Documentum 25.2. You just need to use the proper Java version and modify the environment variables that change ($DM_HOME, $DM_JMS_HOME, $JAVA_HOME).
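
For reference, the values that change in dmadmin’s .bash_profile would look roughly like this (the JDK folder name matches the one referenced in the Intelligent Viewing 25.4 guide above; adjust it to the build you actually extracted):

DM_HOME=$DOCUMENTUM/product/25.4
export DM_HOME

DM_JMS_HOME=$DOCUMENTUM/tomcat
export DM_JMS_HOME

JAVA_HOME=/opt/documentum/jdk-21.0.9+10
export JAVA_HOME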

Also remember to start the OTDS authentication HTTP service if you want to use licenses; otherwise you will only be able to log in to DA. In case you want to set up licensing, you can follow the steps from the OTDS licensing configuration post.

Client installation

You’ll need to unzip the clients package zip and run ./Documentum-Client-Installer.bin:

It looks like we finally have support for configuring multiple repositories.

After the installation is done you’ll have the war files ready to be deployed (a rough sketch follows the list):

  • Drop the files to your Tomcat 11 installation
  • Update dfc.properties and log4j2 and logback configuration (as everything will be writing to C:\xxx)
  • Register dfc.keystores as approved clients
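
A rough sketch of the deployment step (the war names and the Tomcat 11 location are assumptions; adapt them to the clients you actually deploy, and remember the logging files also need to stop pointing to C:\ paths):

[dmadmin@desktop ~]$ cp D2.war D2-Config.war D2-REST.war /opt/tomcat11/webapps/
[dmadmin@desktop ~]$ vi /opt/tomcat11/webapps/D2/WEB-INF/classes/dfc.properties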

Documentum xPlore vs. Tomcat 10

If you are registered on the OpenText site you’ve probably received this week the notification about the status of xPlore:

  • xPlore 22.1 will be the last version of xPlore, with Patch 12 being the supported version
  • Documentum search is now officially listed as the replacement of xPlore
  • Support for xPlore will end in 2027 (+ extended)

If you have checked the xPlore 22.1 patches “recently”, you’ll have realized that OpenText moved from WildFly to Tomcat 9 during these patches (on Patch 04 or 05, if I remember correctly), as well as moving to JDK 17. However, despite P12 supporting JDK 21, xPlore still uses Tomcat 9 (remember that the latest branch is Tomcat 11, with Tomcat 10 being the version considered “standard” as of today). This situation is strange because the whole Documentum stack is running on Tomcat 10… except this. And despite having KBs on how to upgrade Tomcat between minor versions (KB0833958 and KB0833423), it is still weird that this has not been done yet.

It happens that I had opened a case just a few days before getting the notification mail, asking about the status of xPlore and Tomcat 10 (due to Tomcat 9 vulnerability topics), and got this answer:

"Moving to Tomcat 10.x would need upgrading code compilation to Java17. 
This is a larger effort which is not planned." 

So, not believing this statement (we recently upgraded everything to Java 17 and it was basically a matter of replacing javax.* packages with jakarta.* packages, hardly a “larger effort”), I decided to check how difficult this would really be for the 3 web applications involved (dsearch, dsearchadmin and indexagent).

First, I had to install xPlore 22.1 locally, then upgrade it to 22.1 P12 and finally create the PrimaryDsearch and IndexAgent instances. So far, so good.

So let’s try this:

  • Download the latest Tomcat 10.1 (currently 10.1.47)
  • Unpack the Tomcat 10.1 package twice (PrimaryDsearch_tomcat10.1.47 and Indexagent_tomcat10.1.47)
  • For PrimaryDsearch instance:
    • Copy PrimaryDsearch_tomcat9.0.100/admindata
    • Copy PrimaryDsearch_tomcat9.0.100/dctmInfo.properties
    • Copy PrimaryDsearch_tomcat9.0.100/webapps (dsearch and dsearchadmin)
    • Copy PrimaryDsearch_tomcat9.0.100/bin scripts (start/stopPrimarySearch and dctmServerStatus.sh)
    • Update PrimaryDsearch_tomcat10.1.47/admindata/admindb/XhiveDatabase.bootstrap with Tomcat 10 path
    • Update PrimaryDsearch_tomcat10.1.47/conf/server.xml to use port 9300
    • Update copied scripts to bin folder with Tomcat 10.1 path
    • Update the logging configuration with the Tomcat 10.1 path
  • For IndexAgent instance:
    • Copy Indexagent_tomcat9.0.100/dctmInfo.properties
    • Copy Indexagent_tomcat9.0.100/webapps (IndexAgent)
    • Copy Indexagent_tomcat9.0.100/bin scripts (start/stopPrimarySearch and dctmServerStatus.sh)
    • Update Indexagent_tomcat10.1.47/webapps/IndexAgent/WEB-INF/classes/indexagent.xml with Tomcat 10.1 path
    • Update Indexagent_tomcat10.1.47/conf/server.xml to use port 9200
    • Update copied scripts to bin folder with Tomcat 10.1 path
    • Update the logging configuration with the Tomcat 10.1 path

Then we start both Tomcat 10 instances and… nothing 😀 As expected, the javax.* references make this unable to work with Tomcat 10. So what can we do? The “normal” approach would be taking the original source code, replacing javax.* with jakarta.* and updating the library dependencies, but a) I do not have the original source code and b) I do not want to be decompiling all the jars. So is it really a large effort? I do not think so.

Apache provides a migration tool that can modify all the references to javax.* in all the files of your application. So… what if we give it a try? We just need to download the tool and execute it.
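
The tool in question is the Apache Tomcat migration tool for Jakarta EE, whose binary distribution ships a migrate.sh script; something like this should fetch it (the version and mirror URL are assumptions, so check the Tomcat downloads page for the current release):

wget https://dlcdn.apache.org/tomcat/jakartaee-migration/v1.0.9/jakartaee-migration-1.0.9-bin.tar.gz
tar -xzf jakartaee-migration-1.0.9-bin.tar.gz

Then, from the folder containing migrate.sh, convert each webapp: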

./migrate.sh /opt/xplore/PrimaryDsearch_tomcat10.1.47/webapps/javax_dsearch /opt/xplore/PrimaryDsearch_tomcat10.1.47/webapps/dsearch
./migrate.sh /opt/xplore/PrimaryDsearch_tomcat10.1.47/webapps/javax_dsearchadmin /opt/xplore/PrimaryDsearch_tomcat10.1.47/webapps/dsearchadmin
./migrate.sh /opt/xplore/IndexAgent_tomcat10.1.47/webapps/javax_IndexAgent /opt/xplore/IndexAgent_tomcat10.1.47/webapps/IndexAgent

And after a few seconds the tool will finish converting the applications; we just have to delete the javax_xxx folders and start PrimaryDsearch and IndexAgent. And what happens now?

Dsearchadmin:

IndexAgent:

IndexAgent status from DA:

Search results:

Total time for this: ~20 min, give or take. Considering that writing this post has probably taken me longer than that, I’m not sure we can qualify this as a “large effort” 😀

Documentum D2 LoadOnStartup performance improvements

If you’re using D2 (or Smartview) you have probably realized that the LoadOnStartup parameter, which is required for certain features (i.e., the application list), also makes the D2 startup slower.

Depending on the number of repositories/applications, this startup time can be very long or extremely long (we’re talking about a web application). As an example, our servers take ~5-7 minutes to complete the startup of D2, but even “funnier” is the situation on our laptops, where (thanks to all the security AV, inventory tools and such) the same war file from dev/qs/prod can take ~20 minutes to start.

How is this possible? Well, I asked myself the same question when I accidentally had to check some code on some classes related to this topic. What I found was that what LoadOnStartup does is basically this:

  • Populates the type cache
  • Populates the labels for the types
  • Caches the configuration objects (so if you have a lot, this will take some “seconds”)
  • Populates the labels for every attribute and language (and this can even take minutes depending on how many languages/attributes you have, as it happens for everything, without even excluding internal attributes that are not used)

You can see these in the following excerpt from D2.log in full debug mode:

Refresh cache for type : dm_document
Refresh cache for type : d2_pdfrender_config
...
Refresh format label cache
...
Refresh xml cache for object d2_portalmenu_config with object_name MenuContext
...
Refresh en attribute label cache
..

Is this really a problem? Well, not “by default”, but if you have 5-6 repositories with many types and many languages, this becomes… slow.

So, taking a look at the code, we saw that most of the time (or the most significant chunk) was spent on the attribute label cache, which would stall for 30-40 seconds for each language. This code is located in the com.emc.common.dctm.bundles.DfAbstractBundle.loadProperties method:

query.setDQL(dql);
col = query.execute(newSession, 0);

while (col.next()) {
    String name = col.getString("name");
    String label = col.getString("label");
    if (col.hasAttr("prefix")) {
        String prefix = col.getString("prefix");
        StringBuffer tmpName = new StringBuffer(prefix);
        tmpName.append('.');
        tmpName.append(name);
        name = tmpName.toString();
    }
    // ... rest of the loop stores the name/label pair in the bundle
}

This DQL is "select type_name as name, label_text as label from dmi_dd_type_info where nls_key = 'en'", which can return tens of thousands of results, and it is executed for each language configured in the repository. This code is called from the com.emc.d2fs.dctm.servlets.init.LoadOnStartup class:

int count = docbaseConfig.getValueCount("dd_locales");
for (int i = 0; i < count; i++) {
      String strLocale = docbaseConfig.getRepeatingString("dd_locales", i);
      DfSessionUtil.setLocale(session, LocaleUtil.getLocale(strLocale));
      LOG.debug("Refresh {} type label cache", strLocale);
      DfTypeBundle.clearCache(session);
      DfTypeBundle.getBundle(session);
      LOG.debug("Refresh {} attribute label cache", strLocale);
      DfAttributeBundle.clearCache(session);
      DfAttributeBundle.getBundle(session);
}

The getBundle method is what ends up running that query for the labels… So, improvement possibilities? A clear one: multithreading. We modified this block of code to run with multiple threads (one per language), and what happened? We cut down the startup time by 2-3 minutes. Fantastic, right? Yes 🙂 but then we thought: the log clearly shows that LoadOnStartup is a sequential process that repeats the same steps for each repository… so could we run the “repository initialization” in parallel? Let’s see:

Iterator<IDfSession> iterator = (Iterator<IDfSession>) object;
while (iterator.hasNext()) {
    IDfSession session = iterator.next();
    try {
        if (session != null) {
            refreshCache(session);
            if (cacheBocsUrl)
                loadBocsCache(session);
        }
    } finally {
        try {
            if (session != null)
                session.getSessionManager().release(session);
        } catch (Exception exception) {}
    }
}

This block of code is what initiates the LoadOnStartup process for each repository via the “refreshCache” method. So what happens if we also add multithreading to this block of code? Well, it works:

[pool-5-thread-2] - c.e.d.d.s.i.LoadOnStartup[   ] : Refresh cache for type : d2_subscription_config
[pool-5-thread-1] - c.e.d.d.s.i.LoadOnStartup[   ] : Refresh cache for type : d2_toolbar_config
[pool-5-thread-3] - c.e.d.d.s.i.LoadOnStartup[   ] : Refresh cache for type : d2_attribute_mapping_config
[pool-5-thread-2] - c.e.d.d.s.i.LoadOnStartup[   ] : Refresh cache for type : d2_sysobject_config

You can see how the type cache is populated in parallel by using a different thread for each repository. And what about the times? Well, these are the times for the “normal” startup, the parallel loading of labels, and the parallel loading of the repository configuration and labels, taken on my laptop:

[1271387] milliseconds
[1058355] milliseconds
[435735] milliseconds

So, the original startup time without touching anything: 21 minutes. Multithreading the attribute label cache: 17 minutes. Full multithreading: 7 minutes.

I don’t know, but maybe the OT guys should take a look at this and consider a “performance improvement” patch for D2/Smartview…

Documentum 25.2 PostgreSQL 17 on Rocky Linux 9.6 (WSL2) Install Guide

This is a step-by-step guide to install Documentum 25.2 in WSL2 using the Rocky Linux 9 (closest to the supported RH9) image with PostgreSQL 17. This version comes with some changes:

  • Due to the deprecation of MS SMTP basic authentication, the installer now allows using an M365/MSGraph account for setting up mail.

Initial Configuration

There’s no official image on Microsoft Store so you’ll need to download the container image from the Rocky Linux page. Once this is done, you can import the image:

wsl --import RockyLinux9Dctm252 c:\Users\aldago\rockylinux252\ d:\dctm\dctm252\Rocky-9-Container-Base.latest.x86_64.tar.xz

After importing the image, we can log in (as root) and start the basic configuration of the server:

[root@desktop ~]# yum -y update
[root@desktop ~]# yum -y install sudo procps tcl expect libXtst xterm libxcrypt-compat

[root@desktop ~]# adduser dmadmin
[root@desktop ~]# passwd dmadmin
Changing password for user dmadmin.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

[root@desktop ~]# usermod -aG wheel dmadmin
[root@desktop ~]# su - dmadmin
[dmadmin@aldago-desktop ~]$ sudo vi /etc/wsl.conf
[boot]
systemd=true

PostgreSQL Configuration

First we need to install PostgreSQL:

[root@desktop ~]# dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[root@desktop ~]# dnf -qy module disable PostgreSQL

[root@desktop ~]# dnf install -y postgresql17-server

[root@desktop ~]# /usr/pgsql-17/bin/postgresql-17-setup initdb
Initializing database ... OK

[root@desktop ~]# systemctl start postgresql-17
[root@desktop ~]# systemctl enable postgresql-17
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql-17.service → /usr/lib/systemd/system/postgresql-17.service.
[root@desktop ~]# systemctl status postgresql-17

[root@desktop ~]# su - dmadmin
[dmadmin@desktop ~]$ sudo su - postgres
[sudo] password for dmadmin:
[postgres@desktop ~]$ psql
psql (17.5)
Type "help" for help.

postgres=# \password postgres
Enter new password for user "postgres":
Enter it again:
postgres=# exit
[postgres@desktop ~]$ exit
logout

[dmadmin@desktop ~]$ sudo passwd postgres
Changing password for user postgres.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

Next, we can install pgAdmin for easier administration:

[root@desktop ~]# dnf -y install yum-utils
[root@desktop ~]# rpm -i https://ftp.postgresql.org/pub/pgadmin/pgadmin4/yum/pgadmin4-redhat-repo-2-1.noarch.rpm
warning: /var/tmp/rpm-tmp.CQz9dX: Header V4 RSA/SHA256 Signature, key ID 210976f2: NOKEY
[root@desktop ~]# yum-config-manager --disable pgdg-common
Error: No matching repo to modify: pgdg-common.
[root@desktop ~]# dnf update -y                                                                            
[root@desktop ~]# dnf install pgadmin4 -y

[root@desktop ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@desktop ~]# systemctl status httpd

[root@desktop ~]# /usr/pgadmin4/bin/setup-web.sh
Setting up pgAdmin 4 in web mode on a Redhat based platform...
Creating configuration database...
NOTE: Configuring authentication for SERVER mode.
Enter the email address and password to use for the initial pgAdmin user account:
Email address: dmadmin@dmadmin.dmadmin
Password:
Retype password:
You can now start using pgAdmin 4 in web mode at http://127.0.0.1/pgadmin4

And finally, we can configure the odbc connection:

[root@desktop ~]# dnf install https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[root@desktop ~]# yum -y install postgresql17-odbc.x86_64 unixODBC.x86_64
[root@desktop ~]# vi /etc/odbc.ini
[MyPostgres]
Description=PostgreSQL
Driver=PostgreSQL
Database=postgres                                                                                                       
Servername=localhost
UserName=postgres
Password=dmadmin
Port=5432                                                                                                               
Protocol=17
ReadOnly=No
RowVersioning=No
ShowSystemTables=No
ShowOidColumn=No                                                                                                        
FakeOidIndex=No
UpdateableCursors=Yes
DEBUG=Yes

[root@desktop ~]# vi /etc/odbcinst.ini
[PostgreSQL]
Description     = ODBC for PostgreSQL
Driver          = /usr/pgsql-17/lib/psqlodbcw.so
Driver64        = /usr/pgsql-17/lib/psqlodbcw.so
Setup64         = /usr/lib64/libodbcpsqlS.so.2
FileUsage       = 1

[root@desktop ~]# su - dmadmin
[dmadmin@desktop ~]$ isql -v MyPostgres
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>

Documentum Server

First, we need to create the DB folder:

[dmadmin@desktop ~]$ su - postgres
Password:
[postgres@desktop ~]$ mkdir /var/lib/pgsql/17/data/db_dctm252_dat.dat

The Documentum folders and the JDK (stick to the supported 21 version and remember to remove anon from the disabled algorithms to avoid issues; see the note below):

[dmadmin@desktop ~]$ sudo mkdir -p /opt/documentum/sw && sudo mkdir -p /opt/documentum/product/25.2
[dmadmin@desktop ~]$ sudo chown -R dmadmin:dmadmin /opt/documentum
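
About the “anon” note above: it refers to the jdk.tls.disabledAlgorithms line in the JDK’s java.security file. A sketch, assuming the JDK was extracted under /opt/documentum:

[dmadmin@desktop ~]$ vi /opt/documentum/jdk-21.0.5+11/conf/security/java.security
# on the jdk.tls.disabledAlgorithms line, remove the anon (and, if present, NULL) entries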

Add environment variables to .bash_profile:

[dmadmin documentum]$ vi ~/.bash_profile
DOCUMENTUM=/opt/documentum
export DOCUMENTUM

DM_HOME=$DOCUMENTUM/product/25.2
export DM_HOME

DM_JMS_HOME=$DOCUMENTUM/tomcat10.1.39
export DM_JMS_HOME

POSTGRESQL_HOME=/usr/pgsql-17
export POSTGRESQL_HOME

JAVA_HOME=/opt/documentum/jdk-21.0.5+11
export JAVA_HOME

PATH=$PATH:$DM_HOME/bin:$POSTGRESQL_HOME/bin:$HOME/.local/bin:$HOME/bin:$JAVA_HOME/bin:$DOCUMENTUM/dba
export PATH

LC_ALL=C
export LC_ALL

LD_LIBRARY_PATH=$POSTGRESQL_HOME/lib:$DM_HOME/bin:/usr/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH

export DM_CRYPTO_MIN_PASSWORD_LENGTH=8

DISPLAY=:0
export DISPLAY

export PS1='[\u@\h \w]\$ '

Reserve ports and configure limits.conf:

[dmadmin ~]$ sudo vi /etc/services
dctm252 50000/tcp # dctm 25.2 repo
dctm252_s 50001/tcp # dctm 25.2 repo

[dmadmin ~]$ sudo vi /etc/security/limits.conf
dmadmin - core -1

[dmadmin@desktop /opt/documentum]$ sudo ln -s /usr/lib64/libsasl2.so.3.0.0 /usr/lib64/libsasl2.so.2
[dmadmin@desktop /opt/documentum]$ sudo ln -s /usr/lib64/libssl.so.3 /usr/lib64/libssl.so.10

And now you can simply install content server normally 🙂

Documentum 24.4 OTDS licensing configuration

Starting with Documentum 24.4, OTDS is now mandatory for managing licenses, meaning that you won’t be able to use any client without a license (not even with dmadmin). OpenText has a white paper on the support page called “OpenText™ Documentum 24.4 – OTDS Configuration” that you should check (it has had several iterations with several updates).

This post will cover the minimal steps required to set up a development environment (after you request a development license), as these are… quite some steps (I wonder how this is supposed to work in a cloud environment… you deploy your wonderful Kubernetes cluster and… nothing works until you manually perform all these steps?).

Following the white paper, the first step is creating a new non-synchronized partition, which will later be populated with the repository inline users (I do not even want to ask what happens when you have more than one repository and these users have different passwords!):

After this, you can ignore the creation of the resource, as it is not required to create the users via OTDS (if you’re going to use dmadmin or just a few users, you probably already have these in the repository and they will be “imported” into OTDS, so the resource is not needed).

Then, you need to import the license provided by OT support.

Now that we’re done with the “prerequisites”, we need to create what OT calls the “business admin user”, which is basically the user DCTM will use to connect to OTDS and check the license. This user needs to be created first in OTDS and then added to the “otdsbusinessadmins” group.

After this, we need to create the user again in Documentum. For this, the guide suggests using the following API script:

create,c,dm_otds_license_config
set,c,l,otds_url
http://localhost:8180/otdsws/rest
set,c,l,license_keyname
dctm-xplan
set,c,l,business_admin_name
licenseuser
set,c,l,business_admin_password
<password>
save,c,l
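
You can run that script with iapi against the repository, for example (the script file name is just an example):

[dmadmin@desktop ~]$ iapi <repository> -U<install owner> -P<password> -Rotds_license_config.api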

Once this is created we need to allocate the existing license to an OTDS partition. With this step the “initial setup” is done and you should see something like this on DA:

Now we need to create the inline users (dmadmin) in the OTDS partition so they are “licensed” to use applications. For some reason (lack of knowledge, I guess), OT overcomplicates things: not only do you have to run a command-line Java tool, but the documentation forces you to create a copy of dfc.properties in the dfc folder (??!!??!!) and to explicitly declare the Java environment variables… which should already be present on the system (otherwise Documentum wouldn’t be working properly… though I can imagine that happening to the talented team :D).

So instead of following the instructions on the white paper, just run the following command:

java -Ddfc.properties.file=$DOCUMENTUM/config/dfc.properties -cp "$DOCUMENTUM/dfc/*" com.documentum.fc.tools.MigrateInlineUsersToOtds <repository> <install owner> <password> <non-sync Partition name>

which looks way better than the monster command suggested in the white paper.

Finally (yes, we’re almost done), you need to update the otdsauth.properties file under the JMS with the proper information, restart everything and, hopefully, you’ll now have a licensed development environment.

OTDS FAQ for Documentum

If you work with Documentum, sooner or later you’ll have to face that moment where you’re going to have to use OTDS in your system (as it is mandatory since Documentum 23.4). I will try to provide some insights and tips about OTDS and Documentum in this post.

OTDS is basically an IdP (an authentication/access control system) that OpenText created (having in mind “other” solutions more limited than Documentum) and that they have been pushing into many of the products of their portfolio.

What are the positive things that OTDS brings to the table when we are talking about Documentum?

  • Centralized authentication management (as in: configure your authentication in one place and reuse it for all components, removing the need to have different libraries/configurations in Webtop, D2, REST, etc.)
  • Centralized user/group management: I just don’t buy this, because it relies on companies having a “perfectly organized LDAP/AD”, which I’ve never ever seen. Even worse if we include application-level groups/roles here, where in Documentum you can have group structures defined that are years away from any existing configuration in any AD/LDAP (and I do not see anyone manually creating thousands of existing Documentum groups in OTDS).
  • Centralized licensing management (another push from other products; we will see how this really works, as I already expressed my concerns in a previous post)

Obviously, not everything is fantastic, and there are several topics you should be aware of:

  • An authentication system is totally outside the scope of ECM departments, meaning that no ECM expert is capable of properly evaluating, configuring or maintaining a system of this kind (have a talk with your cybersecurity/authentication team before doing anything!). Not only that, it can conflict with your company’s existing authentication policies.
  • OTDS is not a product (check OT’s product page) but it is considered a “component” of other products. What does this mean?
    • You’re using a critical product (it is what is supposed to handle access to your data) which is not “recognized” even by its own vendor
    • OT has a leading solution in this field, NetIQ, which came with Micro Focus, so it is not clear what their roadmap is regarding IdPs.
    • There’s no “product support” for OTDS; it is delegated to other product teams (i.e., Documentum support). Obviously, they have no idea about OTDS itself, and OT bureaucracy makes it highly complicated to get an answer when you have an issue.
  • OTDS is a single point of failure: if OTDS doesn’t work -> no user can work, even if everything else is up and running.
  • OTDS was conceived for other, much simpler OT products. As Documentum is kind of a Swiss army knife, OTDS greatly limits existing DCTM configurations (which makes this integration a challenge in certain environments)

So, given these topics, what scenarios can we find when integrating OTDS? Well, it depends on your existing system. I think most systems can be grouped into three categories:

  • Highly regulated / complex systems: You have your own IDP (EntraID, NetIQ, etc.), you also have your own system to handle access to Documentum (ie: automatic creation of users on repositories). This also includes multiple repositories in the organization, and many applications with many groups.
  • Small installations: Single repository approach, not many users, not many groups, still using user/password login
  • New systems / upgrades to a “clean” system

Based on this, what is the best (or only) approach to integrate OTDS in these scenarios?

  • Highly regulated / complex systems: Forget about documentation. You do not need resources, access roles or anything. Just use a synchronized partition with the required OAuth clients and delegate authentication to your existing IdP. Minimal configuration, minimal maintenance (other than getting this to work). OTDS here acts just as a proxy for getting the OTDS token that will be used by DCTM applications.
  • Small installations: This is the ideal scenario, as you’re using Documentum like some of the other, more limited products from OT, which is what OTDS was originally intended for. Probably your only effort will be manually configuring groups.
  • New systems / upgrades: You “should” try to use OTDS in the “expected” way. Be aware of several limitations coming from the lack of support for existing solutions in Documentum:
    • Multirepository configurations are a nightmare. Nobody seems to (or wants to) understand that you can have different users in different repositories, and this can be a challenge.
    • Mix of functional/inline accounts and users accounts can be a challenge as well.

Finally, some tips that you should consider when using OTDS:

  • As soon as it is configured, add your organization’s users responsible for OTDS to the admin group, disable the otds.admin account and force 2FA authentication (and manually remove the user/pass login from otds-admin). You do not want to expose the admin page to anybody (even less if you have this running on a public cloud), as this is a huge security risk.
  • Token TTL is a topic when dealing with timeouts. Until now you only had to worry about the web application timeout, but now the OTDS token can also expire, so the TTL should be something like TTL = timeout + a buffer of time: if a user stops using an application after 1 hour, and you have defined a 4-hour timeout on the application, your token needs to be valid for at least 5 hours.
  • When configuring D2, completely ignore the documentation. By no means mess with the tickets (who thought of this approach? who allowed this to be done??) or perform the configuration part that tells you to enable the “Global Registry encryption in other repositories”. This is no longer required since 23.4 P09 (you’re welcome) as it is a huge security risk (and I’m quoting the talented engineering team here: “a design flaw”, but they still seem to have forgotten to remove that section from the documentation).
  • Make sure you test all scenarios before going live or you’ll be in trouble, as fixing these “live” will be challenging:
    • Web interface authentication
    • DFC interface authentication
    • Token authentication
    • OTDS token generation from external authenticator token
    • Any existing user configuration authentication (LDAP, inline, functional accounts, direct url access to components, etc.)

Documentum Search 24.4 on Rocky Linux 9.5 (WSL2) Install Guide

This is a step-by-step guide to install Documentum Search 24.4 in WSL2 using the Rocky Linux 9 (closest to the supported RH9) image created in the previous post (or you can use a clean one for this). Documentum Search 24.4 requires:

  • Java being available on the system (this should already be done from the previous post)
  • It is recommended to modify the WSL2 configuration to use at least 16GB of RAM for the Linux subsystem (see the sketch below)
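
On the Windows side, that memory limit is set in the .wslconfig file of your Windows user profile (run wsl --shutdown afterwards for it to take effect); a minimal example:

# %USERPROFILE%\.wslconfig on the Windows host
[wsl2]
memory=16GB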

Initial Configuration

We will start by configuring the environment for root (using root is not the best approach, but it is “the easiest” way to deal with the provided installation scripts, as they require some root operations):

[root@desktop ~]# vi .bash_profile
export JAVA_HOME=/opt/java/jdk-17.0.12+7
export PATH=$PATH:$JAVA_HOME/bin
export JAVA_TOOL_OPTIONS="-Djdk.util.zip.disableZip64ExtraFieldValidation=true -Djava.locale.providers=COMPAT,SPI --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-exports=java.base/sun.security.provider=ALL-UNNAMED --add-exports=java.base/sun.security.pkcs=ALL-UNNAMED --add-exports=java.base/sun.security.x509=ALL-UNNAMED --add-exports=java.base/sun.security.util=ALL-UNNAMED --add-exports=java.base/sun.security.tools.keytool=ALL-UNNAMED"

ulimit -n 85000
ulimit -u 85000

Next, we will create required folders:

[root@desktop /opt]# mkdir search 
[root@desktop /opt]# mkdir 3rdParty
[root@desktop /opt]# mkdir /SSVC
[root@desktop /opt]# mkdir zkdata
[root@desktop /opt]# mkdir zkdatalog  

Finally, before starting to install/configure DCTM Search 24.4, we need some additional packages:

[root@desktop /opt]# yum -y install wget lsof ncurses procps

Documentum Search Installation

Copy the zip file with the installer to the Linux image, and unzip it along with all the included zip files. After everything is unzipped (and all .sh scripts have execution permission) we can start installing ZooKeeper, ActiveMQ and Solr using the provided scripts (which will automatically download the installers).
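
If the scripts lost the execute bit when unzipping, something like this fixes it (assuming everything was unpacked under /opt/search):

[root@desktop /opt]# find /opt/search -name "*.sh" -exec chmod +x {} \;

With that done, we download the three components: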

[root@desktop /opt/search/3rdParty/nix]# ./dwl-zk.sh
[root@desktop /opt/search/3rdParty/nix]# ./dwl-activemq.sh
[root@desktop /opt/search/3rdParty/nix]# ./dwl-solr.sh

[root@desktop /opt/search/3rdParty/nix]# ./cfg-zk.sh
Configuration Complete

[root@desktop /opt/search/3rdParty/nix]# ./cfg-activemq.sh
Creating ActiveMQ Artemis instance at: /opt/3rdParty/activemq/bin/mybroker

Auto tuning journal ...
done! Your system can make 1.29 writes per millisecond, your journal-buffer-timeout will be 776000

You can now start the broker by executing:

   "/opt/3rdParty/activemq/bin/mybroker/bin/artemis" run

Or you can run the broker in the background using:

   "/opt/3rdParty/activemq/bin/mybroker/bin/artemis-service" start

Activemq: Default message broker created!!
Done
Configuration Complete

[root@desktop /opt/search/3rdParty/nix]# ./cfg-solr.sh
Configuration Complete

And we need to start these components before installing Documentum Search:

[root@desktop /opt/search/3rdParty/nix]# ./start-all.sh

For Documentum Search, we need to run the installer providing the server’s DNS name/IP:

[root@desktop search]# ./documentumsearch-nix-install.sh localhost
/opt/search/silent_install.log
Installation started
This installation will delete/overwrite all data in /SSVC. Continue?
1) Yes
2) No
Select 1 or 2 :: 1

We have to configure dfc.properties:

[root@desktop search]# cd /SSVC/config/dfc
[root@desktop dfc]# vi dfc.properties
dfc.docbroker.host[0]=desktop
dfc.docbroker.port[0]=1489  

And the connection to the repository (be aware that the documentation fails to mention that the password needs to be encrypted):

[root@desktop dfc]# cd .. && cd config
[root@desktop config]# java -cp /opt/search/shared/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils <password dmadmin>

[root@desktop config]# vi docbase-details.properties
docbasename=dctm244
docbaseuser=dmadmin
docbasepwd=<encrypted password>

And finally run the repository configurator before starting all the services:

[root@desktop config]# ./configure-docbase.sh
User dmadmin successfully configured for docbase dctm244

Now is the moment to start all the components and check the provided urls to see if everything is working:

[root@desktop config]# cd .. && cd bin
[root@desktop bin]# ./start-cps.sh
Starting PARSER on port 9300. Check http://localhost:9300/actuator/info for details.

[root@desktop bin]# ./start-indexer.sh
Starting INDEXER on port 9200. Check http://localhost:9200/actuator/info for details.

[root@desktop bin]# ./start-coressvc.sh
Starting CORESSVC on port 9100. Check http://localhost:9100/actuator/info for details.

[root@desktop bin]# ./start-fetcher.sh
Starting FETCHER on port 8900. Check http://localhost:8900/actuator/info for details.

[root@desktop bin]# ./start-index-agent.sh
Starting INDEX_AGENT on port 8300. Check http://localhost:8300/actuator/info for details.

[root@desktop bin]# ./start-search-agent.sh
Starting SEARCH_AGENT on port 8200. Check http://localhost:8200/actuator/info for details.

[root@desktop bin]# ./start-admin.sh
Starting ADMIN on port 8820. Check http://localhost:8820/actuator/info for details.

You can check the admin page (only for the index agent; it doesn’t seem to have something like the dsearch admin) at http://localhost:8820 and log in with the default credentials (admin/password):

And you’ll see something like this:

OTDS 24.4 on Rocky Linux 9.5 (WSL2) Install Guide

This is a step-by-step guide to install OTDS 24.4 in WSL2 using the Rocky Linux 9 (closest to the supported RH9) image created in the previous post. OTDS requires:

  • Java being available on the system (this should already be done from the previous post)
  • An existing PostgreSQL database (we already have this as well from the previous post)
  • An existing/dedicated Tomcat server in the server

Initial Configuration

First we need to create the tablespace as user postgres:

[postgres@desktop ~]$ mkdir -p /var/lib/pgsql/data/db_otds_dat.dat

Next we need to install the PostgreSQL contrib package, as it contains the required pg_trgm extension:

[root@desktop ~]# yum install postgresql16-contrib

Now, we can create the user, tablespace and database on PostgreSQL via psql:

CREATE USER otds WITH PASSWORD 'dmadmin';

CREATE TABLESPACE otds OWNER otds LOCATION '/var/lib/pgsql/data/db_otds_dat.dat';

CREATE DATABASE otds WITH OWNER = otds ENCODING = 'UTF8' TABLESPACE = otds CONNECTION LIMIT = -1;

grant all privileges on database otds to otds;

Now, login with the new user as we have to enable the pg_trgm extension:

[root@desktop ~]# psql -U otds
Password for user otds:
psql (16.6)
Type "help" for help.

otds=> CREATE EXTENSION pg_trgm;
CREATE EXTENSION
otds=> quit

And now you can simply install OTDS normally using otds as username.

Documentum 24.4 PostgreSQL 16 on Rocky Linux 9.5 (WSL2) Install Guide

This is a step-by-step guide to install Documentum 24.4 in WSL2 using the Rocky Linux 9 (closest to the supported RH9) image with PostgreSQL 16. This version comes with some changes:

  • “dmadmin” user defaults to unix source, so you’ll need to change it to inline
  • OTDS is mandatory/required on the CS (and can be partially configured from DA), as well as for using applications (note the word “using”; you can only log in to DA without OTDS)

This requirement raises some questions, such as: Are developers now required to have one (or several) licenses to work on local environments? How does this work with contractors (duration of the license, using it for “other” customers, etc.)? How does this affect performance? (as this validation runs via the JMS / OTDSAuthentication servlet for each user login)

  • D2-Config is now (finally) integrated with OTDS
  • Documentum Search is available for on-premise installations (this will be another post)

Initial Configuration

There’s no official image on Microsoft Store so you’ll need to download the container image from the Rocky Linux page. Once this is done, you can import the image:

wsl --import RockyLinux9Dctm244 c:\Users\aldago\rockylinuxSearch244\ d:\dctm\dctm244\Rocky-9-Container-Base.latest.x86_64.tar.xz

After importing the image, we can log in (as root) and start the basic configuration of the server:

[root@desktop ~]# yum -y update
[root@desktop ~]# yum -y install sudo tcl expect libXtst xterm libxcrypt-compat

[root@desktop ~]# adduser dmadmin
[root@desktop ~]# passwd dmadmin
Changing password for user dmadmin.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

[root@desktop ~]# usermod -aG wheel dmadmin
[root@desktop ~]# su - dmadmin
[dmadmin@aldago-desktop ~]$ sudo vi /etc/wsl.conf
[boot]
systemd=true

PostgreSQL Configuration

First we need to install PostgreSQL:

[root@desktop ~]# yum module -y install postgresql:16
[root@desktop ~]# postgresql-setup --initdb
 * Initializing database in '/var/lib/pgsql/data'
 * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log

[root@desktop ~]# systemctl start postgresql
[root@desktop ~]# systemctl enable postgresql
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql.service → /usr/lib/systemd/system/postgresql.service.
[root@desktop ~]# systemctl status postgresql

[root@desktop ~]# su - dmadmin
[dmadmin@desktop ~]$ sudo su - postgres
[sudo] password for dmadmin:
[postgres@desktop ~]$ psql
psql (16.6)
Type "help" for help.

postgres=# \password postgres
Enter new password for user "postgres":
Enter it again:
postgres=# exit
[postgres@desktop ~]$ exit
logout

[dmadmin@desktop ~]$ sudo passwd postgres
Changing password for user postgres.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

Next, we can install pgAdmin for easier administration:

[root@desktop ~]# dnf -y install yum-utils
[root@desktop ~]# rpm -i https://ftp.postgresql.org/pub/pgadmin/pgadmin4/yum/pgadmin4-redhat-repo-2-1.noarch.rpm
warning: /var/tmp/rpm-tmp.CQz9dX: Header V4 RSA/SHA256 Signature, key ID 210976f2: NOKEY
[root@desktop ~]# yum-config-manager --disable pgdg-common
Error: No matching repo to modify: pgdg-common.
[root@desktop ~]# dnf update -y                                                                            
[root@desktop ~]# dnf install pgadmin4 -y

[root@desktop ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@desktop ~]# systemctl status httpd

[root@desktop ~]# /usr/pgadmin4/bin/setup-web.sh
Setting up pgAdmin 4 in web mode on a Redhat based platform...
Creating configuration database...
NOTE: Configuring authentication for SERVER mode.
Enter the email address and password to use for the initial pgAdmin user account:
Email address: dmadmin@dmadmin.dmadmin
Password:
Retype password:
You can now start using pgAdmin 4 in web mode at http://127.0.0.1/pgadmin4

And finally, we can configure the odbc connection:

[root@desktop ~]# dnf install https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[root@desktop ~]# yum -y install postgresql16-odbc.x86_64 unixODBC.x86_64
[root@desktop ~]# vi /etc/odbc.ini
[MyPostgres]
Description=PostgreSQL
Driver=PostgreSQL
Database=postgres                                                                                                       
Servername=localhost
UserName=postgres
Password=dmadmin
Port=5432                                                                                                               
Protocol=16
ReadOnly=No
RowVersioning=No
ShowSystemTables=No
ShowOidColumn=No                                                                                                        
FakeOidIndex=No
UpdateableCursors=Yes
DEBUG=Yes

[root@desktop ~]# vi /etc/odbcinst.ini
[PostgreSQL]
Description     = ODBC for PostgreSQL
Driver          = /usr/pgsql-16/lib/psqlodbcw.so
Driver64        = /usr/pgsql-16/lib/psqlodbcw.so
Setup64         = /usr/lib64/libodbcpsqlS.so.2
FileUsage       = 1

[root@desktop ~]# su - dmadmin
[dmadmin@desktop ~]$ isql -v MyPostgres
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>

Documentum Server

First, we need to create the DB folder:

[dmadmin@desktop ~]$ su - postgres
Password:
[postgres@desktop ~]$ mkdir /var/lib/pgsql/data/db_dctm244_dat.dat

The Documentum folders and JDK (stick to the supported 17.0.12 version and remember to remove anon from the disabled algorithms to avoid issues):

[dmadmin@desktop ~]$ sudo mkdir -p /opt/documentum/sw && sudo mkdir -p /opt/documentum/product/24.4
[dmadmin@desktop ~]$ sudo chown -R dmadmin:dmadmin /opt/documentum

Add environment variables to .bash_profile:

[dmadmin documentum]$ vi ~/.bash_profile
DOCUMENTUM=/opt/documentum
export DOCUMENTUM

DM_HOME=$DOCUMENTUM/product/24.4
export DM_HOME

DM_JMS_HOME=$DOCUMENTUM/tomcat10.1.30
export DM_JMS_HOME

POSTGRESQL_HOME=/usr/pgsql-16
export POSTGRESQL_HOME

JAVA_HOME=/opt/documentum/jdk-17.0.12+7
export JAVA_HOME

PATH=$PATH:$DM_HOME/bin:$POSTGRESQL_HOME/bin:$HOME/.local/bin:$HOME/bin:$JAVA_HOME/bin:$DOCUMENTUM/dba
export PATH

LC_ALL=C
export LC_ALL

LD_LIBRARY_PATH=$POSTGRESQL_HOME/lib:$DM_HOME/bin:/usr/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH

export DM_CRYPTO_MIN_PASSWORD_LENGTH=8

DISPLAY=:0
export DISPLAY

export PS1='[\u@\h \w]\$ '

Reserve ports and configure limits.conf:

[dmadmin ~]$ sudo vi /etc/services
dctm234 50000/tcp # dctm 24.4 repo
dctm234_s 50001/tcp # dctm 24.4 repo

[dmadmin ~]$ sudo vi /etc/security/limits.conf
dmadmin - core -1

[dmadmin@desktop /opt/documentum]$ sudo ln -s /usr/lib64/libsasl2.so.3.0.0 /usr/lib64/libsasl2.so.2
[dmadmin@desktop /opt/documentum]$ sudo ln -s /usr/lib64/libssl.so.3 /usr/lib64/libssl.so.10

And now you can simply install content server normally 🙂