Windows


Minimum

Version: 10/11; older Windows versions are not officially supported but may work.
CPU: Intel/AMD, 64 bits
GPU: Basic Intel graphics cards or Nvidia (e.g. recent GeForce/Quadro/NVS series) graphics cards; AMD graphics cards may work.
Memory: 16 GB of DDR4 RAM. OpendTect itself needs at least 2 GB of RAM, but with the operating system and data loaded, 16 GB should be considered the absolute minimum.
Storage: Hard Disk

Recommended

Version: 10/11; older Windows versions are not officially supported but may work.
CPU: Intel/AMD processor with 64 bit support, 3+ GHz multi-core.
Note that OpendTect uses all available processors when needed: the more cores and the higher the clock speed, the better. OpendTect automatically uses multiple threads in many situations, depending on the type of attribute, display, etc. We have put a lot of effort into making time-consuming tasks multi-threaded.
GPU: Nvidia (e.g. recent mainstream up to high-end GeForce series) graphics cards.
Quadro or NVS series cards may give that bit of extra performance. When in doubt, buy the best GeForce card you can find. When buying a laptop, make sure it has an Nvidia chipset.
Memory: DDR4 or DDR5 memory; to be on the safe side, don't go for less than 32 GB.
Buy as much memory as you can afford and as fits in the system. Large clients, for example, use nothing less than 512 GB.
Storage: SSD is best; other good options are Hard Disk and Network Drive.
Storage is usually undervalued, but it is often the crucial performance component. SSDs give a tremendous performance boost; essentially, data on an SSD loads almost as fast as pre-loaded, in-memory data. Performance can be miserable if data has to stream through (relatively) slow disks and/or networks.
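A quick way to gauge whether the storage is up to the task is to time a large sequential read. The sketch below is a minimal Python example; the file path is hypothetical and should point to a large file on the drive you want to test, and the operating system's page cache can inflate the result if the file was read recently.

```python
# Minimal sequential-read benchmark (hypothetical file path; point it at a
# large file on the drive you want to test). The OS page cache can inflate
# the result if the file was read recently, so prefer a file larger than RAM.
import time

path = r"D:\ODData\example_volume.cbvs"  # hypothetical path
block_size = 64 * 1024 * 1024            # read in 64 MB chunks

bytes_read = 0
start = time.perf_counter()
with open(path, "rb") as f:
    while True:
        chunk = f.read(block_size)
        if not chunk:
            break
        bytes_read += len(chunk)
elapsed = time.perf_counter() - start

print(f"Read {bytes_read / 1e9:.1f} GB at {bytes_read / elapsed / 1e6:.0f} MB/s")
```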

For Machine Learning

Version: 10/11; older Windows versions are not officially supported but may work.
CPU: A 64-bit Intel processor when using the CPU-only Python environment with the Intel™ Math Kernel Library (MKL) for Machine Learning. A 64-bit AMD processor should be fine when using the Python environment with CUDA 11.3 for Machine Learning on the GPU.
Ideally, you want the system to be expandable to 4 GPUs, and the CPU will need to support all of them. Pay attention to how many PCIe lanes the CPU provides and how many are needed for the system's GPUs and M.2 NVMe SSDs. We recommend a CPU with at least 8 cores, 16 threads and 40 PCIe lanes.
GPU: Nvidia, GeForce or Quadro series.
The GPU needs to be fast enough and have enough memory to hold the model and a data batch. When in doubt, choose the card with more memory. Other things to look for are the number of CUDA cores, the number of Tensor Cores and the memory bandwidth (GB/s). We recommend the following cards (a quick way to inspect an existing card is sketched after this list):
  • Turing architecture cards (CUDA 10 and later)
    • Nvidia GeForce RTX 2080 Ti with 11 GB GDDR6 memory and 4352 CUDA Cores
    • Nvidia Quadro RTX 6000 with 24 GB GDDR6 memory and 4608 CUDA Cores
    • Nvidia Quadro RTX 8000 with 48 GB GDDR6 memory and 4608 CUDA Cores
  • Ampere architecture cards (CUDA 11.1 and later)
    • Nvidia GeForce RTX 3080 Ti with 12 GB GDDR6X memory and 10240 CUDA Cores
    • Nvidia GeForce RTX 3090 with 24 GB GDDR6X memory and 10496 CUDA Cores
    • Nvidia A40 with 48 GB GDDR6 memory and 10752 CUDA Cores
  • Ada Lovelace architecture cards (CUDA 11.8 and later)
    • Nvidia GeForce RTX 4070 Ti with 12 GB GDDR6X memory and 7680 CUDA Cores
    • Nvidia GeForce RTX 4080 with 16 GB GDDR6X memory and 9728 CUDA Cores
    • Nvidia GeForce RTX 4090 with 24 GB GDDR6X memory and 16384 CUDA Cores
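To see how an existing card compares with these recommendations, it can be queried through Nvidia's nvidia-smi tool. The following is a minimal Python sketch; it assumes the Nvidia driver is installed and nvidia-smi is on the PATH.

```python
# Minimal sketch: list installed Nvidia GPUs with their memory and driver
# version via nvidia-smi (assumes the Nvidia driver is installed and
# nvidia-smi is on the PATH).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    name, memory, driver = [field.strip() for field in line.split(",")]
    print(f"GPU: {name} | memory: {memory} | driver: {driver}")
```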
Memory: DDR4 or DDR5 memory; to be on the safe side, don't go for less than 32 GB.
Buy as much memory as you can afford and as fits in the system.
Storage: The best choice is an M.2 NVMe SSD that is big enough for the data.
The advantage of an M.2 NVMe SSD is that it plugs directly into the motherboard and is very fast. Other options are SATA SSD, Hard Disk and Network Drive. Performance can be miserable if data has to stream through (relatively) slow disks and/or networks.

Please note that...

for best performance OpenGL drivers should be up-to-date. For Machine Learning on GPU we provide a Python package with CUDA 11.3. Please see this table on the Nvidia CUDA Toolkit documentation page for the minimum compatible driver version.
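A minimal Python sketch to compare the installed driver against a required minimum is shown below; the minimum version used here is only a placeholder, so take the actual value for CUDA 11.3 from the Nvidia table mentioned above.

```python
# Minimal sketch: compare the installed Nvidia driver against a minimum
# version. MIN_DRIVER is a placeholder; look up the real minimum for
# CUDA 11.3 in Nvidia's CUDA Toolkit compatibility table.
import subprocess

MIN_DRIVER = "465.89"  # placeholder, not authoritative

installed_str = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip().splitlines()[0]

installed = tuple(int(part) for part in installed_str.split("."))
required = tuple(int(part) for part in MIN_DRIVER.split("."))
status = "meets" if installed >= required else "is older than"
print(f"Installed driver {installed_str} {status} the placeholder minimum {MIN_DRIVER}")
```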
the CUDA 10 Python environment is now obsolete and will no longer receive security updates. Users are encouraged to replace it with the CUDA 11 environment. Alternatively, you may opt for the CPU-only Python environment.
4K/8K screens are not fully supported yet. This depends on the scaling factor. We are working on a fix. Please see the FAQ Visualization for a possible workaround.
Windows needs to be kept up-to-date with the latest updates from Microsoft.

Linux


Minimum

Modern Linux distro: We have tested:
  • RHEL/CentOS 7.2 and higher (certified for RHEL 8 & 9)
  • Ubuntu 20.04 and higher
  • OpenSUSE Leap 15.4 and higher
Other distros will probably work, possibly with a small tweak.
CPU: Intel/AMD processor with 64 bit support
GPU: Basic Intel Graphics cards or Nvidia (e.g. recent GeForce/Quadro/NVS series) graphics cards; AMD graphics cards may work.
Memory: 16 GB of DDR4 RAM. OpendTect itself needs at least 2 GB of RAM, but with the operating system and data loaded, 16 GB should be considered the absolute minimum.
Storage: Hard Disk

Recommended

Modern Linux distro: We have tested:
  • RHEL/Rocky Linux 8.0 and higher (certified for RHEL 8 & 9)
  • Ubuntu 22.04 and higher
  • OpenSUSE Leap 15.4 and higher
CPU: Intel/AMD processor with 64 bit support, 3+ GHz multi-core.
Note that OpendTect uses all available processors when needed: the more cores and the higher the clock speed, the better. OpendTect automatically uses multiple threads in many situations, depending on the type of attribute, display, etc. We have put a lot of effort into making time-consuming tasks multi-threaded.
GPU: Nvidia (e.g. recent mainstream up to high-end GeForce series) graphics cards.
Quadro or NVS series cards may give that bit of extra performance. When in doubt, buy the best GeForce card you can find. When buying a laptop, make sure it has an Nvidia chipset.
Memory: DDR4 or DDR5 memory; to be on the safe side, don't go for less than 32 GB.
Buy as much memory as you can afford and as fits in the system. Large clients, for example, use nothing less than 512 GB.
Storage: SSD is best; other good options are Hard Disk and Network Drive.
Storage is usually undervalued, but it is often the crucial performance component. SSDs give a tremendous performance boost; essentially, data on an SSD loads almost as fast as pre-loaded, in-memory data. Performance can be miserable if data has to stream through (relatively) slow disks and/or networks.

For Machine Learning

Modern Linux distro: We have tested:
  • RHEL/Rocky Linux 8.0 and higher (certified for RHEL 8 & 9)
  • Ubuntu 22.04 and higher
  • OpenSUSE Leap 15.4 and higher
CPU: A 64-bit Intel processor when using the CPU-only Python environment with the Intel™ Math Kernel Library (MKL) for Machine Learning. A 64-bit AMD processor should be fine when using the Python environment with CUDA 11.3 for Machine Learning on the GPU.
Ideally, you want the system to be expandable to 4 GPUs, and the CPU will need to support all of them. Pay attention to how many PCIe lanes the CPU provides and how many are needed for the system's GPUs and M.2 NVMe SSDs. We recommend a CPU with at least 8 cores, 16 threads and 40 PCIe lanes.
GPU: Nvidia, GeForce or Quadro series.
The GPU needs to be fast enough and have enough memory to hold the model and a data batch. When in doubt, choose the card with more memory. Other things to look for are the number of CUDA cores, the number of Tensor Cores and the memory bandwidth (GB/s). We recommend the following cards (a quick check from within a Python environment is sketched after this list):
  • Turing architecture cards (CUDA 10 and later)
    • Nvidia GeForce RTX 2080 Ti with 11 GB GDDR6 memory and 4352 CUDA Cores
    • Nvidia Quadro RTX 6000 with 24 GB GDDR6 memory and 4608 CUDA Cores
    • Nvidia Quadro RTX 8000 with 48 GB GDDR6 memory and 4608 CUDA Cores
  • Ampere architecture cards (CUDA 11.1 and later)
    • Nvidia GeForce RTX 3080 Ti with 12 GB GDDR6X memory and 10240 CUDA Cores
    • Nvidia GeForce RTX 3090 with 24 GB GDDR6X memory and 10496 CUDA Cores
    • Nvidia A40 with 48 GB GDDR6 memory and 10752 CUDA Cores
  • Ada Lovelace architecture cards (CUDA 11.8 and later)
    • Nvidia GeForce RTX 4070 Ti with 12 GB GDDR6X memory and 7680 CUDA Cores
    • Nvidia GeForce RTX 4080 with 16 GB GDDR6X memory and 9728 CUDA Cores
    • Nvidia GeForce RTX 4090 with 24 GB GDDR6X memory and 16384 CUDA Cores
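If a Python environment with GPU support is already installed (for example a PyTorch-based one; OpendTect's own Machine Learning environments may use different packages), a minimal check that the GPU is visible and has enough memory could look like this:

```python
# Minimal sketch: check GPU visibility and memory from a Python environment.
# Assumes a PyTorch installation with CUDA support; OpendTect's Machine
# Learning environments may use different packages.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 1024**3:.0f} GB memory, "
              f"{props.multi_processor_count} multiprocessors")
else:
    print("No CUDA-capable GPU visible to PyTorch")
```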
Memory: DDR4 or DDR5 memory; to be on the safe side, don't go for less than 32 GB.
Buy as much memory as you can afford and as fits in the system.
Storage: The best choice is an M.2 NVMe SSD that is big enough for the data.
The advantage of an M.2 NVMe SSD is that it plugs directly into the motherboard and is very fast. Other options are SATA SSD, Hard Disk and Network Drive. Performance can be miserable if data has to stream through (relatively) slow disks and/or networks.

Please note that...

OpendTect may work with the Nouveau driver; however, for best performance the Nvidia driver should be installed. The Nouveau driver does not support CUDA.
Gallium3D drivers are not supported.
for best performance OpenGL drivers should be up-to-date. For Machine Learning on GPU we provide a Python package with CUDA 11.3. Please see this table on the Nvidia CUDA Toolkit documentation page for the minimum compatible driver version.
the CUDA 10 Python environment is now obsolete and will no longer receive security updates. Users are encouraged to replace it with the CUDA 11 environment. Alternatively, you may opt for the CPU-only Python environment.
low-end GPUs keep showing poor performance, generation after generation. Shading functionality requires special GPU features, present in the mainstream and high-end GeForce, Quadro and NVS cards. However, under Linux, only Nvidia provides drivers capable of running the shading feature. If you can't see any colors on graphic elements, try disabling shading (Utilities > Settings > Look and Feel).
4K/8K screens are not fully supported yet. This depends on the scaling factor. We are working on a fix. Please see the FAQ Visualization for a possible workaround.
Linux 64-bit releases require the libstdc++ library to be present on the system. The list below gives the minimum libstdc++ version needed per OpendTect version (a quick check is sketched below it):
  • OpendTect 6.4: libstdc++ 6.0.19 or newer
  • OpendTect 6.6: libstdc++ 6.0.21 or newer
  • OpendTect 7.0: libstdc++ 6.0.28 or newer
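The following is a minimal Python sketch to find out which libstdc++ runtime a system provides; the search paths are common defaults for RHEL-like and Debian/Ubuntu-like systems and may differ per distro.

```python
# Minimal sketch: report the libstdc++ runtime version by resolving the
# library symlink. The search paths are common defaults (RHEL-like and
# Debian/Ubuntu-like); adjust them for your distro if needed.
import os

search_paths = [
    "/usr/lib64/libstdc++.so.6",                  # RHEL / Rocky / OpenSUSE
    "/usr/lib/x86_64-linux-gnu/libstdc++.so.6",   # Ubuntu / Debian
]

for link in search_paths:
    if os.path.exists(link):
        # The real file name encodes the version, e.g. libstdc++.so.6.0.28
        print(os.path.realpath(link))
```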
Linux distros will need to have the XCB libraries installed. For check and installation instructions please see Installing OpendTect on Linux.
OpendTect is known to work under RHEL, CentOS, Ubuntu, OpenSUSE and other distributions, as well as under earlier versions of the main distributions. Fedora is not recommended: although it may work, it is the only distro that regularly fails in combination with OpendTect, probably because the graphics vendors do not support it well with their drivers.
OpendTect Pro 7.0 has been certified for RHEL 8 & RHEL 9

macOS


Minimum

Version: macOS 12 (Monterey)
CPU: mac/Intel or mac/ARM processor with 64-bit support
GPU: Basic Intel, AMD or Apple graphics card, e.g. the Intel HD Graphics 4000.
Memory: 16 GB of RAM
Storage: Hard Disk

Recommended

Version: macOS 13 (Ventura) / 14 (Sonoma)
CPU: mac/Intel or mac/ARM processor with 64-bit support
GPU: Intel, AMD or Apple graphics card
Memory: Don't go for less than 32 GB RAM.
Buy as much memory as you can afford and as fits in the system.
Storage: SSD is best. Other good options are Hard Disk and Network Drive.

For Machine Learning

We can provide packages to test Machine Learning on macOS.

Please note that...

mac/PowerPC support is NOT available.
mac/Intel emulation in Rosetta 2 is not supported.
a 3-button mouse is highly recommended.

Cloud


Amazon Web Services (AWS)

OS / AMI: Amazon Linux 2023 is not supported; according to the official AWS documentation, AL2023 is cloud-centered and optimized for Amazon EC2 usage and currently does not include a graphical or desktop environment.
Architecture: For Linux and Windows, the 64-bit (x86) architecture is supported. Note that the 64-bit (Arm) architecture is not supported.
Instance Type: We have tested OpendTect on the following instance types: g3s, g5 (a minimal launch sketch follows this list).
GPU: We recommend either creating the instance from an AMI that comes with the NVIDIA driver pre-installed, or installing the NVIDIA GRID driver on an instance that has an NVIDIA GPU. Please note that the basic Microsoft render driver is not supported.
Display: We recommend using NICE DCV for the remote display. More information and the downloads for the Server and Client can be found on the NICE DCV website.
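As an illustration only, launching a GPU instance for OpendTect from Python could look like the sketch below, using the AWS SDK (boto3). The region, AMI ID and key pair are placeholders; choose an AMI with the NVIDIA driver pre-installed, or install the NVIDIA GRID driver afterwards, as noted above.

```python
# Illustrative sketch only: launch a g5 GPU instance with boto3. The region,
# AMI ID and key pair name are placeholders; pick an AMI that includes the
# NVIDIA driver (or install the NVIDIA GRID driver afterwards).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI with NVIDIA driver
    InstanceType="g5.2xlarge",        # tested instance family: g5
    KeyName="my-keypair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```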

Please note that...

We have clients running OpendTect on AWS, Azure and Google Cloud. The requirements are no different from the on-premises requirements: the same OS, memory and graphics requirements apply.