Deep Learning
This extension provides Deep Learning capabilities for execution on CPU and GPU.
[Unsupported Extension Notice] - The Deep Learning extension is not officially supported by Altair RapidMiner. While we are always improving and updating our offerings, we cannot guarantee any assistance or fixes for this extension.
This extension provides operators to create and adapt Deep Learning models using different types of layers. Networks can be executed both on CPU and on GPU. It requires the ND4J Back End Extension for configuration of computational resources.
Version 1.2.1
- Fixed faulty error messages
- Fixed tutorial process with missing sample data
- Upgraded ND4J extension dependency: 1.2.0
Version 1.2.0
Conceptual change: the loss function is no longer configured on the Deep Learning operator, but on the new, dedicated output layer, which comes with an automatic mode.
- Added Autoencoding Operator with an option to suggest the decoder side for certain encoder architectures.
- Added Output Layer with automatic mode: LossFunction moved from Deep Learning Operator to Output Layer Operator.
- Added Activation Layer as an explicit option.
- Added GELU activation function.
- Added MISH activation function.
- Added padding to convolutional layer operator.
- Added padding to pooling layer operator.
- Added dilation to convolutional layer operator.
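The new activation functions and convolution parameters in this release follow standard definitions. The sketch below is plain Python for illustration only (not the extension's API): the exact GELU and Mish formulas, plus the usual one-dimensional output-size formula for a convolution with padding and dilation.

```python
import math

def gelu(x: float) -> float:
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def mish(x: float) -> float:
    # Mish: x * tanh(softplus(x)) = x * tanh(ln(1 + e^x)).
    return x * math.tanh(math.log1p(math.exp(x)))

def conv_output_size(size: int, kernel: int, stride: int = 1,
                     padding: int = 0, dilation: int = 1) -> int:
    # Standard output-size formula for one spatial dimension of a convolution.
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

print(round(gelu(1.0), 4))                 # 0.8413
print(round(mish(1.0), 4))                 # 0.8651
print(conv_output_size(28, 3, padding=1))  # 28 ("same"-style padding)
print(conv_output_size(28, 3, dilation=2)) # 24 (dilation widens the kernel's reach)
```

Padding of (kernel - 1) / 2 preserves the input size for odd kernels and stride 1, which is the typical reason to enable it on the convolutional and pooling layer operators.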
Version 1.1.2
- Fixed Belt Table support for the "Import Existing Model" and "Fine-Tune Model" operators.
- Fixed incorrect documentation of the LeNet model in the "Import Existing Model" operator.
Version 1.1.1
- Added support for Belt Tables (the new data structure for tabular data in RapidMiner).
Version 1.1.0
- Added support for transfer learning.
- Added "Import Existing Model" operator to download pre-trained models, or just their architectures, from a variety of sources. This operator also adjusts input shapes based on user data.
- Added "Fine-Tune Model" operator to change the architecture of existing models and continue training.
- Added support for non-sequential models to the "Read Keras Model" operator.
- Breaking change: models stored with previous versions are incompatible with this version. Please contact us if this causes issues.
Version 1.0.1
- Fixed an issue with regression test scoring
- Fixed a bug when updating sequential models
Version 1.0.0
- Dependencies changed from CUDA 10.0 (and optional cuDNN 7.4) to CUDA 10.1 (and optional cuDNN 7.6)
- Added web-based training monitor
- Added embedding layer
- Added operator to convert text into embedding ID
- Added operator to convert embedding ID into text
- Added simple recurrent layer
- Added pre-flight checks for network integrity with quick fixes
- Created Deep Learning ready Docker image(s) (search Docker Hub for RapidMiner)
- ExampleSet to Tensor operator now relies on parameters for selecting needed indices instead of roles
- Updated Keras model import to handle all current sequential models created with Tensorflow.Keras
Version 0.9.4
- Added support for image handling (use the Image Handling Extension for loading images as tensor input)
- Back end handling moved into new extension with additional features (ND4J Back End Extension - automatic dependency)
- Added early stopping mechanisms
- Added weights and biases output
- Fixed label mapping bug in TimeSeries to Tensor operator
Version 0.9.3
- Added support for many-to-many classification and regression use-cases, as well as many-to-one regression.
- Added ExampleSet to Tensor operator
- Added logging of test scores to history port
- Added lasso (L1) and ridge (L2) regression loss functions
- Added support for macOS Catalina
- Added support for cuDNN in version 7.4
- Most layers now take advantage of cuDNN if installed
- Updated back end to DeepLearning4J Beta6
- Updated CUDA dependency to version 10.0 (if you need support for CUDA 9.2, 10.1, 10.2, please contact us)
- Fixed error when scoring only one Example
- Fixed bug in displaying parameters of the Add Dropout Layer operator
- Dropped support for GPU on macOS
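The lasso (L1) and ridge (L2) regression losses added in this version correspond to mean absolute error and mean squared error. A minimal illustration in plain Python (the function names below are ours, not the extension's operator parameters):

```python
def l1_loss(y_true, y_pred):
    # L1 / "lasso-style" loss: mean absolute error.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def l2_loss(y_true, y_pred):
    # L2 / "ridge-style" loss: mean squared error.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 1.5, 3.0]
print(round(l1_loss(y_true, y_pred), 4))  # 0.3333
print(round(l2_loss(y_true, y_pred), 4))  # 0.1667
```

L2 penalizes large residuals more strongly, while L1 is more robust to outliers, which is the usual criterion for choosing between them.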
Version 0.9.1
- Fixed bug during execution on job agents
Version 0.9.0
- Added Load Keras Model Operator (applying sequential Keras models without Python)
- Added Recurrent Network (like LSTM) handling
- Added LSTM Layer
- Added Time-Series to Tensor Operator
- Fixed Anaconda installations blocking extension loading
- Removed Text to Numbers via Word2Vec
- Changed tensor handling (incompatible with previous tensors)
- Lowered CUDA version requirement from 9.1 to 9.0
Version 0.8.1
- Fixed bug causing incompatibility with RapidMiner Studio 9.1
Version 0.8.0
- Deep Learning on ExampleSets with native model handling
- Text Handling using Word2Vec
- Layers:
- fully-connected
- dropout
- batch normalization
- convolutional
- pooling
- global pooling
- GPU usage
- History Port (epoch logging)
- Custom Icons
- No external requirements (except for GPU)
- QuickFixes for switching between regression & classification (loss functions)
- Model Updatability
- Samples Processes (samples/Deep Learning)
Remarks:
- Execution on GPU is currently only available on NVIDIA GPUs in combination with an appropriate CUDA installation (see the version notes above for the exact CUDA version).
- This extension uses the Java library DeepLearning4J (version DL4J-M1.1).
- No support for 32-bit.
Product Details
Version | 1.2.1
File size | 349 MB
Downloads | 81826 (42 today)
Vendor | RapidMiner Labs
Category | Machine Learning
Released | 2/21/22
Last Update | 2/21/22 4:04 PM
License | AGPL
Product web site | www.rapidminer.com
Rating | (0)
Comments
Extension version 0.9 (RapidMiner 9.1) only supports CUDA 9.0 and cuDNN 7.0 EXACTLY. Ensure that the CUDA bin directory is in your PATH before starting RapidMiner. Extension version 0.8.0 (RapidMiner 9.0) only supports CUDA 9.1 and cuDNN 7.1 EXACTLY.
The data parsing problem occurs when a 32-bit version of RapidMiner Studio or Java is used. Please switch to the 64-bit version.
I only get a "data parsing problem" error, even with all-numerical example sets. Is there a fix for this? Thanks...