-
Full lifecycle coverage
Gain deep insights into behavior data, feedback data, model training, and application launch across the entire AI learning cycle to optimize computing, networking, storage, resource scheduling, and more.
-
Hardcore AI technology
Self-developed infrastructure technologies enable AI-oriented computing, storage, and communication; a high-performance AI computing architecture supports trillion-dimensional feature processing.
-
Lower computing cost
Reduce total cost of ownership: for image classification, 4Paradigm SageOne costs just 1/10 as much as Google AutoML.

Advantages
Highlights
-
High-performance server configuration
Intel® Optane DC Persistent Memory (DCPMM).
2nd Gen Intel® Xeon® Scalable Processor.
Up to 28 cores and 56 threads per CPU; Intel® Turbo Boost Technology 2.0 with processor frequencies up to 4.4 GHz, meeting the performance requirements of feature engineering, model training, model estimation, and more.
-
FE process acceleration
Improve the compression efficiency of writing feature-engineering (FE) data to disk.
Replace CPU+gzip compression with ATX+pz compression, improving performance by 10 times (the CPU+gzip baseline is sketched after this list).
-
GBDT model training acceleration (FlashGBM)
Speed up GBDT model training.
Use automatic parameter tuning technology to optimize the model training hyperparameters (a generic tuning sketch follows this list).
Up to 19x faster with ATX+FlashGBM than CPU+GBM with default parameters.
Up to 2.9x faster with ATX+FlashGBM than CPU+GBM with manual tuning.
-
100GE network acceleration
Adopt a 100GE/RoCE network and the corresponding configuration to enable remote direct memory access (RDMA) over Ethernet, reducing CPU usage during bulk data transfer and eliminating network I/O bottlenecks in the AI pipeline (a device-check sketch follows this list).
-
High-performance storage tuning
4Paradigm Sage software is customized to work with high-performance SSDs, improving feature processing, writing data to disk, and more.
Components such as Spark are upgraded and optimized relative to the standard 4Paradigm Sage software (an illustrative Spark storage configuration is sketched after this list).
-
All-in-one delivery
Provide full-process delivery services spanning servers, network equipment, software deployment, and functional testing. Deliver a one-stop solution for hardware preparation, network configuration, resource planning, component compatibility, and software deployment to shorten the overall project cycle.
-
Horizontal scalability
Allow horizontal, linear scaling to expand or contract the cluster flexibly as needed.
-
Cluster-based management and operation
Enable centralized and unified management of all-in-one clusters to allow real-time global monitoring and operation via a single interface.
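The ATX+pz compression pipeline referenced under "FE process acceleration" is a 4Paradigm component; purely as a point of reference, here is a minimal sketch of the CPU+gzip baseline it replaces, compressing an illustrative feature partition before writing it to disk (the file name and payload are hypothetical).

```python
import gzip
import json
import time

# Illustrative feature partition: rows of sparse feature IDs and values.
rows = [{"uid": i, "features": {str(f): 1.0 for f in range(i % 50)}} for i in range(100_000)]
payload = "\n".join(json.dumps(r) for r in rows).encode("utf-8")

start = time.perf_counter()
# CPU-side gzip compression on the data-to-disk path -- the step the ATX card offloads.
with gzip.open("fe_partition_000.jsonl.gz", "wb", compresslevel=6) as f:
    f.write(payload)
elapsed = time.perf_counter() - start

print(f"wrote {len(payload) / 1e6:.1f} MB uncompressed in {elapsed:.2f} s")
```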
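FlashGBM and its automatic parameter tuning are proprietary to 4Paradigm; the sketch below only illustrates the kind of GBDT training and hyperparameter search being automated, using LightGBM and scikit-learn as stand-ins (the search space and synthetic data are illustrative).

```python
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic high-dimensional classification data standing in for real feature tables.
X, y = make_classification(n_samples=20_000, n_features=200, n_informative=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Hyperparameters an automatic tuner would search over (ranges are illustrative).
param_space = {
    "num_leaves": [31, 63, 127],
    "learning_rate": [0.05, 0.1, 0.2],
    "n_estimators": [200, 400, 800],
    "min_child_samples": [10, 20, 50],
}

search = RandomizedSearchCV(
    lgb.LGBMClassifier(objective="binary"),
    param_distributions=param_space,
    n_iter=10,
    scoring="roc_auc",
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test AUC:", search.score(X_test, y_test))
```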
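RDMA over Converged Ethernet is driven by the NIC drivers and communication libraries rather than application code; purely as an assumption-laden illustration, the snippet below shows how one might confirm on Linux that RoCE-capable devices are visible through the standard sysfs layout exposed by rdma-core drivers.

```python
from pathlib import Path

# RDMA-capable devices (InfiniBand or RoCE) appear under this sysfs class
# once the appropriate NIC drivers are loaded.
rdma_root = Path("/sys/class/infiniband")

if not rdma_root.exists():
    print("no RDMA devices found; RoCE drivers may not be loaded")
else:
    for dev in sorted(rdma_root.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            link_layer = (port / "link_layer").read_text().strip()
            # "Ethernet" indicates RoCE; "InfiniBand" indicates native IB.
            print(f"{dev.name} port {port.name}: link layer = {link_layer}")
```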
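The exact Spark modifications shipped with 4Paradigm Sage are not described here; the following is a minimal sketch, assuming PySpark is available, of the general kind of configuration that points shuffle and spill traffic at local NVMe SSDs (mount points and values are hypothetical).

```python
from pyspark.sql import SparkSession

# Illustrative settings: direct shuffle spill to local NVMe mounts and size
# shuffle parallelism for a single high-core-count node.
spark = (
    SparkSession.builder.appName("fe-pipeline-ssd-tuning")
    .config("spark.local.dir", "/mnt/nvme0/spark,/mnt/nvme1/spark")  # hypothetical SSD mounts
    .config("spark.sql.shuffle.partitions", "256")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Small smoke test: an aggregation that forces a shuffle through the local dirs.
df = spark.range(10_000_000).withColumnRenamed("id", "uid")
print(df.groupBy((df.uid % 1000).alias("bucket")).count().count())
spark.stop()
```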
Models and Features
-
Machine Learning Training Engine (Advanced Edition) AR5200
High-dimensional machine learning framework (GBDT)
4Paradigm AI accelerator card
Industry-leading machine learning performance
-
Machine Learning Inference Engine (Advanced Edition) AR5200
Feature calculation engine
Estimation service engine
InfiniCache
Rapid AI Inference
-
Feature Storage Engine (Advanced Edition) AR5400
Unified data governance
In-memory time-series database RTIDB
Ultra-low-latency online data services (a toy sketch of the access pattern follows this list)
-
Super Machine Learning Training Engine AD8200
CLX-AP ultra-high-performance processors, delivering 192 cores of computing power in a single machine
Optimized utilization of the new AVX-512 instruction set and the I/O bus of the new CLX-AP micro-architecture
Improved performance for more efficient ultra-high-dimensional AI model training
-
Deep Learning Training Engine (Advanced Edition) AR4400
Enterprise-level NVIDIA training accelerator card
Embedded deep learning algorithm libraries
Compatible with mainstream deep learning frameworks such as TensorFlow, Caffe, and MXNet (a minimal TensorFlow example follows this list)
Support multiple tuning modes such as Auto and Expert
Visualized full-process model training that supports manual intervention
-
4Paradigm AI Accelerator Card ATX900
Large memory size (64GB DDR4)
On-board NVMe SSD storage protects customer IP without requiring CPU- or in-memory security encryption
High performance, with 6x higher ML training performance than dual Intel Xeon Gold 6230 CPUs
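RTIDB, the in-memory time-series database behind the Feature Storage Engine AR5400, is 4Paradigm's own component; the toy sketch below only illustrates the access pattern such a store serves, keeping per-key rows ordered by timestamp and answering latest-N lookups (all names are hypothetical).

```python
import bisect
from collections import defaultdict

class ToyTimeSeriesStore:
    """Toy stand-in for an online feature store: per-key rows kept sorted by timestamp."""

    def __init__(self):
        self._rows = defaultdict(list)  # key -> list of (ts, features), sorted by ts

    def put(self, key, ts, features):
        # Assumes timestamps are unique per key, so tuple comparison never reaches the dict.
        bisect.insort(self._rows[key], (ts, features))

    def latest(self, key, n=1):
        """Return the n most recent feature rows for a key, newest first."""
        return list(reversed(self._rows[key][-n:]))

store = ToyTimeSeriesStore()
store.put("user:42", 1_700_000_000, {"clicks_1h": 3})
store.put("user:42", 1_700_000_600, {"clicks_1h": 5})
print(store.latest("user:42", n=2))
```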
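As an illustration of the framework compatibility listed for the Deep Learning Training Engine AR4400, here is a minimal TensorFlow/Keras training snippet of the kind such an engine would be expected to run unchanged; the model and data are illustrative and unrelated to the embedded algorithm libraries.

```python
import numpy as np
import tensorflow as tf

# Illustrative data: 1,000 samples of 32 features with binary labels.
x = np.random.rand(1000, 32).astype("float32")
y = (x.sum(axis=1) > 16).astype("float32")

# Small binary classifier trained with the standard Keras workflow.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64, verbose=2)
```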
Business Scenarios
-
Precision marketing
-
Sales forecast
-
Anti-fraud and risk management
-
Anti-money laundering