Welcome

Through incremental integration and independent research and development, we are building a method library for big-data quality control, automatic modeling and analysis, data mining, and interactive visualization, and shaping it into a tool library with high reliability, scalability, efficiency, and fault tolerance. The goal is to integrate and share collaborative analysis methods for multi-source, heterogeneous, multi-granularity, multi-phase, long-time-series big data on the three-pole environment, and to support efficient, online big-data analysis and processing.

  • Principal Component Analysis

    Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-17
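The entry above lists sklearn as the dependency; as an illustration (not the library's bundled example), a minimal sketch of an orthogonal transformation of two correlated variables using scikit-learn's `PCA`:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data: two correlated variables (the second is a noisy copy of the first).
rng = np.random.RandomState(0)
x = rng.normal(size=100)
X = np.column_stack([x, x + 0.1 * rng.normal(size=100)])

# Project onto orthogonal, linearly uncorrelated principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Nearly all of the variance lies along the first component.
print(pca.explained_variance_ratio_)
```

Because the two columns are almost perfectly correlated, the first principal component captures essentially all of the variance.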

  • Logistic Regression

    Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-17
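As a hedged illustration of the description above (using scikit-learn, which the entry lists as its dependency; the data here is made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A binary dependent variable driven by one feature through a logistic link.
X = np.linspace(-3, 3, 50).reshape(-1, 1)
y = (X.ravel() > 0).astype(int)

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba([[2.0]])[0, 1]  # estimated P(y = 1 | x = 2)
```

`predict_proba` returns the logistic function's output, i.e. the modeled probability of each class.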

  • Restricted Boltzmann Machine

    A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-16
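A minimal sketch of fitting an RBM to binary inputs with scikit-learn's `BernoulliRBM` (the entry's listed dependency); the random data and parameter values are illustrative only:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Binary inputs; the RBM learns a probability distribution over them.
rng = np.random.RandomState(0)
X = (rng.rand(200, 6) > 0.5).astype(float)

rbm = BernoulliRBM(n_components=3, learning_rate=0.05, n_iter=10,
                   random_state=0)
H = rbm.fit_transform(X)  # hidden-unit activation probabilities in [0, 1]
```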

  • Random Forests

Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-20
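To illustrate the majority-vote ensemble described above, a minimal sketch with scikit-learn's `RandomForestClassifier` on a standard toy dataset (choices here are illustrative, not the library's bundled example):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# An ensemble of 50 decision trees built at training time; the predicted
# class is the mode (majority vote) of the individual trees' predictions.
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = forest.score(X, y)
```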

  • Apriori

Apriori is an algorithm for frequent item set mining and association rule learning over transactional databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-15
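The level-wise procedure described above (frequent single items, then progressively larger itemsets) can be sketched in a few lines of plain Python; this is an illustrative toy implementation, not the tool's own code, and the basket data is made up:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) with support >= min_support."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    # Level 1: candidate itemsets are the individual items.
    current = {frozenset([i]) for i in items}
    frequent = {}
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        survivors = {c: k / n for c, k in counts.items()
                     if k / n >= min_support}
        frequent.update(survivors)
        # Extend surviving itemsets by one item (candidate generation);
        # any candidate with an infrequent subset cannot survive either.
        current = {a | b for a in survivors for b in survivors
                   if len(a | b) == len(a) + 1}
    return frequent

baskets = [frozenset(t) for t in
           [{"milk", "bread"}, {"milk", "bread", "eggs"},
            {"bread", "eggs"}, {"milk", "eggs"}]]
freq = apriori(baskets, min_support=0.5)
```

With `min_support=0.5`, every single item and every pair is frequent, but the triple appears in only one of four baskets and is pruned.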

  • Support Vector Machine

    A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-15

  • K-means Clustering

The K-means algorithm identifies k centroids and then allocates every data point to the nearest centroid, iteratively updating the centroids so that each cluster stays as compact as possible (i.e., the within-cluster variance is minimized).

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-18
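The assignment of points to nearest centroids can be sketched with scikit-learn's `KMeans` on two made-up, well-separated blobs (illustrative data, not the library's bundled example):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs of 50 points each.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 0.2, (50, 2)),
               rng.normal(5, 0.2, (50, 2))])

# k = 2 centroids; every point is allocated to its nearest centroid.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
```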

  • Ridge Regression

Ridge regression is a way to create a parsimonious model when the number of predictor variables in a set exceeds the number of observations, or when a data set has multicollinearity.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-14
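To illustrate the multicollinearity case described above, a minimal sketch with scikit-learn's `Ridge` on two perfectly collinear predictors (the data and `alpha` value are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Two perfectly collinear predictors: ordinary least squares has no
# unique solution here, but the L2 penalty yields a stable fit.
x = np.arange(10, dtype=float)
X = np.column_stack([x, 2 * x])  # second column = 2 * first column
y = 3 * x + 1

ridge = Ridge(alpha=1.0).fit(X, y)
pred = ridge.predict([[4.0, 8.0]])  # close to 3 * 4 + 1 = 13
```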

  • Naive Bayes

    Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. There is not a single algorithm for training such classifiers, but a family of algorithms.

    Installation: online;

    Dependent libraries: sklearn;

QR code: (image)

2019-10-16
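Since naive Bayes is a family of algorithms rather than a single one, scikit-learn offers several variants; as an illustration, a minimal sketch with `GaussianNB`, one member of that family, on made-up feature vectors:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Problem instances as feature vectors; class labels from a finite set.
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
y = np.array(["a", "a", "a", "b", "b", "b"])

# Gaussian naive Bayes assumes conditionally independent
# real-valued features within each class.
nb = GaussianNB().fit(X, y)
pred = nb.predict([[1.1, 1.0]])
```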

  • Non-Local Means (NLM)

Non-Local Means is an improvement over the traditional neighborhood filtering method. By exploiting the self-similarity of an image, it makes full use of the redundant information in the image and preserves image detail as far as possible while denoising.

2022-06-15
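The self-similarity idea above can be sketched in pure NumPy: each pixel becomes a weighted average over a search window, with weights decaying in the squared distance between surrounding patches. This is an illustrative toy implementation under assumed parameter names (`patch`, `search`, `h`), not the tool's own code:

```python
import numpy as np

def nlm_denoise(img, patch=1, search=3, h=0.1):
    """Minimal non-local means for a 2-D grayscale image."""
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1,
                         cj - patch:cj + patch + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    # Weight decays with mean squared patch difference,
                    # so similar (possibly distant) regions contribute most.
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

# A constant image plus noise: averaging over self-similar
# patches should reduce the noise variance.
rng = np.random.RandomState(0)
noisy = 0.5 + 0.1 * rng.normal(size=(16, 16))
clean = nlm_denoise(noisy)
```

Production code would use a vectorized implementation (e.g. an image-processing library's non-local-means routine) rather than this quadruple loop.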