Welcome

Through incremental integration and independent research and development, this platform builds a method library for big-data quality control, automatic modeling and analysis, data mining, and interactive visualization, and forms a tool library that is highly reliable, scalable, efficient, and fault tolerant. It realizes the integration and sharing of collaborative analysis methods for the multi-source, heterogeneous, multi-granularity, multi-phase, long-time-series big data of the three-pole environment, as well as efficient, online big-data analysis and processing.

  • Nearest Neighbors Algorithm

    The k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space.

    Installation: online;

    Dependent libraries: sklearn;

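    A minimal usage sketch with scikit-learn, assuming the method wraps sklearn.neighbors.KNeighborsClassifier (the dataset here is only illustrative):

      # Classify iris flowers from the majority vote of the k = 5 nearest training points.
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier

      X, y = load_iris(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      knn = KNeighborsClassifier(n_neighbors=5)
      knn.fit(X_train, y_train)
      print("test accuracy:", knn.score(X_test, y_test))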

  • Support Vector Machine

    A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples.

    Installation: online;

    Dependent libraries: sklearn;

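    A minimal usage sketch with scikit-learn, assuming the method wraps sklearn.svm.SVC (the dataset is only illustrative):

      # Fit a maximum-margin classifier and score it on held-out data.
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      svm = SVC(kernel="rbf", C=1.0)   # separating hyperplane in an RBF feature space
      svm.fit(X_train, y_train)
      print("test accuracy:", svm.score(X_test, y_test))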

  • Naive Bayes

    Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. There is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle: every feature is assumed to be conditionally independent of every other feature, given the class.

    Installation: online;

    Dependent libraries: sklearn;

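    A minimal usage sketch with scikit-learn, assuming the Gaussian variant sklearn.naive_bayes.GaussianNB (other family members such as MultinomialNB expose the same interface; the dataset is only illustrative):

      # Train a Gaussian naive Bayes classifier and score it on held-out data.
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB

      X, y = load_iris(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      nb = GaussianNB()
      nb.fit(X_train, y_train)
      print("test accuracy:", nb.score(X_test, y_test))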

  • Convolutional Neural Network (CNN)

    Compared with other deep learning architectures, convolutional neural networks give better results in image and speech recognition. Compared with other deep, feedforward neural networks, they need fewer parameters to estimate, which makes them an attractive deep learning architecture.

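    This entry does not list its dependencies; the sketch below uses PyTorch, which is an assumption rather than the library actually used here. It illustrates why a CNN needs relatively few parameters: small convolution kernels are shared across the whole image.

      # A minimal convolutional network sketch in PyTorch (framework choice is an assumption).
      import torch
      import torch.nn as nn

      class SmallCNN(nn.Module):
          """Two convolution blocks followed by a linear classifier."""
          def __init__(self, num_classes=10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 3x3 filters shared over the image
                  nn.ReLU(),
                  nn.MaxPool2d(2),                             # 28x28 -> 14x14
                  nn.Conv2d(16, 32, kernel_size=3, padding=1),
                  nn.ReLU(),
                  nn.MaxPool2d(2),                             # 14x14 -> 7x7
              )
              self.classifier = nn.Linear(32 * 7 * 7, num_classes)

          def forward(self, x):
              x = self.features(x)
              return self.classifier(x.flatten(1))

      model = SmallCNN()
      logits = model(torch.randn(8, 1, 28, 28))  # a batch of 8 fake 28x28 grayscale images
      print(logits.shape)                        # torch.Size([8, 10])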

  • Ridge Regression

    Ridge regression is a way to create a parsimonious model when the number of predictor variables in a set exceeds the number of observations, or when a data set has multicollinearity.

    Installation: online;

    Dependent libraries: sklearn;

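    A minimal usage sketch with scikit-learn, assuming the method wraps sklearn.linear_model.Ridge (the synthetic data is only illustrative):

      # L2-penalized least squares: the alpha term shrinks coefficients toward zero,
      # which stabilizes the fit when predictors are collinear or outnumber observations.
      from sklearn.datasets import make_regression
      from sklearn.linear_model import Ridge

      X, y = make_regression(n_samples=50, n_features=100, noise=5.0, random_state=0)

      ridge = Ridge(alpha=1.0)
      ridge.fit(X, y)
      print("R^2 on training data:", ridge.score(X, y))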

  • Apriori

    Apriori is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database.

    Installation: online;

    Dependent libraries: sklearn;

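    scikit-learn itself does not ship an Apriori implementation, so the self-contained sketch below (with hypothetical transaction data) illustrates the idea instead: keep extending itemsets only as long as they remain sufficiently frequent.

      from itertools import combinations

      def apriori(transactions, min_support=0.5):
          """Return frequent itemsets (frozensets) mapped to their support."""
          n = len(transactions)
          transactions = [set(t) for t in transactions]
          candidates = [frozenset([item]) for item in {i for t in transactions for i in t}]
          frequent, k = {}, 1
          while candidates:
              # Count how often each candidate k-itemset occurs.
              support = {c: sum(c <= t for t in transactions) / n for c in candidates}
              level = {c: s for c, s in support.items() if s >= min_support}
              frequent.update(level)
              # Join frequent k-itemsets to build (k+1)-item candidates.
              candidates = list({a | b for a, b in combinations(level, 2) if len(a | b) == k + 1})
              k += 1
          return frequent

      transactions = [["milk", "bread"], ["milk", "diaper", "beer"],
                      ["bread", "diaper", "beer"], ["milk", "bread", "diaper"]]
      print(apriori(transactions, min_support=0.5))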

  • K-means Clustering

    The K-means algorithm identifies k centroids and then allocates every data point to the nearest centroid, keeping the clusters as compact as possible, i.e., minimizing the within-cluster sum of squared distances.

    Installation: online;

    Dependent libraries: sklearn;

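    A minimal usage sketch with scikit-learn, assuming the method wraps sklearn.cluster.KMeans (the synthetic blobs are only illustrative):

      # Partition points into k = 3 clusters by minimizing within-cluster squared distances.
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

      kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
      labels = kmeans.fit_predict(X)
      print("cluster centroids:\n", kmeans.cluster_centers_)
      print("inertia (within-cluster sum of squares):", kmeans.inertia_)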

  • Random Forests

    Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees.

    Installation: online;

    Dependent libraries: sklearn;

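    A minimal usage sketch with scikit-learn, assuming the method wraps sklearn.ensemble.RandomForestClassifier (the dataset is only illustrative):

      # Train 100 decision trees on bootstrap samples and predict by majority vote.
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      X, y = load_breast_cancer(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      forest = RandomForestClassifier(n_estimators=100, random_state=0)
      forest.fit(X_train, y_train)
      print("test accuracy:", forest.score(X_test, y_test))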

  • Logistic Regression

    Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable.

    Installation: online;

    Dependent libraries: sklearn;

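    A minimal usage sketch with scikit-learn, assuming the method wraps sklearn.linear_model.LogisticRegression (the dataset is only illustrative):

      # Standardize features, then model P(y = 1 | x) with the logistic (sigmoid) function.
      from sklearn.datasets import load_breast_cancer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      clf = make_pipeline(StandardScaler(), LogisticRegression())
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))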

  • Principal Component Analysis

    Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components.

    Installation: online;

    Dependent libraries: sklearn;

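    A minimal usage sketch with scikit-learn, assuming the method wraps sklearn.decomposition.PCA (the dataset is only illustrative):

      # Project 4-dimensional iris measurements onto 2 uncorrelated principal components.
      from sklearn.datasets import load_iris
      from sklearn.decomposition import PCA

      X, _ = load_iris(return_X_y=True)

      pca = PCA(n_components=2)
      X_reduced = pca.fit_transform(X)
      print("reduced shape:", X_reduced.shape)                 # (150, 2)
      print("explained variance ratio:", pca.explained_variance_ratio_)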