
My most recent machine learning model isn’t working. What am I supposed to do?

If you deal with data in general, and machine learning algorithms in particular, you may have experienced the annoyance of a model that refuses to learn the task at hand. You’ve done everything, but the accuracy metric isn’t improving. So, what’s next? What exactly is the issue? Is this an insurmountable problem, or is there a solution you’re overlooking?

First and foremost, don't panic. Even the most seasoned machine learning experts have experienced this feeling of being confused and unsure of how to proceed. Artificial Intelligence (AI), data science, or predictive analytics – whatever you want to call it – is more complicated than a set of predefined steps. The task, the data domain, and the available dataset all influence the choice of appropriate model, successful architecture, and best hyperparameters. A significant amount of manual work is still required to create a viable AI model, which helps to explain why some data scientists are wary of the many marketed automated applications that claim to train good data science models.

Now that we've hopefully calmed you down, let's see what we can do to get you out of this bind. Here are a few tricks we use when model training seems to be slipping away from us.

Please do your homework and double-check that your training technique is free of obvious flaws before unleashing your imagination. Here’s a checklist to make sure you’ve covered all of your bases:

  • The project's goal is clear and well established, and all stakeholders are on board. In other words, your project answers a specific question.

  • Your data is of sufficient quality.
  • The training set is large enough for the model architecture you're attempting to train.
  • Training isn't prohibitively slow. If your training set is too large, you can extract a smaller sample for training.
  • For classification problems, the training set adequately represents all classes. If a few classes have low representation, you can merge them into an 'Others' class; if there are too many classes, you can reduce their number by grouping them with a clustering algorithm first (see the sketch after this checklist).
  • There is no data leakage between the training and test sets.
  • There are no noisy/empty attributes, too many missing values, or too many outliers in the dataset.
  • If the model needs it, the data has been normalized.
  • The hyperparameters have been correctly set and optimized, which implies that you are familiar with the algorithm. Always study the fundamentals before using a technique; you'll need to be able to explain the procedure and the results to the stakeholders.
  • For the problem at hand, the evaluation metric is appropriate.
  • The model has been trained long enough to prevent underfitting.
  • The model does not overfit the data.
  • The performance isn't too good to be true. Be wary of models with 100 percent accuracy or zero error: this is a red flag that something is wrong; most likely, the dataset contains a variable that is synonymous with the target variable you're trying to predict.
  • Finally, are you sure you're actually stuck? Sometimes the model works nicely, or at least reasonably well, and all that's missing is the next step.
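
To make the class-grouping item above concrete, here is a minimal sketch in Python. It assumes a pandas DataFrame with a column named target and a 5 percent frequency threshold; both are illustrative choices for this example, not fixed rules.

    # Minimal sketch: merge under-represented classes into an 'Others' bucket.
    # The column name 'target' and the 5% threshold are illustrative assumptions.
    import pandas as pd

    df = pd.DataFrame({"target": ["a"] * 50 + ["b"] * 45 + ["c"] * 3 + ["d"] * 2})

    freq = df["target"].value_counts(normalize=True)
    rare = freq[freq < 0.05].index                      # classes below 5% frequency
    df["target"] = df["target"].where(~df["target"].isin(rare), "Others")

    print(df["target"].value_counts())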
If any of the above checks fail, the issue should be easy to detect and a fix easy to implement.

For everything else, we've compiled a list of suggestions to support your due-diligence checklist and to provide creative inspiration when the current implementation seems hopeless.

Check for Business Purpose Alignment

Everybody says it, and we all repeat it: make sure you understand the business case before you start coding like a bulldozer. Different departments speak different languages, so before we get started, let's make sure we understand what's needed.

Let’s make sure we’re not trying to develop a traffic prediction app when stakeholders are expecting a self-driving car, for example. It should go without saying that, given the amount of time and money available, a traffic prediction app is more likely to be implemented than a self-driving vehicle. However, as previously said, we speak different languages. Let’s make sure our objectives are in sync.

Once we’ve gotten started, we can ask for input on the intermediate milestones on a regular basis. It’s far less frustrating to incorporate improvements in an incomplete application than it is to present the final product to customers who are dissatisfied. This occurs more often than you would expect.

Iterate quickly on the first test models and present the findings early for timely input.

Examine the Data Quality

If you're confident that you're building the right application to meet the needs of your stakeholders, the issue may lie with your data. The next step is to examine the data quality. Adequate data is needed for any data science task.

First and foremost, the data must be representative of the task at hand. If the problem is one of classification, all classes must be adequately described, with all of their nuances. If the problem is a numerical prediction, the dataset must be wide enough to cover all plausible scenarios. If the data is inadequate for the model you're trying to train (the dataset is too small or not general enough), you have two options: revise the business requirements, or collect more, and more general, data.

Second, all dimensions of the data domain must be represented: collectively, the input attributes must contain all the required information. Be sure to import and preprocess all of your attributes.

Investigate feature engineering methods to create more informative features. Examine the usefulness of your features and concentrate on the most valuable ones. Can you come up with new features that have a bigger effect on your model?
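
As a sketch of what this could look like in practice, the snippet below engineers a debt-to-income ratio and then checks feature importances with a random forest. The toy dataset, the column names, and the new feature are all illustrative assumptions.

    # Minimal sketch: engineer a new feature and inspect feature usefulness.
    # The toy data and the 'debt_ratio' feature are illustrative assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    df = pd.DataFrame({
        "income": [40, 80, 25, 60, 90, 30],
        "debt":   [10, 20, 20, 15, 10, 25],
        "churn":  [0, 0, 1, 0, 0, 1],
    })
    # A ratio can carry more signal than either raw column alone.
    df["debt_ratio"] = df["debt"] / df["income"]

    X, y = df.drop(columns="churn"), df["churn"]
    model = RandomForestClassifier(random_state=0).fit(X, y)
    for name, importance in zip(X.columns, model.feature_importances_):
        print(f"{name}: {importance:.2f}")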

Next, remove any columns with too many errors, too much noise, or too many missing values. They risk misleading the algorithm and producing subpar performance. In the age of big data, it's tempting to believe that the more data you have, the better, but this isn't always the case.
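
Here is a minimal sketch of such a filter, assuming a pandas DataFrame and a 50 percent missing-value threshold; the threshold is our illustrative choice and should be tuned per project.

    # Minimal sketch: drop columns with too many missing values.
    # The 50% threshold is an illustrative assumption, not a fixed rule.
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "good":   [1.0, 2.0, 3.0, 4.0],
        "sparse": [np.nan, np.nan, np.nan, 4.0],
    })
    keep = df.columns[df.isna().mean() < 0.5]   # fraction of NaNs per column
    df = df[keep]
    print(df.columns.tolist())                  # -> ['good']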

Also, make sure that no data from the training set leaks into the test set. Look for duplicate records; for time series analysis, implement strict time-based partitioning; and double-check the logic of the data collection process. Some fields are only recorded after the target variable is known. Including such fields reveals the value of the target variable, resulting in an unrealistically good evaluation and an unusable deployed application. We recall a project on predicting car accident fatalities in which the field "run" was only filled in for certain cases, and only after the fatality had occurred; unsurprisingly, the model learned to associate that field with a high probability of death. Make sure you stay within the boundaries of the data collection process.
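
For the time series case, here is a minimal sketch of strict time-based partitioning; the column names, the synthetic data, and the cutoff date are all illustrative assumptions.

    # Minimal sketch: strict time-based partitioning to prevent leakage.
    # Column names and the cutoff date are illustrative assumptions.
    import pandas as pd

    df = pd.DataFrame({
        "timestamp": pd.date_range("2021-01-01", periods=100, freq="D"),
        "value": range(100),
    })
    df = df.drop_duplicates()                   # duplicates can straddle the split

    cutoff = pd.Timestamp("2021-03-01")
    train = df[df["timestamp"] < cutoff]        # train strictly on the past
    test = df[df["timestamp"] >= cutoff]        # evaluate strictly on the future
    print(len(train), len(test))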

Examine the Model Size and the Hyperparameters

The data is good. The business goals are clear. And yet the algorithm still isn't working properly. What's wrong? It is time to get familiar with the algorithm and fine-tune some of its hyperparameters.

A critical hyperparameter is the size of the machine learning model. A model that is too small underfits, whereas a model that is too large overfits. What is the best way to find the ideal middle ground? Regularization terms, pruning, dropout techniques, and/or a classic validation set can all be used to keep the model size in check.
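
As one possible way to automate that search for the middle ground, here is a minimal Keras sketch combining dropout with early stopping on a validation split. The layer sizes, the dropout rate, and the synthetic data are all illustrative assumptions.

    # Minimal sketch: dropout plus early stopping on a validation set,
    # two common guards against overfitting an oversized model.
    # Layer sizes, dropout rate, and the synthetic data are illustrative.
    import numpy as np
    from tensorflow import keras

    X = np.random.rand(1000, 20)
    y = (X.sum(axis=1) > 10).astype(int)

    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(0.3),              # randomly silence 30% of units
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    stop = keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
    model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[stop], verbose=0)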

If you’ve decided to use deep learning neural networks for your project, you can choose from a variety of different units, each of which is quite specialised in dealing with a specific type of problem. Recurrent neural units, LSTM units, or GRUs may be more fitting than classic feedforward units when dealing with time and the need for your algorithm to remember the past. When working with images, a cascade of convolutional layers can aid the neural network in extracting better image features. An autoencoder architecture might be an interesting option if you’re dealing with anomalies. And so forth.

There are at least as many activation functions as there are neural units in deep learning networks. Choose wisely: some activation functions work well in certain settings, while others don't. For example, despite their popularity in the deep learning community, ReLU activation functions do not perform well in an autoencoder; autoencoders seem to prefer the classic sigmoid.
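
Rather than taking the activation choice on faith, it is easy to test: the sketch below builds the same tiny autoencoder with the activation function as a swappable hyperparameter. The architecture, epochs, and random data are illustrative assumptions.

    # Minimal sketch: compare activation functions in a tiny autoencoder.
    # Layer sizes, epochs, and the random data are illustrative assumptions.
    import numpy as np
    from tensorflow import keras

    def build_autoencoder(activation, n_inputs=30, n_code=8):
        model = keras.Sequential([
            keras.Input(shape=(n_inputs,)),
            keras.layers.Dense(n_code, activation=activation),    # encoder
            keras.layers.Dense(n_inputs, activation=activation),  # decoder
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    X = np.random.rand(500, 30)
    for act in ("sigmoid", "relu"):
        history = build_autoencoder(act).fit(X, X, epochs=20, verbose=0)
        print(act, round(history.history["loss"][-1], 4))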
Aside from neural networks, the split criterion (information gain/entropy) in decision trees and random forests is an interesting hyperparameter to investigate.
In a clustering algorithm, the number of clusters controls how many different groups you can discover in the data.
And so on for a slew of additional machine learning algorithms. Of course, fine-tuning a model's hyperparameters requires thorough knowledge of the algorithm and of the parameters that govern it. Which leads to the next question: do we actually know the algorithm we're using?
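
The snippet below touches both of the examples just mentioned, using scikit-learn as an illustrative library: switching the decision tree's split criterion to entropy, and sweeping the number of clusters in k-Means. The toy data and the values of k are our assumptions.

    # Minimal sketch: two of the hyperparameters mentioned above.
    # The blob data and the values of k are illustrative assumptions.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_blobs(n_samples=300, centers=4, random_state=0)

    # Decision tree: use information gain/entropy instead of the Gini default.
    tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)

    # Clustering: the number of clusters sets how many groups can be found.
    for k in (2, 3, 4, 5):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        print(k, round(km.inertia_, 1))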

Get to Know the Algorithm

The widespread availability of user-friendly interfaces for data science software has made it easier to misuse algorithms. Don't be fooled! The simplicity of the GUI does not remove the complexity of the algorithm. We've seen plenty of random parameter settings applied to algorithms that weren't well understood. Every now and then, the best thing to do is to sit down and learn more about the math behind the core algorithm and its variants.

There is a wealth of learning material available on the internet in the form of courses, videos, and blog posts. Simply put it to use! Every now and then, stop what you're doing and devote some time to deepening your knowledge, either on your own or as a team.

When planning a project, consider including something new to test or try out (a tool, a technique, an algorithm, etc.). Use the project as a chance to gain additional experience. This extends internal knowledge within the company and can spark creativity; it is a long-term investment. Keep a backlog of ideas about how you would use your data in the future, and write down what you would do if you had more time. This backlog is useful both when the project goes well and stakeholders want to extend the work, and when the project goes poorly and you need a list of possible follow-up activities.

One more hint: while working on the project, also pay attention to the organizational culture. The people involved in a project are often its most important element, because it is the people who form the company's internal culture. So make sure you have the right people on your project to build the right culture for your business.

Look for Solutions That Already Exist

To train a machine learning model successfully, you must understand the algorithm. You must also apply all the tips and tricks that the algorithm requires or forbids: normalization, shuffling, encoding, and other similar operations. Other data scientists have likely implemented a similar solution before, so:

Don't reinvent the wheel! Search for existing solutions to similar tasks that you can adapt to your project. Look not only in books and scientific papers, but also in blogs and code repositories, and talk with colleagues. Make sure you take and reuse ideas from previous projects that worked well. Colleagues in particular are a great source of material, hints, and ideas. Be willing to reciprocate: share your work and accept constructive feedback. Even organize review sessions, where your coworkers can contribute their experience and creativity.

Don't restrict yourself to solutions to similar tasks within the same domain. There is a great advantage in the cross-pollination of fields: anomaly detection in mechanical engines and fraud detection in credit card transactions, for example, often share the same techniques. Read up on what has been done in different spaces. If you're working on fraud detection, don't be afraid to attend a chemistry webinar. You never know what kind of inspiration you might get out of it.

What is next?

Training an algorithm is not the whole of a data science project. There is a lot more to it to ensure that all the pieces fit together into a viable solution. We must properly prepare the data before training the algorithm, and even after training and evaluating the algorithm, we still need to perform a few operations based on the results. Frequently, projects get caught in limbo, unsure of what to do now that the results are in.

If we develop a churn predictor, we need to plan actions, such as campaigns or call-center responses, for customers at high risk of churning. If we run an analysis of influencers and their social media messages, we must devise a strategy for approaching them. If we build a chatbot, we need to figure out how to incorporate it into the final product. If we create an automated translator, we must insert it at the end of each utterance, before the conversation passes to the next speaker. If we develop an anomaly detector, we must schedule it to run regularly so that issues in the mechanical chain are detected quickly.

When we get to this point, we must be prepared. We must consider the actions and decisions we will make in light of the project’s findings and be prepared to take the next steps.


Stop and Take a Deep Breath

If you've done everything and your model still won't learn to an acceptable error level, take a break, get some distance from your work, and come back to it with a fresh perspective. This has proven successful for us on several occasions.

Never give up!
Machine learning models are fantastic when they work, but getting them to work is a difficult challenge. Figuring out what went wrong and how to fix it is a multidimensional problem, much like machine learning itself. We hope we've given you some suggestions on how to get out of the quagmire of a non-improving model.

If you're confident that the model architecture has been properly implemented, that the training set is sufficiently general, that the evaluation metric is appropriate, that the model has been trained long enough without overfitting the data, and that you've done your due diligence on the application and found nothing wrong, then the issue must be of a more general nature. Maybe the model isn't the right one, or the preprocessing isn't appropriate for that type of data, or the procedure has some other flaw. It's time to keep reading and exploring!

Our general advice is to keep trying new things and not give up if your model doesn't work the first few times. After training many models and a few rounds of trial and error, you will become an expert, and then it will be our turn to learn from your experience.

