I'm trying to find the winner in a tic-tac-toe game. I check whether the indexes of my list are equal, but it doesn't seem to work. When the indexes are replaced with the letter X or O, I want to check if they are equal. Sorry, I couldn't upload the whole question because it has too much code, but this is most likely the relevant part; in the find_winner function I'm trying to find the winner.

```python
# Print the position board (the format string was garbled in the original
# post; reconstructed here assuming board is a list of nine cells)
print("Position Board \n{} | {} | {}\n{} | {} | {}\n{} | {} | {}"
      .format(board[0], board[1], board[2],
              board[3], board[4], board[5],
              board[6], board[7], board[8]))

Player1 = input("Please Choose, X or O \n").upper()
while Player1 not in ("X", "O"):
    Player1 = input("Please Choose Only X or O \n").upper()

acceptables_positions = list(range(9))  # valid board positions, 0-8

print("Select position for your sign between 0 - 8\n"
      "You can check the position board to be sure that your choice is in the place you want")

Turn1 = int(input("Player 1 \nPlease play your move, between values 0-8: "))
while Turn1 not in acceptables_positions:  # Check if the input value is in the range 0-8
    Turn1 = int(input("Player 1 \nPlease play your move: "))
# Change the index value and replace it with Player 1's sign

Turn2 = int(input("Player 2 \nPlease play your move, between values 0-8: "))
while Turn2 not in acceptables_positions:
    Turn2 = int(input("Player 2 \nPlease play your move: "))

def find_winner():
    ...  # body omitted in the original post
```

Preprocessing is embedded in the file. If you want to make any changes in training the model, including using the F1CE loss function or different hyperparameters, change the related files, which in this instance are hyperparameteres.py and f1ce_loss.py. Furthermore, feature extraction is not embedded in the main model; you need to use the methods in the feature_extraction.py file to add the features at the end of each sample.
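For the winner check the question asks about, comparing the board entries along each complete winning line usually works better than comparing individual index pairs. A minimal sketch, assuming board is the nine-cell list from the post and that played cells hold "X" or "O" (the parameter and constant names here are illustrative, not from the original code):

```python
# All index triples that form a winning line on a 3x3 board
WINNING_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def find_winner(board):
    """Return 'X' or 'O' if some line holds three equal signs, else None."""
    for a, b, c in WINNING_LINES:
        if board[a] == board[b] == board[c] and board[a] in ("X", "O"):
            return board[a]
    return None
```

The `board[a] in ("X", "O")` guard matters if unplayed cells still hold their index numbers, since three untouched cells would otherwise never compare equal, but three identical placeholders in a differently initialized board could.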
Persian Emotion Detection using ParsBERT and Imbalanced Data Handling Approaches

Abstract

Emotion recognition is one of the machine learning applications which can be done using text, speech, or image data gathered from social media spaces. Detecting emotion can help us in different fields, including opinion mining. With the spread of social media, platforms like Twitter have become data sources, and the language used on these platforms is informal, making the emotion detection task difficult. EmoPars and ArmanEmo are two new human-labeled emotion datasets for the Persian language. These datasets, especially EmoPars, suffer from an imbalance in the number of samples between classes. In this paper, we evaluate EmoPars and compare it with ArmanEmo. Throughout this analysis, we use data augmentation techniques, data re-sampling, and class weights with Transformer-based Pretrained Language Models (PLMs) to handle the imbalance problem of these datasets. Moreover, feature selection is used to enhance the models' performance by emphasizing the text's specific features. In addition, we provide a new policy for selecting data from EmoPars, which selects the high-confidence samples; as a result, the model does not see samples that lack a specific emotion during training. Our model reaches a Macro-averaged F1-score of 0.81 and 0.76 on ArmanEmo and EmoPars, respectively, which are new state-of-the-art results on these benchmarks.

Repository structure:
|_ augmentation: notebook used for data augmentation
|_ augmented datasets: datasets with augmented samples
|_ dataset modifier: notebook used to create datasets using thresholds or removing uncertain samples
|_ main dataset: includes the EmoPars and ArmanEmo datasets
|_ modified datasets: result of the dataset modifier notebook
|_ models: files to create binary classifiers
|_ data: dictionary used to detect misspelled words
|_ multilabel: files to train the multilabel classifier
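The abstract mentions class weights among the imbalance-handling techniques. As an illustrative sketch only (not the repository's actual code; the function name and emotion labels are hypothetical), inverse-frequency weights in the style of scikit-learn's "balanced" heuristic can be computed like this:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so under-represented classes receive larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}

# Toy imbalanced label list: 8 "neutral" samples vs. 2 "anger" samples
weights = inverse_frequency_weights(["neutral"] * 8 + ["anger"] * 2)
# -> {"neutral": 0.625, "anger": 2.5}: the rare class is up-weighted
```

Such a dictionary can then be converted to a tensor and passed to a weighted loss, for example via the weight argument of PyTorch's CrossEntropyLoss, so that errors on rare emotions contribute more to the training signal.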