Viewing and Changing Mailbox Quotas in Plesk

In Plesk Obsidian Web Admin Edition 18.*, the subscription mailbox quotas are not always displayed correctly in the UI, which can be confusing. The correct values can be retrieved via the Plesk CLI:

plesk db "SELECT concat(mail.mail_name,'@',domains.name) AS 'Email address',mn_param.val AS 'Mailbox usage',Limits.value AS 'Mailbox limit' FROM mail LEFT JOIN mn_param ON mail.id=mn_param.mn_id LEFT JOIN domains ON mail.dom_id=domains.id LEFT JOIN Subscriptions ON domains.id=Subscriptions.object_id LEFT JOIN SubscriptionProperties ON Subscriptions.id=SubscriptionProperties.subscription_id LEFT JOIN Limits ON SubscriptionProperties.value=Limits.id WHERE mn_param.param='box_usage' AND Subscriptions.object_type='domain' AND SubscriptionProperties.name='limitsId' AND Limits.limit_name='mbox_quota'"

Further information on this is available at https://support.plesk.com/hc/en-us/articles/12377087443607-How-to-get-a-list-of-all-email-accounts-and-their-disk-usage-via-a-command-line-interface-in-Plesk.

The quota of a single email account can be viewed with

plesk bin mail --info name@domain.example

It can also be changed with

plesk bin mail -u name@domain.example -mbox_quota 1024M

Details can be found in the documentation at https://docs.plesk.com/en-US/obsidian/cli-linux/using-command-line-utilities/mail-mail-accounts.39181/

VMware Player, Ubuntu, and EFI Boot

By default, a new VM created in VMware Player uses legacy BIOS boot. As a consequence, if you install Ubuntu directly, no EFI bootloader is installed. Often this is not a problem, but in some cases it can cause difficulties; one example is described at https://communities.vmware.com/t5/VMware-Workstation-Pro/How-to-give-focus-to-the-guest-OS-without-a-mouse/td-p/2281123.

An existing VMware Ubuntu installation can be converted to an EFI boot instance afterwards. How this is done on the Ubuntu side is described at https://help.ubuntu.com/community/UEFI#Creating_an_EFI_System_Partition. Essentially, you have to create the EFI boot partition (FAT32-formatted) manually.

Along the way, however, VMware itself must also be convinced to boot via EFI from now on. Unfortunately, this cannot be done the conventional way, as the VMware Player UI does not expose an EFI setting. Instead, you have to patch the instance's .vmx configuration file manually, adding:

firmware = "efi"

Further hints are available at https://www.youtube.com/watch?v=U0ZAmyFxZvE. Among other things, it shows that bios.bootdelay = 5000 adds a five-second delay at the BIOS screen, giving you more time to enter the firmware setup via ESC or F2.

Analyzing Git LFS Problems

If you run into problems with git-lfs, e.g. an error saying that the host key (of github.com) is no longer accepted, you can set the environment variables

GIT_TRACE=1
GIT_CURL_VERBOSE=1

to see what exactly happens during a git lfs command (e.g. git lfs push origin master --all).

Quelle: https://github.com/git-lfs/git-lfs/issues/2791#issuecomment-352536265

Tic-Tac-Toe and AI: Stacked Multi-Output Model (Part 6)

In our previous blog post, we saw that it is possible to provide multiple outputs, one for each specific use case. So far, the two binary use cases, winning and winner, and the categorical use case, move, do not have any mutual dependencies: their results are all derived directly from the original inputs.

Let's now see whether stacking the layers can help reduce the model's complexity while still producing the same outputs at the same quality (accuracy = 1.0).
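
As an illustration of the idea (my own sketch, not the post's actual code; the layer sizes and output setup are assumptions), "stacking" could look like this with the Keras functional API: the dependent heads reuse a shared hidden representation together with the winning head's output, instead of deriving everything from the raw inputs again.

```python
import tensorflow as tf

# Illustrative sketch only: a shared trunk whose intermediate
# representation is reused ("stacked") by the dependent heads.
inputs = tf.keras.Input(shape=(9,), name="board")
trunk = tf.keras.layers.Dense(64, activation="relu")(inputs)

winning = tf.keras.layers.Dense(1, activation="sigmoid", name="winning")(trunk)

# The dependent heads see the trunk plus the "winning" signal.
stacked = tf.keras.layers.Concatenate()([trunk, winning])
winner = tf.keras.layers.Dense(1, activation="sigmoid", name="winner")(stacked)
move = tf.keras.layers.Dense(4, activation="softmax", name="move")(stacked)

model = tf.keras.Model(inputs=inputs, outputs=[winning, winner, move])
model.compile(
    optimizer="adam",
    loss={"winning": "binary_crossentropy",
          "winner": "binary_crossentropy",
          "move": "categorical_crossentropy"},
)
```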

Continue reading ‘Tic-Tac-Toe and AI: Stacked Multi-Output Model (Part 6)’ »

Tic-Tac-Toe and AI: Wrapping Up the Four Models – Multi-Output (Part 5)

As we have seen in the previous blog posts, determining whether a board has a winner, which player the winner is, and with which move the winner won the game can all be done using neural networks consisting only of Dense layers. However, each case has an inherent complexity, so a certain "minimal" number of units (and layers) is necessary:

Use Case | Layers | Dense Layer Configuration | Count of Parameters
Winning  |   3    | 64/64/128                 | 13.3k
Winner   |   1    | 40                        | 460
Move     |   2    | 80/128                    | 11.7k

This suggests a kind of "complexity ranking" for the three challenges: the task with the highest complexity is the "winning problem", followed by the "move problem". The "winner problem" is the easiest to solve, as it can be answered accurately with only a single small Dense layer.

Given the principle of multi-output models, you may ask whether it is possible to retrieve all three pieces of information from a single model. Well, let us try:
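
As a sketch of the starting point (my own illustration; the output layers are my assumption, so the per-branch parameter counts only roughly match the table above), the three networks can be combined as independent branches of one multi-output model using the Keras functional API:

```python
import tensorflow as tf

# The three single-purpose networks from the table, as parallel
# branches of one model. Output layers are assumptions.
inputs = tf.keras.Input(shape=(9,), name="board")

# "Winning" branch: 64/64/128 (~13.3k parameters)
w = tf.keras.layers.Dense(64, activation="relu")(inputs)
w = tf.keras.layers.Dense(64, activation="relu")(w)
w = tf.keras.layers.Dense(128, activation="relu")(w)
winning = tf.keras.layers.Dense(1, activation="sigmoid", name="winning")(w)

# "Winner" branch: a single Dense layer of 40 units
v = tf.keras.layers.Dense(40, activation="relu")(inputs)
winner = tf.keras.layers.Dense(1, activation="sigmoid", name="winner")(v)

# "Move" branch: 80/128 (~11.7k parameters), four one-hot classes
m = tf.keras.layers.Dense(80, activation="relu")(inputs)
m = tf.keras.layers.Dense(128, activation="relu")(m)
move = tf.keras.layers.Dense(4, activation="softmax", name="move")(m)

model = tf.keras.Model(inputs=inputs, outputs=[winning, winner, move])
```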

Continue reading ‘Tic-Tac-Toe and AI: Wrapping Up the Four Models – Multi-Output (Part 5)’ »

Tic-Tac-Toe and AI: And what about the Winning Move? (Part 4)

After having implemented neural networks to determine whether a Tic-Tac-Toe board has a winner, and which player the winner is, it is now time to look at which move is the winning one.

Looking at this case, you will notice that there are only three possible cases:

  • The winner is clear after the third move (of either “X” or “O”),
  • the winner is determined after the fourth move (of either “X” or “O”), or
  • the winner is determined after the fifth move (only “X” may achieve that state, as there is an odd number of fields available on the board).

Additionally, there is the special case that no winner exists, to which we assign the special identifier "nine moves" (the last single-digit value). So, in total, there are four possible states, of which exactly one is valid at a time (a "one-hot" situation).
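
The four states can be encoded as a one-hot vector. Here is a minimal sketch (my own illustration, not the post's code), using 3, 4, 5, and 9 as class identifiers:

```python
# Encode the four possible "winning move" states as a one-hot vector.
# 9 is the special identifier for "no winner" discussed above.
CLASSES = [3, 4, 5, 9]

def one_hot_move(winning_move):
    """Return a 4-element one-hot list for the given state."""
    if winning_move not in CLASSES:
        raise ValueError("unexpected state: %r" % winning_move)
    return [1 if c == winning_move else 0 for c in CLASSES]
```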

However, this also means that we need to adjust our data preparation:

Continue reading ‘Tic-Tac-Toe and AI: And what about the Winning Move? (Part 4)’ »

Tic-Tac-Toe and AI: Who is the Winner? (Part 3)

After having determined if a board has a winner using TensorFlow in the previous blog post, let us tackle a very similar question: Who is the winner?

Again, this is a binary decision: either X ("0") or O ("1") wins a board. It is also possible that the game ends in a tie, so that no one is the winner; for the sake of simplicity, let's still use "0" as the result in that case. Whether the board really has a winner is something we already have a high-accuracy neural network to decide in the first place.
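
As a minimal sketch (my own illustration, not the post's code), the winner and its binary label can be derived from a game given as an ordered list of positions 1..9 (X moves first, players alternate), using the "top-left to bottom-right" numbering from Part 1:

```python
# The eight winning lines in the 1..9 board numbering.
WINNING_LINES = [
    {1, 2, 3}, {4, 5, 6}, {7, 8, 9},  # rows
    {1, 4, 7}, {2, 5, 8}, {3, 6, 9},  # columns
    {1, 5, 9}, {3, 5, 7},             # diagonals
]

def winner(moves):
    """Return "X", "O", or None for a tie."""
    x_cells, o_cells = set(), set()
    for i, pos in enumerate(moves):
        cells = x_cells if i % 2 == 0 else o_cells  # even index: X's move
        cells.add(pos)
        if any(line <= cells for line in WINNING_LINES):
            return "X" if i % 2 == 0 else "O"
    return None

def winner_label(moves):
    # Binary label as discussed above: "O" -> 1, "X" or tie -> 0.
    return 1 if winner(moves) == "O" else 0
```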

Using the same imports and setup as before, we again prepare our data. This time, our labels carry the information about the winner:

Continue reading ‘Tic-Tac-Toe and AI: Who is the Winner? (Part 3)’ »

Tic-Tac-Toe and AI: A Winning Board (Part 2)

As a first learning task for applying AI to Tic-Tac-Toe, let us determine whether a board is a winning one, i.e. whether either X or O has won the game. We cannot directly tell the neural network the rules that make an assignment a winning one; instead, we need to train it by example. For that, we have already prepared some data in a previous post.

The idea is to train a TensorFlow model with several Dense layers. By varying the model's configuration, we want to determine how complex such a model needs to be (e.g. how many parameters it requires) to fulfill this requirement. Looking at the resulting accuracy will also be interesting.
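
For illustration (my own sketch, not the post's final code; this particular 64/64/128 configuration is just one of the variants to be tried), such a model could look like this:

```python
import tensorflow as tf

# One candidate configuration for the binary "winning board"
# classifier. Layer sizes are illustrative; the point of the post
# is to vary them and find a minimal model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(9,)),                      # the nine board fields
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # winning: yes/no
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```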

Some information in advance: the computations were done using TensorFlow 2.13.1 on a Windows WSL2 machine with an NVIDIA GeForce RTX 4060 (8 GB) installed. Mixed precision was not enabled.

As usual you may download the entire example using the following link:

  tictactoetf.zip (9.6 KiB)

Let’s get started…

Continue reading ‘Tic-Tac-Toe and AI: A Winning Board (Part 2)’ »

Preparations: Tic-Tac-Toe and AI (Part 1)

You might still remember the times when you played Tic-Tac-Toe (a.k.a. noughts and crosses) in your childhood:

Tic-Tac-Toe board with X having won the game.

In a series of blog posts, I want to apply neural networks to this well-known game. But before we can do that, we need to do some preparations.

Eventually, we want to teach a neural network to determine:

  • if a board has a winner,
  • who the winner is, and
  • after which move the winner has won the board.

To be able to properly describe such a board, we need to define an order on its fields. I decided to use the order "top-left to bottom-right", like this:

You may also apply a different, more sophisticated scheme for defining the position, but let’s keep it simple.

With this nomenclature, we can now describe the board shown initially using a set of integers: 1,6,9,5,4,7,3,8,2. This becomes even more interesting if you consider it not a set but a list (i.e. including order). The order is important, because a board may have two "winners", depending on which player got there first. Note that

  • by definition, let us assume that X starts the game,
  • every other round, it is the other player's turn (even positions are 'O' moves, odd positions are 'X' moves), and
  • that there are always exactly nine positions in total until the board is fully populated (such a fully populated board is also called an assignment).

Moreover, the list must not contain any duplicates: 1,1,1,1,1,1,1,1,2 is also a list of numbers, but it does not describe a proper Tic-Tac-Toe game, because position 1 appears more than once. The generation of the list is therefore a matter of choosing without repetition. This brings us to another aspect: we first need to determine whether a list of integers is a valid board at all.
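
The validity check itself is straightforward. As a minimal sketch (my own illustration): a list of nine integers describes a fully played board exactly when it is a permutation of 1..9:

```python
def is_valid_board(moves):
    """True if moves is a permutation of 1..9 (choosing without repetition)."""
    return sorted(moves) == list(range(1, 10))
```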

Continue reading ‘Preparations: Tic-Tac-Toe and AI (Part 1)’ »

TensorFlow on GPU: The Memory Hog

By default, TensorFlow eagerly preallocates GPU memory; the background is that it wants to prevent memory fragmentation. The amount allocated is around 80% of the available memory. Even when running a rather small model with fewer than 45k parameters, the monitoring tool nvidia-smi shows this:
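
If you would rather have TensorFlow allocate GPU memory on demand, memory growth can be enabled; a minimal sketch:

```python
import tensorflow as tf

# Enable on-demand allocation instead of eager preallocation.
# This must run before any operation initializes the GPU device.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```

Alternatively, setting the environment variable TF_FORCE_GPU_ALLOW_GROWTH=true achieves the same without code changes. Note that memory growth trades the preallocation away for a higher risk of exactly the fragmentation mentioned above.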

Continue reading ‘TensorFlow on GPU: The Memory Hog’ »