Coming Soon!

Data & Projects pages!

In keeping with our promise to be an "open book" source for our AI music channel projects, we are working on our approach to data recording. Though generation is supposed to be randomized by design (we assume), we hope to develop some kind of parameter set for creating "similar" AI-generated audio tracks, specifically in Mubert.com.

So far, we think we have discovered ways of producing results within certain boundaries by formatting the text request line that guides "text-to-music" generation. Our findings to date will be posted as screenshots, text, and video links on our video channels as well as on this website. From there, we will design web pages that share the parameters we are developing and show how we use them to document the AI creations.

For example, we believe that including the text "BPM=(80)" guides the AI in Mubert.com to create an audio track at 80 beats per minute. So far, we have been able to produce a number of tracks landing within roughly +/- 10 BPM of that target.
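The idea above can be sketched in a few lines of Python. Note this is our own convention under test, not documented Mubert behavior: the "BPM=(n)" hint and the +/- 10 tolerance are assumptions drawn from our observations, and the function names are hypothetical.

```python
TARGET_BPM = 80
TOLERANCE = 10  # observed drift: roughly +/- 10 BPM around the target

def make_prompt(base: str, bpm: int) -> str:
    """Append our BPM=(n) hint to a text-to-music request line."""
    return f"{base} BPM=({bpm})"

def within_tolerance(measured_bpm: float, target: int = TARGET_BPM,
                     tolerance: int = TOLERANCE) -> bool:
    """True if a measured track tempo fell inside our observed range."""
    return abs(measured_bpm - target) <= tolerance

prompt = make_prompt("lofi chill beat", 80)
print(prompt)                 # lofi chill beat BPM=(80)
print(within_tolerance(87))   # True  (within 10 BPM of 80)
print(within_tolerance(95))   # False (15 BPM off target)
```

A helper like `within_tolerance` would let us tag each generated track as a "hit" or "miss" when we log results for the data pages.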

In each video's description, we plan to provide the text prompts used to create the audio as well as the text used to create the images.

This website will document as much data as we can pull from the resulting audio. Playlists will be created to share categorized music video results for public enjoyment and data sharing.

Though our channels will focus on the "Lofi" and "Chillfi" music genres, not all results will necessarily land in either category. We will therefore create other channels and/or playlists to share that data as well. This will help drive viewers to the channels sharing our data with the public, where feedback can be gathered and shared with others who may find this type of collaboration useful.

Mubert.com Theoretical Input Structure (API)

The structure shown in the Python code from Mubert's API (image on the right) is how we will be structuring our inputs from here on.

mubert_tags_string = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk'
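For our logging, it helps to split that comma-separated string into individual tags. A minimal sketch, assuming only the tag string shown above (the variable name comes from the API example code; everything else here is our own):

```python
# Split Mubert's comma-separated tag string into individual tags
# so we can record exactly which tags went into each request.
mubert_tags_string = ('tribal,action,kids,neo-classic,run 130,pumped,'
                      'jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk')

mubert_tags = [tag.strip() for tag in mubert_tags_string.split(',')]
print(mubert_tags[:3])   # ['tribal', 'action', 'kids']
print(len(mubert_tags))  # 12
```

Note that "run 130" appears as a single tag, which is what suggested to us that a run number can carry a BPM value.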

Proposed Tag Structure:

[mood, genre, theme, instrument, run #-- bpm]
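The proposed structure can be sketched as a small helper. The field names, the comma-joined output format, and the "run" prefix for BPM are our own conventions based on the tag string above, not a documented Mubert input format:

```python
def build_tag_line(mood: str, genre: str, theme: str,
                   instrument: str, bpm: int) -> str:
    """Assemble [mood, genre, theme, instrument, run #-- bpm] into one line."""
    # 'run <bpm>' mirrors the 'run 130' tag seen in the API example string.
    return ','.join([mood, genre, theme, instrument, f'run {bpm}'])

line = build_tag_line('calm', 'lofi', 'study', 'piano', 80)
print(line)  # calm,lofi,study,piano,run 80
```

Keeping the fields in a fixed order like this should make it easier to compare prompts across tracks when we publish the data pages.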

While researching prompt structure, I came across a GitHub repository that had some code, along with a video of audio samples from tracks produced with some example prompts.

There I found the Python code for the API, with a sample prompt structure in some of the function calls.

To the right is an image of our prompt structure in use; at the bottom of the image is some of the output it produced. We were aiming for a precise beats-per-minute (BPM) value, but we have been unable to reproduce the desired output every time with this prompt structure.

This is just one small example of what we are working on and trying to accomplish.