Before I go back to the topic, I do have this to say about Unreal. I can't code in those languages you mention; I can only code in batch at the moment. As for Unreal Engine, that engine is a workhorse but a headache (not a delight) to work with. When you set everything up in the editor, the engine doesn't generate the new code scripts for you; it seems you have to create the scripts YOURSELF and retype the same file parameters all over again to configure them. And that's not even talking about how long it takes to build its lighting.
In Unreal 4 you now have to build a Blueprint just to get it to display text on the screen, which took the poor guy on YouTube about 15 minutes of setting up nodes. So what if your game had thousands of lines of dialog, do you have to build thousands of these Blueprints to display it? Unreal piles on extra calculations with all of these Blueprint nodes, which is part of why it needs top high-end machines to meet its demands.
What type of programming do you do? Have you done any Unreal Engine stuff yet? I haven't solved my Unreal Engine code issues with the models yet (trying to get them to animate in the game). Because I can't solve the engine code issues, I've been writing the rest of my game in a Windows batch file instead, working out the storyline and the game elements. It's a sci-fi RPG combat game that lets you explore space, use jumpgates, and even explore another galaxy, so it won't be limited to just one galaxy. It's only turn based at the moment because there's no 3D engine in it yet. I wrote it in batch code so I can at least see how the game is shaping up in 2D form and how the dialog runs, before thinking about putting it into a real 3D engine to build the 3D world of it. That's why I'm looking at Unreal Engine, to see if it can do the job and let me have thousands of lines of interactive dialog, since the game is storyline driven.
I don't know if Unreal can do the job with the dialog though, because it uses this Blueprint node system. I'd rather it just read the text strings in from a simple text dialog file.
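From what I've read, the C++ side of Unreal 4 can apparently read a plain text file without any Blueprint nodes at all. Here's a minimal sketch of what I mean (untested on my end, since I can't code in C++ yet): the FFileHelper, FPaths and AddOnScreenDebugMessage calls are standard UE4, but the ADialogLoader actor name and the Dialog.txt file are just placeholders, so treat it as a sketch rather than a finished dialog system.

```cpp
// DialogLoader.h -- hypothetical actor, boilerplate only
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "DialogLoader.generated.h"

UCLASS()
class ADialogLoader : public AActor
{
    GENERATED_BODY()

protected:
    virtual void BeginPlay() override;
};

// DialogLoader.cpp -- load a plain text dialog file (one spoken line per
// text line) and print the first line on screen. Dialog.txt is a placeholder.
#include "DialogLoader.h"
#include "Misc/FileHelper.h"
#include "Misc/Paths.h"
#include "Engine/Engine.h"

void ADialogLoader::BeginPlay()
{
    Super::BeginPlay();

    // Absolute path to <Project>/Content/Dialog.txt
    const FString DialogPath = FPaths::ProjectContentDir() / TEXT("Dialog.txt");

    FString FileContents;
    if (FFileHelper::LoadFileToString(FileContents, *DialogPath))
    {
        // One array entry per line of the file
        TArray<FString> DialogLines;
        FileContents.ParseIntoArrayLines(DialogLines);

        // Show the first line on screen for five seconds as a sanity check
        if (GEngine && DialogLines.Num() > 0)
        {
            GEngine->AddOnScreenDebugMessage(-1, 5.0f, FColor::White, DialogLines[0]);
        }
    }
}
```

If that holds up, the thousands of dialog lines could stay in plain text files and the engine would just index into the array, instead of one Blueprint per line.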
I think your tune sounds OK, but it stays locked at max volume throughout the piece; the only time it drops in volume and softens is near the end of the track, and then it just cuts off. Am I reading that right, that downloading this tune would take up 74 MB of space on my computer? You must be joking. I would expect a tune like this to weigh in at around 3-4 MB, but 74 MB? What are you uploading, a raw WAV file? (Uncompressed 16-bit stereo WAV at 44.1 kHz runs roughly 10 MB per minute, so the number would fit.) Why couldn't the tune have been encoded as OGG or MP3 instead? That gives a much smaller file size and is a lot more game friendly.
For game making we need to use small file sizes. If we use big files like that, frame rate performance is likely to degrade, causing jerkiness or pausing during the game, especially if your game relies on a 3D engine.
Human speech carries a wide range of different emotions. Look on Wikipedia under emotions and you'll see that beyond the basic emotions there are many different groupings of emotions nested inside other groupings, and it's all these groupings inside groupings that make it complicated to build a naturally human-sounding speech engine for the English language.
So if they can figure out how to synthesize all those different emotional groupings of the words of the English language, then they might be able to get more natural-sounding English synthesis.
Yes, not all of us have the financial resources to pay for all the voice acting for our characters, so we have to turn to what free software can offer us. Festival is OK if you don't mind typing your text out inside brackets.
If you are trying to build an English speech engine up from scratch with this Audaspace in Blender, then you would probably need a dictionary database of words so the synthesizer knows how to group them. Our speech isn't grouped in letters; it's grouped in words and clusters of words when we speak, and it also has different rising and falling pitches within the words to express a wide range of emotions and feelings. And because there are many different feelings expressed in our speech, there are many different pitch groupings for each word, and for each word group, depending on the emotional state we are in when we speak the words. There was this white-haired Australian professor who explained it on YouTube, how speech synthesis works and why English synthesis is a disaster at the moment with the traditional methods they have been using: the traditional way only groups words together, it doesn't group together all the different pitch patterns in the words for the wide range of emotions and feelings. That's why in most TTS synthesis programs the spoken words sound flat and monotonous and not always natural, or sound like a disaster or a trainwreck like Anna etc., or sound robotic and without feeling.
So I don't think speech synthesis can form clear, audible words from just the 26 letters of the alphabet. I don't think that's enough to build a speech engine; I think you need all the words and all the different pitch groupings too. And that's complicated, because each word we speak uses a different pitch grouping depending on what emotion or feeling is being expressed at the moment the word is spoken.
So when that Gandalf-the-White professor says "Mary went" or "Mary had", he's only showing one grouping for one emotion, not the whole range of them.
There is a huge range of emotional characteristics. HAPPY: ecstatic, joyful, glad, cheerful. NEGATIVE: anger, fear, sadness, distrust, worry, jealousy.
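To make that concrete, here's a tiny, purely illustrative sketch of the kind of dictionary entry such an engine would need: not just the word, but a separate pitch contour for every emotional grouping the word can be spoken in. Every name and number in it is made up, only to show why the data piles up so fast.

```cpp
// Illustrative only: one dictionary entry = word + a pitch contour per emotion.
#include <iostream>
#include <map>
#include <string>
#include <vector>

enum class Emotion { Happy, Angry, Fearful, Sad, Neutral };

struct PitchContour {
    // Relative pitch targets across the word, e.g. rise-fall vs. flat.
    std::vector<double> targetsHz;
};

struct WordEntry {
    std::string word;
    std::map<Emotion, PitchContour> contours;
};

int main()
{
    WordEntry mary{
        "Mary",
        {
            { Emotion::Happy,   { {220.0, 260.0, 240.0} } },  // lively rise-fall
            { Emotion::Sad,     { {200.0, 190.0, 180.0} } },  // slow fall
            { Emotion::Neutral, { {210.0, 210.0, 210.0} } }   // flat, "TTS-sounding"
        }
    };

    // A flat engine only ever picks the Neutral contour; an emotional one
    // would pick whichever contour matches the line's emotional tag.
    const PitchContour& chosen = mary.contours.at(Emotion::Neutral);
    std::cout << "Contour points for 'Mary': " << chosen.targetsHz.size() << "\n";
    return 0;
}
```

Multiply that by every word in the dictionary and every emotional grouping, and you can see why the traditional word-only approach falls flat.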
I really didn't like Festival, although I have looked at it, because you had to type everything out inside brackets with that program to get it to say anything. I didn't like the syntax it uses to get it to talk. Festival was written by Unix programmers, or people who use Linux, and I don't like Unix commands with their parentheses. I thought in this program you'd just type out your words and it speaks, like with all the other synthesis programs, but not Festival: in Festival you have to type out a whole command line, something like (SayText "Hello there"), just to get it to speak a line of text or save it to a WAV file.
The documentation for the program has been written all wrong. It's written at the advanced level of the engine and not at the layman's level; the professors have written it at their own level of understanding. The documentation needs to be explained in plain, simple English without all the heavy Unix scripting that confuses people who have no knowledge of Unix scripts. Or they should redo the whole program and give it a simple, friendly user interface, because not everybody understands Unix script commands, or the complexities of the speech engine they built and then tried to explain in their documentation. In other words, to understand all about Festival you need the knowledge of those professors, so there's a big gap in the learning curve. That is the BIGGEST flaw in free software, and it's why it isn't user friendly.
And when I installed Festival, the text2wave program is NOT compiled as a text2wave.exe (a Windows executable); instead it ships as text2wave.sh, a script in Festival's scripting format, so I can't even run it on Windows as it stands. The programmers of Festival should have packaged it all up as a Windows .exe installer instead of expecting us to build the thing ourselves. Failing to provide a Windows installer for the program is just lazy, in my opinion.
Voice acting is alright for games that only use a little spoken dialog (small projects); under a few hundred lines, or even a thousand lines, is OK if your budget can afford it. But it becomes a problem when you've got a big project with thousands of lines of dialog, like the KotOR games, because it gets very expensive when you have to pay out something like $5 a line for a voice actor to do it for you instead of running it through a computer synthesis program.
So for your game character to speak just one paragraph of text, you could easily be paying $50-$100 just to get 10 lines of dialog recorded.
So because I already have over 15,000 lines of dialog in my game script (the script as a whole is now over 35,000 lines long), I realize I can't get voice actors to do it for me; at $5 a line that works out to something like $75,000, which is far too costly. So it looks like I have to stick with voice synthesis for the majority of the dialog, and that at least lets me test all the storylines for the main game characters. I think only the big publishing companies have the kind of resources to cover those kinds of expenses on big projects.
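If synthesis does end up carrying the bulk of the dialog, my plan would be to batch-generate the audio offline rather than inside the engine. A rough sketch of that idea is below; it's untested on my end, it assumes Festival's text2wave can actually run on your system (so Linux, or Windows with a Unix-style shell installed), and the dialog.txt name and line_0001.wav output pattern are just placeholders.

```cpp
// make_voices.cpp -- offline batch sketch: read dialog.txt (one spoken line
// per text line) and shell out to Festival's text2wave for each one.
// Assumes text2wave is on the PATH; all file names are placeholders.
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

int main()
{
    std::ifstream dialog("dialog.txt");
    if (!dialog) {
        std::fprintf(stderr, "could not open dialog.txt\n");
        return 1;
    }

    std::string line;
    int index = 0;
    while (std::getline(dialog, line)) {
        if (line.empty()) continue;

        // Write the single line to a temporary file, since text2wave
        // reads its input from a text file.
        std::ofstream tmp("line.tmp.txt");
        tmp << line << "\n";
        tmp.close();

        // Build an output name like line_0001.wav, line_0002.wav, ...
        char outname[64];
        std::snprintf(outname, sizeof(outname), "line_%04d.wav", ++index);

        // text2wave <input> -o <output>  (Festival must be installed)
        std::ostringstream cmd;
        cmd << "text2wave line.tmp.txt -o " << outname;
        if (std::system(cmd.str().c_str()) != 0) {
            std::fprintf(stderr, "text2wave failed on line %d\n", index);
        }
    }
    return 0;
}
```

The same loop could then push every generated WAV through an OGG encoder like oggenc afterwards, so the files stay small enough to be game friendly.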
Cool
Looks good. Can you model the fishy cat? I have its head, just not its body. I don't know how to rig a cat.
Do you know anything about the UDK Unreal engine by any chance? I'm trying to get my custom models to animate in the game.
He knows how to blend his instruments with his effects.