Let's say you want something like shown below. The character stream is first tokenized into a token stream by a lexer. Whenever we find "//", we ignore whatever comes after it on that line, and we do the same thing with the newline character. Thus, we have built a basic lexer that converts the character stream into a token stream. Here, the input is tokenized line by line and parsed line by line. The token stream is then parsed to form a parse tree. The parser should also handle arithmetic operations, which can be done with expression rules. For example, a + b corresponds to line 34 in the program above, which returns the parse tree ('add', ('var', 'a'), ('var', 'b')); similarly, a = 10 corresponds to line 22. Now we have converted the token stream into a parse tree. The basic idea is to take the tree and walk through it, evaluating arithmetic operations hierarchically; this process is called recursively until the entire tree is evaluated and the answer is retrieved.
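The recursive tree walk described above can be sketched as a small Python function. This is a minimal sketch rather than the article's exact code: the ('add', ...) and ('var', ...) tuple shapes follow the parse tree shown above, while the ('num', ...) and ('var_assign', ...) node shapes and the env dictionary of variable values are assumptions made for illustration.

```python
# Minimal recursive evaluator for tuple-shaped parse trees.
# ('add', ...) and ('var', ...) follow the trees shown above;
# 'num', 'var_assign', and the env dict are illustrative assumptions.
def evaluate(node, env):
    kind = node[0]
    if kind == 'num':                 # literal integer
        return node[1]
    if kind == 'var':                 # look up a variable's value
        return env[node[1]]
    if kind == 'var_assign':          # a = <expr>
        env[node[1]] = evaluate(node[2], env)
        return env[node[1]]
    if kind == 'add':                 # recurse into both subtrees
        return evaluate(node[1], env) + evaluate(node[2], env)
    raise ValueError(f'unknown node type: {kind}')

env = {}
evaluate(('var_assign', 'a', ('num', 10)), env)            # a = 10
evaluate(('var_assign', 'b', ('num', 32)), env)            # b = 32
print(evaluate(('add', ('var', 'a'), ('var', 'b')), env))  # prints 42
```

Each arithmetic node evaluates its children before combining them, which is exactly the "hierarchical" evaluation mentioned above: the recursion bottoms out at 'num' and 'var' leaves and the results bubble back up.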
Let's make a compiler that performs simple arithmetic operations. We are building a basic programmable script, so let's stick to integers; however, feel free to extend the same for decimals, longs, etc. We can also support comments. We will need some basic token types such as NAME, NUMBER, and STRING. NAME tokens are simply the names of variables, which can be defined by the regular expression [a-zA-Z_][a-zA-Z0-9_]*. Whenever we find one or more digits, we should emit a NUMBER token, and the number must be stored as an integer. STRING tokens are string values bounded by quotation marks (" ") and can be defined by the regular expression \".*?\". In any programming language there will be spaces between tokens, which the lexer should skip. We also create the basic literals like '=', '+', etc. Now let's build a class BasicLexer which extends the Lexer class from SLY.