lex - Building a lexical analyzer in Java


I am currently learning lexical analysis in compiler design. To understand how a lexical analyzer actually works, I am trying to build one myself. I am planning to write it in Java.

The input to the lexical analyzer is a .tex file of the following format.

  \section{Introduction} arbitrary text \section{Scope} arbitrary text \section{Relevance} arbitrary text \subsection{Benefits} arbitrary text \subsubsection{In real life} \subsection{Drawbacks} \end{document}

The output of the lexical analyzer should be a table of contents with page numbers, written to another file.

  1 Introduction 1
  1.1 Scope 1
  1.2 Relevance 2
  1.2.1 Benefits 2
  1.2.1.1 Real life 2
  1.2.2 Drawbacks 3

I hope this problem is within the scope of lexical analysis.

My approach: read the .tex file character by character and check for '\'. When one is found, check whether it actually starts a sectioning command, and if so set a flag variable indicating the type of sectioning.
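A minimal sketch of that scanning approach in Java could look like the following. The class, enum, and method names are my own choices rather than anything from the original plan, and error handling is left out: the scanner walks the text, and whenever it sees '\' it tries to read a sectioning command and the brace-delimited title that follows.

import java.util.ArrayList;
import java.util.List;

// Sketch of the approach described above: scan for '\', check whether it
// starts a sectioning command, and if so emit a token with level and title.
public class SectionLexer {

    public enum Level { SECTION, SUBSECTION, SUBSUBSECTION }

    public record Token(Level level, String title) { }

    public static List<Token> lex(String input) {
        List<Token> tokens = new ArrayList<>();
        int i = 0;
        while (i < input.length()) {
            if (input.charAt(i) == '\\') {
                // Read the command name that follows the backslash.
                int j = i + 1;
                while (j < input.length() && Character.isLetter(input.charAt(j))) {
                    j++;
                }
                String command = input.substring(i + 1, j);
                Level level = switch (command) {
                    case "section"       -> Level.SECTION;
                    case "subsection"    -> Level.SUBSECTION;
                    case "subsubsection" -> Level.SUBSUBSECTION;
                    default              -> null;   // not a sectioning command
                };
                if (level != null) {
                    // Extract the title between the braces that follow the command.
                    int open = input.indexOf('{', j);
                    int close = input.indexOf('}', open + 1);
                    if (open >= 0 && close > open) {
                        tokens.add(new Token(level, input.substring(open + 1, close).trim()));
                        i = close + 1;
                        continue;
                    }
                }
                i = j;
            } else {
                i++;   // arbitrary text between commands is skipped
            }
        }
        return tokens;
    }
}

Run on the sample input above, this would produce tokens such as (SECTION, "Introduction"), (SECTION, "Scope"), and so on, which a later stage can number and write out as the table of contents.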

I hope the above approach will work for building the lexer. If it is within the lexer's scope, how do I add page numbers to the table of contents?

You can add them in whatever way you like. I would recommend storing the contents of your .tex file in a tree or map-like structure, then reading in your page-number information and applying it appropriately.
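As a concrete illustration of that idea (the Heading record, the hard-coded entries, and the page-number map below are my own stand-ins for whatever your lexer and a page-break pass actually produce), you could number the headings with per-level counters, keep them in an insertion-ordered map, and merge the page numbers in afterwards:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the map-based idea: number the headings the lexer recognised
// with per-level counters, store them in an ordered map, then merge in the
// page numbers as a second step.
public class TocBuilder {

    record Heading(int depth, String title) { }   // depth 0 = section, 1 = subsection, ...

    public static void main(String[] args) {
        // Headings in document order, as the lexer would emit them
        // (Scope and Relevance treated as subsections to match the sample output).
        List<Heading> headings = List.of(
                new Heading(0, "Introduction"),
                new Heading(1, "Scope"),
                new Heading(1, "Relevance"));

        int[] counters = new int[3];
        Map<String, String> toc = new LinkedHashMap<>();
        for (Heading h : headings) {
            counters[h.depth()]++;
            for (int d = h.depth() + 1; d < counters.length; d++) {
                counters[d] = 0;                  // a new heading resets the deeper levels
            }
            StringBuilder number = new StringBuilder();
            for (int d = 0; d <= h.depth(); d++) {
                if (d > 0) number.append('.');
                number.append(counters[d]);
            }
            toc.put(number.toString(), h.title());
        }

        // Page numbers found separately (hypothetical page-break pass or file).
        Map<String, Integer> pages = Map.of("1", 1, "1.1", 1, "1.2", 2);

        toc.forEach((number, title) ->
                System.out.println(number + " " + title + " " + pages.getOrDefault(number, 0)));
    }
}

This prints "1 Introduction 1", "1.1 Scope 1", "1.2 Relevance 2", matching the desired output; replacing the hard-coded lists and maps with data read from your files keeps the structure the same.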

A clumsier option would be a second parser, which reads the output file of your first parser together with the line numbers and joins them appropriately.

It really depends on you, since this is a learning exercise; try to build it as if someone else were going to use it. How user-friendly is it? If you ever needed this in the real world you would probably use an existing tool, but it is still good to learn the concepts, even if rolling your own can encourage messy practices!

