jFiles is derived from code written by Dries Driessen. It is used here under license.

Before beginning with explanations of how to use jFiles to create and consume JSON, it's worth taking a moment to discuss Clarion data structures and the use of Extended Name Attributes.
      
Clarion has four primary data structures (GROUP, QUEUE, FILE and VIEW) which jFiles uses as the source and/or destination of JSON conversions. Indeed, the primary use case of jFiles is to convert these structures to a JSON string (or file on disk), or to import JSON from a string or file into one of these structures.
      
In the past, matching JSON to these structures could be difficult because JSON has some features that Clarion does not, and vice versa. Often code was necessary in embed points (or derived methods) to add additional information to jFiles so it could do the import, or export, as you desired. For example, formatting a date (a LONG in Clarion) into a formatted string in the JSON, or deformatting the date when importing the JSON.
      
In 2019 a new approach to feeding information to generic code was proposed on ClarionHub. The core of the suggestion was to extend the use of the NAME attribute (something Clarion was already doing in some places) to act as a more generic extension of the Clarion data structures. This was followed up with the release of CapeSoft Reflection, a free implementation of those ideas, in 2021.
      
jFiles 2 implemented some of these ideas, but jFiles 3 expands on the concept. Full support for extended name attributes, using the Reflection class, has been added. Spending a few minutes understanding this approach will greatly simplify your use of jFiles. jFiles makes reading and writing JSON trivial, IF you have the right structures and the correct Extended Name Attributes in place.
      
      
Hint: The Name attribute is limited to 100
      characters long by the Clarion language.
      
For example, take a simple Clarion queue;
      
      
INVOICEQUEUE Queue
        CUSTOMER         String(100),name('Customer')
        DATE             Long,name('Date')
        PAID             Byte,name('Paid')
        SIGNATURE        String(1024),name('Signature')
                     End
      
In the above example some important information is already in the Name attribute - specifically the case of the tag name to use in the JSON. (Hint: in Clarion, Labels are case insensitive; Names are not.) But using extended names, this can be taken further;
      
      
INVOICEQUEUE Queue
  CUSTOMER         String(100),name('Customer')
  DATE             Long,name('Date | @d6')
  PAID             Byte,name('Paid | boolean')
  SIGNATURE        &StringTheory,name('Signature | StringTheory')
             End
      
The full list of supported extended attributes is as follows;

| Attribute | Description |
| [types] Byte, Bfloat4, Bfloat8, Decimal, pDecimal, Long, Ulong, String, Cstring, PString, Signed, Unsigned, Word, Dword, Real, Sreal, Short, UShort | Clarion data types. Typically these do not need to be set, as they will be detected, but they can be included and are valid attributes. |
| @Picture | A Clarion (extended) picture which will be used for formatting the value when creating JSON, or deformatting it when importing. Extended pictures (as supported by StringTheory) are allowed. |
| Base64 | The field will be Base64 encoded before saving into the JSON. When loading, the field will be Base64 decoded before writing into the Clarion structure. |
| Binary, Bin | Does the same as Base64. Deprecated (but still works); use Base64 instead. See Base64 above. |
| Boolean | JSON values will be set to true or false. The Clarion value should be 0 or 1 to match. |
| Private | Only applies to creating JSON. The field will not be exported to the JSON. See also ReadOnly below. |
| Queue | The field is a reference to another queue type. When exporting, the reference is followed and the data in the queue is added to the JSON. When importing, the child queue is populated. |
| ReadOnly | Only applies to consuming JSON. The field contents in the structure will not be set from the JSON. See also Private above. |
| Rename | The tag name to use in the JSON is different to the External Name that is set. In other words this overrides the External Name. |
| Required | Only applies to creating JSON. Fields that are required are included in the export, even if they are blank (or 0). |
| [types] StringJson, CstringJson, StringTheoryJson | The field in the structure is a string, but contains (valid) JSON data. On export it is injected into the JSON "as is". On import it is read into the field "as a string"; ie the contents of this part of the JSON document are not separated into different fields. |
| [types] StringPtr, CstringPtr, PStringPtr, StringJsonPtr, CStringJsonPtr | The field is a pointer (&String, &Cstring, &Pstring respectively). On export the contents of the field will be exported, as for a string. On import the pointer will be NEWed (if not already set) and the field populated. |
| [types] StringTheory | The field in the structure is a reference to a StringTheory object. The contents of the object will be used when exporting, and when importing the object will be NEWed (if necessary) and populated. |
| [types] StringXML, CstringXML, StringXMLPtr, CstringXMLPtr, StringTheoryXML | Treated the same as String, Cstring and so on; the field contains XML text, which is handled as a plain string. |
| Table | A pointer to a FILE structure (&File). Export only. |
| View | A pointer to a VIEW structure (&View). Export only. |
| JsonName | The tag name to use in the JSON is different to both the External Name and the Rename that is set. In other words this overrides the External Name and Rename attributes. |
      
      
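To illustrate, several of these attributes can be combined in one structure. A sketch (the queue and its fields are hypothetical, not part of jFiles);

ContactQueue  Queue
  FullName        String(100),name('name | required')     ! always exported, even when blank
  Photo           String(10000),name('photo | base64')    ! Base64 encoded in the JSON
  Active          Byte,name('active | boolean')           ! true/false in the JSON
  Internal        String(255),name('internal | private')  ! never exported
              End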
      
Setting Attributes at Runtime

In some cases it's not possible to set the attributes where the structure is declared. In these situations you can set the attributes at runtime.
        
Setting these attributes makes use of the Reflection object, which is a property of the JsonClass. You can call methods on this object using the syntax

json.reflection.whatever

The Reflection class is documented at https://capesoft.com/accessories/reflectionsp.htm.
        
        Most of the reflection methods take a GroupName and a ColumnName.
        Figuring out these names can be tricky, so it's best not to try too hard
        - and simply ask the class to tell you. In other words, before embarking
        on the process of figuring out the correct reflection calls, add this
        line AFTER your call to json.Save, or json.Load.
        
        
 json.Reflection.Walk()
        
This call sends the list of group names and column names to DebugView++. If you run that utility on your computer, and then execute the code that calls Walk, you'll see what the Reflection class figured out. Using the names there, you can then supplement the information. For example;
        
        
[rf]FIELD: GroupName=[queue] ColumnName=[date] Num=[1] Type=[rf:NotSet] Rename=[] Pic=[] Attributes=[DATE]
[rf]FIELD: GroupName=[queue] ColumnName=[time] Num=[2] Type=[rf:NotSet] Rename=[] Pic=[] Attributes=[TIME]
        
In the above output you can see the group name is queue and the column names are date and time.

Once you know this you can add calls to the json.reflection methods AFTER the call to json.SetTagCase and BEFORE the call to json.Save or json.Load.
        
If you want to override attributes that exist (ie that have been set in the field's Extended Name) then start with a call to;

json.reflection.Parse(groupname,structure)
json.BuildTableQueue(view)    ! only if the structure is a view

If you are just going to supplement the field information then you can skip the above lines.
        
There are a number of Set methods you can use, as listed below;

json.reflection.SetAttributes
  Sets multiple attributes for a field. This method takes a string, exactly as
  you would use it in the Name attribute for the field. For example;
  json.reflection.SetAttributes('queue','date','date | @d6 | rename(datum)')

json.reflection.SetAttribute
  Sets a single attribute for a field. For example;
  json.reflection.SetAttribute('queue','date','private')

json.reflection.SetPicture
  Sets the picture for a field. For example;
  json.reflection.SetPicture('queue','date','@d6')

json.reflection.SetRename
  Sets the name of the tag in the output. For example;
  json.reflection.SetRename('queue','date','datum')

json.reflection.SetType
  Sets the type of the field in the output. For example;
  json.reflection.SetType('queue','date','long')
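Pulling the runtime calls together, a sketch might look like this (the queue, the file name, and the group/column names reported by Walk are assumptions for illustration);

  json.Start()
  json.SetTagCase(jf:CaseAsIs)
  json.reflection.SetPicture('queue','date','@d6')      ! format the date on export
  json.reflection.SetRename('queue','date','datum')     ! use "datum" as the JSON tag
  json.Save(InvoiceQueue,'invoices.json')

Note that the reflection calls are made after the call to SetTagCase and before the call to Save, as described above.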
        
For the purposes of this section it is assumed that the JSON object is declared as follows;

json  JSONClass
      
Loading from a JSON File into a Group, Queue or Table

Loading a JSON file into a structure is a single line of code.

json.Load(Structure, JsonFileName, <boundary>)
        
        You can use a Table, Queue or Group using this approach.
        
        If the JSON file contains named objects, then specify the name as the
        boundary parameter. If you omit this parameter, but the JSON you want to
        import is inside a named JSON object, then nothing will be imported.
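For example, assume a file customers.json containing a named object (the file name and queue here are hypothetical);

  {"customers": [ {"name": "Alice"}, {"name": "Bob"} ]}

This would be loaded with;

CustomerQueue  Queue
  Name             String(100),name('name')
               End

  code
  json.Start()
  json.Load(CustomerQueue,'customers.json','customers')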
        
The fields in the JSON file are matched to fields in your record structure. There are three properties which affect this matching. For more information on matching see the section on Field Matching Tips.
        
After the load is complete two methods are available for your use;

GetRecordsInserted()
GetRecordsUpdated()

These methods return a count of the records inserted (into the Table or Queue) and the number of records updated (in the Table). These counters are reset to zero by a call to the Start method.
        
When the Load method completes it will return either jf:Ok (0) or jf:ERROR (-1). If jf:ERROR is returned then a description of the error is in the Error property. A call to the ErrorTrap method will have been made as well.
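A sketch combining the return value, the Error property and the counters (the queue and file name are hypothetical);

  json.Start()
  If json.Load(InvoiceQueue,'invoices.json') = jf:ERROR
    Message('Load failed: ' & json.Error)
  Else
    Message('Inserted ' & json.GetRecordsInserted() & ' records')
  End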
        
        
The use of a JSON Array (ie list) structure usually implies that the Clarion structure should also be a list, and equally the use of a JSON Object (not a list) implies a group, or value, on the Clarion side. However jFiles automatically supports importing a simple object into a List structure (the same as a list with one value), and also importing a List structure into a Clarion Group structure.
      
Loading from a JSON String into a Group, Queue or Table

Loading a JSON string into a structure is a single line of code. It's the same as loading from a file, but uses a StringTheory object instead of a file name.
        
str  StringTheory
  code
  str.SetValue(jsonText)                 ! populate the string with your JSON (jsonText is hypothetical)
  json.Load(Structure, str, <boundary>)
        
This is the same as the Loading from a File method described above, except that the source is a StringTheory string, and not a file.
      
Loading a JSON String into the JSON object for processing

The JSON object can be loaded from a string or file using one of these methods;

json.LoadString(StringTheory)

or

json.LoadFile(Filename)
        
        Once the JSON text has been loaded into the object it is automatically
        parsed, and then you can directly inspect the contents of the JSON.
        
        
jsonItem  &JSONClass
str       StringTheory
  code
  str.SetValue('json text is here')
  json.LoadString(str)
        
        Remember that all JSON files are just a collection of items. Each item
        in turn can be a collection of more items. Once the string is loaded
        into the JSON object the Records method returns the number of items at
        the top level;
        
        
x = json.Records()
        
        Like a queue you can loop through the items in the object using the Get
        method
        
        
loop x = 1 to json.records()
            jsonItem &= json.Get(x)
          end
        
        You can check the name of the item using the Name method
        
        
loop x = 1 to json.records()
            jsonItem &= json.Get(x)
            if jsonItem.Name() = 'Customers'
              ! do something
            end
          end
        
        and of course you can get the value using the GetValue method. So;
        
        
loop x = 1 to json.records()
            jsonItem &= json.Get(x)
            if jsonItem.Name() = 'Customers'
              somevalue = jsonItem.GetValue()
            end
          end
        
An alternative to looping through the items in the object is to use the GetByName method.

jsonItem &= json.GetByName('customers')

You can test if you got something back by checking that jsonItem is not NULL. This is important - using a NULL object will result in a GPF.

If not jsonItem &= Null
        
        Once you have a specific item, you can load this into a structure;
        
        
jsonItem.Load(structure)
        
The field names in your JSON will need to match the field names in your structure. See Field Matching Tips for hints on making the matching process better.
        
        Putting the code together it looks something like this;
        
        
jsonItem &= json.GetByName('customers')
        If not jsonItem &= Null
          jsonItem.Load(structure)
          End
        
If you have a specific field in the JSON you can extract it using the GetValueByName method;

somestring = json.GetValueByName('surname')
        
Deformatting incoming JSON Values

By default the contents of a JSON field will be copied into your Clarion field during the Load. However the format of the JSON field may not be the format you wish to store in your Clarion field. For example an incoming date, formatted as yyyy/mm/dd, may need to be deformatted in order to store it in a Clarion LONG field.

This can be done by adding the desired picture to the NAME attribute of the field. For example;
        
        
InvoiceQueue Queue
          Date Long,name('Date | @D3')
          End
        
All StringTheory Extended Pictures are supported. For more information on the name attribute see Extended Name Attributes.
        
        
jFiles 1

The process above is considerably simpler than the one below, and should be used wherever possible. If it is not possible, or you are reading old code (and trying to understand it), then the jFiles 1 method is described below.

To do this a method is provided;

json            Class(jsonClass)
DeformatValue     Procedure(String pName, String pValue),STRING,VIRTUAL
                End
          
Please note that the name being passed in here, in the pName parameter, is the JSON field name, not the Clarion field name. Using this name as the identifier it is possible to create a Case statement before the parent call, deformatting the value as required. For example;

json.DeformatValue Procedure (String pName, String pValue)
  code
  case pName
  of 'DATUM'
    Return deformat(pValue,'@d1')
  of 'TIME'
    Return deformat(pValue,'@t4')
  end
  Return parent.DeformatValue(pName, pValue)
        
Filtering Records when Loading

When loading into a Table or Queue it can be useful to filter out records which are not desired. This is done by embedding code into the ValidateRecord method.

json             Class(jsonClass)
ValidateRecord     Procedure(),Long,Proc,Virtual
                 End
        
If (your code in) this method returns jf:filtered then the record is not added to the Table or Queue and the next record is processed. If it returns jf:outofrange then the import is considered complete, and no further records are loaded. If it returns jf:ok then the record is added to the Table or Queue.
        
        When this method is called the table record, or queue record, has been
        primed with the record that is about to be written.
        
        Example
        
        
json.ValidateRecord Procedure ()
            Code
            If inv:date < date(1,1,2018) then Return jf:filtered.
            Return jf:ok
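For sorted incoming data, jf:outofrange can be used to stop the load early. A sketch building on the example above (the inv:date field is assumed, as before);

json.ValidateRecord Procedure ()
  Code
  If inv:date < date(1,1,2018) then Return jf:filtered.      ! skip this record, keep going
  If inv:date > date(12,31,2018) then Return jf:outofrange.  ! stop the load entirely
  Return jf:ok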
        
Match By Number

While JSON is commonly formatted as "name value pairs", it doesn't have to be this way. It could just be a collection of arrays, with no names at all. For example, say this is the contents of a file, a.json;
        
        [
              [45,80],
              [30,85],
              [16,4]
          ]
        
To load this into a structure it's clearly not possible to match the JSON to field names in the structure; rather it needs to import based on the position of the value. This can be done by setting the MatchByNumber property to true. For example;

aq QUEUE
  temp       string(255)
  pressure   string(255)
   end

  code
  json.start()
  json.SetMatchByNumber(true)
  json.load(aq,'a.json')
      
Loading a StringTheory object within a Group or Queue

Consider this JSON structure;
        
        {
              "product":"NetTalk",
              "description":"A really long description goes here",
              "installfile":"base64encoded file install - could be very large"
          }
        
In this situation one (or more) of the items in the group are of indeterminate length. This is clearly a situation for using a StringTheory object. The Clarion structure should look like this;
        
        
ProductGroup  Group
  Product         String(100),name('product')
  Description     &StringTheory,name('description | stringtheory')
  InstallFile     &StringTheory,name('installfile | stringtheory')
              End
        
Note the use of the Extended Name Attribute. You are free to create the StringTheory objects - or you can leave them as null (they will be automatically created). However, whichever approach you take, the objects MUST be disposed of after you are done with them.
        
        
        
Example

ProductGroup.Description &= New StringTheory
ProductGroup.InstallFile &= New StringTheory

json.Start()
json.SetTagCase(jf:CaseAsIs)
json.Load(ProductGroup,'somefile.json')

If not ProductGroup.Description &= Null
  ProductGroup.Description.Trace()
End

json.DisposeGroup(ProductGroup)    ! NOT optional
         
Tip: As with any pointer in a structure, it may end up being NULL if the incoming JSON does not include this field. So you should ALWAYS test the pointer before using it, as in the example above.
      
Loading a Queue within a Group
       Consider this JSON structure;
        
{
    "error": "validation_error",
    "cause": [
        { "cause_id": 369,
          "message": "some message"
        },
        { "cause_id": 121,
          "message": "some other message"
        }
    ]
}

It's clearly a group (since the outside brackets are {}) but it contains a list (cause []) - in other words, a queue inside a group.
        
In Clarion this group can be declared as;

CauseQueueType     Queue,Type
cause_id             long,name('cause_id')
message              string(255),name('message')
                   End

ErrorGroup         Group
Error                String(100),Name('Error')
Cause                &CauseQueueType,Name('cause | queue')
                   End

Note the use of a queue type declaration inside a group declaration here, and note the use of the Extended Name Attribute.
        
Before using this group, a new queue must be assigned;

ErrorGroup.Cause &= new CauseQueueType

then load it;

json.SetTagCase(jF:CaseAny)
json.Load(ErrorGroup, 'whatever.json')

then you can access the queue as any normal queue. For example;

GET(ErrorGroup.Cause,1)
Json.Trace(ErrorGroup.Cause.Message)

After using it, remember to dispose of the queue;

Dispose(ErrorGroup.Cause)
      Loading a Queue within a Queue
       Consider this JSON structure;
        
        [{
              "Id": 60,
              "Data": {
                  "TotalPaymentAmount": 90.63,
                  "Discounts": [{
                      "DiscountType": "DASH",
                      "Amount": 136.27
                  }, {
                      "DiscountType": "LOTS",
                      "Amount": 210.27
                  }]
              }
          }, {
              "Id": 61,
              "Data": {
                  "TotalPaymentAmount": 90.63,
                  "Discounts": [{
                      "DiscountType": "DASH",
                      "Amount": 325.27
                  }, {
                      "DiscountType": "LOTS",
                      "Amount": 499.27
                  }]
              }
          }, {
              "Id": 63,
              "Data": {
                  "TotalPaymentAmount": 90.63,
                  "Discounts": {
                      "DiscountType": "SPECIAL",
                      "Amount": 525.27
                  } 
              }
          }]
        
        This is a List, containing multiple records.  Inside each record is a
        group (Data) and inside each group is a queue of discounts.
        
        The (simplified) Clarion version of this structure would ideally look
        like this (but as you'll see this is not allowed)
        
        Messages            Queue
          Id                    Long
          Data                  Group
          TotalPaymentAmount      Decimal(10,2)
          Discounts               Queue
          Amount                    Decimal(10,2)
                                  End
                                End
                              End
        
Clarion does not, however, allow queues to be declared inside groups, or inside other queues. What is allowed are references to queues. This can be tricky to work with, to make sure there are no memory leaks - fortunately jFiles includes methods to make this safe.
        
The data declaration looks like this;

DiscountsQueueType  Queue,Type
DiscountType          String(20),name('DiscountType')
Amount                Decimal(10,2),name('Amount')
                    End

MessagesQ           Queue
Id                    Long,name('Id')
Data                  Group,name('Data')
TotalPaymentAmount      Decimal(10,2),name('TotalPaymentAmount')
Discounts               &DiscountsQueueType,name('Discounts | queue')
                      End
                    End
        
        
Note the use of the Extended Name Attribute in the declaration of the Discounts field.

Because the DiscountsQueueType is in scope in your code, it's necessary for your code to NEW this type. This is done in the NewPointer method, so the declaration looks like this;
        
        json         class(jsonClass) 
          NewPointer     procedure(String pColumnName),Derived
                       end
        
        And the method itself looks like this;
        
json.NewPointer Procedure(String pColumnName)
  code
  Case pColumnName
  Of 'Discounts'     ! note this is the json tag name, not the label. Case sensitive.
    MessagesQ.Data.Discounts &= NEW DiscountsQueueType
  End
        
That's all there is to it. The rest of the load is as normal;

  json.Start()
  json.SetTagCase(jf:CaseAsIs)    ! CaseAsIs uses the NAME attribute, not the LABEL
  json.Load(MessagesQ,str)
        
        
Note: Although jFiles includes code to support &SomeGroupType in the class, the Clarion NEW command does not support Group types, so effectively &GROUPs in Queues or Groups are not supported.
        
jFiles 1

An alternative approach is to use two Queues. (This approach is obsolete now, but is included here to help in understanding code that already exists.)
        
        MessagesQ           Queue
          Id                    Long,name('Id')
          Data                  Group,name('Data')
          TotalPaymentAmount      Decimal(10,2),name('TotalPaymentAmount')
                                End
                              End
          
          DiscountsQ          Queue  
          MessageId             Long
          DiscountType          String(20),name('DiscountType')
          Amount                Decimal(10,2),name('Amount')
                              End
        
All the messages go in one queue, and all the discounts go in another. The MessageId serves to determine which discounts belong to which message.
        
        The code to populate these two queues looks like this;
        
json              Class(JSONClass)
AddQueueRecord      Procedure(Long pFirst=0),Long,Proc,Virtual
                  End
oneMess           &jsonClass
node              &jsonClass
MessageId         long

  CODE
  Free(MessagesQ)
  Free(DiscountsQ)

  json.Start()
  json.SetFreeQueueBeforeLoad(False)
  json.SetRemovePrefix(True)
  json.SetReplaceColons(True)
  json.SetTagCase(jf:CaseAsIs)
  json.LoadString(jsonStr)
  json.Load(MessagesQ)

  Loop x = 1 to json.records()
    oneMess &= json.Get(x)
    node &= oneMess.GetByName('Id')
    if not node &= NULL
      MessageId = node.GetValue()
      ! now get the discounts
      node &= oneMess.GetByName('Discounts',2)
      if not node &= NULL
        node.Load(DiscountsQ)
      end
    end
  end
          
        
json.AddQueueRecord Procedure (Long pFirst=0)
Q   &Queue
  CODE
  Q &= self.Q
  If Q &= DiscountsQ
    DiscountsQ.MessageId = MessageId
  End
  Return Parent.AddQueueRecord(pFirst)
        
As you can see, in the above approach the AddQueueRecord method is overridden so that the extra field in the DiscountsQ is properly primed.

Aside: You may notice that the call to load the DiscountsQ uses the node object (node.Load(DiscountsQ)), but the object being overridden is the json object. Usually this would mean the derived code would not run. However, jFiles manages this automatically: because node is a reference to a jFiles object inside the object called json, code in the json object automatically applies to the nodes as well.
        
      
      Importing Recursive Nodes
       Consider the following json;
        
        [{ 
              "id": 1, 
              "Parent": 0, 
              "name": "John Smith", 
              "children": [{ 
                  "id": 2, 
                  "Parent": 1, 
                  "name": "Sally Smith"
               },{
                  "id": 3, 
                  "Parent": 1, 
                  "name": "Teresa Smith"
              }]
          }]
         
At first glance this looks like a list, but it's not. It's a collection of nodes, related to each other, in a tree-like pattern. The parent and child nodes however do have a common structure, which suggests this could be loaded into a queue.
        
        TestQ         QUEUE  
          id              LONG,NAME('id') 
          Parent          LONG,NAME('Parent')  
          name            STRING(100),NAME('name')  
                        END  
        
        To import a node structure into a list structure requires the engine to
        walk through the nodes, adding each one to the queue. This is done using
        the LoadNodes method.
        
          json.start()
            json.SetTagCase(jF:CaseAsIs)
            json.LoadFile('whatever.json') 
            json.LoadNodes(TestQ,'children')
        
One advantage of the JSON above is that it contains a Parent field, where every child node explicitly links to its parent. Often, though, this is not the case. Consider this JSON;
        
        [{ 
              "id": 1, 
              "name": "John Smith", 
              "children": [{ 
                  "id": 2, 
                  "name": "Sally Smith"
               },{
                  "id": 3, 
                  "name": "Teresa Smith"
              }]
          }]
        
        Now the identity of the parent is determined by the location of the node
        in the tree. If we move this into a queue then the location is lost. 
        The above structure suggests a queue like this;
        
        TestQ         QUEUE  
          id              LONG,NAME('id') 
          name            STRING(100),NAME('name')  
                        END 
        
        Since the location information is lost, an additional field to store
        the parent id is required.
        
        TestQ         QUEUE  
          id              LONG,NAME('id') 
          name            STRING(100),NAME('name')  
          ParentID        LONG  
                        END 
        
        
        Then in the call to LoadNodes this field, and the name of the
        identifying field, must be included.
        
          json.start()
            json.SetTagCase(jF:CaseAsIs)
            json.LoadFile('whatever.json') 
            json.LoadNodes(TestQ,'children',TestQ.ParentID,'id')
        
        The above code tells jFiles to use the ID field of one node as the
        ParentID of all child nodes.
        
      
      Pointers
       For known pointer types see;
        
        
        The approach described in this section should only be used for other
        kinds of pointers.
        
        Consider a queue or group structure which contains a reference. For
        example;
        
        
SomeQueue   Queue
          Date          Long
          Time          Long
          Notes         &Something
                      End
        
        When data is moved into the Notes field a more complicated assignment
        needs to take place (for each incoming record in the JSON).
        
        In order to make this possible, code needs to be added to the
        AssignField and GetColumnType methods.
        
        
json          Class(jsonClass)
GetColumnType   Procedure(*Cstring pGroupName, *Cstring pColumnName),Long,DERIVED
AssignField     Procedure(*Cstring pGroupName, *Cstring pColumnName, JSONClass pJson),DERIVED
              End
        
        The first thing to do is tell the system that a complex assignment needs
        to take place for a specific field. This is done in the 
GetColumnType
          method. In this example;
        
        
json.GetColumnType Procedure (*Cstring pGroupName, *Cstring pColumnName)
            CODE
            Case pColumnName
            of 'Notes'
              Return json:Reference
            End
            Return Parent.GetColumnType(pGroupName, pColumnName)
        
        As you can see in the above example, the pGroupName parameter is not
        actually needed in this method. It is passed in for your convenience.
        
        
          
          In jFiles 2 the methods were prototyped as
          
          json.GetColumnType Procedure (*Group pGroup, *Cstring pColumnName)
          json.AssignField   Procedure (*Group pGroup, *Cstring pColumnName, JSONClass pJson)
          
          Those are still supported, but the prototypes described above are now
          preferred.
        
        
        
        You are then responsible for the assignment. The assignment is done as
        follows:
        
        
json.AssignField Procedure (*Cstring pGroupName, *Cstring pColumnName, JSONClass pJson)
            CODE
            Case pColumnName
            of 'Notes'
              SomeQueue.Notes &= NEW(Something)
              SomeQueue.Notes = pJson.GetValue()
            End
            Parent.AssignField(pGroupName,pColumnName,pJson)
        
        As you can see in the above code, the column name is tested, and if it
        is the reference column (Notes) then a Something object is created, and
        the value is placed in it.
        
        This is done for each record in the queue. This means that you need to
        be very careful when deleting records from the queue and when freeing
        the queue. And the queue MUST be correctly emptied before ending the
        procedure or a memory leak will occur.
        
        For example;
        
        
FreeSomeQueue Routine
            loop while records(SomeQueue)
              Get(SomeQueue,1)
              Dispose(SomeQueue.Notes)
              Delete(SomeQueue)
            End
        
      Detecting Omitted and NULL values
       When importing into a structure (Group, Queue or
        Table), each field in the structure is primed from the incoming JSON.
        This works perfectly if the JSON contains a field with the same name. 
        
        If the field does not exist in the incoming JSON then the field in the
        structure is cleared, to a blank string or a zero value.
        If the field does exist, but the value is set as null, then again the
        field is cleared or set to zero.
        
        This approach is convenient and simple, but it makes it difficult, when
        working with the structure later on (perhaps a group or queue), to
        distinguish fields which were explicitly set to blank from fields which
        were omitted, and from fields which were set to null.
        
        To overcome this problem jFiles allows you to set specific values for
        omitted strings or numbers, and specific values for null strings and
        null numbers. While this doesn't necessarily solve the problem in all
        cases (you still need some values which the user will never
        legitimately use) it will be suitable in most cases.
        
        For example;
        
        json.Start()
          json.SetTagCase(jf:CaseAsIs)
          
          json.SetWriteOmittedString(True)
          json.SetOmittedStringDefaultValue('//omitted//')
          json.SetWriteOmittedNumeric(true)
          json.SetOmittedNumericDefaultValue(2415919000)
          
          json.SetNullStringValue('null')
          json.SetNullNumericValue(2415919001)
          
          json.Load(structure,'a.json')
        
        Note that you will need to detect, and manage, these values in your
        structure (group, queue, table), but at least you can tell that they
        have been omitted or set to null.
        
        The properties for omitted and null values are unrelated; you can use
        one set without the other if you like.
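        
        As a sketch of how these sentinel values might be checked after the
        load (the group and field names here are hypothetical, and the values
        match the example above);
        
          json.Load(SomeGroup,'a.json')
          if SomeGroup.Amount = 2415919000
            ! the Amount field was omitted from the JSON
          elsif SomeGroup.Amount = 2415919001
            ! the Amount field was explicitly set to null
          end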
        
      
      Loading a JSON List into a Group
        Typically if you have an incoming List of data you would load it into a
        matching Queue on the program side. It is however possible to use a
        Group structure, and have jFiles load that group, even if the incoming
        JSON contains a List structure.
        
        For example;
        
        Settings   Group
          Server       String(255)
          Port         Long
                     End
        
        This would usually expect JSON like this
        
        { 
              "Server":"capesoft.com",
              "Port":80
          }
        
        However the incoming JSON may occasionally be a list, like this
        
        [{ 
                  "Server":"https://capesoft.com",
                  "Port":80
              },
              { 
                  "Server":"https://capesoft.com",
                    "Port":443
              }
          ]
        
        If the Load method is called with the Group as the destination
        structure then each item in turn is loaded into the group, resulting in
        the last record in the JSON being left in the group when the Load
        completes.
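        
        For example, loading the JSON above into the Settings group (the file
        name here is an assumption);
        
          json.Start()
          json.Load(Settings,'settings.json')
          ! Settings now holds the last record from the JSON list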
        
        For each record loaded into the group the ValidateRecord
          method is called, so you can embed code in there if you want to
        iterate through the list. If the method returns jf:StopHere
        then the loop terminates, and the Group is left with the current
        contents.
        
        json             Class(jsonClass)
          ValidateRecord     Procedure(),Long,Proc,Virtual
                           End
          
          json.ValidateRecord Procedure ()
            Code
            If inv:date < date(1,1,2018) then Return jf:filtered.
            Return jf:ok
      
      High-Speed Importing to Tables
       Importing JSON to a table is fast, but does take
        time. One of the techniques for speeding up imports is to surround the
        imports with a LOGOUT / COMMIT statement. 
        
        The price of this approach though is that the tables are locked for the
        duration of the import. This means that other users won't be able to
        write to the tables (and may not be able to read from the tables) while
        the import is underway. So the choice is fast imports, with possible
        user blocking, or slower imports, but with no user blocking.
        
        A property exists to turn this feature on (it is off by default).
        
        json.SetLogoutOnImport(x)
        
        Where x is either 1, or 2. If x is 1, then the Logout is attempted. If
        it fails, the import will continue anyway, but as a normal import, not a
        logout/commit import. If it is set to 2, and the logout fails, then the
        import is abandoned, and the method returns JF:ERROR.
        
        For maximum speed it is recommended to commit the transaction every y
        records. A good value for y is "several thousand", so y defaults to
        5000. You can override this before the import by calling
        
        json.SetFlushEvery(6000)
        
        Setting it to 0 bypasses this feature, and only commits the transaction
        at the end of the entire import.
        
        One side effect of FlushEvery is that a new transaction is started
        every y records. This means that if the original LOGOUT failed (and x
        is 1), then it will be retried after y records. So even if the original
        LOGOUT fails, a later part of the import may still run inside a
        transaction.
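        
        Putting these settings together (the table and file names here are
        assumptions);
        
          json.Start()
          json.SetLogoutOnImport(1)   ! fall back to a normal import if LOGOUT fails
          json.SetFlushEvery(6000)    ! commit every 6000 records
          if json.Load(Customer,'customers.json') = jf:error
            ! import failed; details are reported via ErrorTrap
          end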
      
      Renaming Fields on Load
      When a queue is nested inside a group or queue, then
        the 
NewPointer method is called so the
        derived procedure can NEW the necessary structure. The name of the
        structure is passed to NewPointer so that it can note the correct
        structure to NEW.
        
        A problem can occur when multiple structures, with the same name, are
        used in multiple places.
        
        
element       Queue
          name            string(100)
          value1          &ValueQueueType,name('value | queue')
          extra           &ExtraQueueType,name('extra | queue')
                        End
          
          ExtraQueueType  Queue,Type
          comment           string(100)
          value2            &ValueQueueType,name('value | queue')
                          End
        
        In this context the passed in value has to be unique so that it can be
        identified in NewPointer. NewPointer needs to distinguish between value1
        and value2.
        Since the name field is passed to NewPointer the structure becomes;
        
        
element         Queue
          name              string(100)
          value1            &ValueQueueType,name('value1 | queue')
          extra             &ExtraQueueType,name('extra | queue')
                          End
          
          ExtraQueueType  Queue,Type
          comment           string(100)
          value2            &ValueQueueType,name('value2 | queue') 
                     End
          
        
        This would have the effect though of breaking the import, since the JSON
        contains value, not value1 or value2. In this situation the RENAME
        attribute can be used. So the structure becomes;
        
        
element         Queue
name              string(100)
value1            &ValueQueueType,name('value1 | queue | rename(value)')
extra             &ExtraQueueType,name('extra | queue')
                End
          
ExtraQueueType  Queue,Type
comment           string(100)
value2            &ValueQueueType,name('value2 | queue | rename(value)')
                End
          
        
        See Also : 
Renaming fields on Save
      Encodings
       JSON text is always encoded using one of the unicode encoding formats.
        The default is utf-8, and this is the format which is most efficient
        when passed to jFiles. From 2014 it became standard for JSON files to
        ONLY be in utf-8 format, however occasionally you may encounter a
        program still creating utf-16 files.
        
        If the incoming JSON text is unicode, but is not in the utf-8 encoding
        (but rather utf-16, big- or little-endian), then it is automatically
        converted to utf-8 before processing. This conversion will consume
        memory - which may be important for very large text. For this reason it
        may be preferable to convert large text to utf-8 before passing it to
        jFiles. At the time of writing utf-16 text is supported; utf-32 files
        are not (yet) supported.
        
        utf-8 overlaps with ASCII for the first 127 characters, so files which
        are (pure) ASCII are also utf-8 encoded, and can be imported without
        effort.
        
        JSON files on disk should not be ANSI encoded. They are not valid JSON
        files if they are. However if the file you receive is ANSI encoded, then
        convert it to utf-8 (using StringTheory) before passing it to jFiles.
        
        jFiles though allows data to flow directly out of Tables, Views, Queues
        and Groups and also allows incoming JSON to be directly inserted into
        Tables, Views and Groups. And this makes it a bit complicated because
        most Clarion programmers do not store their data encoded as utf-8, but
        rather encoded as ANSI with an appropriate CodePage.
        
        In this situation you will need to set the desired code page of the
        database, so the incoming text will automatically be converted to that
        before importing.
      
      Limits
       The following hard, and soft limits are applicable;
        
          - jFiles processes files in memory, and as such is limited by the
            memory available to the process (2 or 3 gigabytes). Since loading
            the file consumes memory, as the JSON tree is constructed, files
            exceeding 500 megabytes are likely to fail.
- The maximum depth of nested structures is 10 000 levels.
- jFiles does not limit the size of numbers, however when importing
            those numbers into clarion numerical data types (Long, Real, Decimal
            etc) the limits of those types will apply.
Strict Importing
       The fundamental JSON specification is simple, and
        described at 
json.org. More formally it is described in 
RFC 7159. jFiles creates JSON files which
        strictly conform to this standard. (Any deviation should be considered a
        bug and reported as such.) 
        
        However when reading (parsing) JSON text files, parsers are allowed to
        be flexible and allow incoming text which is not valid JSON. jFiles
        defaults to this lax mode when parsing JSON text. However if strict
        parsing is required, then this can be set using the SetStrict method.
        
        
 json.SetStrict(true)
        
        When in this mode any JSON which is not strictly in accordance with the
        specification will be rejected. As usual, failures will be reported in
        the ErrorTrap method, and the LoadString method will return 
jf:error.
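        
        For example, when strict parsing of an incoming string is required (a
        sketch; the StringTheory variable name is an assumption);
        
          json.Start()
          json.SetStrict(true)
          if json.LoadString(str) = jf:error
            ! the text was not valid JSON; details are reported via ErrorTrap
          end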
        
        When not in strict mode the following parsing allowances are made;
        
          - Text before or after the JSON is ignored. The JSON part starts
            with either [ or {. However it should be noted that if the first
            two characters are not ASCII then the utf-16 detection may fail, so
            it is strongly recommended that, in the case of utf-16 encoded
            JSON, text before the JSON part is not included.
- Numbers
            - Numbers can be preceded by a + sign
- Spaces and tab characters inside numbers are ignored
- Numbers that end with E, E+ or E- will be treated as integers
- Numbers can contain leading zeros.
- Numbers can start with a . character
- Lonely - or + characters will be treated as the number 0.
- The integer part of the number can end with the . character.
- Strings
            - Characters 00h through 1fh (aka "control characters") can be
              unencoded.
- Booleans
            - Spaces and tab characters inside booleans are ignored
- Any text starting with T or t is considered to be a "true"
              boolean. All other values will be considered False. (This mostly
              covers minor misspellings and case issues. Gross misspellings
              will be rejected.)
- Null
            - Spaces and tab characters inside nulls are ignored
- Any text starting with N or n may be accepted as Null. (This
              mostly covers minor misspellings and case issues. Gross
              misspellings will be rejected.)
- Objects
            - trailing commas are ignored
- Arrays
            - trailing commas are ignored
 For the purposes of this section it is assumed that
      the JSON object is declared as follows;
      
      json JSONClass 
      
      Reusing a JSON object
       If you reuse a json object multiple times then
        properties set in one use may inadvertently cascade to the next use. To
        "clean" an object so that it starts fresh, call the Start method. For
        example; 
        
        json.Start()
        
        This will reset the internal properties back to their default values.
      
      Saving a Clarion structure to a JSON File
        on disk
       The simplest way to create a JSON file is simply to
        use the .Save method and an existing Clarion structure.
        
        json.Save(Structure,<FileName>,
            <boundary>, <format>, <compressed>)  
        
        For example
        
        
json.Start()
          json.Save(CustomerTable,'.\customer.json')
        
        or
        
        
json.Start()
          json.Save(CustomerTable,'.\customer.json','Customers')
        
        or
        
        
json.Start()
          json.Save(CustomerTable,'.\customer.json','',json:Object)
        
        You can use a Group, Queue, View or File as the structure holding the
        data being saved. 
        
        The method returns 0 if successful, non-zero otherwise.
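        
        So it is worth checking the return value. For example (table and file
        names as in the examples above);
        
          json.Start()
          if json.Save(CustomerTable,'.\customer.json') <> 0
            ! the save failed; details are reported via ErrorTrap
          end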
        
        The boundary parameter allows you to "name" the records. For example, if
        the boundary parameter is omitted the JSON is
        
        
[ { record }, {record} , ... {record} ]
        
        If the boundary is included then the JSON becomes
        
        
{ "boundary" : [ { record }, {record} , ...
          {record} ] }
          
        The Format parameter determines whether the output is formatted to be
        human-readable, or whether all formatting is removed (to make the file
        a bit smaller). If omitted it defaults to true, meaning that the output
        is human readable. This is recommended, especially while developing, as
        it makes understanding the JSON, and debugging your code, a lot easier.
        
        If the Compressed parameter is omitted, then the default value is false.
        If the Compressed parameter is set to true then the file will be gzip
        compressed before writing it to the disk. 
        
        If the FileName parameter is omitted, or blank, then the json object
        will be populated with the file, but no disk write will take place. You
        can then save the object to a 
StringTheory
          object, or to a 
File, or use it
        in a 
collection later on.
        
      
Saving a Clarion Object to JSON
       A Clarion object can be passed to the jFiles Save
        method. This will save all the properties in the object. 
        
        Using this approach, a snapshot of an object properties is possible, and
        the object could be restored to this state using the LOAD method. It
        should further be noted that the PROTECTED and PRIVATE attributes on
        object properties are not respected by this SAVE - all the properties
        are stored into the JSON regardless of their PRIVATE or protected
        nature. To prevent a property being exported (or imported) see the
        PRIVATE and READONLY 
attributes.
        
        The object has to be a "naked" object declaration. A naked declaration
        is one where no CLASS keyword is used. For example
        
        
str   StringTheory
        
        str  Class(StringTheory)  
             End
        
        If you have a non-naked declaration, then you can create a reference to
        it, and use the reference instead. For example;
        
        
str  Class(StringTheory)  ! this is not a naked
          declaration
               End
          st   &str
            code
            ...
            json.save(st)
      Saving a Clarion structure to a JSON
        String in Memory
       json.Save(Structure,
            StringTheory, <boundary>, <format>) 
        
        This is the same as saving the JSON to a file, except that the second
        parameter is a StringTheory object, not a file name.
        For example;
        
          str  StringTheory
            Code
            json.Start()
            json.Save(CustomerTable,str)
          
         For explanation of the 
Boundary and
        
Format parameters see the section 
above.
      
 
      
      Constructing JSON Manually
       In some cases constructing the correct Clarion
        structure may be difficult, or the structure itself may not be known at
        compile time.  In these situations you can use the low-level Add method
        to simply construct the JSON manually.
        
        The Add method takes 3 parameters, the name, the value, and the type of
        the item. In turn it returns a pointer to the node created. Using this
        pointer allows you to embed inside nodes as you create them. For
        example;
        
        {
            "company" : "capesoft",
            "location" : "cape town",
            "phone" : "087 828 0123",
            "product" : [
              { "name" : "jfiles" },
              { "quality" : "great" },
              { "sales" : "strong" }
            ]
          }
        
        In the above JSON there is a simple group structure, followed by a list
        containing a variable number of name/value pairs.
        The code to create the above could be written as;
        
        Json JSONClass 
          Product &JSONClass 
          KeyValue &JSONClass
          
          code
            json.start()
            json.add('company','capesoft')
            json.add('location','cape town')
            json.add('phone','087 828 0123')
            Product &= json.add('product','', json:Array) 
            KeyValue &= Product.add('','',json:Object)
            KeyValue.add('name','jfiles')
            KeyValue &= Product.add('','',json:Object)
            KeyValue.add('quality','great')
            KeyValue &= Product.add('','',json:Object)
            KeyValue.add('sales','strong')
        
        Of course this is just an example. Using the Add method, and the fact
        that it returns the node added, allows you to construct any JSON you
        like.
        
        That said, this method can be tedious, and making use of Clarion
        structures is often easier to manage in the long run. 
        
      
      Storing Multiple Items in a JSON
        object
       The Save methods described above are perfect for
        creating a simple JSON structure based on a simple Clarion Data Type.
        However there are times when you will need to create a single JSON
        object which contains multiple different elements (known as a
        Collection.)
        
        collection &JSONClass
        
        The collection is created using the CreateCollection method. 
        
        collection &=
          json.CreateCollection(<boundary>)
        
        If the boundary is omitted then a default boundary ("subSection")
        will be used.
        
        
        [Aside: You do not need to dispose the Collection object - that will be
        done for you when the json object disposes.]
        
        You can then use the Append method to add
        items to the collection. There are a number of forms of the Append
          Method.
        
        Append (File, <Boundary>)
          Append (Queue, <Boundary>)
          Append (View, <Boundary>)
          Append (Group, <Boundary>)
        
        As with the Save methods the Boundary parameter is optional and can be
        omitted. If the parameter is omitted then a default object name will be
        used.
        
        You can also 
        
        Append (Name, Value, <Type>)
        
        to add a single name value pair to the collection. The type is the JSON
        type of the Value. It should be one of
        
        json:String   EQUATE(1) 
          json:Numeric  EQUATE(2)
          json:Object   EQUATE(3)
          json:Array    EQUATE(4)
          json:Boolean  EQUATE(5)
          json:Nil      EQUATE(6)
        
        If the Type parameter is omitted then the default
          json:String is used.
        
        Here is a complete example;
        
        json        JSONClass
          collection  &JSONClass
            code
            json.Start()
            collection &= json.CreateCollection('Collection')
            collection.Append(Customer,'Customers') 
            collection.Append(Queue:Browse:1)
            collection.Append(MemoView)
        
        Once you have created your collection you can save it to Disk or String
        using the techniques described below.
      
      Saving a JSON Object to Disk
       After you have constructed the JSON object to your
        satisfaction, you may want to store it as a file. This can be done using
        the SaveFile method. For example; 
        
        json.SaveFile('filename',format)
        
        If the format parameter is set to true, then the file will be formatted
        with line breaks, and indentation (using tabs), suitable for a person to
        read.
        If the format parameter is false (or omitted) then the file will be kept
        as small as possible by leaving out the formatting.
        
      
      Saving a JSON Object to
        StringTheory
       Internally the JSON is stored as a collection of
        objects. To use the result in a program it must be turned into a String
        and stored in a StringTheory object. This is done by passing a
        StringTheory object to the SaveString method.
        For example;
        
        str  StringTheory
            code
            json.SaveString(str,format)
        
        Once in the StringTheory object it can then be manipulated, compressed,
        saved or managed in any way you like.
        
        If the format parameter is set to true, then the string will be
        formatted with line breaks, and indentation (using tabs), suitable for a
        person to read.
        If the format parameter is false (or omitted) then the string will be
        kept as small as possible by leaving out the formatting.
      
      Arrays
       JSON supports Arrays. They look something like this
        
        
        {"phone": [ "011 111 2345","011 123 4567"]}
        
        The square brackets indicate that the field ("phone") contains a list
        of values, i.e. an array.
        
        Clarion also supports arrays, using the DIM attribute. So creating
        fields like the one above is straightforward;
        
        PhoneGroup  Group
          phone         string(20),DIM(5)
                      End
            code
            json.Save(PhoneGroup)
        
        This would result in JSON that looks like this
        
        {
            "PHONE" : ["012 345 6789","023 456 7890"]
          }
        
        Empty items in the array, which come after the last set value, are
        suppressed. In the above example, only the first two array values were
        set, so only 2 values were included in the output. Position in the array
        is preserved. If phone[1] and phone[3] were set, but phone[2] was left
        blank then the output would read
        
        {
            "PHONE" : ["012 345 6789","","023 456 7890"]
          }
        
        Items are considered empty if the field is a string and contains no
        characters, or if the field is a number and contains a 0.
        
        Clarion supports multi-dimensional arrays. These are written out as if
        they were a single dimension array.
        For example;
        
        Matrix[1,1] = 1010
          Matrix[2,1] = 2010
          Matrix[1,2] = 1020
          Matrix[2,2] = 2020
        
        becomes
        
        "MATRIX" : [1010,1020,2010,2020]
        
        Note that variables of DIM(1) are not
        allowed. To be an array the dimension value must be greater than 1.
        
        It is possible (and valid) for an entire JSON file to consist of a
        single array.
        
        [1,2,3,4,5]
        
        In this case there is neither a field name, nor an array or object
        wrapper around the value.
        
        dude long,dim(5) 
            code
            dude[1] = 12
            dude[2] = 22
            dude[3] = 43
            json.Start() 
            json.SetType(json:Null)  
            json.AddArray('',dude)   
        
        this results in
        
        [12,22,43]
        
        As noted earlier, trailing empty values (0 if a number, blank if a
        string) are removed.
        
      
      Blobs
       Creating JSON files from Tables or Views that contain Blobs is
        supported, but unfortunately coverage can vary a bit based on the
        driver in use.
        Tables
         Creating JSON using any of the Save(Table...) ,
          Add(Table...) or Append(Table...) methods is supported. 
          If the table has memo or blob fields then these are included in the
          export, and there is nothing specific you need to do.
          
          If you wish to suppress MEMOs and BLOBs when saving a table set the NoMemo property to true. For example
          
          json.start()
            json.SetNomemo(True)
            json.save(...)
        
        Views
         Clarion Views behave differently when using
          TopSpeed or SQL drivers. So if you need to save a BLOB with a VIEW
          then read this section carefully.
          
          For all drivers, VIEWs can contain BLOB fields. For example;
          
          ViewProduct View(Product)
                          Project(PRO:Name)
                          Project(PRO:RRP)
                          Project(PRO:Description) 
                        End
          
TopSpeed
          The TopSpeed driver is able to detect the BLOB fields in the VIEW,
          so there is no extra code for you to write.
          
          Note 1: If the NoMemo property is set to
          true then the BLOB will be suppressed
          even if it is in the VIEW.
          
          Note 2: The shorthand method of projecting all fields in a table, by
          projecting no fields, does not include BLOB or MEMO fields. If you
          want to PROJECT MEMOs or BLOBs then you must PROJECT them (and all
          other fields) explicitly.
SQL
          The SQL drivers are unable to detect BLOB fields in the VIEW
          structure. The BLOBs are still populated, but the method used to
          determine whether or not a BLOB is in the VIEW does not work for
          SQL drivers.
          
          jFiles adopts the following workarounds to this issue.
          
          a) [Default behavior]. When looping through the VIEW all the BLOB
          fields are checked for content. If the value is not blank then it is
          included in the output. In other words BLOBs with content are
          exported, BLOBs without content are not included. In most cases this
          will likely be sufficient as JSON allows fields to be excluded when
          they are blank. 
          
          b) If all the BLOBs from the Table are included in the VIEW then you
          can set the property  ExportBlobsWithView to
          true. If this value is true then all the
          BLOBs will be included in each JSON record. If they are blank (or not
          included in the VIEW) then they will be included in the JSON as blank.
          
          
          So, in order to export BLOBs with VIEW records, set the ExportBlobsWithView property to true. For example;
          
          ViewProduct View(Product)
                        Project(PRO:Name)
                        Project(PRO:RRP)
                        Project(PRO:Description)   ! This is a blob
                      End
            
            json.Start()
            json.SetExportBlobsWithView(True)
            json.Save(ViewProduct)
          
          Note: If the NoMemo property is set to true then the memos and
          blobs will not be included, even if ExportBlobsWithView is set to
          true.
      Creating Nested JSON structures from a View
       jFiles 2 introduces the ability to create nested
        JSON structures from a VIEW declaration.  Consider this VIEW;
        
        Invoice_view View(Invoice)
                          Project(INV:ID) 
                          Project(INV:Date)
                          Project(INV:Paid)
                          Join(LIN:InvKey,INV:ID) ,INNER 
                              Project(LIN:ID)
                              Project(LIN:Quantity)
                              Project(LIN:Price)
                              Join(PRO:Key,LIN:Product) ,INNER 
                                  Project(PRO:Name)
                              End
                          End 
                       End
        
        This View exports all the details for an invoice, including the invoice
        details, the line item details, and the product details.
        
        The ideal output for a view like this looks something like this;
        
        
{ "Invoices":[{
              "ID":1,
              "Date":"2019/7/1",
              "Paid":1,
              "Line Items":[{
                  "ID":1001,
                  "Quantity":5,
                  "Price":100,
                  "Product":{
                      "Name":"jFiles"
                  }
              }]
          }]}
                  
        
        With jFiles 2 creating a nested structure like this directly from the
        view is possible. There are only two extra bits of code required. The
        example code looks like this;
        
        
json.Start()
json.SetTagCase(jf:CaseAsIs)
json.SetNestViewRecords(true)                                          ! [1]
json.SetViewBoundary(Invoice_view,LineItems,'Line Items',jf:OneToMany) ! [2]
json.SetViewBoundary(Invoice_view,Product,'Product',jf:ManyToOne)      ! [2]
json.Save(Invoice_view,str,'Invoices')
        
        [1] The first extra line of code sets the NestViewRecords property,
        which tells jFiles that you want a nested VIEW structure.
        
        [2] The calls to SetViewBoundary tell jFiles a little bit more about
        the JOIN itself, and allow you to specify the child tag as well.
        
      
Creating Nested JSON structures from a Group or Queue
      
         Importing and Exporting Complex Group and Queue structures is
          simple. Make use of the capesoft.com/jfilescode page; paste your
          sample JSON in there, and the equivalent Clarion structures and code
          will be generated for you.
        Clarion Group and Queue structures can contain complex field types.
        Thanks to 
Extended Name Attribute
        support it is now possible to send complex groups and queues to JSON,
        without having to embed code in derived methods. For example;
        
        
CustomerQueue     Queue,pre(cust)
Name                string(100),name('Name')
Invoices            &InvoiceQueueType,name('Invoices | Queue')
                  End

InvoiceQueueType  Queue,Type
InvoiceNumber       Long,name('InvoiceNumber')
                  End

  Code
  json.Start()
  json.SetTagCase(jf:CaseAsIs)
  json.Save(CustomerQueue,str)
        
        In the above example a Queue is inside another queue - each record in
        the parent contains a complete queue. In the JSON output the internal
        queue is nested inside the parent record.
        
        You can do the same thing with a Group, for example exporting two
        different tables at the same time;
        
        
backup          Group
customers         &File,name('customers | table')
products          &File,name('products | table')
                End

  json.Start()
  json.SetTagCase(jf:CaseAsIs)
  json.Save(backup,'backup.json')
        
       
      Creating Nested JSON structures
       This section follows on from the Storing Multiple Items in a JSON
          Object section above.
        
        Another form of the Append method exists, which allows you to start a
        new collection within your collection.
        
        
Append(<Boundary>)
        
        This starts a new collection inside an existing collection. To use this,
        first you need to declare a pointer to this collection;
        
        
subItem   &JSONClass
        
        Then (after doing the CreateCollection call and so on) you can do
        
        
subItem &= Collection.Append('VersionInformation')
        
        and after that do as many subItem.Append calls as you like.
        
        This nesting can continue to as many levels as you like.
        
        Here is a complete example;
        
        
json        JSONClass
collection  &JSONClass
subItem     &JSONClass
  code
  json.Start()
  collection &= json.CreateCollection('Collection')
  collection.Append(Customer,'Customers')
  collection.Append(Queue:Browse:1)
  collection.Append(MemoView)
  subItem &= collection.Append('VersionInformation')
  subItem.Append('Version','6.0.3')
  subItem.Append('Build',1234,json:numeric)
      Formatting Field Values on Save
       Up to now all the exporting of fields has resulted in the raw data
        being stored in the JSON file. In some cases though it is preferable
        to export the data formatted in some way, so that it appears in the
        JSON as a more portable value. For example, in Clarion dates are
        stored as a LONG, but if the data needs to be imported into another
        system then formatting the date as yyyy/mm/dd might make the transfer
        a lot easier.
        
        This can be done by adding the desired Picture to the NAME attribute of
        the field. For example;
        
        
InvoiceQueue Queue
          Date           Long,name('Date | @D3')
                       End
        
        All StringTheory Extended Pictures are supported. For more
        information on the name attribute see Extended Name Attributes.
        
        In some cases it's not possible to put (or change) the Extended Name in
        the structure. For example you may have a dictionary that can't be
        altered, or you might be using a VIEW. In that case you can make use of
        the reflection.setPicture method.
        
        
Json.Start()
Json.SetTagCase(jF:CaseLower)
Json.SetNestViewRecords(true)
Json.Reflection.SetPicture('view|2','date','@d6')
Json.Save(CustomersView,'customerSales.json','customers',true)
Json.Reflection.Walk()
        
        In the above example, the json.reflection.SetPicture method is called
        before the call to Save. 
        
        
TIP: In order to determine the correct groupname and column name, the
        json.Reflection.Walk method was called after the call to Save, as a
        debugging technique.
        
        In some cases the output has to be formatted beyond the options offered
        by Clarion pictures. In this case code can be embedded into the
        FormatValue method.
        
 FormatValue Method
         Embed code in the FormatValue method in your json class. The method
          is declared as;
          
          json.FormatValue PROCEDURE (String pGroupName, String pName, String pValue, *LONG pLiteralType),String
          
          Note the LiteralType parameter. If you are changing the type of the
          data (for example, changing the DATE from a Numeric to a String) then
          you need to change the LiteralType value as well. The value of this
          parameter should be one of
          
          json:String   EQUATE(1)
          json:Numeric  EQUATE(2)
          json:Object   EQUATE(3)
          json:Array    EQUATE(4)
          json:Boolean  EQUATE(5)
          json:Null     EQUATE(6)
          
          As the Groupname and Name of the field are passed into the method,
          it is straightforward to create a simple CASE statement, formatting
          the fields as required. This code is embedded before the parent
          call. Also note that the value in pName is the JSON field name, not
          the Clarion field name, and this value is case sensitive.
          
          case pName
          of 'DATUM'
            pLiteralType = json:String
            Return clip(left(format(pValue,'@d1')))
          of 'TIME'
            pLiteralType = json:String
            Return clip(left(format(pValue,'@t4')))
          end
          
          In the above case the Datum and Time fields are formatted, all other
          fields are left alone and placed in the file "as is".
        
      Renaming Fields on Save
       When exporting JSON from a structure the External Name of each field
        is used as the "tag" name in the JSON. For example
        
        xQueue       Queue
        field1         string(255),Name('Total')
                     End
        
        results in JSON like this;
        
        
{"Total" : "whatever"}
        
        Ideally the external Name attribute of the field contains the correct
        value for the tag. 
        There are times however when you need to override this. You can do this
        by adding an attribute to the NAME attribute. The RENAME and JSONNAME
        attributes are supported. For example;
        
        
xQueue       Queue
          field1         string(255),Name('Total | JsonName(Extra Total)')
          field2         string(255),Name('SubTotal | Rename(Sub Total)')
                       End
        
        For more information on the name attribute see Extended Name
        Attributes. See also Renaming Fields on Load.
        
 
          
          The process above is considerably simpler than the one below, and
          should be used wherever possible. If it is not possible, or you are
          reading old code (and trying to understand it) then the jFiles 1
          method is described below. 
          
          In jFiles 1 this is done by embedding code into the
          AdjustFieldName method, AFTER (or BEFORE) the PARENT call.
          
          Example
          
          
json.AdjustFieldName PROCEDURE (StringTheory pName, Long pTagCase)
  CODE
  PARENT.AdjustFieldName(pName,pTagCase)
  case pName.GetValue()
  of 'Total'
    pName.SetValue('Totalizer')
  End
          
          Note that the field name in the above CASE statement is a
          case-sensitive match. If you need a case-insensitive match then
          apply UPPER (or LOWER) to both the CASE and OF values.
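
          For example, a case-insensitive version of the match above could be
          written like this (a sketch based on the AdjustFieldName example);

```clarion
case upper(pName.GetValue())   ! upper-case the incoming name
of 'TOTAL'                     ! so the OF value must also be upper case
  pName.SetValue('Totalizer')
End
```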
          
          The Parent call performs the ReplaceColons and RemovePrefix work.
          So, if you put the CASE before the parent call then the field name
          arrives "Clarion style"; if after the parent call, then "JSON
          style". It is up to you which side of the Parent call you put your
          code on.
 
      Saving Nested Structures - another approach (not recommended)
      
          
          
          Consider the following JSON;
          
          
{  "customer" : {
                    "Name" : "Bruce",
                    "Phone" : "1234 567 89",
                    "Invoices" : [
                        {
                            "InvoiceNumber" : 1,
                            "LineItems" : [
                                {
                                    "Product" : "iDash",
                                    "Amount" : 186.66
                                }
                            ]
                        },
                        {
                            "InvoiceNumber" : 2,
                            "LineItems" : [
                                {
                                    "Product" : "Runscreen",
                                    "Amount" : 179.75
                                }
                            ]
                        }
                    ]
                }
            }
          
          This is constructed from a Group (customer information) which
          contains a Queue (of invoices), and each invoice contains a Queue
          of line items. It's worth pointing out that the line items queue is
          just a simple JSON form of a queue, as is the invoices queue.
          
          The Clarion structures for the above are as follows;
          
          
CustomerGroup   Group
Name              string(50),name('Name')
Phone             string(50),name('Phone')
                End

InvoicesQueue   Queue
InvoiceNumber     Long,name('InvoiceNumber')
                End

LineItemsQueue  Queue
Product           String(50),name('Product')
Amount            Decimal(8,2),name('Amount')
                End
          
          In this case the structures are a Group and Queues, but you could also
          use Views or Tables if you wanted to.
          
          In order to achieve the result three jFiles objects are used;
          
          
CustomersJson Class(JSONClass)
AssignValue     PROCEDURE (JSONClass pJson, StringTheory pName, *Group pGroup, *Long pIndex, Long pColumnOffset),VIRTUAL
              End

InvoicesJson  Class(JSONClass)
AssignValue     PROCEDURE (JSONClass pJson, StringTheory pName, *Group pGroup, *Long pIndex, Long pColumnOffset),VIRTUAL
              End

LineItemsJson JSONClass
          
          As you can see, two of the classes (the ones that have children)
          will have some override code in the AssignValue method. (More on
          that in a moment.)
          
          For the purposes of this example, the code for populating the
          structures is omitted.
          
          The basic code to generate the JSON file looks like this;
          
          
            CustomersJson.Start()
            CustomersJson.SetTagCase(jF:CaseAsIs)
            CustomersJson.Save(CustomerGroup,'customer.json','customer',true)
          
          In order to include the InvoicesQueue inside the group, some code
          is added to the CustomersJson.AssignValue method. The code looks
          like this;
          
          
CustomersJson.AssignValue PROCEDURE (JSONClass pJson, StringTheory pName, *Group pGroup, *Long pIndex, Long pColumnOffset)
  Code
  PARENT.AssignValue(pJson,pName,pGroup,pIndex,pColumnOffset)
  If pName.GetValue() = 'Phone'
    do PrimeInvoicesQueue
    InvoicesJson.Start()
    InvoicesJson.SetTagCase(jF:CaseAsIs)
    InvoicesJson.Save(InvoicesQueue, ,'Invoices')
    pJson.AddCopy(InvoicesJson, ,true)
  End
          
          There are a few interesting things to note in the above code.
          
          a) Notice it's checking for the JSON tag 'Phone' as it appears in
          the JSON file. This is simply the position in which the Invoice
          queue will be injected. As it is after the parent call, it will
          come after the Phone field in the JSON file. If it was before the
          parent call it would come before the Phone field. If it came before
          the parent call, and the parent was not called at all, then the
          Phone field would be excluded from the JSON.
          
          b) The code to Prime the Queue, and Save that Queue to the
          InvoicesJson object, is standard code as described earlier in this
          document. Notice the omitted parameter in the call to .Save.
          
          c) The AddCopy call is where the magic happens. This adds a copy
          of the InvoicesJson object into the CustomersJson object, at the
          position specified by the passed-in parameter pJson.
          
          d) The parameters pGroup, pIndex and pColumnOffset are not useful
          in your embed code; they are used in the call to the parent method.
          
          As this example covers three layers of JSON, the technique is
          repeated for the InvoicesJson object. It too has an AssignValue
          method, and it uses similar code to inject the LineItems at that
          point;
          
          
InvoicesJson.AssignValue PROCEDURE (JSONClass pJson, StringTheory pName, *Group pGroup, *Long pIndex, Long pColumnOffset)
  CODE
  PARENT.AssignValue(pJson,pName,pGroup,pIndex,pColumnOffset)
  if pName.GetValue() = 'InvoiceNumber'
    do PrimeLineItemsQueue
    LineItemsJson.Start()
    LineItemsJson.SetTagCase(jF:CaseAsIs)
    LineItemsJson.Save(LineItemsQueue, ,'LineItems')
    pJson.AddCopy(LineItemsJson)
  End
         
        Saving Nested Structures - yet another approach (not recommended)
        
           Consider the following JSON;
            
             [ { "_id" : "7123098", 
                  "accountName" : "Charlies Plumbing", 
                  "mainContact" : "", 
                  "mainPhone" : "", 
                  "accountLogins" : [ 
                    { "loginName" : "Administrator", 
                      "loginPwd" : "secret", 
                      "loginHistory" : [ 
                        { "loginDate" : "2017/07/17", 
                        "loginTime" : "16:27" 
                        }, 
                        { "loginDate" : "2017/07/18", 
                          "loginTime" : "15:26" 
                        } 
                      ] }, 
                    { "loginName" : "Operator", 
                      "loginPwd" : "1234", 
                      "loginHistory" : [ 
                        { "loginDate" : "2017/07/17", 
                          "loginTime" : " 8:15" 
                        }, 
                        { "loginDate" : "2017/07/18", 
                          "loginTime" : "15:51" 
                        } 
                      ] 
                    } 
                  ], 
                  "accountContacts" : [ 
                   { "contactName" : "Beatrice", 
                     "contactPosition" : "CEO" 
                  }, 
                   { "contactName" : "Timothy", 
                     "contactPosition" : "Sales" 
                   } 
                 ] 
               } 
              ] 
            This is a highly nested structure. It is an AccountsQueue, which in
            turn contains a Logins Queue and a Contacts Queue. The Logins Queue
            contains a Login History queue.
            
            Here is the Accounts queue declaration;
            
            
AccountsQueue   Queue
id                string(20),name('_id')
accountName       string(255),name('accountName')
mainContact       string(255),name('mainContact')
mainPhone         string(255),name('mainPhone')
accountLogins     &accountLoginsQueue,name('accountLogins')
accountContacts   &accountContactsQueue,name('accountContacts')
                End
            
            The Contacts queue declaration
            
            
accountContactsQueue Queue,type
              contactName             string(100),name('contactName')
              contactPosition         string(100),name('contactPosition')
                                   End
            
            The Logins queue
            
            
accountLoginsQueue Queue,type
              loginName             string(100),name('loginName')
              loginPwd              string(100),name('loginPwd') 
              loginHistory          &loginHistoryQueue,name('loginHistory')
                                 End
            
            and finally the History queue
            
            
loginHistoryQueue Queue,type
              loginDate           string(10),name('loginDate')
              loginTime           string(10),name('loginTime')
                                End
            
            Populating nested queues has to be done carefully. The queue
            pointers are allocated using the NEW statement whenever a record is
            created. Here is a single record added to the Accounts queue (with
            various child queue entries added as well.)
            
            
clear(AccountsQueue)
AccountsQueue.accountLogins &= new(accountLoginsQueue)
AccountsQueue.accountContacts &= new(accountContactsQueue)

AccountsQueue.id = '7123098'
AccountsQueue.accountName = 'Charlies Plumbing'

AccountsQueue.accountLogins.loginName = 'Administrator'
AccountsQueue.accountLogins.loginPwd = 'secret'
AccountsQueue.accountLogins.loginHistory &= new(loginHistoryQueue)
AccountsQueue.accountLogins.loginHistory.loginDate = format(today()-1,@d10)
AccountsQueue.accountLogins.loginHistory.loginTime = format(random(360000*8,360000*18),@t1)
add(AccountsQueue.accountLogins.loginHistory)

AccountsQueue.accountLogins.loginHistory.loginDate = format(today(),@d10)
AccountsQueue.accountLogins.loginHistory.loginTime = format(random(360000*8,360000*18),@t1)
add(AccountsQueue.accountLogins.loginHistory)

add(AccountsQueue.accountLogins)

AccountsQueue.accountLogins.loginName = 'Operator'
AccountsQueue.accountLogins.loginPwd = '1234'
AccountsQueue.accountLogins.loginHistory &= new(loginHistoryQueue)

AccountsQueue.accountLogins.loginHistory.loginDate = format(today()-1,@d10)
AccountsQueue.accountLogins.loginHistory.loginTime = format(random(360000*8,360000*18),@t1)
add(AccountsQueue.accountLogins.loginHistory)

AccountsQueue.accountLogins.loginHistory.loginDate = format(today(),@d10)
AccountsQueue.accountLogins.loginHistory.loginTime = format(random(360000*8,360000*18),@t1)
add(AccountsQueue.accountLogins.loginHistory)

add(AccountsQueue.accountLogins)

AccountsQueue.accountContacts.contactName = 'Beatrice'
AccountsQueue.accountContacts.contactPosition = 'CEO'
add(AccountsQueue.accountContacts)
AccountsQueue.accountContacts.contactName = 'Timothy'
AccountsQueue.accountContacts.contactPosition = 'Sales'
add(AccountsQueue.accountContacts)

Add(AccountsQueue)   ! save the queue record
            
            Sending even a complex structure like this to JSON is relatively
            easy.
            
            First the JSON object is declared. You can do this in code, or
            let the extension template declare it for you. Notice the
            AddByReference method; that will be fleshed out in a moment.
            
            
json             Class(JSONClass)
AddByReference     PROCEDURE (StringTheory pName, JSONClass pJson),VIRTUAL
                 End
            
            Secondly the json object is called as normal;
            
            
json.Start()
json.SetTagCase(jF:CaseAsIs)
json.SetColumnType('accountLogins',jf:Reference)
json.SetColumnType('loginHistory',jf:Reference)
json.SetColumnType('accountContacts',jf:Reference)
json.Save(AccountsQueue,'json.txt')
            
            Notice the extra calls to SetColumnType. These tell the class
            that these fields are reference values, and so need to be saved
            separately.
            
            The final step is to flesh out the AddByReference method. When
            the class encounters one of these reference fields it calls the
            AddByReference method. The code in there looks something like
            this;
            
            
json.AddByReference PROCEDURE (StringTheory pName, JSONClass pJson)
  CODE
  case pName.GetValue()
  of 'accountLogins'
    pJson.Add(AccountsQueue.accountLogins)
  of 'accountContacts'
    pJson.Add(AccountsQueue.accountContacts)
  of 'loginHistory'
    pJson.Add(AccountsQueue.accountLogins.loginHistory)
  end
  PARENT.AddByReference(pName,pJson)
            
            Remember the tag names are case sensitive, so be careful entering
            them here.
            
            
Disposing Nested Queues
             This section has nothing to do with jFiles, but since the above
              example shows how to build a nested Queue structure, it's
              probably worth covering the disposal of nested queue structures
              here. If disposal is not done correctly then a memory leak will
              occur.
              
              The key lines to worry about in the above code are;
              
              AccountsQueue.accountLogins &= new(accountLoginsQueue)
              AccountsQueue.accountContacts &= new(accountContactsQueue)
              
              and
              
              AccountsQueue.accountLogins.loginHistory &= new(loginHistoryQueue)
              
              These lines create queues on the fly, and each call to NEW
              MUST have a matching call to DISPOSE. When deleting a row from
              AccountsQueue or AccountsQueue.accountLogins (and that includes
              deleting all rows) the child queues themselves must first be
              disposed. It's important to do this manually before the
              procedure ends - it will not be done automatically.
              
              The basic idea is to loop through the queue, deleting the
              sub-queues as you go.
              
              For example;
              
              Loop While Records(AccountsQueue)
                Get(AccountsQueue,1)
                Loop While Records(AccountsQueue.accountLogins)
                  Get(AccountsQueue.accountLogins,1)
                  Free(AccountsQueue.accountLogins.loginHistory)
                  Dispose(AccountsQueue.accountLogins.loginHistory)
                  Delete(AccountsQueue.accountLogins)
                End
                Free(AccountsQueue.accountLogins)
                Dispose(AccountsQueue.accountLogins)
                Free(AccountsQueue.accountContacts)
                Dispose(AccountsQueue.accountContacts)
                Delete(AccountsQueue)
              End
               
           
       
     When creating a JSON file, or loading a JSON file into a structure, it
      is necessary to match the field names in the JSON file with the field
      names in your structure. There are properties which assist in making a
      good match.
      
      These properties can be set to default values via the global extension
      or the local extension, or can be set in embed code before the object
      is used. These properties are not reset by a call to json.Start().
      
      
Note that all the properties should be set using their SET method, and
      retrieved using their GET method. For example, setting the RemovePrefix
      property is done using the SetRemovePrefix(whatever) method, and it can
      be retrieved using the GetRemovePrefix() method.
      
RemovePrefix
       Clarion structures allow for the use of prefixes, which then form
        part of the field name. If this property is set when you create JSON
        then the prefix (and colon) are omitted from the JSON, and only the
        "name" part of the fieldname is used.
        
        If you are importing JSON, and the JSON was created by another
        entity, then it's likely the fields in the JSON are not prefixed. In
        that case you should set this property to true as well, so that the
        matcher matches on names-without-prefixes.
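
        For example, given a queue declared with a PRE attribute (the queue
        and field names here are illustrative), the prefix can be stripped
        from the generated tags like this;

```clarion
InvQueue     Queue,pre(inv)
Date           Long
Total          Decimal(8,2)
             End

  code
  json.Start()
  json.SetRemovePrefix(true)   ! tags become "Date" and "Total", not "inv:Date" etc.
  json.Save(InvQueue,str)
```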
      PrefixChars
       In Clarion a colon (:) character is used to separate the prefix from
        the field name. Incoming JSON may be using an alternate character
        (often an underscore (_)) to separate the prefix from the rest of the
        name.
        
        To complicate the issue, colons and underscores are valid characters
        in Clarion field names, table names, and prefixes. If you do have
        colons or underscores in the name then that brings
        MaxPrefixLengthInJSON into play.
      
      MaxPrefixLengthInJSON 
       To make identifying a prefix easier, it can be
        helpful to tell jFiles the length of any expected prefix. So if
        all your prefixes are, say, 3 characters long, then you should
        set this value to 4 (3 for the prefix, plus 1 for the separator). Any
        separators in the string AFTER this length will not be treated as a
        prefix separator.
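        For example, suppose incoming JSON uses names like inv_date and
        inv_line_total, where inv is a 3 character prefix. The method names
        below follow the Set pattern described above, and InvoiceQueue and st
        are placeholder names:

```clarion
  json.SetPrefixChars('_')            ! underscore separates prefix from name
  json.SetMaxPrefixLengthInJSON(4)    ! 3 char prefix plus 1 separator
  json.SetRemovePrefix(true)          ! match on names-without-prefixes
  json.Load(InvoiceQueue, st)         ! the second underscore in
                                      ! "inv_line_total" is NOT a separator
```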
      ReplaceColons
       Colons are a legal character in Clarion field names.
        However, in most languages they are not. Therefore, to create JSON which
        is portable to other systems it may be necessary to replace any colons
        with some other character (or characters), most usually an underscore
        character. If you are including the prefix in the name then this setting
        becomes doubly important.  The default value of this property is true.
      ReplacementChars
       The default replacement character for a colon is an
        underscore character. However if you wish to replace it with some other
        combination of characters then you can set this property to whatever you
        like, up to 10 characters long.
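
        These two properties work together, as this sketch shows (CustomerQueue
        and st are placeholder names, and the methods follow the Set pattern
        described above):

```clarion
  json.SetRemovePrefix(false)      ! keep the prefix in the tag name
  json.SetReplaceColons(true)      ! the default
  json.SetReplacementChars('__')   ! so cus:name becomes cus__name
  json.Save(CustomerQueue, st)
```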
      
      TagCase
       JSON is explicitly case sensitive. When creating
        JSON you can control the case using the TagCase
        property. Valid values for this property are;
        
        jf:CaseUpper
          jf:CaseLower
          jf:CaseAsIs
          jf:CaseAny
        
        As the equates suggest, CaseUpper forces
        all the tags to be uppercase, CaseLower forces
        all the tags to be lower case, and CaseAsIs uses
        the case as set in the field's External Name. (If there is no External
        Name for a field then upper case is used.)
        CaseAny is only used on a Load.
        It matches incoming node names to local field names regardless of case.
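        For example (CustomerQueue and st are placeholder names, and SetTagCase
        follows the Set pattern described above):

```clarion
  json.SetTagCase(jf:CaseLower)   ! all tags written in lower case
  json.Save(CustomerQueue, st)

  json.SetTagCase(jf:CaseAny)     ! on a Load, match names regardless of case
  json.Load(CustomerQueue, st)
```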
      
      Labels vs Names
       In Clarion, fields (fields in a table, queue or
        group, or just variables by themselves) have a label, which is
        the identifier in column 1 of the source code.
        This is not the name of the field (although labels are often
        called "Field Names"). The Name of a field only exists
        if you have set the ,NAME attribute for the
        field. Since Clarion is a case insensitive language all labels are seen
        as UPPER case by the compiler. 
        
         If you are unclear on this please see ClarionHub.
        
        So when importing make sure you understand this point, especially when
        setting the 
TagCase property as mentioned
        above.
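
        For example, in this (hypothetical) queue the first field has both a
        label and a Name, while the second has only a label:

```clarion
Customer   QUEUE,PRE(cus)
FirstName    STRING(30),NAME('firstName')  ! label CUS:FIRSTNAME, name firstName
Age          LONG                          ! no NAME attribute, label CUS:AGE
           END
```

        With TagCase set to jf:CaseAsIs, FirstName would export as "firstName"
        (the case of its External Name), while Age, having no Name, would
        export in upper case.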
      
 The JSON object can be thought of as a tree. The root
      JSON object contains other JSON objects, and those in turn contain
      others, and so on. This is a very elegant approach to the code, but it does have
      one drawback - code embedded in the methods of the root object (ie the
      object in your procedure) does not get called when a method on one of the
      child objects is called.
      
      This means that embedding code in most of the methods will not be useful
      because it will not execute when you expect it to. However some methods
      will execute and are suitable for adding embed code. They are;
      
      AddByReference
        AddQueueRecord
        AdjustFieldName
        AssignField
        AssignMissingField
        AssignValue
        DeformatValue
        ErrorTrap
        FormatValue
        InsertFileRecord
        SetColumnType
        Trace
        UpdateFileRecord
        ValidateField
        ValidateRecord
       
      In addition, many methods are called only by your program, so are suitable
      for embedding. They are;
      
      Start
        CreateCollection
        Save  (any form), SaveFile, SaveString
        Load  (any form), LoadFile, LoadString
        Append (any form)
       
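      In hand-coded objects these methods can be overridden by deriving the
      class. The sketch below hooks ErrorTrap to display errors; the prototype
      shown here is an assumption - check the class header (JFiles.inc) for the
      exact declaration in your version:

```clarion
json             CLASS(JSONClass)
ErrorTrap          PROCEDURE(STRING pError, STRING pFunctionName),VIRTUAL
                 END

json.ErrorTrap   PROCEDURE(STRING pError, STRING pFunctionName)
  CODE
  PARENT.ErrorTrap(pError, pFunctionName)   ! keep the default behavior
  Message('jFiles error in ' & pFunctionName & ': ' & pError)
```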
      
    
     Run the supplied installation file. 
     This product is supplied as source files that are
      included in your application. There are no additional files for you to add
      to your distribution.