Showing posts with label Table.

Tuesday, October 28, 2014

Create C# Class Code From a DataTable using CodeDOM



Note by author:

   Since writing this, I have expanded on this idea quite a bit. I have written a lightweight ORM class library that I call EntityJustWorks.

   The full project can be found on GitHub or CodePlex.

   EntityJustWorks not only goes from a class to a DataTable (below), but also provides much more.

Security Warning:
This library generates dynamic SQL, and has functions that generate SQL and then immediately execute it. While it is true that all strings funnel through the function Helper.EscapeSingleQuotes, this can be defeated in various ways, and only parameterized SQL should be considered SAFE. If you have no need for them, I recommend stripping semicolons ; and dashes --. Also, there are some Unicode characters that can be interpreted as a single quote, or may be converted to one when changing encodings. Additionally, there are Unicode characters that can crash .NET code, though mainly controls (think TextBox). You almost certainly should impose a whitelist:
string clean = new string(dirty.Where(c => "abcdefghijklmnopqrstuvwxyz0123456789.,\"_ !@".Contains(c)).ToArray());

PLEASE USE the SQLScript.StoredProcedure and DatabaseQuery.StoredProcedure classes to generate SQL for you, as the scripts they produce are parameterized. All of the functions can be altered to generate parameterized instead of sanitized scripts. Ever since people started using this, I have been maintaining backwards compatibility. However, I may break this in the future, as I do not wish to teach dangerous/bad habits to someone who is learning. This project is a few years old, and it is already showing its age. What is probably needed here is a total rewrite, deprecating this version while keeping it available for legacy users, after slapping big warnings all over the place. This project was designed to generate the SQL scripts for standing up a database for a project, using only MY input as data. It was never designed to process a USER'S input! Even if the data isn't coming from an adversary, client/user/manually entered data is notoriously inconsistent. Please do not use this code on any input that did not come from you without first implementing parameterization. Again, please see the SQLScript.StoredProcedure class for inspiration on how to do that.


So far I have posted several times on the DataTable class. I have shown how to convert a DataTable to a CSV or tab-delimited file using the clipboard, how to create a DataTable from a class using reflection, as well as how to populate the public properties of a class from a DataTable using reflection. Continuing along these lines, I decided to bring the DataTable-to-class wagons around full circle and introduce a class that will generate the C# code for the class used by the DataTableToClass<T> function, so you don't have to create it manually. The only parameter required to generate the C# class code is, of course, a DataTable.

The code below is rather trivial. It uses CodeDOM to build up a class with public properties that match the names and data types of the columns in the supplied DataTable. I really wanted the output code to use auto-properties. This is not supported by CodeDOM, however, so I used a little workaround to accomplish the same thing: I simply appended the getter and setter code for the property to the member's field name. CodeDOM adds a semicolon to the end of the CodeMemberField statement, which would cause the code not to compile, so I added two trailing slashes "//" to the end of the field name to comment out the semicolon. The whole point of creating auto-properties was to have clean, succinct code, so after I generate the source code file, I clean up the commented-out semicolons by replacing every occurrence with an empty string. The main disadvantage of this workaround is that it cannot be used to generate a working Visual Basic class. I do have proper CodeDOM code that does not employ this workaround, but I prefer the output to contain auto-properties; auto-generated code is notorious for being messy and hard to read, and I did not want my generated code to feel like generated code.


Below is the DataTableToCode function, its containing class and its supporting functions. The code is short, encapsulated, clean and commented, so I will just let it speak for itself:

// Required namespaces: System, System.Data, System.IO, System.CodeDom, System.CodeDom.Compiler, Microsoft.CSharp
public static class DataTableExtensions
{
   public static string DataTableToCode(DataTable Table)
   {
      string className = Table.TableName;
      if(string.IsNullOrWhiteSpace(className))
      {   // Default name
         className = "Unnamed";
      }
      className += "TableAsClass";
      
      // Create the class
      CodeTypeDeclaration codeClass = CreateClass(className);
      
      // Add public properties
      foreach(DataColumn column in Table.Columns)
      {
         codeClass.Members.Add( CreateProperty(column.ColumnName, column.DataType) );
      }
      
      // Add Class to Namespace
      string namespaceName = "AutoGeneratedDomainModels";
      CodeNamespace codeNamespace = new CodeNamespace(namespaceName);
      codeNamespace.Types.Add(codeClass);
      
      // Generate code
      string filename = string.Format("{0}.{1}.cs",namespaceName,className);
      CreateCodeFile(filename, codeNamespace);
      
      // Return filename
      return filename;
   }
   
   static CodeTypeDeclaration CreateClass(string name)
   {
      CodeTypeDeclaration result = new CodeTypeDeclaration(name);
      result.Attributes = MemberAttributes.Public;
      result.Members.Add(CreateConstructor(name)); // Add class constructor
      return result;
   }
   
   static CodeConstructor CreateConstructor(string className)
   {
      CodeConstructor result = new CodeConstructor();
      result.Attributes = MemberAttributes.Public;
      result.Name = className;
      return result;
   }
   
   static CodeMemberField CreateProperty(string name, Type type)
   {
      // This is a little hack. Since you cant create auto properties in CodeDOM,
      //  we make the getter and setter part of the member name.
      // This leaves behind a trailing semicolon that we comment out.
      //  Later, we remove the commented out semicolons.
      string memberName = name + "\t{ get; set; }//";
      
      CodeMemberField result = new CodeMemberField(type,memberName);
      result.Attributes = MemberAttributes.Public | MemberAttributes.Final;
      return result;
   }
   
   static void CreateCodeFile(string filename, CodeNamespace codeNamespace)
   {
      // CodeGeneratorOptions so the output is clean and easy to read
      CodeGeneratorOptions codeOptions = new CodeGeneratorOptions();
      codeOptions.BlankLinesBetweenMembers = false;
      codeOptions.VerbatimOrder = true;
      codeOptions.BracingStyle = "C";
      codeOptions.IndentString = "\t";
      
      // Create the code file
      using(TextWriter textWriter = new StreamWriter(filename))
      {
         CSharpCodeProvider codeProvider = new CSharpCodeProvider();
         codeProvider.GenerateCodeFromNamespace(codeNamespace, textWriter, codeOptions);
      }
      
      // Correct our little auto-property 'hack'
      File.WriteAllText(filename, File.ReadAllText(filename).Replace("//;", ""));
   }
}


An example of the resulting code appears below:

namespace AutoGeneratedDomainModels
{
   public class CustomerTableAsClass
   {
      public CustomerTableAsClass()
      {
      }
      public string FirstName   { get; set; }
      public string LastName    { get; set; }
      public int  Age           { get; set; }
      public char Sex           { get; set; }
      public string Address     { get; set; }
      public string Birthdate   { get; set; }
   }
}

I am satisfied with the results and look of the code. The DataTableToCode() function can be a huge time saver if you have a large number of tables you need to write classes for, or if each DataTable contains a large number of columns.
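For reference, generating the code is a one-liner once you have a DataTable; here is a minimal usage sketch (the "Customer" table below is hypothetical):

DataTable customerTable = new DataTable("Customer");
customerTable.Columns.Add("FirstName", typeof(string));
customerTable.Columns.Add("LastName", typeof(string));
customerTable.Columns.Add("Age", typeof(int));

// Writes AutoGeneratedDomainModels.CustomerTableAsClass.cs to the working directory
string codeFile = DataTableExtensions.DataTableToCode(customerTable);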

If you found any of this helpful, or have any comments or suggestions, please feel free to post a comment.

Saturday, September 27, 2014

Resize form to match the contents of DataGridView



Sometimes the sole purpose of a Form is to display a DataGridView. In that case, you probably want the Form to automatically resize to fit the contents of the DataGridView. I've seen solutions that loop through all the rows and add up their heights, but this is ugly, and usually does not take into account margins, padding, DividerHeight and row header padding. There must be a better way...

My strategy is to temporarily undock the DataGridView and set AutoSize to true, capture the DataGridView's Size at that point, then restore the Dock and AutoSize properties. The captured size is then used to resize the WinForm:


// Within the Form class
private void AutoSizeFormToDataGridView()
{
 Size contentsSize = GetDataGridViewContentsSize();
 this.ClientSize = contentsSize;
}

protected Size GetDataGridViewContentsSize()
{
 DockStyle dockStyleSave = dataGridView1.Dock;
 dataGridView1.Dock = DockStyle.None;
 dataGridView1.AutoSize = true;
 
 Size dataContentsSize = dataGridView1.Size;
 
 dataGridView1.AutoSize = false;
 dataGridView1.Dock = dockStyleSave;
 return dataContentsSize;
}


Or alternatively you can define this as an extension method:

public static Size GetContentsSize(this DataGridView dataGrid) { //...
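A complete version of that extension method might look like the sketch below; it is the same undock-and-measure trick as above, just written against the DataGridView that is passed in:

public static class DataGridViewExtensions
{
   public static Size GetContentsSize(this DataGridView dataGrid)
   {
      // Save the current layout settings so we can restore them afterwards
      DockStyle dockStyleSave = dataGrid.Dock;
      bool autoSizeSave = dataGrid.AutoSize;

      // Undock and auto-size so the control reports the size of its contents
      dataGrid.Dock = DockStyle.None;
      dataGrid.AutoSize = true;

      Size dataContentsSize = dataGrid.Size;

      // Restore the original settings
      dataGrid.AutoSize = autoSizeSave;
      dataGrid.Dock = dockStyleSave;
      return dataContentsSize;
   }
}

Usage from inside the Form then becomes: this.ClientSize = dataGridView1.GetContentsSize();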


Enjoy!

Friday, September 26, 2014

DataTable or DataGridView to CSV or HTML file using Clipboard



It turns out that DataGridView.GetClipboardContent() returns all the selected cells of a DataGridView as a DataObject, which is conveniently consumed by the Windows.Forms.Clipboard class, as well as by other WYSIWYG editors from Microsoft. From this you can set the Clipboard, then get the clipboard contents back in various data formats, including:
- Comma separated value
- Tab separated value
- HTML

So instead of looping through columns and then rows, you can output the entire DataGridView as a CSV file in just 3 lines of code (4 if you count setting the ClipboardCopyMode, which can also be set in the Forms designer).

Here is the code:

void DataGridViewToCSV(string Filename)
{
   bool allowAddRows = dataGridView1.AllowUserToAddRows;
   bool rowHeadersVisible = dataGridView1.RowHeadersVisible;
   dataGridView1.AllowUserToAddRows = false;
   dataGridView1.RowHeadersVisible = false;

   // Choose whether to write header. You will want to do this for a CSV file.
   dataGridView1.ClipboardCopyMode = DataGridViewClipboardCopyMode.EnableAlwaysIncludeHeaderText;
   // Select the cells we want to serialize.
   dataGridView1.SelectAll(); // One could also use DataGridView.Rows[RowIndex].Selected = true;

   // Save the current state of the clipboard so we can restore it after we are done
   IDataObject objectSave = Clipboard.GetDataObject();
   // Copy (set clipboard)
   Clipboard.SetDataObject(dataGridView1.GetClipboardContent());
   // Paste (get the clipboard and serialize it to a file)
   File.WriteAllText(Filename,Clipboard.GetText(TextDataFormat.CommaSeparatedValue));
   // Restore the current state of the clipboard so the effect is seamless
   if(objectSave != null)
   {
      Clipboard.SetDataObject(objectSave);
   }
   dataGridView1.AllowUserToAddRows = allowAddRows;
   dataGridView1.RowHeadersVisible = rowHeadersVisible;
}

Some improvements

For a tab-delimited file, use the TextDataFormat.Text enum in your call to Clipboard.GetText(). You can also output your DataGridView as HTML by using TextDataFormat.Html instead of TextDataFormat.CommaSeparatedValue, but there is extra header data you have to parse out:

   string result = Clipboard.GetText(TextDataFormat.Html);
   // The CF_HTML clipboard format prepends a header; the markup itself starts at the first "<HTML" tag
   result = result.Substring( result.IndexOf("<HTML", StringComparison.OrdinalIgnoreCase) );
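
Putting that together, a sketch of an HTML export might look like the following; it follows the same pattern as DataGridViewToCSV above, and assumes the markup begins at the first <HTML tag of the CF_HTML payload:

void DataGridViewToHtml(string Filename)
{
   dataGridView1.ClipboardCopyMode = DataGridViewClipboardCopyMode.EnableAlwaysIncludeHeaderText;
   dataGridView1.SelectAll();
   Clipboard.SetDataObject(dataGridView1.GetClipboardContent());

   string html = Clipboard.GetText(TextDataFormat.Html);

   // The CF_HTML clipboard format prepends a header (Version, StartHTML, EndHTML, ...);
   // keep only the markup itself
   int start = html.IndexOf("<HTML", StringComparison.OrdinalIgnoreCase);
   if (start >= 0)
   {
      html = html.Substring(start);
   }
   File.WriteAllText(Filename, html);
}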

Notes:
- An object must be serializable for it to be put on the Clipboard.

Sunday, August 3, 2014

Set Public Properties of C# class from a DataTable using reflection



Note by author:

   Since writing this, I have expanded on this idea quite a bit. I have written a lightweight ORM class library that I call EntityJustWorks.

   The full project can be found on GitHub or CodePlex.

   EntityJustWorks not only goes from a class to a DataTable (below), but also provides much more.



Security Warning:
This library generates dynamic SQL, and has functions that generate SQL and then immediately execute it. While it is true that all strings funnel through the function Helper.EscapeSingleQuotes, this can be defeated in various ways, and only parameterized SQL should be considered SAFE. If you have no need for them, I recommend stripping semicolons ; and dashes --. Also, there are some Unicode characters that can be interpreted as a single quote, or may be converted to one when changing encodings. Additionally, there are Unicode characters that can crash .NET code, though mainly controls (think TextBox). You almost certainly should impose a whitelist:
string clean = new string(dirty.Where(c => "abcdefghijklmnopqrstuvwxyz0123456789.,\"_ !@".Contains(c)).ToArray());

PLEASE USE the SQLScript.StoredProcedure and DatabaseQuery.StoredProcedure classes to generate SQL for you, as the scripts they produce are parameterized. All of the functions can be altered to generate parameterized instead of sanitized scripts. Ever since people started using this, I have been maintaining backwards compatibility. However, I may break this in the future, as I do not wish to teach dangerous/bad habits to someone who is learning. This project is a few years old, and it is already showing its age. What is probably needed here is a total rewrite, deprecating this version while keeping it available for legacy users, after slapping big warnings all over the place. This project was designed to generate the SQL scripts for standing up a database for a project, using only MY input as data. It was never designed to process a USER'S input! Even if the data isn't coming from an adversary, client/user/manually entered data is notoriously inconsistent. Please do not use this code on any input that did not come from you without first implementing parameterization. Again, please see the SQLScript.StoredProcedure class for inspiration on how to do that.




In this post I showed how to create a DataTable where the column names and types matched the properties of a class. Here, we work in the opposite direction and start with a data-first approach. Given a SQL database, we can easily convert a query to a DataTable using System.Data's SqlDataAdapter.Fill method. Now, given a DataTable, I show you how to use reflection to populate a class's public properties from a DataRow in a DataTable (or a List<> of classes, one per DataRow in the DataTable), where the ColumnName matches the name of the public property in the class exactly (case-sensitive).
If the DataTable has extra columns that don't match up to a property in the class, they are ignored. If the DataTable is missing a column to match a class property, that property is ignored and left at the default value for its type. If you want the ColumnName/PropertyInfo.Name matching to be case-insensitive, simply modify the line that compares the two strings (PropertyInfo.Name and DataColumn.ColumnName) to include a call to String.ToUpper() or String.ToLower() for each name, as in the snippet below.
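For example, the name comparison inside FillProperties<T> (shown further down) could be made case-insensitive like this, either with ToUpper() on both names or, more directly, with a StringComparison overload:

// Skip if the property name and column name do not match (ignoring case)
if (!string.Equals(property.Name, column.ColumnName, StringComparison.OrdinalIgnoreCase))
    continue;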

If you are paying close attention, or have ever attempted this kind of thing before, you are probably thinking to yourself that the most laborious (and error-prone) part is going to be manually creating the C# classes, plus their many auto-properties, that have to match the columns of a table. Well, take solace in the fact that I already thought of this and created a solution to generate C# class code files from a DataTable using CodeDOM. It even implements a little hack to generate the properties as auto-properties (something not supported by CodeDOM) for clean, compact code that isn't bloated with private backing fields and full getter/setter implementations.
Ultimately, the goal is to have a full, end-to-end, class-to-DataTable-to-SQL-and-back-again class library solution. Something like a poor man's Entity Framework, or a minimum-viable ORM. So stay alert for the next piece that will bring these wagons 'round full circle: automatic generation of SQL CREATE, INSERT INTO, and UPDATE scripts from a DataTable, which was generated from a C# class, which can be generated from a DataTable, which can be generated by a SQL database, which can be... well, you get the idea.

This code has been tested and is a little more robust than some of the equivalent samples I have found on StackOverflow (such as being able to handle properties of type Nullable<>). However, there probably exist some conditions or use cases that I have not thought of, so please feel free to leave a comment if you find a way I can improve this class or have a feature request. In the next paragraph, I describe what the code is doing; if you don't care, you can jump straight to the code below it. Enjoy.

How it works: First we get a list of PropertyInfo objects from the class. This is effectively the list of properties in that class that we want to fill. PropertyInfo exposes the Name property and the SetValue method, which takes an object and a value as parameters.
    We are going to use three nested loops to do this (one for each DataRow, one for each PropertyInfo, and one for each DataColumn) and return a List of classes, each one filled out from a single row in the DataTable. It is possible to fill out a single class, given a DataTable and a row index, in only two nested loops, and this post provides that example too.
    For each row in DataTable.Rows, we loop through each property (to fill it) and then loop through each of the DataTable's DataColumns, matching PropertyInfo.Name to DataColumn.ColumnName. We then call the PropertyInfo's SetValue method. The function takes advantage of generics so that we can pass in any class as a parameter.

Here is the code:

// Required namespaces: System, System.Data, System.Reflection, System.Collections.Generic
public static class Helper
{
   public static class Table
   {
      /// <summary>
      /// Fills the public properties of a class from the first row of a DataTable
      ///  where the name of the property matches the column name from that DataTable.
      /// </summary>
      /// <param name="Table">A DataTable that contains the data.</param>
      /// <returns>A class of type T with its public properties matching column names
      ///      set to the values from the first row in the DataTable.</returns>
      public static T ToClass<T>(DataTable Table) where T : class, new()
      {
          T result = new T();
          if (Validate(Table))
          {  // Because reflection is slow, we will only pass the first row of the DataTable
              result = FillProperties<T>(Table.Rows[0]);
          }
          return result;
      }
       
      /// <summary>
      /// Fills the public properties of a class from each row of a DataTable where the name of
      /// the property matches the column name in the DataTable, returning a List of T.
      /// </summary>
      /// <param name="Table">A DataTable that contains the data.</param>
      /// <returns>A List of T, with each class's public properties matching column names
      ///      set to the values of a different row in the DataTable.</returns>
      public static List<T> ToClassList<T>(DataTable Table) where T: class, new()
      {
          List<T> result = new List<T>();
          
          if (Validate(Table))
          {
              foreach(DataRow row in Table.Rows)
              {
                   result.Add(FillProperties<T>(row));
              }
          }
          return result;
      }
       
      /// <summary>
      /// Fills the public properties of a class from a DataRow where the name
      /// of the property matches a column name from that DataRow.
      /// </summary>
      /// <param name="Row">A DataRow that contains the data.</param>
      /// <returns>A class of type T with its public properties set to the
      ///      data from the matching columns in the DataRow.</returns>
      public static T FillProperties<T>(DataRow Row) where T: class, new()
      {
          T result = new T();
          Type classType = typeof(T);
          
          // Defensive programming, make sure there are properties to set,
          //   and columns to set from and values to set from.
          if(    Row.Table.Columns.Count < 1
              || classType.GetProperties().Length < 1
              || Row.ItemArray.Length < 1)
          {
              return result;
          }
          
          foreach (PropertyInfo property in classType.GetProperties())
          {
              foreach(DataColumn column in Row.Table.Columns)
              {
                  // Skip if Property name and ColumnName do not match
                  if(property.Name != column.ColumnName)
                      continue;
                  // This would throw if we tried to convert it below
                  if(Row[column] == DBNull.Value)
                      continue;
                  
                  object newValue;
                  
                  // If type is of type System.Nullable, do not attempt to convert the value
                  if (IsNullable(property.PropertyType))
                  {
                      newValue = Row[property.Name];
                  }
                  else
                  {   // Convert row object to type of property
                      newValue = Convert.ChangeType(Row[column], property.PropertyType);
                  }
                  
                  // This is what sets the class properties of the class
                  property.SetValue(result, newValue, null);
              }
          }
          return result;
      }
       
      /// <summary>
      /// Checks a DataTable for empty rows, columns or null.
      /// </summary>
      /// <param name="DataTable">The DataTable to check.</param>
      /// <returns>True if DataTable has data, false if empty or null.</returns>
      public static bool Validate(DataTable DataTable)
      {
          if (DataTable == null) return false;
          if (DataTable.Rows.Count == 0) return false;
          if (DataTable.Columns.Count == 0) return false;
          return true;
      }
       
      /// <summary>
      /// Checks whether a type is nullable: either a reference type or Nullable<T>.
      /// </summary>
      /// <param name="type">Type to check for nullable.</param>
      /// <returns>True if type is nullable, false if it is not.</returns>
      public static bool IsNullable(Type type)
      {
          if (!type.IsValueType) return true; // ref-type
          if (Nullable.GetUnderlyingType(type) != null) return true; // Nullable<T>
          return false; // value-type
      }
   }
}
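
A short usage sketch (the Customer class and customerTable below are hypothetical; any class whose public property names match the DataTable's column names will work):

// Hypothetical POCO whose property names match the DataTable's columns
public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}

// customerTable is assumed to be filled elsewhere, e.g. by SqlDataAdapter.Fill
Customer first = Helper.Table.ToClass<Customer>(customerTable);
List<Customer> allCustomers = Helper.Table.ToClassList<Customer>(customerTable);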

Wednesday, April 9, 2014

An Implementation of a One-Way Hashing Algorithm using Substitution and Permutation




        This program is a one-way hashing algorithm that can be used for hashing passwords or verifying a file's integrity. Like other ciphers, it has rounds. While this particular implementation was designed to overwrite some of the information required to reverse the rounds' operations, making it asymmetric, the same core 'substitution' algorithm could be used to make a symmetric stream cipher by taking a different approach.
        Each round proceeds as follows: first, a scrambling or mangling (sounds violent) of a substitution table is performed before substituting the plain-text (or the last round's output) with values from the table, chosen by the index corresponding to the plain-text character's equivalent byte value. The substituted text is then converted into a List of bits (bool) and that list is split in half into two lists of bits. One of the lists is optionally flipped (reversed) based on the value of its first bit. The value of the middle bit of the other list determines which list leads during the (next step) bit shuffling. Starting with the selected list, the bits of the two lists are interleaved (shuffled), making one list again. The bits are converted back into bytes via an extension method and used as the input for the next round.
        The substitution/cipher-block step uses a variation of the RC4 cipher, with the key-scheduling algorithm dropped in favor of a pseudo-random-looking but evenly distributed table in which each number from 0 to 255 appears exactly once. This table is deterministically populated from an initial value, IV, which is some number that is both greater than 256 and co-prime to it. Starting with zero, set each byte in the table to the previous value plus IV, modulo 256, in order. Then, to 'prime' the table/block, iterate over the table several thousand times with the RC4-style algorithm, discarding the byte stream (as RC4's key scheduling does). In my particular RC4 variant, the result byte is put into the table at an index equal to the current iteration count multiplied by some co-prime of 256 (in this case 331), modulo 256.
        While many people believe that RC4 is well on its way to being broken, it is my hope that using a smarter key-scheduling function with a different (larger) block size, and maybe some bit country line-dancing (shuffling), will make a variant of RC4 that we all can have confidence in again. Hmm, there is already an RC5 and an RC6. Maybe a good name would be RC4.1 or RC4.Awesome. RC4 and seven thirteenths? R2C2 perhaps? No, I got it: the stream cipher formerly known as RC4. RC4Realz or RCL33t? Anyways, this article is not about symmetric algorithms, but rather asymmetric ones, so read on...

        Cryptography is one of my interests. I love the math behind it, and I think the mechanics behind RSA are just so elegant. I often daydream that one day I will come up with something incredibly clever and useful. For the past several months, I have been implementing several different concepts from cryptography in C#. I have written several classes that each do one specific job, such as transposition, permutation or substitution. I have been toying with combining these different methods into 'rounds', with each round getting its input from the output of the previous round. In this post, I am proposing a one-way hashing algorithm that uses a combination of bit shuffling (permutation) and a substitution table, with logic that mangles the table based on its current state. It is possible to make an asymmetric algorithm from this idea, but that is not the aim of this particular tool.

        This algorithm still has some areas that need improving. First of all, it's rather slow, so hashing large files is not practical. The number of rounds has a large bearing on the hash time, so a smaller number of rounds is recommended for anything longer than a paragraph. The current default is 18. This is rather arbitrary, as I have not done any testing to find the optimal number of rounds. Anyone who uses fewer than 4 rounds will start to see some repeating characters, not to mention make their implementation more susceptible to cryptanalysis. Sometimes a lengthy hashing time can be desirable, such as for proof of work, or to make brute forcing more difficult. For a password-length input at 1024 rounds, it takes almost a full second.


        Another area for improvement is the permutation step; it basically just shuffles the bits by dividing the bit array in half and interleaving the two halves (think of shuffling a deck of cards). While there is some variation in the order, based on the input, a much better bit shuffling would probably be an expansion or compression style of permutation. See this link to Wikipedia for a visual depiction.


        Well, that about sums it up, so now on to the code! Take note of how I used extension methods to take care of converting a BitArray and a byte array into a string, and back again. This keeps the code readable and saves quite a few lines by reusing code. The code is pretty heavily commented, so I will just let it explain itself:





public class AJRHash
{
 // Required namespaces: System, System.Collections, System.Collections.Generic
 byte[] table;
 // Unsure of the optimum setting for rounds.
 // No fewer than 4. 1024 takes around 1 second for password-length hashes.
 // Use a lower setting for large inputs.
 static int rounds = 18;
 static int minimumLength = 10;
 static int tableSize = 256; // Because we are using bytes
 // By choosing a co-prime to 256, we ensure we get an evenly distributed table
 static int[] coprimesOf256 = { 4489, 2209, 1369 }; // Changing the co-primes will change the hash
 
 public AJRHash()
 {
  table = new byte[tableSize];
  InitializeTable();
 }
 
 public string Hash(string Input)
 {
  // Pad the input to minimum length
  if(Input.Length<minimumLength)
  {
   List<byte> padding = new List<byte>();
   
   // Copy the input first
   if(Input.Length>0)
    padding.AddRange(Input.GetBytes());
   
    // Make up the difference with nulls
    int difference = minimumLength - Input.Length;
    while(difference>0)
    {
     padding.Add(0);
     difference--;
   }
   Input = padding.ToArray().GetString();
  }
  
  byte[] permutation = Input.GetBytes();
  byte[] substitution = new byte[Input.Length+1];
  
  int counter = rounds;
  while(counter>0)
  {
   substitution = Substitution(permutation);
   permutation  = Permutation_Interleave(substitution);
   counter--;
  }

  return Convert.ToBase64String(permutation);
 }
 
 void InitializeTable()
 {
  int prime = coprimesOf256[0];
  
  byte index = 0;
  for (int i = 0; i < tableSize; i++)
  {
   table[index] = (byte)i;
   
   unchecked // The large prime will just roll over. Essentially just modular arithmetic
   { // By choosing a co-prime to 256, we ensure we get an evenly distributed table
    index += (byte)prime;
   }
  }
  
  // Finish initializing the table by shuffling it
  ScrambleTable(prime);
 }
 
 void ScrambleTable(int iterations)
 {
  byte i = 0;
  byte j = 0;
  byte k = 0;
  byte l = 0;
  
  byte iteration = 0;
  int counter = iterations;
  
  // Just roll over on overflow
  unchecked
  {
   // Some initial values
   j=table[table.Length/2];
   i=table[j];

   // Use the current state of the table to determine how it is changed
   while(counter>0)
   {
    i++;
    l=(byte)(table[i]+1);
    j=(byte)(j+l+1);
    
    table[i] = (byte)(j + 3);
    table[j] = (byte)(l - 1);
    
    k = (byte)( (table[i] + table[j]));
    l = (byte)(k % 255);
    
    table[k] = (byte)(table[k]+1);
    
    k = table[l];
    l = table[iteration];
    
    table[(byte)(iteration*331)] = (byte)(k ^ l);
    
    counter--;
    iteration++;
   }
  }
 }
 
 byte[] Substitution(byte[] input)
 {
   // Shuffle the table by an amount unique to the input
  ScrambleTable(input.Length*input[0]);
  
  // Apply input against the substitution table
  List<byte> result = new List<byte>();
  foreach(byte b in input)
  {
   result.Add( table[(int)b] );
  }
  
  return result.ToArray();
 }
 
 // TODO: Add more robust bit shuffling/permutation such as compression or expansion
 byte[] Permutation_Interleave(byte[] input)
 {
  // Convert input into an array of bits
  BitArray temp = new BitArray(input);
  List<bool> bits = temp.GetList();
  
  // Ensure we have an even number of bits
  if(bits.Count%2 != 0)
   bits.Add(false);
  
  // Split the input up into two arrays of bits
  List<bool[]> split = bits.Split();
  
  // Make sure we have at least two arrays to interleave
  if(split.Count<2)
   throw new Exception("Input is too short.");
  
  bool[] one = split[0];
  bool[] two = split[1];
  
  // Ensure both arrays are the same length
  if(one.Length != two.Length)
   throw new Exception("Lengths must match.");
  
  // If the first bit of two is 1, reverse it.
  //   This makes the deterministic shuffling more difficult to reverse 
  if(two[0])
   Array.Reverse(two);
  
  // If the 'middle' bit of one is 1, take from two first.
  //   This makes the deterministic shuffling more difficult to reverse
  bool SecondFirst = false;
  if(one[one.Length/2])
   SecondFirst = true;
  
  // Interleave the two arrays
  int counter = 0;
  List<bool> result = new List<bool>();
  while(counter<one.Length)
  {
   // Two possible ways to order (decided above)
   if(SecondFirst)
   {
    result.Add(two[counter]);
    result.Add(one[counter]);
   }
   else
   {
    result.Add(one[counter]);
    result.Add(two[counter]);
   }
   counter++;
  }
  
  return new BitArray(result.ToArray()).GetBytes();
 }
}
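
Using the hash class is straightforward; a quick sketch (the resulting digest is a Base64 string and will vary with the rounds and co-primes chosen above):

AJRHash hasher = new AJRHash();
string digest = hasher.Hash("correct horse battery staple");
// digest now holds the Base64-encoded result of the substitution/permutation rounds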



I also wrote a few extension methods to help with all the tedious converting of data types. Extension methods can help keep your code tidy and readable. Don't forget to add these extension methods, or the above code won't compile:




// Required namespaces: System.Text, System.Collections, System.Collections.Generic
public static class ExtensionMethods
{
 public static byte[] GetBytes(this string Input)
 {
  if(Input == null) return new byte[0];   
  return Encoding.UTF8.GetBytes(Input);
 }
 
 public static string GetString(this byte[] Input)
 {
  if(Input == null) return string.Empty;   
  return Encoding.UTF8.GetString(Input);
 }
 
 public static string GetString(this BitArray Input)
 {
  if(Input == null) return string.Empty;   
  return Encoding.UTF8.GetString(Input.GetBytes());
 }
 
 public static byte[] GetBytes(this BitArray Input)
 {
  if(Input == null) return new byte[0];   
  int lengthInBytes = (Input.Length%8==0) ? Input.Length/8 : (Input.Length/8)+1;
  
  byte[] result = new byte[lengthInBytes];
  Input.CopyTo(result,0);
  return result;
 }
 
 public static bool[] GetArray(this BitArray Input)
 {
  if(Input == null) return new bool[0];   
  return Input.GetList().ToArray();
 }
 
 public static List<bool> GetList(this BitArray Input)
 {
  if(Input == null) return new List<bool>();
  
  List<bool> result = new List<bool>();
  foreach(bool b in Input)
  {
   result.Add(b);
  }   
  return result;
 }
 
 public static List<bool[]> Split(this List<bool> Input)
 {
  if(Input == null) return new List<bool[]>();
  
  int midpoint = (Input.Count/2);
  List<bool> result1 = new List<bool>();
  List<bool> result2 = new List<bool>();
  
  int counter = 0;
  foreach(bool b in Input)
  {
   if(counter<midpoint)
   {
    result1.Add(b);
   }
   else
   {
    result2.Add(b);
   }    
   counter++;
  }
  
  List<bool[]> result = new List<bool[]>();
  result.Add(result1.ToArray());
  result.Add(result2.ToArray());
  return result;   
 }
}

Tuesday, July 16, 2013

Convert a Class or List of Class to a DataTable, using reflection.




Note by author:

   Since writing this, I have expanded on this idea quite a bit. I have written a lightweight ORM class library that I call EntityJustWorks.

   The full project can be found on GitHub or CodePlex.

   EntityJustWorks not only goes from a class to a DataTable (below), but also provides much more.


Security Warning:
This library generates dynamic SQL, and has functions that generate SQL and then immediately execute it. While it is true that all strings funnel through the function Helper.EscapeSingleQuotes, this can be defeated in various ways, and only parameterized SQL should be considered SAFE. If you have no need for them, I recommend stripping semicolons ; and dashes --. Also, there are some Unicode characters that can be interpreted as a single quote, or may be converted to one when changing encodings. Additionally, there are Unicode characters that can crash .NET code, though mainly controls (think TextBox). You almost certainly should impose a whitelist:
string clean = new string(dirty.Where(c => "abcdefghijklmnopqrstuvwxyz0123456789.,\"_ !@".Contains(c)).ToArray());

PLEASE USE the SQLScript.StoredProcedure and DatabaseQuery.StoredProcedure classes to generate SQL for you, as the scripts they produce are parameterized. All of the functions can be altered to generate parameterized instead of sanitized scripts. Ever since people started using this, I have been maintaining backwards compatibility. However, I may break this in the future, as I do not wish to teach dangerous/bad habits to someone who is learning. This project is a few years old, and it is already showing its age. What is probably needed here is a total rewrite, deprecating this version while keeping it available for legacy users, after slapping big warnings all over the place. This project was designed to generate the SQL scripts for standing up a database for a project, using only MY input as data. It was never designed to process a USER'S input! Even if the data isn't coming from an adversary, client/user/manually entered data is notoriously inconsistent. Please do not use this code on any input that did not come from you without first implementing parameterization. Again, please see the SQLScript.StoredProcedure class for inspiration on how to do that.




    This class uses generics to accept a class type, and uses reflection to determine the name and type of the class's public properties. With that, a new DataTable is made and the DataColumnCollection is fleshed out. Then you can add rows to the DataTable by passing in instances of the class with its properties containing values.

    Finally, we serialize the DataTable to an XML file, save its schema, then load it all back in again as a proof of concept.


Usage example:

List<Order> orders = new List<Order>();

// Fill in orders here ...
// orders.Add(new Order());

// Convert class to DataTable
DataTable ordersTable = ClassListToDataTable(orders);

// Set DataGrid's DataSource to DataTable
dataGrid1.DataSource = ordersTable;
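
For context, Order can be any plain class with public properties; a hypothetical example:

public class Order
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
    public DateTime OrderDate { get; set; }
}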


Here is the Code:

public static DataTable ClassToDataTable<T>() where T : class
{
    Type classType = typeof(T);

    List<PropertyInfo> propertyList = classType.GetProperties().ToList();
    if (propertyList.Count < 1)
    {
        return new DataTable();
    }

    string className = classType.UnderlyingSystemType.Name;
    DataTable result = new DataTable(className);

    foreach (PropertyInfo property in propertyList)
    {
        DataColumn col = new DataColumn();
        col.ColumnName = property.Name;

        Type dataType = property.PropertyType;

        if (IsNullable(dataType))
        {
            if(dataType.IsGenericType)
            {
                dataType = dataType.GenericTypeArguments.FirstOrDefault();
            }
        }
        else
        {   // True by default
            col.AllowDBNull = false;
        }

        col.DataType = dataType;

        result.Columns.Add(col);
    }

    return result;
}

public static DataTable ClassListToDataTable<T>(List<T> ClassList) where T : class
{
   DataTable result = ClassToDataTable<T>();
   
   if(result.Columns.Count < 1)
   {
      return new DataTable();
   }
   if(ClassList.Count < 1)
   {
      return result;
   }
   
   foreach(T item in ClassList)
   {
      ClassToDataRow(ref result, item);
   }
   
   return result;
}

public static void ClassToDataRow<T>(ref DataTable Table, T Data) where T : class
{
    Type classType = typeof(T);
    string className = classType.UnderlyingSystemType.Name;

    // Checks that the table name matches the name of the class.
    // This is not required, and it may be desirable to disable this check.
    // Comment it out, or add a boolean parameter to disable it.
    if (!Table.TableName.Equals(className))
    {
        return;
    }

    DataRow row = Table.NewRow();
    List<PropertyInfo> propertyList = classType.GetProperties().ToList();

    foreach (PropertyInfo prop in propertyList)
    {
        if (Table.Columns.Contains(prop.Name))
        {
            if (Table.Columns[prop.Name] != null)
            {
                row[prop.Name] = prop.GetValue(Data, null);
            }
        }
    }
    Table.Rows.Add(row);
}

public static bool IsNullable(Type Input)
{
    if (!Input.IsValueType) return true; // Is a ref-type, such as a class
    if (Nullable.GetUnderlyingType(Input) != null) return true; // Nullable
    return false; // Must be a value-type
}

Here is an example of how to serialize a DataTable to XML and load it back again:

string filePath = "order1.xml";
string schemaPath = Path.ChangeExtension(filePath,".xsd");

ordersTable.WriteXml(filePath);
ordersTable.WriteXmlSchema(schemaPath);

// Load
DataTable loadedTable = new DataTable();
loadedTable.ReadXmlSchema(schemaPath);
loadedTable.ReadXml(filePath);

// Set DataGrid's DataSource
dataGrid1.DataSource = loadedTable;


The full project and source code for EntityJustWorks can be found on GitHub and CodePlex.

Thursday, June 27, 2013

XML Serializable Dictionary, Tuple, and Object




Serializing a data class to an XML file using XmlSerializer is very useful. However, some of the most useful data classes in .NET are not serializable, most notably Dictionary and Tuple. If you are looking for a blog post on how to make a Dictionary that accepts duplicate keys by storing the values with identical keys in a List, please see this blog post.

The SerializableDictionary class below works by implementing IXmlSerializable, which requires you to implement the following three methods:
* GetSchema() - Remember, you should always return null from this method.
* ReadXml(XmlReader reader)
* WriteXml(XmlWriter writer)
(Read about IXmlSerializable on MSDN)

Here is the code to serialize a dictionary or serialize a tuple:

namespace XMLSerializableDictionary
{
    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.Xml;
    using System.Xml.Schema;
    using System.Xml.Serialization;

    [Serializable]
    [XmlRoot("Dictionary")]
    public class SerializableDictionary<TKey, TValue>
        : Dictionary<TKey, TValue>, IXmlSerializable
    {
        private const string DefaultTagItem = "Item";
        private const string DefaultTagKey = "Key";
        private const string DefaultTagValue = "Value";
        private static readonly XmlSerializer KeySerializer =
                                        new XmlSerializer(typeof(TKey));

        private static readonly XmlSerializer ValueSerializer =
                                        new XmlSerializer(typeof(TValue));

        public SerializableDictionary() : base()
        {
        }

        protected SerializableDictionary(SerializationInfo info, StreamingContext context)
                : base(info, context)
        {
        }

        protected virtual string ItemTagName
        {
            get { return DefaultTagItem; }
        }

        protected virtual string KeyTagName
        {
            get { return DefaultTagKey; }
        }

        protected virtual string ValueTagName
        {
            get { return DefaultTagValue; }
        }

        public XmlSchema GetSchema()
        {
            return null;
        }

        public void ReadXml(XmlReader reader)
        {
            bool wasEmpty = reader.IsEmptyElement;

            reader.Read();

            if (wasEmpty)
            {
             return;
            }

            try
            {
                while (reader.NodeType != XmlNodeType.EndElement)
                {
                    reader.ReadStartElement(this.ItemTagName);
                    try
                    {
                        TKey tKey;
                        TValue tValue;

                        reader.ReadStartElement(this.KeyTagName);
                        try
                        {
                            tKey = (TKey)KeySerializer.Deserialize(reader);
                        }
                        finally
                        {
                            reader.ReadEndElement();
                        }

                        reader.ReadStartElement(this.ValueTagName);
                        try
                        {
                            tValue = (TValue)ValueSerializer.Deserialize(reader);
                        }
                        finally
                        {
                            reader.ReadEndElement();
                        }

                        this.Add(tKey, tValue);
                    }
                    finally
                    {
                        reader.ReadEndElement();
                    }

                    reader.MoveToContent();
                }
            }
            finally
            {
                reader.ReadEndElement();
            }
        }

        public void WriteXml(XmlWriter writer)
        {
            foreach (KeyValuePair<TKey, TValue> keyValuePair in this)
            {
                writer.WriteStartElement(this.ItemTagName);
                try
                {
                    writer.WriteStartElement(this.KeyTagName);
                    try
                    {
                        KeySerializer.Serialize(writer, keyValuePair.Key);
                    }
                    finally
                    {
                        writer.WriteEndElement();
                    }

                    writer.WriteStartElement(this.ValueTagName);
                    try
                    {
                        ValueSerializer.Serialize(writer, keyValuePair.Value);
                    }
                    finally
                    {
                        writer.WriteEndElement();
                    }
                }
                finally
                {
                    writer.WriteEndElement();
                }
            }
        }
    }
}
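
A short usage sketch; since the class implements IXmlSerializable, it can be handed straight to XmlSerializer (the file name here is just an example):

var ages = new SerializableDictionary<string, int>();
ages.Add("Alice", 34);
ages.Add("Bob", 27);

var serializer = new XmlSerializer(typeof(SerializableDictionary<string, int>));
using (var writer = new StreamWriter("ages.xml"))
{
    // Produces a <Dictionary> root with <Item><Key>...</Key><Value>...</Value></Item> entries
    serializer.Serialize(writer, ages);
}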


The idea behind the serializable tuple is that we just make our own Tuple that stores the items by declaring properties to represent them, using their generic T types. If you are not used to working with generics, this can look a little strange: T1, T2 and T3 are just placeholders for the types that are determined by the calling function, or by the function above that if the calling function uses generics too.

And a serializable tuple:

public class SerializableTuple<T1,T2,T3>
{
 public T1 Item1 { get; set; }
 public T2 Item2 { get; set; }
 public T3 Item3 { get; set; }
 
 public static implicit operator Tuple<T1,T2,T3>(SerializableTuple<T1,T2,T3>  st)
 {
  return Tuple.Create(st.Item1,st.Item2,st.Item3);
 }

 public static implicit operator SerializableTuple<T1,T2,T3>(Tuple<T1,T2,T3> t)
 {
  return new SerializableTuple<T1,T2,T3>() {
   Item1 = t.Item1,
   Item2 = t.Item2,
   Item3 = t.Item3
  };   
 }
 
 public SerializableTuple()
 {
 }
}
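
Because of the implicit operators, you can move between the serializable tuple and the built-in Tuple without any ceremony; for example:

Tuple<int, string, DateTime> realTuple = Tuple.Create(1, "widget", DateTime.Now);

SerializableTuple<int, string, DateTime> serializable = realTuple; // implicit conversion
Tuple<int, string, DateTime> roundTripped = serializable;          // and back again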


And finally, a generic object serializer and deserializer:


public static class XML
{
   public static class Serialize
   {
      public static void Object(string Filename, object obj)
      {
         using (StreamWriter streamWriter = new StreamWriter(Filename))
         {
            XmlSerializer xmlSerializer = new XmlSerializer(obj.GetType());
            xmlSerializer.Serialize(streamWriter, obj);
         }
      }
   }

   public static class DeSerialize
   {
      // Note: despite its location under DeSerialize, this helper serializes an object
      // to its XML string representation (useful for logging or comparisons).
      public static string Generic<T>(T data)
      {
         if (data == null)
            return string.Empty;

         string content = string.Empty;
         using (MemoryStream memoryStream = new MemoryStream())
         {
            XmlSerializer serializer = new XmlSerializer(typeof(T));
            serializer.Serialize(memoryStream, data);

            memoryStream.Seek(0, SeekOrigin.Begin);
            using (StreamReader reader = new StreamReader(memoryStream))
            {
               content = reader.ReadToEnd();
            }
         }
         return content;
      }

      public static object Object(string Filename, Type type)
      {
         object result = null;
         // Read the XML from the file on disk (a StringReader would treat the file name itself as XML)
         using (TextReader reader = new StreamReader(Filename))
         {
            XmlSerializer serializer = new XmlSerializer(type);
            result = serializer.Deserialize(reader);
         }
         return result;
      }
   }
}
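
Usage might look something like this (the Order type and file name are placeholders):

Order order = new Order();
XML.Serialize.Object("order1.xml", order);

// Get the XML as a string without touching disk
string xmlText = XML.DeSerialize.Generic(order);

// Read the object back from the file
Order loaded = (Order)XML.DeSerialize.Object("order1.xml", typeof(Order));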

And perhaps, after you serialize your data to an XML file, you would like to generate an XML schema (.xsd) file from it:
  
void XmlToSchema(string FileName)
{
 XmlReader xmlReader = XmlReader.Create(FileName);
 XmlSchemaSet schemaSet = new XmlSchemaSet();
 XmlSchemaInference schemaInfer = new XmlSchemaInference();
 schemaSet = schemaInfer.InferSchema(xmlReader);

 string outFilename = Path.ChangeExtension(FileName,".xsd");
 using(Stream streamOut = new FileStream(outFilename,FileMode.Create) )
 {
  TextWriter textWriter = new StreamWriter(streamOut);
  foreach (XmlSchema s in schemaSet.Schemas())
  {
   s.Write(textWriter);
  }
  textWriter.Close();
 }
}