Rijndael / AES (128 bit) in VB.net, PHP, Python

Being able to transport encrypted data is important in some of my projects at work. One-way hashes using MD5 usually suffice when data only needs to be verified, but symmetric encryption algorithms are needed when data must be encrypted and later decrypted. For this, we use the Rijndael/AES algorithm in a few different languages. Here is what we do in VB.net, PHP, and Python. I take no responsibility for misuse of this code.

Note: You will see a string replace in each version; it swaps out the + character, which would otherwise be mangled when the encrypted data is passed through a URL.

VB.net:

' Required at the top of the file:
' Imports System.IO
' Imports System.Security.Cryptography

' Keys required for symmetric encryption / decryption
Dim rijnKey As Byte() = {&H1, &H2, &H3, &H4, &H5, &H6, &H7, &H8, &H9, &H10, &H11, &H12, &H13, &H14, &H15, &H16}
Dim rijnIV As Byte() = {&H1, &H2, &H3, &H4, &H5, &H6, &H7, &H8, &H9, &H10, &H11, &H12, &H13, &H14, &H15, &H16}

Function Decrypt(S As String) As String
    If S = "" Then
        Return S
    End If
    ' Turn the cipherText into a byte array from Base64
    Dim cipherText As Byte()
    Try
        ' Restore any + that was replaced for URL transport
        cipherText = Convert.FromBase64String(S.Replace("BIN00101011BIN", "+"))
    Catch ex As Exception
        ' There is a problem with the string, perhaps it has bad Base64 padding
        Return S
    End Try
    ' Creates the default implementation, which is RijndaelManaged
    Dim rijn As SymmetricAlgorithm = SymmetricAlgorithm.Create()
    Try
        ' Create the streams used for decryption
        Using msDecrypt As New MemoryStream(cipherText)
            Using csDecrypt As New CryptoStream(msDecrypt, rijn.CreateDecryptor(rijnKey, rijnIV), CryptoStreamMode.Read)
                Using srDecrypt As New StreamReader(csDecrypt)
                    ' Read the decrypted bytes from the decrypting stream and place them in a string
                    S = srDecrypt.ReadToEnd()
                End Using
            End Using
        End Using
    Catch E As CryptographicException
        Return S
    End Try
    Return S
End Function


Function Encrypt(S As String) As String
    ' Creates the default implementation, which is RijndaelManaged
    Dim rijn As SymmetricAlgorithm = SymmetricAlgorithm.Create()
    Dim encrypted() As Byte
    Using msEncrypt As New MemoryStream()
        Using csEncrypt As New CryptoStream(msEncrypt, rijn.CreateEncryptor(rijnKey, rijnIV), CryptoStreamMode.Write)
            Using swEncrypt As New StreamWriter(csEncrypt)
                ' Write all data to the stream
                swEncrypt.Write(S)
            End Using
        End Using
        encrypted = msEncrypt.ToArray()
    End Using

    ' Converting the raw bytes directly to a string produces garbage characters, so Base64 encode instead
    ' Replace any + that would be mangled in a URL
    Return Convert.ToBase64String(encrypted).Replace("+", "BIN00101011BIN")
End Function

PHP:

$rijnKey = "\x1\x2\x3\x4\x5\x6\x7\x8\x9\x10\x11\x12\x13\x14\x15\x16";
$rijnIV = "\x1\x2\x3\x4\x5\x6\x7\x8\x9\x10\x11\x12\x13\x14\x15\x16";

function Decrypt($s){
    global $rijnKey, $rijnIV;

    if ($s == "") { return $s; }

    // Turn the cipherText back into a byte string from Base64
    // (base64_decode returns false, not an exception, if the string is malformed)
    $s = str_replace("BIN00101011BIN", "+", $s);
    $s = base64_decode($s);
    if ($s === false) { return ""; }
    $s = mcrypt_decrypt(MCRYPT_RIJNDAEL_128, $rijnKey, $s, MCRYPT_MODE_CBC, $rijnIV);
    // Strip the padding that Encrypt appended
    $pad = ord(substr($s, -1));
    if ($pad > 0 && $pad <= 16) {
        $s = substr($s, 0, -$pad);
    }
    return $s;
}

function Encrypt($s){
    global $rijnKey, $rijnIV;

    // Pad the plaintext up to a multiple of the block size (PKCS#7 style)
    $block = mcrypt_get_block_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC);
    $pad = $block - (strlen($s) % $block);
    $s .= str_repeat(chr($pad), $pad);

    $s = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $rijnKey, $s, MCRYPT_MODE_CBC, $rijnIV);
    $s = base64_encode($s);
    $s = str_replace("+", "BIN00101011BIN", $s);
    return $s;
}

Python 2.7:
You must first install the PyCrypto package, which gives you access to AES functions. If you are using Windows, you can download a PyCrypto binary from http://www.voidspace.org.uk/python/modules.shtml#index.


from Crypto.Cipher import AES
import base64

key = "\x01\x02\x03\x04\x05\x06\x07\x08\x09\x10\x11\x12\x13\x14\x15\x16"
iv = "\x01\x02\x03\x04\x05\x06\x07\x08\x09\x10\x11\x12\x13\x14\x15\x16"
text = "10"
replace_plus = "BIN00101011BIN"

# AES always uses a 16-byte block; the plaintext is padded up to a multiple of it
BLOCK_SIZE = 16

def repeat_to_length(string_to_expand, length):
return (string_to_expand * ((length/len(string_to_expand))+1))[:length]

pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) \
* chr(BLOCK_SIZE - len(s) % BLOCK_SIZE)
unpad = lambda s : s[0:-ord(s[-1])]

# helper functions to encrypt/encode and decrypt/decode a string
# encrypt with AES, encode with Base64
def EncodeAES(s):
c = AES.new(key, AES.MODE_CBC, iv)
s = pad(s)
s = c.encrypt(s)
s = base64.b64encode(s)
return s.replace("+", replace_plus)

def DecodeAES(enc):
c = AES.new(key, AES.MODE_CBC, iv)
enc = enc.replace(replace_plus, "+")
enc = base64.b64decode(enc)
enc = c.decrypt(enc)
return unpad(enc)

# encode a string
encoded = EncodeAES(text)
print 'Encrypted string:', encoded

# decode the encoded string
decoded = DecodeAES(encoded)
print 'Decrypted string:', decoded

Removing the BuddyPress Admin Bar

I recently started developing a BuddyPress theme/plugin using WordPress 3.4.2 and BuddyPress 1.6.1.

To remove the Admin Bar at the top, you can create a file called bp-custom.php and place it in wp-content/plugins. The nice thing about bp-custom.php is that you don’t need to activate it or add any special code to make it run, as long as BuddyPress is activated.

So to remove the Admin Bar create bp-custom.php and add the following code:

<?php
function remove_bp_adminbar(){
    show_admin_bar(false);
}
add_action('after_setup_theme', 'remove_bp_adminbar');
?>

After updating and refreshing the page, your admin bar should be removed. But, be careful because this can cause issues if you don’t have alternative links to log out in place.

Cosine Similarity in MS SQL

Cosine similarity measures the angle between two vectors; applied to letter-frequency vectors, it can be used to measure the similarity between text strings. In the following code, the two input strings are converted into letter-frequency vectors and the similarity is returned as a floating-point value between 0 and 1.
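For two letter-frequency vectors s and t, the quantity being computed is the standard cosine similarity,

$$\cos(\theta) = \frac{s \cdot t}{\lVert s \rVert \, \lVert t \rVert},$$

which is 1 for identical frequency profiles and 0 for strings that share no characters. (Note that the SQL below adds 1 to the denominator to guard against division by zero, so its results are slightly deflated.)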


SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Adam Maus (http://adammaus.com)
-- Create date: 2012-06-20
-- Description: Cosine Similarity: http://en.wikipedia.org/wiki/Cosine_similarity
-- Determines the angle between two vectors
-- =============================================
CREATE FUNCTION [dbo].[cosine_distance]
(
@s nvarchar(4000), @t nvarchar(4000)
)
RETURNS float
AS
BEGIN
    -- Create an array of letter frequencies using the UNICODE values of @s and @t
    DECLARE @sLet table(letter int, freq int) -- Pretend these are vectors
    DECLARE @tLet table(letter int, freq int)
    DECLARE @i int, @j int
    DECLARE @c int -- The current character

    -- Build the frequency vector for @s
    SET @i = 1
    WHILE @i <= LEN(@s) BEGIN
        -- Use the UNICODE value for the character (the +1 offset is not strictly necessary)
        SET @c = UNICODE(SUBSTRING(@s, @i, 1)) + 1
        -- Determine whether this character is in the vector already
        SELECT @j = COUNT(letter) FROM @sLet WHERE letter = @c
        IF @j > 0 BEGIN
            UPDATE @sLet SET freq += 1 WHERE letter = @c
        END ELSE BEGIN
            INSERT INTO @sLet VALUES (@c, 1)
        END
        -- We want to keep @tLet consistent with @sLet, so insert a 0 for that letter
        SELECT @j = COUNT(letter) FROM @tLet WHERE letter = @c
        IF @j = 0 BEGIN
            INSERT INTO @tLet VALUES (@c, 0)
        END

        SELECT @i += 1
    END

    -- Build the frequency vector for @t
    SET @i = 1
    WHILE @i <= LEN(@t) BEGIN
        SET @c = UNICODE(SUBSTRING(@t, @i, 1)) + 1
        SELECT @j = COUNT(letter) FROM @tLet WHERE letter = @c
        IF @j > 0 BEGIN
            UPDATE @tLet SET freq += 1 WHERE letter = @c
        END ELSE BEGIN
            INSERT INTO @tLet VALUES (@c, 1)
        END

        -- We want to keep @sLet consistent with @tLet, so insert a 0 for that letter
        SELECT @j = COUNT(letter) FROM @sLet WHERE letter = @c
        IF @j = 0 BEGIN
            INSERT INTO @sLet VALUES (@c, 0)
        END

        SELECT @i += 1
    END

    -- Compute the similarity
    -- The numerator is the dot product of the two frequency vectors
    DECLARE @numer float
    SET @numer = 0
    SELECT @numer += s.freq * t.freq FROM @sLet s LEFT JOIN @tLet t ON s.letter = t.letter
    -- Declare the norm values and calculate the denominator for the similarity
    DECLARE @sNorm int, @tNorm int
    SET @sNorm = 0
    SET @tNorm = 0
    SELECT @sNorm += freq * freq FROM @sLet
    SELECT @tNorm += freq * freq FROM @tLet
    DECLARE @denom float
    SET @denom = SQRT(@sNorm) * SQRT(@tNorm)
    RETURN (@numer) / (@denom + 1) -- The +1 eliminates the possibility that @denom = 0
END

To “install” this script using Microsoft SQL Server Management Studio, go to your database, open Programmability > Functions, and right-click on Scalar-valued Functions to add a new function.
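You can then call it like any scalar function, for example SELECT dbo.cosine_distance(N'kitten', N'sitting'). For reference, here is a minimal Python sketch of the same letter-frequency approach (without the +1 guard in the denominator):

from collections import Counter
import math

def cosine_similarity(s, t):
  # Letter-frequency vectors, keyed by character
  sv, tv = Counter(s), Counter(t)
  # Dot product over every character that appears in either string
  numer = sum(sv[ch] * tv[ch] for ch in set(sv) | set(tv))
  denom = math.sqrt(sum(f * f for f in sv.values())) * \
          math.sqrt(sum(f * f for f in tv.values()))
  return numer / float(denom) if denom else 0.0

print cosine_similarity("kitten", "sitting")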

Multidimensional Scaling

Multidimensional Scaling (MDS) is a linear embedding method used when we only know the pairwise distances between data points. For linear systems, MDS works well with as few as 10 points when the system is 2-dimensional. For this program, you choose the window size (the embedding) for the system, and MDS identifies what the true embedding should be. The following program uses Logistic map data with r = 3.2, but you can change r to see how well MDS works as the map bifurcates and eventually becomes chaotic and nonlinear at r = 4. MDS breaks down at that point, and the embedding defaults to just 1 over the embedding you choose.
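For reference, the computation in main() below is classical MDS: form the matrix of squared pairwise distances, double-center it with the centering matrix P, and count the eigenvalues that are significantly positive,

$$B = -\tfrac{1}{2} P D^{(2)} P, \qquad P = I - \tfrac{1}{n} U,$$

where D^{(2)} holds the squared distances and U is the n-by-n matrix of all ones. The number of eigenvalues of B above a small threshold estimates the true embedding dimension.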

# Requires Numpy and Scipy libraries

from numpy import *
import scipy.linalg

r=3.2 # parameter on Logistic map
n = 10 # the number of timeseries points to use
d = 40 # your embedding choice

# Create a time series
def timeseries(number,throw, d):
  xList = []
  dataX0 = []
  t = 0
  while t < d:
    xList.append(.1)
    t += 1
  while t <= number+throw+d:
    x = r * xList[t-1] * (1 - xList[t-1]) # Logistic map
    xList.append(x)
    if t > throw:
      dataX0.append(x)
    t = t + 1
  return dataX0

# Construct an n x n centering matrix
# The form is P = I - (1/n) U where U is a matrix of all ones
def centering_matrix(n):
  P = eye(n) - 1/float(n) * ones((n,n))
  return P

def create_data(number, d):
  dataX0 = timeseries(number, 500, d)
  # Create a sliding window of size d
  data = []
  i = d
  while i < len(dataX0):
    data.append([])
    j = i - d
    while j < i:
      data[len(data)-1].append(dataX0[j])
      j = j + 1
    i = i + 1
  return data

def main():
  print "n =",n, ": number of timeseries points used"
  print "d =",d, ": embedding"

  P = centering_matrix(n)
  # Create the data with the chosen window size
  data = create_data(n, d)
  X = array(pairwise_distances(data))
  # Double-center the squared-distance matrix (classical MDS); this must be a
  # matrix product, not an elementwise product
  A = -1/2.0 * dot(dot(P, X), P)
  # Calculate the eigenvalues/vectors
  vals, vectors = scipy.linalg.eig(A)
  # Keep the real parts and count the eigenvalues significantly greater than zero
  vals = sort(real(vals))
  embedding = 0
  for x in vals:
    if x > 10**(-10):
      embedding += 1
  print "mds embedding =", embedding

# Compute the distance between vectors x1 and y1
def metric(x1, y1):
  d = 2
  summ = []
  i = 0
  while i < len(x1):
    # in this case use euclidean distance
    summ.append((x1[i] - y1[i])**d)
    i = i + 1
  return sum(summ) ** (1 / float(d))

# Return a matrix of squared pairwise distances
def pairwise_distances(data):
  distances = []
  i = 0
  while i < len(data):
    distances.append([])
    x1 = data[i]
    j = 0
    while j < len(data):
      y1 = data[j]
      distances[i].append(metric(x1, y1)**2)
      j = j + 1
    i = i + 1
  return distances

main()

Applications of Particle Swarm Optimization

Nonconvex Search Spaces
The Rastrigin function from the post In-depth details of the algorithm is a nonconvex function and therefore has a nonconvex search space. Convexity is extremely important in optimization algorithms because it has nice properties involving gradients that can make convergence guarantees possible. In a space like the Rastrigin function, particle swarm optimization is able to deal with the local minima and in many cases finds the global optimum.

Integer or Discontinuous Search Spaces
In a similar vein, integer search spaces are difficult for traditional optimization algorithms. In problems that involve integer variables, the search space is discontinuous and gradient information is rarely effective. Particle swarm optimization does not require the space to be continuous, but precautions need to be taken to position particles exactly on specific values. For more information, see “An improved PSO algorithm for solving non-convex NLP/MINLP problems with equality constraints” by Yiqing et al.

http://www.sciencedirect.com/science/article/pii/S0098135406001281

Neural Networks
One could treat the neural network weight space as a high dimensional particle swarm optimization search space. In this application of PSO, particles could be a swarm of neural networks attempting to find the lowest error on some classification or regression task. See “Particle Swarm Optimization of Neural Network Architectures and Weights” by Carvalho et al.

http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4344004%2F4344005%2F04344074.pdf%3Farnumber%3D4344074&authDecision=-203

Support Vector Machines (and Regression)
For classification and regression tasks using Support Vector Machines, the user has the ability to choose a few hyperparameters that control the kernel function, the cost associated with failing to correctly classify a training item, the loss function parameters, etc. Traditionally, grid search has been used, since the search space is rarely the same between problems and is unlikely to be convex. But because the search space is continuous, grid search requires discretizing each hyperparameter, and the number of grid points explodes combinatorially as the number of hyperparameters increases. Particle swarm optimization could be used to find the optimal set of hyperparameters by creating particles that search a space of various values for each of the hyperparameters while attempting to produce the best error on the data. To learn more, see “Particle swarm optimization for parameter determination and feature selection of support vector machines” by Lin et al.

http://www.sciencedirect.com/science/article/pii/S0957417407003752

Multi-Objective Optimization
In the spirit of optimization problems, multi-objective programs involve optimizing programs with multiple objective functions where objective functions are potentially in conflict with one another. In these problems, particle swarm optimization can be used to find a good trade-off between the different objective functions. See “Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art” by Reyes-Sierra et al.

http://www.softcomputing.net/ijcir/vol2-issu3-paper5.pdf

This post is part of a series:

  1. An overview of Particle Swarm Optimization Algorithm
  2. In-depth details of the algorithm
  3. More applications of Particle Swarm Optimization

In-depth details of Particle Swarm Optimization

In the first part of this series on Particle Swarm Optimization (PSO), I posted a general overview of the algorithm and an example of how the algorithm searches for the minimum of a parabola. In this post, I explain an example of the algorithm constructed in Python. I will end with an example problem, the Rastrigin function, a function that researchers commonly use to test their optimization algorithms.

The full Python code that I will be discussing is located here. It may be helpful to open this code since I will only be discussing specific portions of the code. If you want to run the code, please save the page as a .py (Python) file.

The file is set up in the following way:
– global variables and user-defined parameters
– function F(): the function being optimized; in the example file, I use the Rastrigin function
– function main(): the function that constructs the swarm and attempts to optimize F
– class Particle: the functions and local variables associated with each Particle

The function main constructs the swarm and iterates for a user-defined number of updates (cmax). The best velocity, position, and error are saved and output at the end of the program. It is not necessary to use the number of updates to signal the end of optimization: one could also stop the optimization once the best error reaches a certain value, or once all of the particles fall below some error bound.

During optimization, particles’ behavior is determined by the following set of equations (written here in standard PSO notation, with r1 and r2 uniform random numbers drawn from [0, 1]):
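
$$v_{ij}(t) = w\,v_{ij}(t-1) + c_1 r_1 \left(x^{*}_{ij} - x_{ij}(t-1)\right) + c_2 r_2 \left(x^{**}_{ij} - x_{ij}(t-1)\right)$$

$$x_{ij}(t) = x_{ij}(t-1) + v_{ij}(t)$$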

The first equation, v_ij(t), determines the velocity of particle i in dimension j at time step t. Particles adjust their trajectory in the search space according to their local neighborhood’s best position x**, their own best position x*, and their previous velocity. The first term of the velocity characterizes how much the previous velocity affects the new velocity, with w set to 1 in the code (the code also includes a dampening parameter that can be used to slow particles as optimization progresses). The second, “cognitive” term, weighted by c1r1, pulls particles back toward their own best previous position. The third, “social” term, weighted by c2r2, pushes particles together so they fly collaboratively through the space. The second equation, x_ij(t), determines the position of particle i in dimension j at time step t: new positions are determined by the previous position and the current velocity.

In the Python code, the particle class randomly initializes the positions and velocities (the InitPosition(self) and InitVelocity(self) functions) for each particle within some search space. After each time step in the optimization, the UpdateVelocity(self) and UpdatePosition(self) functions are called to construct new velocity and position vectors for each particle.

Within the main function, the particles’ positions are evaluated after each update as the swarm moves towards the optimal position in the space.
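
As a rough illustration of these pieces, here is a minimal sketch of the update step and evaluation loop (simplified; the method names mirror the post's Particle class, but the weights and structure here are illustrative, not the linked file):

import random

w, c1, c2 = 1.0, 2.0, 2.0  # inertia, cognitive, and social weights (illustrative values)

class Particle:
  def __init__(self, dims, p_min, p_max, v_min, v_max):
    self.pos = [random.uniform(p_min, p_max) for j in range(dims)]
    self.vel = [random.uniform(v_min, v_max) for j in range(dims)]
    self.best_pos = list(self.pos)  # x*, this particle's best position so far
    self.best_err = float("inf")

  def UpdateVelocity(self, nbest):
    # v <- w*v + c1*r1*(x* - x) + c2*r2*(x** - x)
    for j in range(len(self.vel)):
      r1, r2 = random.random(), random.random()
      self.vel[j] = (w * self.vel[j]
                     + c1 * r1 * (self.best_pos[j] - self.pos[j])
                     + c2 * r2 * (nbest[j] - self.pos[j]))

  def UpdatePosition(self):
    # x <- x + v
    for j in range(len(self.pos)):
      self.pos[j] += self.vel[j]

def Step(swarm, F):
  # Move every particle toward the swarm's best known position, then re-evaluate
  nbest = min(swarm, key=lambda p: p.best_err).best_pos
  for p in swarm:
    p.UpdateVelocity(nbest)
    p.UpdatePosition()
    err = F(p.pos)
    if err < p.best_err:
      p.best_err, p.best_pos = err, list(p.pos)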

The following graphs are from an example trial of the algorithm run on the 1-D Rastrigin function:

PSO variations are often tested on the Rastrigin function because of how simple changes to the parameter d can extend the function to higher dimensions. Additionally, the number of deep local optima is quite large, as seen in the graph below. The global optimum is at x=0. For these results, the following parameters were used:

# The dimension of the function
num_dimensions = 1
# The number of particles in the swarm
num_particles = 5

# Bounds on the positions and velocities
v_max = 5
v_min = -5
p_min = -10
p_max = 10
# The number of updates to do
cmax = 1000
# The amount to dampen the velocity after each update
dampener = 1
dampen_rate = 1
Snapshots from this trial (the swarm particles are the green dots):
– Initial swarm positions
– After 250 steps
– After 500 updates
– After 750 steps: one particle ends up very far from the rest of the swarm, possibly because it overshot where the swarm was heading
– After 1000 steps

As you can see, the swarm does not necessarily converge to the global optimum (x = 0), but the particles all end up somewhat close. If you tracked the best position, at least one particle encountered the optimal position during optimization, but it did not settle there because of how the parameters (the dampening factor, w, c1, and c2) were chosen. One of the simplest ways to improve the algorithm is to allow the velocity to decay (w < 1, or a dampening factor below 1), which should help all of the particles converge to an optimum.

Of course, this is just one example of how to set up the algorithm. Many researchers have experimented with portions of the algorithm trying to find the best way to optimize a function. As we will see later, changes to the particle’s equations and the distributions used to select r1 and r2 are just a few open problems in PSO.

This post is part of a series:

  1. An overview of Particle Swarm Optimization Algorithm
  2. In-depth details of the algorithm
  3. More applications of Particle Swarm Optimization

An Overview of Particle Swarm Optimization

This will start a series on the Particle Swarm Optimization algorithm.

The following topics will be covered:

  1. An overview of Particle Swarm Optimization Algorithm
  2. In-depth details of the algorithm
  3. More applications of Particle Swarm Optimization

Particle Swarm Optimization (PSO)

This algorithm is often used to optimize functions in rather unfriendly non-convex, non-continuous search spaces. The idea behind the algorithm involves a swarm of particles flying through a space both collaboratively and independently.

Instead of talking about particles, it is helpful to imagine that the swarm of particles is actually a flock of birds flying through a mountain range. Their goal is to find the best nesting site within this ‘search space’. The ideal nesting site has few predators, plenty of food sources, and many resources for building sturdy nests. Instead of the continuous motion that we often see in flying birds, each of the birds updates where it is going to head and how fast after each ‘turn’. So each of the birds makes a decision based on the birds around them and then they all move at the same time. This is repeated until some sort of stopping criterion has been satisfied (or the best nesting location has been found).

The following set of illustrations show how a swarm could find the minimum of a parabola.

The function we are trying to find the minimum for. In this case f(1.0) = 0.

We randomly place 5 particles within the search region. The best performing particle so far is seen in green at about 1.25.

All of the particles look at their own position and their neighbors’ and update their positions and velocities. Often they end up moving towards the best performing particle from the previous step (now seen in blue). The new best performing particle (in green) is close to 0.85.

With the next update, the particles start converging to the same positions and overlap slightly. The new best particle is close to 1.0.

Almost all of the particles converge to the correct answer in this step. However, further iterations may be necessary to determine whether the correct minimum has been achieved.

How is this useful?

We can extend this example to high-dimensional spaces, such as a 100-dimensional paraboloid, or the weight space of a neural network, where each particle becomes a network searching for the best way to fit a set of data. Other examples could include Support Vector Machines or even the optimal choice of crops for a growing season. The applications are nearly endless.

In the next section, we will go over how the algorithm actually works and an example involving the optimization of a function.

This post is part of a series:

  1. An overview of Particle Swarm Optimization Algorithm
  2. In-depth details of the algorithm
  3. More applications of Particle Swarm Optimization

Rewiring the RSS icon link for Platform

Step-by-step instructions for rewiring where your RSS link icon points to within the Platform theme (by PageLines) for WordPress:

Requirements:
1) WordPress
2) Platform theme by PageLines

Instructions
1) Navigate to the directory where the theme is stored on your server, usually wp-content > themes > platform
2) Download functions.php
3) Add the following line of code to the end of the file:

add_filter( 
    'pagelines_branding_rssurl',
    create_function('', 'return "http://feeds.feedburner.com/AdamMaus";') 
);

4) Upload functions.php and make sure that you have enabled the RSS link in the PageLines Settings under Header and Nav.
5) As of 2012-06-21, for the latest version of Platform, you instead need to download includes > core.init.php and add the filter code at the very bottom.

Estimating Lyapunov Spectra of ODEs using Python

Wolf et al. (1985) outlined an algorithm that estimates the Lyapunov spectrum of a system whose equations are known, using local Jacobian matrices and Gram-Schmidt orthonormalization. Python code is available for Wolf’s algorithm applied to discrete maps and their inverted counterparts. I have adapted this code to estimate Lyapunov spectra for continuous-time systems like the Lorenz attractor and Rossler attractor. Additionally, Python code is available to generate time series for ordinary differential equations. Lyapunov spectrum code is also available on Clint Sprott’s website.
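The estimate works by integrating the linearized equations alongside the nonlinear ones, reorthonormalizing the tangent-space vectors with Gram-Schmidt every few steps, and accumulating the logarithms of their norms; after a total time t, each exponent is approximately

$$\lambda_k \approx \frac{1}{t} \sum \ln \lVert v_k \rVert,$$

which is what the ltot, znorm, and le variables in the code below keep track of.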

Lorenz Attractor

Source Code:

import math, operator, random
h = 0.01
cmax = 1000     #   number of iterations to perform
choice = input("Which system would you like?\n (1) Rossler \n (2) Lorenz\n")
while str(choice) != "1" and str(choice) != "2":
  print("Please select 1 or 2")
  choice = input("Which system would you like?\n (1) Rossler \n (2) Lorenz\n")
print "\n",
a = 0.2
b = 0.2
c = 5.7

def derivs(x, xnew, n):
  if choice == 1:
    return Rossler(x, xnew, n)
  else:
    return Lorenz(x, xnew, n)

def Rossler(x, xnew, n):
  # Nonlinear Rossler Equations
  xnew[1]  = -x[2]-x[3]
  xnew[2]  = x[1] + a * x[2]
  xnew[3]  = b + x[3] * (x[1] - c)
  # Linearized Rossler Equations
  xnew[4]  = -1*x[7]-x[10]
  xnew[5]  = -1*x[8]-x[11]
  xnew[6]  = -1*x[9]-x[12]
  xnew[7]  = x[4] + a*x[7]
  xnew[8]  = x[5] + a*x[8]
  xnew[9]  = x[6] + a*x[9]
  xnew[10] = x[3]*x[4] + x[1]*x[10] - c*x[10]
  xnew[11] = x[3]*x[5] + x[1]*x[11] - c*x[11]
  xnew[12] = x[3]*x[6] + x[1]*x[12] - c*x[12]
  return [x, xnew]

def Lorenz(x, xnew, n):
  # Nonlinear Lorenz Equations
  xnew[1]  = 10 * (x[2] - x[1])
  xnew[2]  = -1*x[1] * x[3] + 28 * x[1] - x[2]
  xnew[3]  = x[1] * x[2] - 8/3.0 * x[3]
  # Linearized Lorenz Equations
  xnew[4]  = -10 * x[4] + 10 * x[7]
  xnew[5]  = -10 * x[5] + 10 * x[8]
  xnew[6]  = -10 * x[6] + 10 * x[9]
  xnew[7]  = 28*x[4]-x[3]*x[4] - x[7] - x[1]*x[10]
  xnew[8]  = 28*x[5]-x[3]*x[5] - x[8] - x[1]*x[11]
  xnew[9]  = 28*x[6]-x[3]*x[6] - x[9] - x[1]*x[12]
  xnew[10] = x[2]*x[4] + x[1]*x[7] - 8/3.0 * x[10]
  xnew[11] = x[2]*x[5] + x[1]*x[8] - 8/3.0 * x[11]
  xnew[12] = x[2]*x[6] + x[1]*x[9] - 8/3.0 * x[12]
  return [x, xnew] 

def timeseries(cmax):
  X0 = []
  Y0 = []
  Z0 = []
  xList = []
  yList = []
  zList = []
  changeInTime = h
  # Initial conditions
  if choice == 1:
    # Rossler
    X0.append(0.01)
    Y0.append(0.01)
    Z0.append(0.01)
  else:
    # Lorenz
    X0.append(0)
    Y0.append(1)
    Z0.append(0)
  t = 0
  while len(xList) <= cmax:
    [x,y,z] = Rk4o(X0, Y0, Z0, h, len(X0))
    X0.append(x)
    Y0.append(y)
    Z0.append(z)
    if 200 < t:
      xList.append(x)
      yList.append(y)
      zList.append(z)
    changeInTime += h
    t = t + 1
  return [xList, yList, zList]

def f(x,y,z):
  if choice == 1:
    dxdt = -y-z
  else:
    dxdt = 10 * (y - x)
  return dxdt

def g(x,y,z):
  if choice == 1:
    dydt = x + a * y
  else:
    dydt = 28 * x - y - x*z
  return dydt

def e(x,y,z):
  if choice == 1:
    dzdt = b + z * (x - c)
  else:
    dzdt = x * y - 8/3.0 * z
  return dzdt

def Rk4o(xList, yList, zList, h, t):
  # Classic fourth-order Runge-Kutta step
  k1x = h*f(xList[t-1], yList[t-1], zList[t-1])
  k1y = h*g(xList[t-1], yList[t-1], zList[t-1])
  k1z = h*e(xList[t-1], yList[t-1], zList[t-1])

  k2x = h*f(xList[t-1] + k1x/2, yList[t-1] + k1y/2, zList[t-1] + k1z/2)
  k2y = h*g(xList[t-1] + k1x/2, yList[t-1] + k1y/2, zList[t-1] + k1z/2)
  k2z = h*e(xList[t-1] + k1x/2, yList[t-1] + k1y/2, zList[t-1] + k1z/2)

  k3x = h*f(xList[t-1] + k2x/2, yList[t-1] + k2y/2, zList[t-1] + k2z/2)
  k3y = h*g(xList[t-1] + k2x/2, yList[t-1] + k2y/2, zList[t-1] + k2z/2)
  k3z = h*e(xList[t-1] + k2x/2, yList[t-1] + k2y/2, zList[t-1] + k2z/2)

  # The fourth stage uses a full step, not a half step
  k4x = h*f(xList[t-1] + k3x, yList[t-1] + k3y, zList[t-1] + k3z)
  k4y = h*g(xList[t-1] + k3x, yList[t-1] + k3y, zList[t-1] + k3z)
  k4z = h*e(xList[t-1] + k3x, yList[t-1] + k3y, zList[t-1] + k3z)

  x = xList[t-1] + k1x/6 + k2x/3 + k3x/3 + k4x/6
  y = yList[t-1] + k1y/6 + k2y/3 + k3y/3 + k4y/6
  z = zList[t-1] + k1z/6 + k2z/3 + k3z/3 + k4z/6
  return [x,y,z]

n = 3           #   number of variables in nonlinear system
nn=n*(n+1)      #   total number of variables (nonlinear + linear)
m = 0
x = []
xnew = []
v = []
ltot = []
znorm = []
gsc = []
A = []
B = []
C = []
D = []
i = 0
while i <= nn:
  x.append(0)
  xnew.append(0)
  v.append(0)
  A.append(0)
  B.append(0)
  C.append(0)
  D.append(0)
  i = i + 1

i = 0
while i <= n:
  ltot.append(0)
  znorm.append(0)
  gsc.append(0)
  i = i + 1

irate = 10    #   integration steps per reorthonormalization
io = 100      #   number of reorthonormalizations between exponent re-estimates

#   initial conditions for nonlinear maps
#   must be within the basin of attraction

# Generate a random transient before starting the initial conditions
i = 1
while i <= n:
  v[i] = 0.001
  i = i + 1

transient = random.randint(n,100000)
# Generate the initial conditions for the system
[tempx,tempy,tempz] = timeseries(transient)

v[1] = tempx[len(tempx)-1]
v[2] = tempy[len(tempy)-1]
v[3] = tempz[len(tempz)-1]

i = n+1
while i <= nn:  #   initial conditions for linearized maps
  v[i]=0        #   Don't mess with these; they are problem independent!
  i = i + 1

i = 1
while i <= n:
  v[(n+1)*i]=1
  ltot[i]=0
  i = i + 1
#print "v = ",v
t=0
w = 0
while (w < cmax):
  j = 1
  while j <= irate:
    i = 1
    while i <= nn:
      x[i]=v[i]
      i = i + 1
    [x, xnew] = derivs(x, xnew, n)
    i = 1
    while i <= nn:
      A[i] = xnew[i]
      x[i] = v[i] + (h*A[i]) / 2.0
      i = i + 1
    [x, xnew] = derivs(x, xnew, n)
    i = 1
    while i <= nn:
      B[i] = xnew[i]
      x[i] = v[i] + (h*B[i]) / 2.0
      i = i + 1
    [x, xnew] = derivs(x, xnew, n)
    i = 1
    while i <= nn:
      C[i] = xnew[i]
      x[i] = v[i] + h*C[i]
      i = i + 1
    [x, xnew] = derivs(x, xnew, n)
    i = 1
    while i <= nn:
      D[i] = xnew[i]
      v[i] = v[i] + h*(A[i] + D[i] + 2*(B[i] + C[i]))/6.0
      i = i + 1
    t = t + h
    j = j + 1

  #construct new orthonormal basis by gram-schmidt:
  znorm[1]=0  #normalize first vector

  j = 1
  while j <= n:
    znorm[1]=znorm[1]+v[n*j+1]**2
    j = j + 1

  znorm[1] = math.sqrt(znorm[1])

  j = 1
  while j <= n:
    v[n*j+1]=v[n*j+1]/znorm[1]
    j = j + 1

  #generate new orthonormal set:
  j = 2
  while j <= n:
    k = 1
    while k <= j-1:
      gsc[k]=0
      l = 1
      while l <= n:
        gsc[k]=gsc[k]+v[n*l+j]*v[n*l+k]
        l = l + 1
      k = k + 1

    k = 1
    while k <= n: #construct a new vector
      l = 1
      while l <= j-1:
        v[n*k+j]=v[n*k+j]-gsc[l]*v[n*k+l]
        l = l + 1
      k = k + 1

    znorm[j]=0     #calculate the vector's norm

    k = 1
    while k <= n: #construct a new vector
      znorm[j]=znorm[j]+v[n*k+j]**2
      k = k + 1

    znorm[j]=math.sqrt(znorm[j])

    k = 1
    while k <= n: #normalize the new vector
      v[n*k+j] = v[n*k+j] / znorm[j]
      k = k + 1

    j = j + 1

  k = 1
  while k <= n: #update running vector magnitudes
    if znorm[k] > 0:
      ltot[k] = ltot[k] + math.log(znorm[k])
    k = k + 1

  m = m + 1
  if m % io == 0 or w == cmax-1:  # re-estimate the running exponents every io reorthonormalizations
    lsum=0
    kmax=0
    k = 1
    while k <= n:
      le = ltot[k] / t
      lsum = lsum + le
      if lsum > 0:
        lsum0 = lsum
        kmax = k
      k = k + 1
  w = w + 1
if choice == 1:
  print "Rossler:"
else:
  print "Lorenz:"
print n, "LEs = "
lsum=0
kmax=0
k = 1
while k <= n:
  le = ltot[k] / t
  lsum = lsum + le
  if lsum > 0:
    lsum0 = lsum
    kmax = k
  print le
  k = k + 1

Modelling Chaotic Systems using Support Vector Regression and Python

Like neural networks, Support Vector Machines (SVMs) are powerful modelling techniques in machine learning that can be applied to a variety of tasks. SVMs are primarily used to classify data, while one of their variants, Support Vector Regression (SVR), can be used for time series analysis. In this post, we will perform SVR on chaotic time series using common Python libraries and a simple wrapper program.

The following libraries are required by the script, which I will demonstrate using Python 2.7:

  1. Numpy
  2. Scipy
  3. Scikits.Learn (a toolbox that hosts a variety of machine learning algorithms and requires Numpy and Scipy)
  4. MatPlotLib (used for graphing and visualization)
  5. Other libraries such as SetupTools may be required depending on your system

You can download these from their websites or, if you are using Windows, from the “Unofficial Windows Binaries for Python Extension Packages” page.

With the packages installed, we can begin to build a program that fits an SVR model to chaotic data. First, we define a time series. In this case, I will use the Delayed Henon Map with a delay of 4; for more information, please see my posts on that system.

The data will be embedded in a d-dimensional space.
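
For reference, the map implemented in DelayedHenon below is

$$x_t = 1 - 1.6\,x_{t-1}^{2} + 0.1\,x_{t-4},$$

and the sliding window of past values serves as the feature vector for each target point.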

def TimeSeries(cmax, transient, d):
  return DelayedHenon(cmax, transient, d)
def DelayedHenon(cmax, transient, d):
  temp = []
  rF = [] # Features
  rT = [] # Targets
  dX = []
  i = 0
  while i < d:
    dX.append(0.1)
    i = i + 1
  while i <= cmax + transient:
    x = 1 - 1.6 * dX[i-1] ** 2 + 0.1 * dX[i-4]
    if i > transient-d:
      temp.append(x)
    if i > transient:
      rT.append(x)
    dX.append(x)
    i = i + 1
  rF = SlidingWindow(temp, d)
  return {"F":rF, "T":rT}

I added a function that formats the data properly so it can be used by the SVR Fit function:

def SlidingWindow(x, d):
  # Each row holds the d values immediately preceding the target point
  # (starting the inner index at 1 would leak the target into its own features)
  i = d
  y = []
  while i < len(x):
    temp = []
    j = 0
    while j < d:
      temp.append(x[i - d + j])
      j = j + 1
    y.append(temp)
    i = i + 1
  return y

We then define the model. Note that SVR has a few user-defined parameters which can be chosen to fit the data better (for the rbf kernel the important ones are gamma, C, epsilon, and tol; the degree parameter only affects polynomial kernels):

  clf = svm.SVR(kernel='rbf', degree=d, tol=tolerance, gamma=gam, C=c, epsilon=eps)

In this case, you can choose the following parameters to produce a model with an error near 0.0035 for 1024 points taken from the time series. In practice, you would write a training routine to find these values; a simple random-search sketch follows the parameter list.

  gam = 0.117835847192
  c = 1.02352954164
  eps = 0.00163811344978
  tolerance = 0.00539604526663
  d = 5
  cmax = 1024
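
For example, a simple random search over the hyperparameters could look like the following sketch (illustrative, not the author's training routine; it assumes the Fit function defined below):

import random

def RandomSearch(trials):
  # Track the best [err, gam, c, eps] found so far
  best = [1000, None, None, None]
  for i in range(trials):
    gam = 10 ** random.uniform(-3, 1)   # sample each parameter on a log scale
    c = 10 ** random.uniform(-2, 2)
    eps = 10 ** random.uniform(-4, -1)
    [err, clf, mout, sout] = Fit(cmax, d, 0.005, gam, c, eps)
    if err < best[0]:
      best = [err, gam, c, eps]
  return best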

Finally, we create functions to fit the time series and plot the results:

def Fit(cmax, d, tolerance, gam, c, eps):
  data = TimeSeries(cmax, random.randint(0,10000), d)
  test = TimeSeries(cmax, random.randint(0,10000), d)

  x = data["F"]
  y = data["T"]

  sout = test["T"]

  clf = svm.SVR(kernel='rbf', degree=d, tol=tolerance, gamma=gam, C=c, epsilon=eps)

  try:
    m = clf.fit(x, y)
  except:
    return [1000, clf, sout, sout]

  mout = clf.predict(test["F"])

  # Determine the error for the system
  err = 0
  i = 0
  while i < len(mout):
    err += (mout[i] - sout[i])**2
    i = i + 1
  err = math.sqrt(err / (len(mout)+.0))

  return [err, clf, mout, sout]

def Plotter(err, mout, sout, SaveGraph):
  plt.title("SVR: Err=" + str(err))
  plt.scatter(sout[0:len(sout)-2], sout[1:len(sout)-1], s=1, marker='o', edgecolors='none')
  plt.scatter(mout[0:len(mout)-2], mout[1:len(mout)-1], s=1, marker='o', edgecolors='none', facecolor='r')

  if SaveGraph:
    try:
      plt.savefig("svr")  # save before show() so the figure is not already cleared
    except:
      pass
  plt.show()
  plt.clf()

Here is the final program after putting all of the functions together (and moving some stuff around) with example output:

#!/usr/bin/env python
from scikits.learn import svm
import random
import math
import matplotlib.pyplot as plt

d = 5       # Embedding dimension (also passed as the SVR degree parameter)
cmax = 1024 # Number of points to run

def TimeSeries(cmax, transient, d):
  return DelayedHenon(cmax, transient, d)

def main():
  global cmax, d
  gam = 0.117835847192
  c = 1.02352954164
  eps = 0.00163811344978
  tolerance = 0.00539604526663
  # Create and fit a model to the data; we just need to create the support vectors
  [err, svr, mout, sout] = Fit(cmax, d, tolerance, gam, c, eps)
  Plotter(err, mout, sout, False)
  print "error: ", err
def DelayedHenon(cmax, transient, d):
  temp = []
  rF = [] # Features
  rT = [] # Targets
  dX = []
  i = 0
  while i < d:
    dX.append(0.1)
    i = i + 1
  while i <= cmax + transient:
    x = 1 - 1.6 * dX[i-1] ** 2 + 0.1 * dX[i-4]
    if i > transient-d:
      temp.append(x)
    if i > transient:
      rT.append(x)
    dX.append(x)
    i = i + 1
  rF = SlidingWindow(temp, d)
  return {"F":rF, "T":rT}
def Fit(cmax, d, tolerance, gam, c, eps):
  data = TimeSeries(cmax, random.randint(0,10000), d)
  test = TimeSeries(cmax, random.randint(0,10000), d)

  x = data["F"]
  y = data["T"]

  sout = test["T"]

  clf = svm.SVR(kernel='rbf', degree=d, tol=tolerance, gamma=gam, C=c, epsilon=eps)

  try:
    m = clf.fit(x, y)
  except:
    return [1000, clf, sout, sout]

  mout = clf.predict(test["F"])

  # Determine the error for the system
  err = 0
  i = 0
  while i < len(mout):
    err += (mout[i] - sout[i])**2
    i = i + 1
  err = math.sqrt(err / (len(mout)+.0))

  return [err, clf, mout, sout]

def Plotter(err, mout, sout, SaveGraph):
  plt.title("Delayed Henon Map and SVR Model")
  p1 = plt.scatter(sout[0:len(sout)-2], sout[1:len(sout)-1], s=1, marker='o', edgecolors='none')
  p2 = plt.scatter(mout[0:len(mout)-2], mout[1:len(mout)-1], s=1, marker='o', edgecolors='none', facecolor='r')
  plt.legend([p2, p1], ["Model (Err=" + str(round(err, 4)) + ")", "Map"], loc=8)

  if SaveGraph:
    try:
      plt.savefig("svr")  # save before show() so the figure is not already cleared
    except:
      pass
  plt.draw()
  plt.show()
  plt.clf()

def SlidingWindow(x, d):
  # Each row holds the d values immediately preceding the target point
  # (starting the inner index at 1 would leak the target into its own features)
  i = d
  y = []
  while i < len(x):
    temp = []
    j = 0
    while j < d:
      temp.append(x[i - d + j])
      j = j + 1
    y.append(temp)
    i = i + 1
  return y

def WriteFile(err, BestTol, BestGam, BestC, BestEps, s):
  # Write the model, sensitivities to a file
  f = open("sensitivities", "w")
  f.write("Model Parameters:\n")
  f.write("d=" + str(d) + "\n")
  f.write("gam=" + str(BestGam) + "\n")
  f.write("c=" + str(BestC) + "\n")
  f.write("eps=" + str(BestEps) + "\n")
  f.write("tolerance=" + str(BestTol) + "\n")
  f.write("Model Data:\n")
  f.write("err=" + str(err) + "\n")
  f.write("s=" + str(s) + "\n")
  f.close()
main()

Example Output:

error:  0.00381722720161

To fit data taken from other systems, simply change the TimeSeries function.