Adding disorder

The purpose of this section is to provide a simple overview of the different types of disorder that can be added to KITE tight-binding calculations. The generality of our disorder implementation is one of the main features of KITE. To achieve this generality, the implementation follows a basic structure: the user specifies the disorder pattern to be included (which can be restricted to one unit cell or can connect neighboring unit cells), and the disorder pattern is reproduced randomly inside the sample, according to a defined concentration and statistical distribution.

After defining a lattice with the procedure explained in Getting Started, we can add disorder to our system. Usually, disorder is modeled either as a modification of the onsite potentials on the lattice sites or as a combination of onsite potential and bond disorder. Hence, KITE allows the user to select between these two types of disorder through predefined classes in the python interface:

  • Disorder – onsite disorder with three possible statistical distributions;
  • StructuralDisorder – generic structural disorder, a combination of onsite potential and bond disorder.

Onsite disorder

The Disorder class adds randomly generated onsite terms at the sites of a desired sublattice, following a given statistical distribution:

  • Gaussian;
  • Uniform;
  • Deterministic.

Besides the type of statistical distribution, we can select the sublattice in which the disorder will appear, as well as the mean value and the standard deviation of the selected distribution. To include onsite disorder following a given statistical distribution, we build the lattice and use the following procedure:

disorder = kite.Disorder(lattice) # define an object based on the lattice
disorder.add_disorder('A', 'Gaussian', 0.1, 0.1) # add Gaussian distributed disorder at all sites of a selected sublattice

In a single object it is possible to select multiple sublattices, each one with a different disorder distribution, following the rule disorder.add_disorder('sublattice', 'type', mean, std):

disorder.add_disorder('A', 'Gaussian', 0.1, 0.1)
disorder.add_disorder('B', 'Uniform', 0.2, 0.1)
disorder.add_disorder('C', 'Deterministic', 0.1)

In the case of deterministic disorder, the standard deviation is not set. These quantities are in the same units as the ones specified in the rest of the configuration file.

After defining the desired disorder, it can be added to the configuration file as an additional parameter in the config_system function:

kite.config_system(..., disorder=disorder)

A complete example that calculates the density of states of graphene with a different on-site disorder distribution for each sublattice can be built from the ingredients above.
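
A minimal sketch of such a script follows; it assumes the graphene lattice from pybinding's repository, and the system size and expansion parameters are illustrative rather than prescriptive:

import kite
from pybinding.repository import graphene  # assumed source of the example lattice

lattice = graphene.monolayer()

# Gaussian onsite disorder on sublattice A, uniform onsite disorder on sublattice B
disorder = kite.Disorder(lattice)
disorder.add_disorder('A', 'Gaussian', 0.1, 0.1)
disorder.add_disorder('B', 'Uniform', 0.2, 0.1)

# illustrative simulation settings
configuration = kite.Configuration(divisions=[2, 2], length=[256, 256], boundaries=[True, True], is_complex=False, precision=1)
calculation = kite.Calculation(configuration)
calculation.dos(num_points=1000, num_moments=512, num_random=1, num_disorder=1)

# export everything, including the disorder object, to the hdf file
kite.config_system(lattice, configuration, calculation, filename='onsite_disorder.h5', disorder=disorder)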

with the resulting density of states:

[figure: density of states of graphene with sublattice-dependent onsite disorder]

Structural disorder

The StructuralDisorder class adds the possibility of selecting between two different types of structural disorder: vacancies, randomly distributed with a certain concentration over the sites of a selected sublattice, and a more generic structural disorder, which is a combination of onsite terms and bond disorder (also distributed with a certain concentration).

Vacancy disorder

Vacant sites can be distributed over a single sublattice, with the concentration defined in the parent object:

struc_disorder = kite.StructuralDisorder(lattice, concentration=0.2)
struc_disorder.add_vacancy('B') # add a vacancy to a selected sublattice

IMPORTANT: to distribute vacancies over both sublattices, one needs to add the vacancies on each sublattice as a separate object of the class StructuralDisorder (unless one wants precisely the same pattern of disorder in both sublattices):

struc_disorder_A = kite.StructuralDisorder(lattice, concentration=0.1)
struc_disorder_A.add_vacancy('A')
struc_disorder_B = kite.StructuralDisorder(lattice, concentration=0.1)
struc_disorder_B.add_vacancy('B')
disorder_structural = [struc_disorder_A, struc_disorder_B]
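
The resulting list is then forwarded to the configuration step. A minimal sketch, assuming configuration and calculation objects defined as in the previous example:

kite.config_system(lattice, configuration, calculation, filename='vacancies.h5', disorder_structural=disorder_structural)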

Generic structural disorder

Before discussing this class of disorder, it is important to mention that in the pre-release version it is not possible to perform the automatic scaling of the spectrum when hopping disorder is present. In this case, it is necessary to add an extra parameter to the configuration class:

configuration = kite.Configuration(divisions=[nx, ny], length=[lx, ly], boundaries=[True, True], is_complex=False, precision=1, spectrum_range=[-10, 10])

The following example shows the definition of our most general type of disorder, which includes both onsite disorder terms and bond modifications. This type of disorder can be added as an object of the class StructuralDisorder. The procedure for adding structural disorder is the same as for adding a hopping term to the pybinding lattice object, with the single difference that the bond disorder is not bound to hopping terms starting from the [0, 0] unit cell, which is the case for hopping terms in pybinding.

For the sake of clarity, let us first define sublattices that will compose the disorder. In this case we are not restricted to a single unit cell:

node0 = [[+0, +0], 'A'] # define a node in a unit cell [i, j] selecting a single sublattice
node1 = [[+0, +0], 'B']
node2 = [[+1, +0], 'A']
node3 = [[+0, +1], 'B']
node4 = [[+0, +1], 'A']
node5 = [[-1, +1], 'B']

After the definition of a parent StructuralDisorder object, we can select the desired pattern:


struc_disorder = kite.StructuralDisorder(lattice, concentration=0.2) # define an object based on the lattice with a certain concentration

struc_disorder.add_structural_disorder(
    # add bond disorder in the form [from unit cell], 'sublattice_from', [to_unit_cell], 'sublattice_to', value:
    (*node0, *node1, 0.5),
    (*node1, *node2, 0.1),
    (*node2, *node3, 0.5),
    (*node3, *node4, 0.3),
    (*node4, *node5, 0.4),
    (*node5, *node0, 0.8),
    # in this way we can add onsite disorder in the form [unit cell], 'sublattice', value
    ([+0, +0], 'B', 0.1)
)
# It is possible to add multiple different disorder types which should be forwarded to the config_system function
# as a list.
another_struc_disorder = kite.StructuralDisorder(lattice, concentration=0.6)
another_struc_disorder.add_structural_disorder(
    (*node0, *node1, 0.05),
    (*node4, *node5, 0.4),
    (*node5, *node0, 0.02),
    ([+0, +0], 'A', 0.3)
)

Before exporting the settings to the hdf file, it is possible to define multiple disorder realizations, which will be superimposed on the clean system.
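
As noted in the comments above, multiple disorder objects are forwarded to config_system as a list. A minimal sketch, assuming the two objects defined above and existing configuration and calculation objects:

kite.config_system(lattice, configuration, calculation, filename='structural_disorder.h5', disorder_structural=[struc_disorder, another_struc_disorder])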

The following sketch gives a minimal example of how to configure the structural disorder; it assumes the graphene lattice from pybinding's repository, illustrative expansion parameters, and the manual spectrum_range discussed above:
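
import kite
from pybinding.repository import graphene  # assumed source of the example lattice

lattice = graphene.monolayer()

# nodes of the disorder pattern, restricted to one unit cell for simplicity
node0 = [[+0, +0], 'A']
node1 = [[+0, +0], 'B']

struc_disorder = kite.StructuralDisorder(lattice, concentration=0.2)
struc_disorder.add_structural_disorder(
    (*node0, *node1, 0.5),   # bond disorder between the two nodes
    ([+0, +0], 'B', 0.1)     # onsite disorder on sublattice B
)

# spectrum_range must be given manually when hopping disorder is present
configuration = kite.Configuration(divisions=[2, 2], length=[256, 256], boundaries=[True, True], is_complex=False, precision=1, spectrum_range=[-10, 10])
calculation = kite.Calculation(configuration)
calculation.dos(num_points=1000, num_moments=512, num_random=1, num_disorder=1)

kite.config_system(lattice, configuration, calculation, filename='structural_disorder.h5', disorder_structural=struc_disorder)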

with the resulting density of states:

[figure: density of states of the lattice with structural disorder]

Post-processing tools

General description

As presented in previous sections, KITE calculates the Chebyshev moments of a given expansion and stores them in the same hdf file that was generated by the configuration script. These moments are used by the post-processing tool named KITE-tools to reconstruct the requested physical quantities specified in the configuration script, such as the density of states or the optical conductivity.

In a basic setup, the python configuration script specifies the desired values for the parameters, which are exported to the hdf file together with the other settings. In this case, the hdf file containing the Chebyshev moments calculated by KITE works as an input: KITE-tools reads the parameters and the Chebyshev moments from the hdf file, uses them to calculate the desired quantities, and exports them to data files. If a parameter is not specified, a default value is used.

However, most of these quantities depend on a set of parameters (temperature, Fermi energy, …) that are not necessary for the calculation of the Chebyshev moments. Therefore, the user can modify them and recalculate quantities with KITE-tools without recomputing the Chebyshev moments, the time-consuming part of the calculation. For example, if the user wants to compute the optical conductivity at several Fermi energies, the Chebyshev moments are computed once and KITE-tools can be used to obtain the optical conductivity at each of those Fermi energies.

For this purpose, the user has the flexibility to override the parameters from the python script by specifying them in the command-line interface. As an example, with the command

./KITE-tools archive.h5 --CondOpt -F 1.2 -O -2 3 400

KITE-tools will calculate the optical conductivity with a Fermi energy of 1.2 (in the units specified in the configuration file) for 400 values of the frequency in the interval between -2 and 3 (in the same units), even if the configuration file presents different values.

Compilation

KITE-tools is compiled in a very similar way to KITE, using the CMake framework for portability.

cd tools/build
cmake ..
make

CMake will automatically search for the required libraries (HDF5 and Eigen3) and generate the appropriate Makefile.

Usage

Default usage

Its default usage is very simple:

./KITE-tools archive.h5

where archive.h5 is the HDF file that stores the output of KITE. If KITE-tools does not find this output, it will return an error. The output of KITE-tools is a set of .dat files, one for each of the requested quantities. KITE-tools may be executed without any additional parameters; all the unspecified parameters required for the calculation will be set to sensible default values. At the moment, KITE-tools is able to compute the following quantities:

  • Local density of states (LDOS)
  • Angle-resolved photoemission spectroscopy (experimental) (ARPES)
  • Density of states (DOS)
  • DC conductivity (CondDC)
  • Optical conductivity (CondOpt)
  • Second-order optical conductivity (CondOpt2)

The SingleShot DC conductivity does not require the post-processing through KITE-tools.

Advanced usage

KITE-tools supports a set of command-line instructions that force it to use user-specified parameters for each of the quantities mentioned in the previous section. The syntax is as follows:

./KITE-tools archive.h5 --quantity_to_compute1 -key_1 value_1 -key_2 value_2 --quantity_to_compute2 -key_3 value_3 ...

Each function to compute is specified after a double hyphen (--) and the parameters of each function are specified after a single hyphen (-). The list of available commands is as follows:

Function Parameter Description

--LDOS -N Name of the output file
--LDOS -M Number of Chebyshev moments
--LDOS -K Kernel to use (jackson/green). green requires a broadening parameter. Example: -K green 0.01
--LDOS -X Exclusive. Only calculate this quantity

--ARPES -N Name of the output file
--ARPES -E min max num Energy range and number of energy points
--ARPES -F Fermi energy
--ARPES -T Temperature
--ARPES -V Wave vector of the incident wave
--ARPES -O Frequency of the incident wave
--ARPES -X Exclusive. Only calculate this quantity

--DOS -N Name of the output file
--DOS -E Number of energy points
--DOS -M Number of Chebyshev moments
--DOS -K Kernel to use (jackson/green). green requires a broadening parameter. Example: -K green 0.01
--DOS -X Exclusive. Only calculate this quantity

--CondDC -N Name of the output file
--CondDC -E Number of energy points used in the integration
--CondDC -M Number of Chebyshev moments
--CondDC -T Temperature
--CondDC -S Broadening parameter of the Green's function
--CondDC -d Broadening parameter of the Dirac delta
--CondDC -F min max num Range of Fermi energies. min and max may be omitted if only one is required
--CondDC -t Number of threads
--CondDC -I If 0, CondDC uses the DOS to estimate the integration range
--CondDC -X Exclusive. Only calculate this quantity

--CondOpt -N Name of the output file
--CondOpt -E Number of energy points used in the integration
--CondOpt -M Number of Chebyshev moments
--CondOpt -T Temperature
--CondOpt -F Fermi energy
--CondOpt -S Broadening parameter of the Green's function
--CondOpt -O min max num Range of frequencies

--CondOpt2 -N Name of the output file
--CondOpt2 -E Number of energy points used in the integration
--CondOpt2 -M Number of Chebyshev moments
--CondOpt2 -R Ratio of the second frequency relative to the first one
--CondOpt2 -P If set to 1, writes all the different contributions to separate files
--CondOpt2 -T Temperature
--CondOpt2 -F Fermi energy
--CondOpt2 -S Broadening parameter of the Green's function
--CondOpt2 -O min max num Range of frequencies

All the values specified in this way are assumed to be in the same units as the ones used in the configuration file. All quantities are double-precision numbers, except for the ones representing integers, such as the number of points. This list may also be obtained from KITE-tools itself by running

./KITE-tools --help

Output

In the table below, we specify the names of the files created by KITE-tools for each calculated quantity, together with the format of the data file.

Quantity | File | Column 1 | Column 2 | Column 3
Local density of states | ldos{E}.dat | lattice position | LDOS [Re] |
ARPES | arpes.dat | k-vector | ARPES [Re] |
Density of states | dos.dat | energy | DOS [Re] | DOS [Im]
Optical conductivity | optical_cond.dat | frequency | Opt. Cond [Re] | Opt. Cond [Im]
DC conductivity | condDC.dat | Fermi energy | Cond [Re] | Cond [Im]
Second-order optical conductivity | nonlinear_cond.dat | frequency | NL Cond [Re] | NL Cond [Im]
  • All linear conductivities are in units of e2/h
  • Both Planck’s constant and electron charge are set to 1.
  • LDOS outputs one file for each requested energy; the requested energy appears as {E} in the file name.
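
Since these are plain multi-column text files, they can be loaded directly with numpy. A minimal sketch for plotting the density of states, assuming a dos.dat file with the three columns listed in the table above:

import numpy as np
import matplotlib.pyplot as plt

# columns: energy, DOS [Re], DOS [Im]
energy, dos_re, dos_im = np.loadtxt('dos.dat', unpack=True)
plt.plot(energy, dos_re)
plt.xlabel('Energy')
plt.ylabel('DOS')
plt.show()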

For more details on the type of calculations performed during post-processing, check Resources where we discuss our method.

The single-shot DC conductivity does not need any post-processing, as it is an energy-dependent calculation where the conductivity is computed on the fly. In this particular case, the data is extracted directly from the hdf file with the following python script:

import h5py # read h5 files
import numpy as np
file_name = 'archive.h5' # h5 file generated by KITE
file_input = h5py.File(file_name, 'r') # open read-only; this script only reads the data

# extract the single-shot DC conductivity and save it as a two-column text file
single_shot = file_input['Calculation']['singleshot_conductivity_dc']['SingleShot']
np.savetxt('cond_singleshot.dat', np.c_[single_shot[:, 0], single_shot[:, 1]])

Examples

Example 1

./KITE-tools h5_file.h5 --DOS -E 1024

Processes the .h5 file as usual but ignores the number of energy points for the density of states stored there. Instead, KITE-tools will use the value 1024 specified in the command.

Example 2

./KITE-tools h5_file.h5 --CondDC -E 552 -S 0.01

Processes the .h5 file but uses 552 points for the energy integration and a broadening parameter of 0.01.

Example 3

./KITE-tools h5_file.h5 --CondDC -T 0.4 -F 500

Calculates the DC conductivity using a temperature of 0.4 and 500 equidistant Fermi energies spanning the spectrum of the Hamiltonian.

Example 4

./KITE-tools h5_file.h5 --CondDC -F -1.2 2.5 30 --CondOpt -T 93

Calculates the DC conductivity using 30 equidistant Fermi energies in the range [-1.2, 2.5] and the optical conductivity using a temperature of 93.

Editing the HDF file

What is an HDF file?

Some extracts from the HDF Group tutorial:

Hierarchical Data Format 5 (HDF5) is a unique open source technology suite for managing data collections of all sizes and complexity.

HDF5 has features of other formats but it can do much more. HDF5 is similar to XML in that HDF5 files are self-describing and allow users to specify complex data relationships and dependencies. In contrast to XML documents, HDF5 files can contain binary data (in many representations) and allow direct access to parts of the file without first parsing the entire contents.

HDF5 also allows hierarchical data objects to be expressed in a natural manner (similar to directories and files), in contrast to the tables in a relational database. Whereas relational databases support tables, HDF5 supports n-dimensional datasets and each element in the dataset may itself be a complex object. Relational databases offer excellent support for queries based on field matching, but are not well-suited for sequentially processing all records in the database or for selecting a subset of the data based on coordinate-style lookup.

Editing the file

As discussed in the post-processing documentation, it is possible to calculate a quantity under different conditions with the same moments of an expansion. In this case, one needs to change some parameters in the hdf file that serve as settings for the post-processing tool. By editing the .h5 file, we can change, for example, the temperature of a conductivity calculation or the number of energy points.

For that purpose, we provide a simple python script that rewrites specific parts of our .h5 files. As discussed above, the .h5 contains hierarchical data objects that are similar to the structure of directories and files.

When modifying a parameter like the temperature, we need to locate the quantity that is going to be calculated in the .h5 file and modify the corresponding setting. The script below shows how to list the parameters associated with each quantity and how to edit one of them.

import h5py # edit h5 files
import numpy as np # used below to verify the change

file_name = 'archive.h5'
f = h5py.File(file_name, 'r+')     # open the file in read/write mode

# List all groups
print('All groups')
for key in f.keys():  # Names of the groups in HDF5 file.
    print(key)
print()

# Get the HDF5 group
group = f['Calculation']

# Checkout what keys are inside that group.
print('Single group')
for key in group.keys():
    print(key)
print()
# if you want to modify another quantity, check the list and change the subgroup below
# Get the HDF5 subgroup
subgroup = group['conductivity_dc']

# Checkout what keys are inside that subgroup.
print('Subgroup')
for key in subgroup.keys():
    print(key)
print()

new_value = 70
data = subgroup['Temperature']  # load the data
data[...] = new_value  # assign new values to data
f.close()  # close the file

# To confirm the changes were properly made and saved:

f1 = h5py.File(file_name, 'r')
print(np.allclose(f1['Calculation/conductivity_dc/Temperature'][()], new_value))
f1.close()