
LITERATURE SURVEY

ABSTRACT


Novel non-volatile memory (NVM) technologies are gaining significant attention from the semiconductor industry in the race to develop a universal memory. We use Resistive Random Access Memory (R-RAM) as an example to discuss the implications of emerging non-volatile memory for design tools and architectures. Three aspects are discussed in detail: device and memory cell modeling, device/circuit co-design considerations, and novel memory architecture. The goal of these discussions is to design a high-density, low-power, high-performance non-volatile memory with a simple architecture and minimal circuit design complexity.

LEO (Low-Overhead Encryption ORAM) is an efficient Path ORAM (Oblivious RAM) encryption architecture that addresses the high write overheads of ORAM integration in NVMs while providing security equivalent to the baseline Path ORAM. LEO reduces NVM cell writes by securely decreasing the number of block encryptions during the write phase of a Path ORAM access. LEO uses a secure, two-level counter-mode encryption framework that opportunistically eliminates re-encryption of unmodified blocks, reducing NVM writes. Evaluations show that, on average, LEO decreases NVM energy by 60%, improves lifetime by 1.51x, and increases performance by 9% over the baseline Path ORAM.

In this literature survey we discuss LEO and other methods, such as R-RAM, through which non-volatile memory is enhanced both to improve system performance and to secure the data stored in NVMs.

 

 

INTRODUCTION

 

Resistance-class non-volatile memories (NVMs), such as phase change memory (PCM) and resistive RAM (RRAM), are potential DRAM alternatives because of their scalability, energy, and density advantages. Although data persistence is a desirable property of NVMs, it introduces security vulnerabilities by exposing data to confidentiality attacks. Data encryption preserves data confidentiality in NVMs; however, the memory access pattern can still reveal confidential information about the encrypted data. Oblivious RAM (ORAM) is a cryptographic primitive that thwarts access-pattern-based attacks by concealing the true memory access pattern. ORAM obfuscates (i) the address accessed, (ii) the access type (read or write), and (iii) the data being read/written. Recent research favors Path ORAM for its efficiency and simplicity.

Path ORAM organizes the memory as a binary tree composed of intermediate nodes and terminal nodes (leaves). Each node (including the leaves) is a bucket containing a fixed number of slots to store encrypted data blocks. The logical address (the program address after page table translation) of a data block is randomly mapped to one of the leaves, and the block can reside in any bucket on the path from the root to the mapped leaf. A logical address (LA) access (read or write) to the Path ORAM consists of (i) a read phase, in which all encrypted data blocks on the path from the root to the mapped leaf are fetched to the processor and decrypted, followed by (ii) a write phase, in which the fetched blocks are re-encrypted and written back to the path.
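
To make the two phases concrete, here is a minimal, illustrative Python sketch of a Path ORAM access. It omits encryption and any sophisticated eviction policy, and the names (LEVELS, Z, stash, position_map) are our own choices rather than details taken from the surveyed papers.

```python
# Minimal, illustrative Path ORAM access (no real encryption, greedy write-back).
import random

LEVELS = 4                      # tree depth (root = level 0), assumed
Z = 4                           # slots (blocks) per bucket, assumed
NUM_LEAVES = 2 ** LEVELS
# tree[level][index] is a bucket: a list of (logical_address, data) pairs
tree = [[[] for _ in range(2 ** lvl)] for lvl in range(LEVELS + 1)]
position_map = {}               # logical address -> leaf it is mapped to
stash = {}                      # on-chip buffer of decrypted blocks

def path_indices(leaf):
    """Bucket index at every level on the path from the root to `leaf`."""
    return [leaf >> (LEVELS - lvl) for lvl in range(LEVELS + 1)]

def access(addr, new_data=None):
    leaf = position_map.setdefault(addr, random.randrange(NUM_LEAVES))
    # Read phase: fetch (and conceptually decrypt) every block on the path.
    for lvl, idx in enumerate(path_indices(leaf)):
        for a, d in tree[lvl][idx]:
            stash[a] = d
        tree[lvl][idx] = []
    data = stash.get(addr)
    if new_data is not None:
        stash[addr] = new_data
    # Remap the accessed address to a fresh random leaf.
    position_map[addr] = random.randrange(NUM_LEAVES)
    # Write phase: re-encrypt and write blocks back, as deep on the path as they fit.
    for lvl, idx in reversed(list(enumerate(path_indices(leaf)))):
        fits = [a for a in stash
                if path_indices(position_map[a])[lvl] == idx][:Z]
        tree[lvl][idx] = [(a, stash.pop(a)) for a in fits]
    return data

access(5, new_data="hello")
print(access(5))                # -> "hello"
```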

LEO (Low-Overhead Encryption ORAM) is a secure and optimal encryption framework for NVM Path ORAM. LEO minimizes the redundant re-encryption of unmodified blocks during the write phase of a Path ORAM access, decreasing NVM cell writes. LEO reduces redundant re-encryptions securely by mandating that all buckets along an accessed path experience a number of block re-encryptions equal to the highest count of new/modified blocks written to any individual bucket on that path. In each bucket, the new/modified blocks and, if required, some randomly selected unmodified blocks are re-encrypted to achieve a uniform re-encryption count across all buckets; the remaining unmodified blocks are not re-encrypted. LEO preserves the security of the encryption architecture in the baseline Path ORAM and does not leak any additional information beneficial to the adversary.
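
The following short sketch illustrates the selection rule described above, i.e., which blocks in each bucket of an accessed path would be re-encrypted under LEO. The bucket/block representation and function name are our own; LEO's actual two-level counter-mode machinery is not modeled here.

```python
# Sketch of LEO's uniform re-encryption rule: every bucket on the accessed path
# re-encrypts as many blocks as the bucket with the most new/modified blocks,
# padding with randomly chosen unmodified blocks where needed.
import random

def blocks_to_reencrypt(path_buckets):
    """path_buckets: list of buckets, each a list of dicts like
    {"id": ..., "modified": True/False}. Returns, per bucket, the set of
    block ids that must be re-encrypted."""
    # Highest count of new/modified blocks in any single bucket on the path.
    target = max(sum(b["modified"] for b in bucket) for bucket in path_buckets)
    plan = []
    for bucket in path_buckets:
        modified = [b["id"] for b in bucket if b["modified"]]
        unmodified = [b["id"] for b in bucket if not b["modified"]]
        # Pad with randomly selected unmodified blocks up to the target count,
        # so every bucket shows the same number of re-encryptions externally.
        pad = random.sample(unmodified, min(target - len(modified), len(unmodified)))
        plan.append(set(modified) | set(pad))
    return plan

# Example: the middle bucket has 2 modified blocks, so every bucket on the
# path re-encrypts (up to) 2 blocks; the rest keep their old ciphertext.
path = [
    [{"id": "a", "modified": False}, {"id": "b", "modified": True}],
    [{"id": "c", "modified": True},  {"id": "d", "modified": True}],
    [{"id": "e", "modified": False}, {"id": "f", "modified": False}],
]
print(blocks_to_reencrypt(path))
```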

As traditional memory technologies such as DRAM, SRAM, and Flash approach their scaling limits, a new concept called "universal memory" has risen above the horizon. The expected characteristics of a universal memory include high density (low cost), high speed (for both read and write operations), low power (for both access and standby), random accessibility, non-volatility, and unlimited endurance.

Resistive random access memory (R-RAM) generally denotes any memory technology that relies on a resistance change to store information. Many R-RAM technologies with various storage mechanisms have been studied extensively, including (but not limited to) space-charge-limited current (SCLC), filament, programmable metallization cell (PMC), and Schottky contact and traps (SCT) devices. R-RAM not only has all the characteristics of magnetic RAM, including non-volatility, high speed, high endurance, and zero standby power, but can also achieve high density.

 

SURVEY

 

Let us discuss the first method, Low-Overhead Encryption ORAM (LEO), which improves both the performance and the security of NVMs.

To frame this discussion, we first present the threat model and then give a brief introduction to Path ORAM under that model.

The trusted computing base (TCB) consists of the processor and all on-chip data, while the off-chip memory and the processor-memory bus are not trusted. The adversary can passively monitor information (data, address, and command) on the memory bus and in the external memory. The data is encrypted; however, the attacker can analyze the plaintext addresses and commands, i.e., the memory access pattern, to expose confidential information about the encrypted data. ORAM is effective in countering access-pattern-based attacks by randomizing and obfuscating the memory access pattern.

 

ORAM Constructions

Trivial construction

A trivial ORAM simulator construction, for each read or write operation, reads from and writes to every single element in the array, performing a meaningful action only for the address specified in that single operation. The trivial solution thus scans through the entire memory for each operation. This scheme incurs a time overhead of Ω(n) for each memory operation, where n is the size of the memory.
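
A minimal sketch of this trivial construction, with an interface of our own choosing, is shown below; every access touches all n cells, so an observer of the memory sees the same pattern regardless of the address or the operation type.

```python
# Trivial ORAM: every logical access reads and writes all n cells (Omega(n) cost).
class TrivialORAM:
    def __init__(self, n):
        self.mem = [0] * n

    def access(self, addr, value=None):
        """Read mem[addr]; if `value` is given, also write it there.
        Every cell is read and written back, so the bus traffic is
        independent of `addr` and of whether this is a read or a write."""
        result = None
        for i in range(len(self.mem)):
            cell = self.mem[i]                 # read every cell
            if i == addr:
                result = cell
                if value is not None:
                    cell = value               # meaningful write only here
            self.mem[i] = cell                 # dummy write everywhere else
        return result

oram = TrivialORAM(8)
oram.access(3, value=42)
print(oram.access(3))                          # -> 42
```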

A simple ORAM scheme

A simple version of the statistically secure ORAM compiler constructed by Chung and Pass is described in the following. The compiler, on input n and a program Π with memory requirement n, outputs an equivalent oblivious program Π′.

If the input program Π uses r registers, the output program Π′ will need r + n/α + poly log n registers, where α > 1 is a parameter of the construction. Π′ uses O(n poly log n) memory, and its (worst-case) access overhead is O(poly log n).
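
For readability, the resource bounds quoted above can be collected in display form (a restatement only, using the same symbols as the preceding paragraph):

```latex
\[
\text{registers of } \Pi' \;=\; r + \frac{n}{\alpha} + \operatorname{poly}\log n,
\qquad
\text{memory of } \Pi' \;=\; O\!\big(n \operatorname{poly}\log n\big),
\qquad
\text{access overhead} \;=\; O\!\big(\operatorname{poly}\log n\big),
\quad \alpha > 1.
\]
```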

The ORAM compiler is very simple to describe. Suppose that the original program Π has instructions for basic mathematical and control operations in addition to two special instructions read(l) and write(l, v), where read(l) reads the value at location l and write(l, v) writes the value v to location l. The ORAM compiler, when constructing Π′, simply replaces each read and write instruction with the subroutines Oread and Owrite and keeps the rest of the program the same. It may be noted that this construction can be made to work even for memory requests arriving in an online fashion.
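
The following toy sketch illustrates this compilation idea: a program written against read/write runs unchanged once the oblivious subroutines are substituted. The ObliviousMemory class here is only a stand-in for the tree-based construction described next, not an implementation of it.

```python
# The "compiled" program is the original program with read(l)/write(l, v)
# replaced by oblivious subroutines Oread/Owrite; everything else is unchanged.
class ObliviousMemory:
    def __init__(self, backing):
        self.backing = backing          # placeholder for the tree T + position map

    def oread(self, l):
        # In the real construction this walks the path to Pos(b) for block b.
        return self.backing[l]

    def owrite(self, l, v):
        # Likewise, this would re-insert the updated block at the root of T.
        self.backing[l] = v

def original_program(read, write):
    """Toy program Pi using only reads/writes plus ordinary operations."""
    write(0, 7)
    write(1, read(0) * 6)
    return read(1)

# "Compiled" program Pi': same code, but read/write are now Oread/Owrite.
mem = ObliviousMemory([0] * 4)
print(original_program(mem.oread, mem.owrite))   # -> 42
```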

Memory organization of the oblivious program

The program Π′ stores a complete binary tree T of depth d = log(n/α) in its memory. Each node in T is represented by a binary string of length at most d. The root is the empty string, denoted by λ. The left and right children of a node represented by the string γ are γ0 and γ1, respectively. The program Π′ thinks of the memory of Π as being partitioned into blocks, where each block is a contiguous sequence of memory cells of size α. Thus, there are at most ⌈n/α⌉ blocks in total. In other words, memory cell r corresponds to block b = ⌊r/α⌋.

At any point in time, there is an association between the blocks and the leaves of T. To keep track of this association, Π′ also stores a data structure called the position map, denoted Pos, using O(n/α) registers. For each block b, this data structure stores the leaf of T associated with b in Pos(b).

Each node in T contains an array with at most K triples. Each triple is of the form (b, Pos(b), v), where b is a block identifier and v is the contents of the block. Here, K is a security parameter and is O(poly log n).
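
The layout just described can be sketched in a few lines; the parameter values, labels, and dict-based encoding below are our own illustrative choices, and the example omits the flush/eviction logic that keeps each bucket within its K-triple capacity.

```python
# Sketch of the data layout: a complete binary tree T of depth d = log2(n/alpha),
# a position map Pos with one entry per block, and buckets of <= K triples
# of the form (b, Pos(b), v).
import math
import random

n, alpha, K = 64, 4, 8                      # assumed toy parameters
num_blocks = math.ceil(n / alpha)
d = int(math.log2(num_blocks))              # tree depth

# Nodes of T keyed by their binary-string label: "" is the root, and the
# children of gamma are gamma + "0" and gamma + "1".
labels = [""]
for depth in range(1, d + 1):
    labels += [format(i, "0{}b".format(depth)) for i in range(2 ** depth)]
buckets = {node: [] for node in labels}     # each bucket holds at most K triples

# Position map: block b -> a uniformly random leaf label of length d.
Pos = {b: format(random.randrange(2 ** d), "0{}b".format(d))
       for b in range(num_blocks)}

def block_of(r):
    """Memory cell r lives in block floor(r / alpha)."""
    return r // alpha

# Example: insert block 5 (with dummy contents) into the root bucket, as the
# construction does after each fetch.
b = block_of(21)                            # cell 21 -> block 5
buckets[""].append((b, Pos[b], [0] * alpha))
assert len(buckets[""]) <= K
print(d, len(buckets), Pos[b])
```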

Now let us discuss the R-RAM (Resistive RAM) technique, which improves the quality and efficiency of non-volatile memory.

BASICS OF R-RAM

Although R-RAM technology involves many different storage mechanisms, there are only two "conventional" operation types in R-RAM design: unipolar switching and bipolar switching. Within this context, unipolar operation performs programming/erasing by using short and long pulses, or by using high and low voltages of the same polarity. In contrast, bipolar operation is achieved with short pulses of opposite voltage polarity. One typical unipolar switching example appears in filament-based R-RAM devices: a filament or conducting path is formed in an insulating dielectric after applying a sufficiently high voltage. Once the filament is formed, it may be set (leading to a low resistance) or reset (leading to a high resistance) by appropriate voltages. One typical bipolar switching example is the PMC device, which is composed of two solid metal electrodes, one relatively inert and the other electrochemically active, with a thin electrolyte film sandwiched between them. When a negative bias is applied to the inert electrode, metal ions in the electrolyte, along with some originating from the positive active electrode, flow into the electrolyte and are reduced at the inert electrode. Eventually the ions form a small metallic "nanowire" between the two electrodes, and the resistance between the electrodes drops dramatically. When erasing the cell, a positive bias is applied to the inert electrode; metal ions migrate back into the electrolyte, and eventually to the negatively charged active electrode. The nanowire is broken and the resistance increases again.

For unipolar R-RAM, a diode in series with the data storage cell (1D1R) can be used as the selection device. The selection device in bipolar R-RAM can be an NMOS transistor or a non-ohmic device (NOD). Memory cells with NODs can achieve high array density; however, they give rise to sneak paths in which three or more cells appear in series. In such a design, the voltage across the selected cell must be much higher than the voltage across each of the other cells in the sneak path to guarantee proper functionality, as illustrated in the sketch below.
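
As a rough, illustrative calculation (all resistance and voltage values are assumed, round numbers, not taken from the surveyed work), the sketch below estimates how a three-cell sneak path erodes the read margin of a selected cell and how a NOD that suppresses current at half-select bias restores it.

```python
# Estimating the read-margin impact of a sneak path of three unselected cells
# in series, in parallel with the selected cell of a 1R crossbar, and the
# benefit of a non-ohmic device (NOD) at low (half-select) bias.
V_READ = 1.0        # read voltage across the selected cell (V), assumed
R_LRS  = 10e3       # low-resistance state of a cell (ohms), assumed
R_HRS  = 1e6        # high-resistance state of a cell (ohms), assumed

def read_current(r_selected, r_unselected, sneak_cells=3):
    """Current seen by the sense amp: selected cell in parallel with one
    sneak path made of `sneak_cells` unselected cells in series."""
    r_sneak = sneak_cells * r_unselected
    return V_READ / r_selected + V_READ / r_sneak

def margin(r_unselected):
    """Ratio of total read current for an LRS vs an HRS selected cell;
    larger is easier to sense."""
    return read_current(R_LRS, r_unselected) / read_current(R_HRS, r_unselected)

# Worst case: every cell on the sneak path is in LRS and behaves ohmically,
# so each sneak cell sees V_READ / 3 and still conducts strongly.
print("margin, ohmic cells on sneak path:  %.1f" % margin(R_LRS))
# With a NOD in each cell, the half-selected cells look ~1000x more resistive
# at the reduced bias, so the sneak current barely disturbs the read.
print("margin, NOD-suppressed sneak path:  %.1f" % margin(1000 * R_LRS))
```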

CONCLUSION

LEO is a secure, optimal encryption architecture for cost-effective integration of ORAM with NVMs to thwart access-pattern-based data confidentiality attacks. LEO reduces redundant re-encryptions of unchanged blocks during the write phase of an ORAM access, which reduces expensive NVM writes in practice. LEO ensures security equivalent to the baseline ORAM by mandating the same block re-encryption count in all buckets on an accessed path, equal to the highest number of modified blocks in any individual bucket during that ORAM access. LEO uses a two-level counter design to realize this efficient re-encryption framework, which decreases NVM energy, improves lifetime, and enhances overall system performance.

We have also discussed the R-RAM approach to increasing the efficiency of NVMs so that memory access is fast and simple. Combining these two techniques in a single NVM would make it more efficient, and thereby improve NVM quality and performance, which reflects the direction current computer architecture is taking.

 
