Commit 61d85765 authored by Xunnamius (Zara)

usecases in, small edits all over the place

parent 583c4984
......@@ -26,7 +26,7 @@ the active cipher or the inactive cipher. However, there is no technical
limitation preventing various different nuggets encrypted with three, four, or
more unique ciphers.
we use POSIX message queues to indicate intent to switch. A production-ready
We use POSIX message queues to indicate intent to switch. A production-ready
implementation would be greatly simplified by adding an ``intent'' parameter to
the POSIX \textit{read()} and \textit{write()} system calls, allowing
SwitchCrypt to more exactly map individual I/O operations to specific areas of
......@@ -52,9 +52,10 @@ three different configurations: a ``fast'' mode with parameters
$R_{max}$=$28$, $H_I$=$2$, $I_C$=$10$)}, and a ``secure'' mode with parameters
\texttt{FreestyleSecure($R_{min}$=$20$, $R_{max}$=$36$, $H_I$=$1$, $I_C$=$12$)}.
Thanks to Freestyle's output randomization, we can skip the overhead of
tracking, detecting, and handling overwrites when nuggets are using it,
offsetting the 1.6x to 3.2x slowdown compared to the ChaCha20~\cite{Freestyle}.
Thanks to Freestyle's output randomization (see \secref{design}), we can skip
the overhead of tracking, detecting, and handling overwrites when nuggets are
using it, offsetting the 1.6x to 3.2x performance loss of using Freestyle versus
ChaCha20~\cite{Freestyle}.
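To make this concrete, a minimal write-path sketch in C (the names, types, and helpers below are illustrative assumptions, not SwitchCrypt's actual code) might simply branch on whether the nugget's active cipher randomizes its output:
\begin{verbatim}
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical names for illustration only. */
enum cipher { CHACHA8, CHACHA20, FREESTYLE_FAST,
              FREESTYLE_BALANCED, FREESTYLE_SECURE };

struct nugget { enum cipher active_cipher; };

/* Freestyle randomizes its output, so overwriting a nugget does not
 * create the keystream-reuse hazard that the ChaCha variants have. */
static bool output_is_randomized(enum cipher c)
{
    return c == FREESTYLE_FAST || c == FREESTYLE_BALANCED ||
           c == FREESTYLE_SECURE;
}

static void handle_overwrite(struct nugget *n)
{
    (void)n; /* track/detect/handle the overwrite (extra metadata work) */
}

static void nugget_write(struct nugget *n, const uint8_t *buf, size_t len)
{
    if (!output_is_randomized(n->active_cipher))
        handle_overwrite(n);  /* only ChaCha nuggets pay this cost */
    /* ... encrypt buf with n->active_cipher and commit it ... */
    (void)buf; (void)len;
}

int main(void)
{
    struct nugget a = { CHACHA20 }, b = { FREESTYLE_BALANCED };
    nugget_write(&a, (const uint8_t *)"x", 1); /* pays tracking cost  */
    nugget_write(&b, (const uint8_t *)"x", 1); /* skips it entirely   */
    return 0;
}
\end{verbatim}
The point is only that the overwrite-handling branch, and its associated metadata traffic, is skipped entirely for Freestyle nuggets.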
\PUNT{\subsubsection{Implementing Cipher Switching}
......
......@@ -66,8 +66,8 @@ In this section we answer the following questions:
\figref{tradeoff-no-ratios} shows the security score versus median normalized
latency tradeoff between different stream ciphers when completing our sequential
and random I/O workloads. Trends for median hold when looking at tail latencies
as well. Each line represents one workload. From left to right: 4K, 512K, 5M,
and 40M respectively. Each symbol represents one of our ciphers. From left to
as well. Each line represents one workload. From left to right: 4KB, 512KB, 5MB,
and 40MB respectively. Each symbol represents one of our ciphers. From left to
right: ChaCha8, ChaCha20, Freestyle Fast, Freestyle Balanced, and Freestyle
Secure. As expected, of the ciphers we tested, those with higher security scores
result in higher latency for I/O operations while ciphers with less desirable
......@@ -75,10 +75,10 @@ security properties result in lower latency. The relationship between these
concerns is not simply linear, however, which exposes a rich security vs
latency/energy tradeoff space.
Besides the 4K workload, the shape of each workload follows a similar
superlinear latency-vs-security trend, hence we will mostly focus on 40MB and 4K
Besides the 4KB workload, the shape of each workload follows a similar
superlinear latency-vs-security trend, hence we will mostly focus on 40MB and 4KB
workloads going forward. Due to the overhead of metadata management and the fast
completion time of the 4K workloads (\ie{little time for amortization of
completion time of the 4KB workloads (\ie{little time for amortization of
overhead}), ChaCha8 and ChaCha20 take longer to complete than the higher scoring
Freestyle Fast. This advantage is not enough to make Freestyle Balanced or
Secure complete faster than the ChaCha variants, however.
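An illustrative cost model (ours, for exposition; not a fitted model) helps explain this inversion:
\[
T(S, c) \;\approx\; T_{\text{meta}}(c) + \frac{S}{B_c},
\]
where $S$ is the workload size, $B_c$ is cipher $c$'s effective throughput, and $T_{\text{meta}}(c)$ is the roughly fixed metadata-management cost (overwrite tracking and handling, which Freestyle nuggets largely avoid). At $S$=40MB the $S/B_c$ term dominates and the faster ChaCha variants win; at $S$=4KB the fixed $T_{\text{meta}}$ term dominates, letting Freestyle Fast finish first despite its slower cipher core.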
......@@ -86,7 +86,7 @@ Secure complete faster than the ChaCha variants, however.
Though ChaCha8 is generally faster than ChaCha20, there is some variability in
our timing setup when capturing extremely fast events occurring close together
in time. This is why ChaCha8 sometimes appears with higher latency than
ChaCha20 for normalized 4K workloads. ChaCha8 is not slower than ChaCha20.
ChaCha20 for normalized 4KB workloads. ChaCha8 is not slower than ChaCha20.
\subsection{Reaching Between Static Configuration Points}\label{subsec:2}
......@@ -113,14 +113,14 @@ and 3:1 described above}).
The point of this experiment is to determine if SwitchCrypt can effectively
transition the backing store between ciphers without devastating performance.
For the 40M, 5M, and 512K workloads (40M is shown), we see that SwitchCrypt can
achieve dynamic security/energy tradeoffs reaching points not accessible with
prior work, all with minimal overhead.
For the 40MB, 5MB, and 512KB workloads (40MB is shown), we see that SwitchCrypt
can achieve dynamic security/energy tradeoffs reaching points not accessible
with prior work, all with minimal overhead.
Again, due to the overhead of metadata management for non-Freestyle ciphers (see
\secref{implementation}) and the fast completion time of the 4K workloads
\secref{implementation}) and the fast completion time of the 4KB workloads
preventing SwitchCrypt from taking advantage of amortization, ChaCha8 and
ChaCha20 take longer to complete than the higher scoring Freestyle Fast for 4K
ChaCha20 take longer to complete than the higher scoring Freestyle Fast for 4KB
reads. This advantage is not enough to make the ratios involving Freestyle
Balanced or Secure complete faster than the ChaCha ratio variants, however.
......@@ -144,21 +144,21 @@ means these penalties are paid more often, ballooning latency.
and Selective strategies with the same configuration of ratios as
\figref{tradeoff-with-ratios}.
For the 40M, 5M, and 512K workloads (40M is shown), we see that Mirrored and
For the 40MB, 5MB, and 512KB workloads (40MB is shown), we see that Mirrored and
Selective \emph{read} workloads and the Selective \emph{write} workload achieve
parity with the Forward strategy experiments. This makes sense, as the only
overhead for Selective and Mirrored reads is determining which part of the
backing store to service the request from, a trivial process. The same applies
to Selective writes, which only need to determine where to commit data.
For the 4K Mirrored and Selective \emph{read} workloads and the Selective
For the 4KB Mirrored and Selective \emph{read} workloads and the Selective
\emph{write} workload, we see behavior similar to that in
\figref{tradeoff-with-ratios}, as expected.
Mirrored writes across all workloads are very slow. This is to be expected,
since the data is being mirrored across all areas of the backing store. In our
experiments, the backing store can be considered partitioned in half. This
overhead is most egregious for the 4K Mirrored write workload. This makes
overhead is most egregious for the 4KB Mirrored write workload. This makes
Selective preferable to Mirrored; however, Selective can never converge the
backing store to a single cipher configuration or survive the loss of an entire
region (see: \secref{usecases}).
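A simplified sketch (hypothetical names; the real implementation differs) captures why Mirrored writes are so much more expensive than Selective writes over a backing store split into two regions:
\begin{verbatim}
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch only -- not SwitchCrypt's actual structure. */
enum region { REGION_FAST, REGION_SECURE, REGION_COUNT }; /* half/half split */

static void region_write(enum region r, uint64_t nug,
                         const uint8_t *buf, size_t len)
{
    /* Stand-in for: encrypt buf with region r's cipher, write it there. */
    (void)buf; (void)len;
    printf("nugget %llu -> region %d\n", (unsigned long long)nug, (int)r);
}

/* Selective: route each write to exactly one region, chosen per nugget. */
static void selective_write(enum region wanted, uint64_t nug,
                            const uint8_t *buf, size_t len)
{
    region_write(wanted, nug, buf, len);  /* one encryption, one write */
}

/* Mirrored: commit every write to every region, so either copy can serve
 * a read and the store survives losing a whole region -- at the cost of
 * REGION_COUNT encryptions and writes per logical write. */
static void mirrored_write(uint64_t nug, const uint8_t *buf, size_t len)
{
    for (int r = 0; r < REGION_COUNT; r++)
        region_write((enum region)r, nug, buf, len);
}

int main(void)
{
    const uint8_t data[4] = {0};
    selective_write(REGION_SECURE, 7, data, sizeof data);
    mirrored_write(9, data, sizeof data);
    return 0;
}
\end{verbatim}
Each logical Mirrored write multiplies into one encrypt-and-write per region, which is also exactly what buys Mirrored its ability to converge to a single cipher or tolerate the loss of a region.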
......@@ -167,21 +167,21 @@ region (see: \secref{usecases}).
From the results of the previous three experiments, we calculate that Forward
switching has between \TODO{X} and \TODO{Y} latency overhead compared to
baseline I/O, with average overhead at \TODO{XX} for 40M, 5M and 512K workloads
and average overhead at \TODO{YY} for 4K read workloads and \TODO{YYY} for 4K
write workloads. There is no spatial overhead with the Forward switching
strategy.
baseline I/O, with average overhead at \TODO{XX} for 40MB, 5MB and 512KB
workloads and average overhead at \TODO{YY} for 4KB read workloads and
\TODO{YYY} for 4KB write workloads. There is no spatial overhead with the
Forward switching strategy.
Similarly, we calculate that Selective switching has between \TODO{X} and
\TODO{Y} latency overhead compared to baseline I/O, with average overhead at
\TODO{XX} for 40M, 5M and 512K workloads and average overhead at \TODO{YY} for
4K read workloads and \TODO{YYY} for 4K write workloads. Read overhead. Spatial
overhead was half of all writable space on the backing store.
\TODO{XX} for 40MB, 5MB and 512KB workloads and average overhead at \TODO{YY}
for 4KB read workloads and \TODO{YYY} for 4KB write workloads. Spatial overhead
was half of all writable space on the backing store.
Finally, we calculate that Mirrored switching has between \TODO{X} and \TODO{Y}
read latency overhead and between \TODO{X} and \TODO{Y} write overhead compared
to baseline I/O, with average overhead at \TODO{XX}/\TODO{XXX} for 40M, 5M and
512K read/write workloads and average overhead at \TODO{YY}/\TODO{YYY} for 4K
to baseline I/O, with average overhead at \TODO{XX}/\TODO{XXX} for 40MB, 5MB and
512KB read/write workloads and average overhead at \TODO{YY}/\TODO{YYY} for 4KB
read/write workloads. Spatial overhead was half of all writable space on the
backing store.
......
\section{SwitchCrypt Case Studies}\label{sec:usecases}
\TODO{You need some sort of overview paragraph that tells people what is to come
in this section and why they should care about it. For example, you want to say
something about these case studies cover a wide range of situations including...
They also demonstrate uses of both temporal and spatial switching... Basically,
make the argument that these case studies provide good coverage of all the
things that you discussed earlier in the paper.}
In this section, we provide case studies and empirical results demonstrating the
practical utility of cipher switching. We cover a wide range of situations,
highlighting concerns such as configuration convergence, trading off writable
space, meeting latency goals, and keeping within an energy budget. We also
demonstrate both temporal and spatial switching strategies, exploring the range
of conditions under which each is optimal.
\subsection{Balancing Security Goals with a Constrained Energy Budget}
......@@ -50,7 +50,7 @@ slightly more power in the short term, we stay within our energy budget and
finish before the device dies. Further, when we get our device to a charger,
SwitchCrypt can converge nuggets back to Freestyle Balanced.
On average, using Forward cipher switching results in a \TODO{XXX} total energy
On average, using Forward cipher switching resulted in a \TODO{XXX} total energy
use reduction.
\subsection{Variable Security Regions}
......@@ -61,26 +61,24 @@ single cipher. We demonstrate \emph{Variable Security Regions} (VSR), where we
can choose to encrypt select files or portions of files with different keys and
ciphers below the filesystem level.
The goal is that if only a small percentage of the data needs the strongest
encryption, then only a small percentage of the data should have that associated
overhead. Using prior techniques, either all the data would be stored with high
overhead, the critical data would be stored without sufficient security, or the
data would have to be split among separate files and stored across partitioned
stores.
Communicating classified materials, corporate secrets, etc. require the highest
level of discretion when handled, yet sensitive information like this can
appears within a (much) larger amount of data that we value less. In this
scenario, a user wants to indicate one or more regions of a file are more
sensitive than the others. For example, perhaps banking transaction information
is littered throughout a document; perhaps passwords and other sensitive
information exists within several much larger files.
We begin by writing 10 5MB and 4KB files to unique SwitchCrypt instances using
ChaCha8 and again on instances using Freestyle Balanced. We repeat this on a
SwitchCrypt instance using Selective switching with a 3:1 ratio of ChaCha8
nugget I/O operations versus Freestyle Balanced operations. We repeat this
experiment three times.
Storing classified materials, corporate secrets, etc. requires the highest level
of discretion, yet sensitive information like this can appear within a (much)
larger amount of data that we value less. But if only a small percentage of the
data needs the strongest encryption, then only a small percentage of the data
should carry the associated overhead. In this scenario, a user wants to indicate
that one or more regions of a file are more sensitive than the others. For
example, perhaps banking transaction information is littered throughout a
document; perhaps passwords and other sensitive information exist within several
much larger files. Using prior techniques, either all the data would be stored
with high overhead, the critical data would be stored without sufficient
security, or the data would have to be split among separate files and managed
across partitioned stores.
We begin by writing 10 5MB and 4KB files (simulating larger and smaller VSRs) to
unique SwitchCrypt instances using ChaCha8, and again on instances using
Freestyle Balanced. We repeat this on a SwitchCrypt setup using Selective
switching with a 3:1 ratio of ChaCha8 nugget I/O operations versus Freestyle
Balanced operations. We repeat this experiment three times.
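One simple way to realize the 3:1 split, sketched here with hypothetical names (a deployed VSR would instead follow the user's sensitivity annotations rather than a fixed pattern), is to map every fourth nugget to the stronger cipher:
\begin{verbatim}
#include <stdint.h>

enum cipher { CHACHA8, FREESTYLE_BALANCED };

/* Assign ciphers to nuggets in a 3:1 ratio: three ChaCha8 nuggets for
 * every Freestyle Balanced nugget. Hypothetical assignment policy. */
enum cipher cipher_for_nugget(uint64_t nugget_index)
{
    return (nugget_index % 4 == 3) ? FREESTYLE_BALANCED : CHACHA8;
}
\end{verbatim}
Any assignment with the same proportions would exercise the same Selective-switching machinery.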
\begin{figure}[ht] \textbf{VSR Use Case: ChaCha8 vs Freestyle Secure Sequential
4KB, 5MB Performance}\par\medskip
......@@ -92,14 +90,14 @@ experiment three times.
\end{figure}
In \figref{usecase-vsr-bar}, we see the sequential read and write performance of
4K and 5M workloads when nuggets are encrypted exclusively with ChaCha8 or
Freestyle Balanced. Between them, we see SwitchCrypt Selective switching 3:1
ratio I/O results.
4KB and 5MB workloads when nuggets are encrypted exclusively with ChaCha8 or
Freestyle Balanced. Between them, we see Selective switching 3:1 ratio I/O
results.
Our goal is to use VSRs to keep our sensitive data secure while keeping the
performance and energy use benefits of using a fast cipher for the majority of
I/O operations. On average, using SwitchCrypt Selective switching versus prior
work results in a \TODO{XXX} reduction in latency.
Our goal is to use VSRs to keep our sensitive data secure while retaining
the performance and battery life benefits of using a fast cipher for the
majority of I/O operations. On average, using SwitchCrypt Selective switching
versus prior work results in a \TODO{XXX} reduction in latency.
\subsection{Responding to End-of-Life Slowdown in Solid State Drives}
......@@ -117,8 +115,8 @@ offset some of the performance loss by switching the ciphers of high traffic
nuggets to the fastest cipher available using Forward switching.
We begin by writing 10 40MB files to SwitchCrypt for each cipher as a baseline.
We then introduce a delay into SwitchCrypt I/O of $20ms$ and repeat the
experiment three times.
We then introduce a $20ms$ delay into SwitchCrypt I/O, simulating drive
slowdown, and repeat the experiment three times.
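The mitigation policy being exercised can be sketched as follows (the threshold, counters, and helper names are assumptions for illustration, not our prototype's exact mechanism):
\begin{verbatim}
#include <stddef.h>
#include <stdint.h>

enum cipher { CHACHA8, CHACHA20, FREESTYLE_FAST,
              FREESTYLE_BALANCED, FREESTYLE_SECURE };

struct nugget {
    enum cipher cipher;
    uint64_t    accesses;  /* bumped on every read/write to this nugget */
};

/* Stand-in for the Forward switching strategy, which transitions a nugget
 * to the target cipher without consuming extra backing-store space. */
static void forward_switch(struct nugget *n, enum cipher target)
{
    n->cipher = target;
}

#define HOT_THRESHOLD 1000  /* assumed cutoff for "high traffic" */

/* Once end-of-life slowdown is detected, demote only high-traffic nuggets
 * to the fastest available cipher (ChaCha8 in our experiments); cold
 * nuggets keep their stronger ciphers. */
static void mitigate_eol_slowdown(struct nugget *nuggets, size_t count)
{
    for (size_t i = 0; i < count; i++)
        if (nuggets[i].accesses > HOT_THRESHOLD &&
            nuggets[i].cipher != CHACHA8)
            forward_switch(&nuggets[i], CHACHA8);
}
\end{verbatim}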
\begin{figure}[ht] \textbf{SSD EoL Use Case: Latency-Security Tradeoff vs
Goals}\par\medskip
......@@ -135,7 +133,7 @@ changed, we see increased latency in the delayed workloads.
Our goal is to remain under the latency ceiling while remaining above the
security floor. Thanks to Forward switching, accesses to highly trafficked areas
of the drive can remain performant even during EoL.
of the drive can remain performant even during drive end-of-life.
\subsection{Custody Panic: Securing Device Data Under Duress}
......@@ -144,17 +142,17 @@ advantage of more energy-efficient high-performance ciphers while retaining the
ability to quickly converge the entire backing store to a single high-security
cipher leveraging SSD Instant Secure Erase (ISE).
Nation-state and other ``adversaries'' have extensive compute resources,
knowledge of side-channels, and access to technology like quantum computers.
Suppose a scientist were attempting to re-enter her country through a border
entry point when she is stopped. Further suppose her laptop containing sensitive
priceless research data is confiscated from her custody. Being a security
researcher, she has a chance to trigger a remote wipe, where the laptop uses
Instant Secure Erase to reset its internal storage, permanently destroying all
her data. While she certainly doesn't want her data falling into the wrong
hands, she cannot afford to lose that data either. In such a scenario, it would
be useful if, instead of destroying the data, the storage layer could switch
itself to a more secure state as quickly as possible.
Nation-state and other adversaries have extensive compute resources, knowledge
of obscure vulnerabilities and side channels (\eg{Heartbleed, Spectre}), and
access to technology like quantum computers. Suppose a scientist is attempting
to re-enter her country through a border entry point when she is stopped.
Further suppose her laptop, containing sensitive and priceless research data, is
confiscated. Being a security researcher, she has a chance to trigger a remote
wipe, where the laptop uses Instant Secure Erase to reset its internal storage,
permanently destroying all her data. While she certainly doesn't want her data
falling into the wrong hands, she cannot afford to lose that data either. In
such a scenario, it would be useful if, instead of destroying the data, the
storage layer could switch itself to a more secure state as quickly as possible.
\begin{figure}[ht] \textbf{Custody Panic Use Case: Security Goals vs Time
Constraint}\par\medskip
......@@ -184,5 +182,3 @@ and the Mirrored strategy, we can quickly and practically converge the backing
store to this locked-down state. With prior work, either the data is too weakly
encrypted or the device becomes too slow for daily use (latency ceiling). In
exchange, we trade off half of our drive's writable space.
\TODO{Again, need some summary of what we just saw in this section. What are the lessons learned from these four case studies? How do they relate to the other points in the paper?}