DEADLINES
Part 1 Due: Sunday, April 24, 2022
Due: Sunday, May 1, 2022
Submitting the first part of lab 2 on time is worth 10% of your lab 2 final grade and will be graded all-or-nothing. As in the case of lab 1, we will only visually inspect your implementation at this point. We will NOT run any unit tests. However, you are strongly advised to ensure that your code passes the tests. Of course, when you submit your solution for the entire lab, we will run all unit tests (and additional tests also).
In this lab assignment, you will write a set of operators for SimpleDB to implement table modifications (e.g., insert and delete records), selections, and joins. These will build on top of the foundation that you wrote in Lab 1 to provide you with a database system that can perform simple queries over multiple tables.
Additionally, we ignored the issue of buffer pool management in Lab 1: we have not dealt with the problem that arises when we reference more pages than we can fit in memory over the lifetime of the database. In Lab 2, you will design an eviction policy to flush stale pages from the buffer pool.
You still do not need to implement transactions or locking in this lab.
The remainder of this document gives some suggestions about how to start coding, describes a set of exercises to help you work through the lab, and discusses how to hand in your code. This lab requires you to write a fair amount of code, so we encourage you to start early!
You will need to add these new files to your release. The easiest way to do this is to untar the new code in the same directory as your top-level simpledb directory, as follows:
$ cp -r CSC553-lab1 CSC553-lab2
$ wget http://dice.cs.depaul.edu/courses/553/labs/lab2/CSC553-lab2-supplement.tar.gz
$ tar -xvzf CSC553-lab2-supplement.tar.gz
As before, we strongly encourage you to read through this entire document to get a feel for the high-level design of SimpleDB before you write code.
We suggest exercises along this document to guide your implementation, but you may find that a different order makes more sense for you. As before, we will grade your assignment by looking at your code and verifying that you have passed the tests for the ant targets test and systemtest. See Section 3.4 for a complete discussion of grading and the list of tests you will need to pass.
Here's a rough outline of one way you might proceed with your SimpleDB implementation; more details on the steps in this outline, including exercises, are given in Section 2 below.
- Implement the operators Filter and Join and verify that their corresponding tests work. The Javadoc comments for these operators contain details about how they should work. We have given you implementations of Project and OrderBy which may help you understand how other operators work.
- Implement the tuple insertion, deletion, and page eviction methods in BufferPool. You do not need to worry about transactions at this point.
- Implement the Insert and Delete operators. Like all operators, Insert and Delete implement DbIterator, accepting a stream of tuples to insert or delete and outputting a single tuple with an integer field that indicates the number of tuples inserted or deleted. These operators will need to call the appropriate methods in BufferPool that actually modify the pages on disk. Check that the tests for inserting and deleting tuples work properly.
Note that SimpleDB does not implement any kind of consistency or integrity checking, so it is possible to insert duplicate records into a file and there is no way to enforce primary or foreign key constraints.
At this point you should be able to pass all the tests in the ant systemtest target, which is the goal of this lab.
Finally, you might notice that the iterators in this lab extend the Operator class instead of implementing the DbIterator interface. Because the implementation of next/hasNext is often repetitive, annoying, and error-prone, Operator implements this logic generically, and only requires that you implement a simpler readNext. Feel free to use this style of implementation, or just implement the DbIterator interface if you prefer. To implement the DbIterator interface, remove extends Operator from the iterator classes and put implements DbIterator in its place. Personally, I found implementing DbIterator more intuitive, since the interface is already familiar from Lab 1.
Filter: This operator only returns tuples that satisfy a Predicate that is specified as part of its constructor. Hence, it filters out any tuples that do not match the predicate.
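For concreteness, here is a minimal sketch of what the filtering logic might look like if you use the Operator/readNext style described above. It assumes the constructor stored the predicate and child in the fields shown and that Predicate exposes a filter(Tuple) method as in Lab 1; your field names, and whether the hook is called readNext or fetchNext, may differ in your version of the skeleton.

// Sketch only: the next-tuple hook inside Filter (which extends Operator).
private Predicate pred;    // set in Filter(Predicate p, DbIterator child)
private DbIterator child;  // the single child operator

protected Tuple readNext() throws DbException, TransactionAbortedException {
    // Pull tuples from the child until one satisfies the predicate.
    while (child.hasNext()) {
        Tuple t = child.next();
        if (pred.filter(t))
            return t;      // emit the first matching tuple
    }
    return null;           // signal that no more matching tuples exist
}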
Join: This operator joins tuples from its two children according to a JoinPredicate that is passed in as part of its constructor. We only require a simple nested loops join, but you may explore more interesting join implementations. Describe your implementation in your lab writeup.
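As a starting point, the nested-loops logic can be sketched roughly as below. This assumes the constructor stored the two children and the JoinPredicate in fields, that JoinPredicate exposes filter(Tuple, Tuple), and that you write a small helper (here the hypothetical combine()) that builds an output tuple whose schema is the concatenation of the children's TupleDescs.

// Sketch only: stateful nested-loops join inside Join (which extends Operator).
private JoinPredicate pred;
private DbIterator child1, child2;
private Tuple outer = null;            // current tuple from the left child

protected Tuple readNext() throws DbException, TransactionAbortedException {
    while (outer != null || child1.hasNext()) {
        if (outer == null) {
            outer = child1.next();     // advance the outer loop
            child2.rewind();           // restart the inner scan for this outer tuple
        }
        while (child2.hasNext()) {
            Tuple inner = child2.next();
            if (pred.filter(outer, inner))
                return combine(outer, inner);  // hypothetical helper that glues the
                                               // fields of both tuples together
        }
        outer = null;                  // inner scan exhausted; move to the next outer tuple
    }
    return null;
}

Because readNext is called once per output tuple, the position of both child iterators must survive between calls, which is why the current outer tuple is kept in a field rather than a local variable.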
Removing tuples: To remove a tuple, you will need to implement deleteTuple. Tuples contain RecordIDs which allow you to find the page they reside on, so this should be as simple as locating the page a tuple belongs to and modifying the header of the page appropriately.
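A rough sketch of that flow is below. It assumes the Lab 1 accessors Tuple.getRecordId() and RecordId.getPageId(), that BufferPool.getPage() takes a transaction id, a page id, and a permissions flag, and that deleteTuple returns the modified page; the exact signature (in particular the return type) varies between SimpleDB versions, so match whatever the skeleton in HeapFile.java declares.

// Sketch only: HeapFile.deleteTuple, always going through the buffer pool.
public Page deleteTuple(TransactionId tid, Tuple t)
        throws DbException, TransactionAbortedException {
    // The tuple's RecordId tells us which page it lives on.
    PageId pid = t.getRecordId().getPageId();
    HeapPage page = (HeapPage) Database.getBufferPool()
            .getPage(tid, pid, Permissions.READ_WRITE);
    page.deleteTuple(t);        // clears the corresponding slot bit in the header
    page.markDirty(true, tid);  // the page now differs from its on-disk copy
    return page;
}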
Adding tuples: The insertTuple
method in
HeapFile.java
is responsible for adding a tuple to a heap
file. To add a new tuple to a HeapFile, you will have to find a page with
an empty slot. If no such pages exist in the HeapFile, you
need to create a new page and append it to the physical file on disk. You will
need to ensure that the RecordID in the tuple is updated correctly.
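The sketch below shows one way this might look. It assumes the Lab 1 helpers numPages(), getId(), HeapPage.getNumEmptySlots(), HeapPage.createEmptyPageData(), and writePage(), and that HeapPage's constructor takes a HeapPageId and a byte array; names, signatures, and the return type may differ in your skeleton, and there are other acceptable ways to append the new page.

// Sketch only: HeapFile.insertTuple -- reuse a page with a free slot, or grow the file.
public Page insertTuple(TransactionId tid, Tuple t)
        throws DbException, IOException, TransactionAbortedException {
    // First, look for an existing page with an empty slot (via the buffer pool).
    for (int i = 0; i < numPages(); i++) {
        HeapPageId pid = new HeapPageId(getId(), i);
        HeapPage page = (HeapPage) Database.getBufferPool()
                .getPage(tid, pid, Permissions.READ_WRITE);
        if (page.getNumEmptySlots() > 0) {
            page.insertTuple(t);       // HeapPage sets the tuple's RecordId here
            page.markDirty(true, tid);
            return page;
        }
    }
    // No free slot anywhere: append a new, empty page to the physical file,
    // then insert into it through the buffer pool like any other page.
    HeapPageId newPid = new HeapPageId(getId(), numPages());
    writePage(new HeapPage(newPid, HeapPage.createEmptyPageData()));
    HeapPage page = (HeapPage) Database.getBufferPool()
            .getPage(tid, newPid, Permissions.READ_WRITE);
    page.insertTuple(t);
    page.markDirty(true, tid);
    return page;
}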
To implement HeapPage, you will need to modify the header bitmap for methods such as insertTuple() and deleteTuple(). You may find that the getNumEmptySlots() and isSlotUsed() methods we asked you to implement in Lab 1 serve as useful abstractions. Note that there is a markSlotUsed method provided as an abstraction to modify the filled or cleared status of a tuple in the page header.
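If the provided markSlotUsed() is still a stub in your skeleton, the bit manipulation it needs looks roughly like this, assuming your Lab 1 header stores one bit per slot in a byte array with the lowest-numbered slot in the least significant bit of the first byte; adjust it to the bit ordering you actually used.

// Sketch only: set or clear the header bit for slot i in HeapPage.
private void markSlotUsed(int i, boolean value) {
    int byteIndex = i / 8;                        // which header byte holds the bit
    int bitIndex  = i % 8;                        // position of the bit in that byte
    if (value)
        header[byteIndex] |=  (1 << bitIndex);    // set the bit: slot is now filled
    else
        header[byteIndex] &= ~(1 << bitIndex);    // clear the bit: slot is now empty
}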
Note that it is important that the HeapFile.insertTuple() and HeapFile.deleteTuple() methods access pages using the BufferPool.getPage() method; otherwise, your implementation of transactions in the next lab will not work properly.
Implement the following skeleton methods in src/simpledb/BufferPool.java:
These methods should call the appropriate methods in the HeapFile that belong to the table being modified (this extra level of indirection is needed to support other types of files — like indices — in the future).
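For example, the two methods might look roughly like the sketch below. It assumes the catalog lookup is named getDatabaseFile() (older SimpleDB versions call it getDbFile()) and that the thrown-exception lists match your skeleton; depending on your design you may also want to record the returned dirty pages in the pool.

// Sketch only: BufferPool delegating tuple modifications to the owning DbFile.
public void insertTuple(TransactionId tid, int tableId, Tuple t)
        throws DbException, IOException, TransactionAbortedException {
    DbFile file = Database.getCatalog().getDatabaseFile(tableId);
    file.insertTuple(tid, t);   // the HeapFile marks the touched page(s) dirty
}

public void deleteTuple(TransactionId tid, Tuple t)
        throws DbException, IOException, TransactionAbortedException {
    // The tuple's RecordId identifies the table (and page) it belongs to.
    int tableId = t.getRecordId().getPageId().getTableId();
    DbFile file = Database.getCatalog().getDatabaseFile(tableId);
    file.deleteTuple(tid, t);
}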
At this point, your code should pass the unit tests in HeapPageWriteTest and HeapFileWriteTest. We have not provided additional unit tests for HeapFile.deleteTuple() or BufferPool.
Next, you will implement the Insert and Delete operators.
For plans that implement insert
and delete
queries,
the top-most operator is a special Insert
or Delete
operator that modifies the pages on disk. These operators return the number
of affected tuples. This is implemented by returning a single tuple with one
integer field, containing the count.
Insert: This operator adds tuples read from its child operator to the tableid specified in its constructor. It should use the BufferPool.insertTuple() method to do this.
Delete: This operator deletes tuples read from its child operator from the tableid specified in its constructor. It should use the BufferPool.deleteTuple() method to do this.
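A sketch of the Insert side is below (Delete is symmetric, calling BufferPool.deleteTuple() instead). It assumes the constructor stored the transaction id, child, and table id in fields, that a one-column integer TupleDesc can be built with the Type[]-only constructor from Lab 1, and that java.io.IOException is imported; note that the operator must return its count tuple exactly once and null on every later call.

// Sketch only: the Insert operator's next-tuple hook.
private TransactionId tid;     // set in the constructor
private DbIterator child;      // stream of tuples to insert
private int tableId;           // target table
private boolean done = false;  // ensures the count is returned only once

protected Tuple readNext() throws DbException, TransactionAbortedException {
    if (done)
        return null;                       // already reported the count
    int count = 0;
    try {
        while (child.hasNext()) {
            Database.getBufferPool().insertTuple(tid, tableId, child.next());
            count++;
        }
    } catch (IOException e) {
        throw new DbException("insert failed: " + e.getMessage());
    }
    done = true;
    // Emit a single one-column tuple containing the number of inserted tuples.
    Tuple result = new Tuple(new TupleDesc(new Type[]{ Type.INT_TYPE }));
    result.setField(0, new IntField(count));
    return result;
}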
At this point, your code should pass the unit tests in InsertTest. We do not provide unit tests for Delete. Furthermore, you should be able to pass the InsertTest and DeleteTest system tests.
The buffer pool should hold no more pages than the limit given by its constructor argument numPages. Now, you will choose a page eviction policy and instrument any previous code that reads or creates pages to implement your policy.
When more than numPages pages are in the buffer pool, one page should be evicted from the pool before the next is loaded. The choice of eviction policy is up to you; it is not necessary to do something sophisticated. A simple policy is to evict the least frequently used page. You may choose to combine frequency with recency, but that requires somewhat more bookkeeping. Describe your policy in the lab writeup.
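One possible shape for that bookkeeping is sketched below, assuming the pool itself is a Map<PageId, Page>, that getPage() calls recordAccess() on every request, and that the java.util and java.io imports are in place; this is just one way to realize a least-frequently-used policy, not a required design.

// Sketch only: least-frequently-used bookkeeping inside BufferPool.
private final Map<PageId, Page> pages = new HashMap<>();           // cached pages
private final Map<PageId, Integer> accessCounts = new HashMap<>(); // per-page use counts

// Call this from getPage() every time a page is requested.
private void recordAccess(PageId pid) {
    accessCounts.merge(pid, 1, Integer::sum);
}

// Evict the cached page with the smallest access count.
private synchronized void evictPage() throws DbException {
    PageId victim = null;
    int fewest = Integer.MAX_VALUE;
    for (PageId pid : pages.keySet()) {
        int count = accessCounts.getOrDefault(pid, 0);
        if (count < fewest) {
            fewest = count;
            victim = pid;
        }
    }
    if (victim == null)
        throw new DbException("buffer pool is empty; nothing to evict");
    try {
        flushPage(victim);          // write it back if dirty (see below)
    } catch (IOException e) {
        throw new DbException("could not flush evicted page: " + e.getMessage());
    }
    pages.remove(victim);
    accessCounts.remove(victim);
}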
Notice that BufferPool
asks you to implement
a flushAllPages()
method. This is not something you would ever
need in a real implementation of a buffer pool. However, we need this method
for testing purposes. You should never call this method from any real code.
Because of the way we have implemented ScanTest.cacheTest, you will
need to ensure that your flushPage and flushAllPages methods
do not evict pages from the buffer pool to properly pass
this test.
flushAllPages should call flushPage on all pages in the BufferPool,
and flushPage should write any dirty page to disk and mark it as not
dirty, while leaving it in the BufferPool.
The only method that should remove a page from the buffer pool is evictPage, which should call flushPage on any dirty page it evicts.
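Under the same Map<PageId, Page> assumption as the eviction sketch above, flushing might look roughly like this; it relies on Page.isDirty() returning the dirtying TransactionId (or null) and on the catalog lookup being named getDatabaseFile() (or getDbFile() in older versions), both as in Lab 1. The visibility modifiers in your skeleton may differ.

// Sketch only: flush a single page to disk without removing it from the pool.
private synchronized void flushPage(PageId pid) throws IOException {
    Page page = pages.get(pid);
    if (page == null)
        return;                               // not cached; nothing to flush
    if (page.isDirty() != null) {             // isDirty() returns the dirtying tid, or null
        DbFile file = Database.getCatalog().getDatabaseFile(pid.getTableId());
        file.writePage(page);                 // persist the current contents
        page.markDirty(false, null);          // now clean, but it stays in the pool
    }
}

// Sketch only: flush every cached page; used by the tests, never by real code.
public synchronized void flushAllPages() throws IOException {
    for (PageId pid : pages.keySet())
        flushPage(pid);
}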
Fill in the flushPage() method, and add any helper methods you need to implement page eviction, in src/simpledb/BufferPool.java. If you did not implement writePage() in HeapFile.java above, you will also need to do that here. (For part 1 of this lab, you do not need to implement writePage().)
At this point, your code should pass the EvictionTest system test.
Since we will not be checking for any particular eviction policy, this test works by creating a BufferPool with 16 pages (NOTE: while DEFAULT_PAGES is 50, we are initializing the BufferPool with fewer!), scanning a file with many more than 16 pages, and seeing whether the memory usage of the JVM increases by more than 5 MB. If you do not implement an eviction policy correctly, you will not evict enough pages, will exceed the size limit, and will therefore fail the test.
You have now completed the code for this lab. Good work!
The following code implements a simple join query between two tables, each consisting of three columns of integers. (The files some_data_file1.dat and some_data_file2.dat contain the binary representations of the pages of these tables.) This code is equivalent to the SQL statement:

SELECT * FROM some_data_file1, some_data_file2
WHERE some_data_file1.field1 = some_data_file2.field1
  AND some_data_file1.field0 > 1

For more extensive examples of query operations, you may find it helpful to browse the unit tests for joins and filters.
package simpledb;

import java.io.*;

public class jointest {

    public static void main(String[] argv) {
        // construct a 3-column table schema
        Type types[] = new Type[]{ Type.INT_TYPE, Type.INT_TYPE, Type.INT_TYPE };
        String names[] = new String[]{ "field0", "field1", "field2" };
        TupleDesc td = new TupleDesc(types, names);

        // create the tables, associate them with the data files,
        // and tell the catalog about the schema of the tables
        HeapFile table1 = new HeapFile(new File("some_data_file1.dat"), td);
        Database.getCatalog().addTable(table1, "t1");

        HeapFile table2 = new HeapFile(new File("some_data_file2.dat"), td);
        Database.getCatalog().addTable(table2, "t2");

        // construct the query: we use two SeqScans, which spoonfeed
        // tuples via iterators into join
        TransactionId tid = new TransactionId();

        SeqScan ss1 = new SeqScan(tid, table1.getId(), "t1");
        SeqScan ss2 = new SeqScan(tid, table2.getId(), "t2");

        // create a filter for the where condition
        Filter sf1 = new Filter(
                new Predicate(0, Predicate.Op.GREATER_THAN, new IntField(1)), ss1);

        JoinPredicate p = new JoinPredicate(1, Predicate.Op.EQUALS, 1);
        Join j = new Join(p, sf1, ss2);

        // and run it
        try {
            j.open();
            while (j.hasNext()) {
                Tuple tup = j.next();
                System.out.println(tup);
            }
            j.close();
            Database.getBufferPool().transactionComplete(tid);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Both tables have three integer fields. To express this, we create
a TupleDesc
object and pass it an array of Type
objects indicating field types and String
objects
indicating field names. Once we have created this TupleDesc
, we initialize
two HeapFile
objects representing the tables. Once we have
created the tables, we add them to the Catalog. (If this were a database
server that was already running, we would have this catalog information
loaded; we need to load this only for the purposes of this test).
Once we have finished initializing the database system, we create a query
plan. Our plan consists of two SeqScan
operators that scan
the tuples from each file on disk, connected to a Filter
operator on the first HeapFile, connected to a Join
operator
that joins the tuples in the tables according to the
JoinPredicate
. In general, these operators are instantiated
with references to the appropriate table (in the case of SeqScan) or child
operator (in the case of e.g., Join). The test program then repeatedly
calls next
on the Join
operator, which in turn
pulls tuples from its children. As tuples are output from the
Join
, they are printed out on the command line.
To submit your code, please create a CSC553-lab2.tar.gz and submit it to the class D2L Submissions folder. You may submit your code multiple times; we will use the latest version you submit that arrives before the deadline (before 11pm on the due date). Please also submit your individual writeup as a PDF, plain text file (.txt), .doc, or .docx.
Make sure your code is packaged so the instructions outlined in section 3.3 work.
Please also submit your runtimes for the three queries in the contest. While the contest is optional, it is mandatory that your SimpleDB prototype be capable of executing these queries. You can also post on the class message board if you feel you have run into a bug.
50% of your grade will be based on whether or not your code passes the system test suite we will run over it. These tests will be a superset of the tests we have provided. Before handing in your code, you should make sure it produces no errors (passes all of the tests) from both ant test and ant systemtest.
Important: before testing, we will replace your build.xml, HeapFileEncoder.java, and the entire contents of the test/ directory with our versions of these files! This means you cannot change the format of .dat files! You should therefore be careful when changing our APIs. It also means you need to verify that your code compiles against our test programs. In other words, we will untar your tarball, replace the files mentioned above, compile it, and then grade it. It will look roughly like this:
$ gunzip CSC553-lab2.tar.gz
$ tar xvf CSC553-lab2.tar
$ cd ./CSC553-lab2
[replace build.xml, HeapFileEncoder.java, and test]
$ ant test
$ ant systemtest
[additional tests]

If any of these commands fail, we'll be unhappy, and, therefore, so will your grade.
An additional 50% of your grade will be based on the quality of your writeup and our subjective evaluation of your code.