# Oh god

Everything on this blog is so old and so silly.


# [09-15] Terra Review (plus reviews of Halide and NetKAT, since I joined this blog late)

**Terra**

Terra is a language designed for fast, low-level computation, built on the lightweight, high-level language Lua. This “dual language” design of Terra and Lua lets you combine the high-level abstractions of a higher-order programming language with the speed and efficiency of a low-level one. Terra is statically typed and gives you manual memory management (much like C), allowing for high performance. The benchmarks the paper presents are compared against efficiently written C code, and Terra does very well. The authors lean heavily on “meta-programming” Terra via Lua, and achieve interoperability between the two languages.
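
To make the staging idea concrete, here is a rough Python analogy of my own (not actual Terra/Lua code): a “high-level” function generates and compiles a specialized “low-level” function before it ever runs, much like Lua code constructing and specializing Terra functions.

```python
# Rough Python analogy (not Terra/Lua) of the staging idea: a high-level
# function generates source for a specialized low-level function, compiles
# it once, and returns it.

def make_dot(n):
    """Generate a dot-product function fully unrolled for length n."""
    body = " + ".join(f"a[{i}] * b[{i}]" for i in range(n))
    src = f"def dot(a, b):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)  # the "compile time" stage happens here, once
    return namespace["dot"]

dot3 = make_dot(3)
print(dot3([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

In Terra the generated code is statically typed and compiled to machine code rather than interpreted, which is where the performance comes from; the analogy only captures the two-stage structure.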

High level comments:

I’m curious about the purpose of this language. Was this meant to be used practically? They present two applications – matrix multiplication and a DSL built on top of Terra/Lua – as well as a few examples of how to program templates in practice. But how well does this actually solve scientific computing problems in practice?

Unlike the NetKAT paper, this paper seems to present little in the way of “reasoning effectively about code”, and emphasizes performance instead. One drawback is that it leans heavily on metaprogramming and has two languages rather than one, so one needs to reason about both. It’s not clear that it makes headway on easier or cleaner ways to reason about code, and it seems possible to me that the dual-language architecture does not help in this regard (but I am not sure about this).

It’s not actually clear to me who is meant to use this. It does seem to be practice oriented / extensible (the language they create to do image processing is more expressive than Halide), but is it possible to build other practical languages on top of this structure? Is it easy to do in practice? Can one reason about the code effectively?

I’m actually not sure why they chose Lua of all languages to build Terra on. My impression of Lua is that it’s just a lighter-weight version of Python without all the libraries. It’s not clear to me what properties Lua has that make it desirable.

This paper does seem to achieve good performance benchmarks, and provides a clean way to program with “high level abstractions” and “low level performance” by meta-programming a low-level language from a high-level one. Lua and Terra can effortlessly convert into each other, making interoperability between the two languages in the “two language problem” less of a concern.

**NetKAT:**

- What domain does the paper address?
- Network programming and SDN.

- What issues does the paper identify as problems with expressing programs in this domain?
- Previously, network programming required a lot of low level code, and programming language solutions were very ad hoc and did not have clear underlying principles to them. Many languages also required programming directly on the switches and components of the network, without any organized global structure or principles — even when the desired behavior of the network was best described by some global structure on the flow of packets.

- What solution does the paper propose for addressing these problems?
- NetKAT provides a principled and clean way to reason about network behavior, pinning down a clean set of semantics in a simplified version of network programming and allowing programmers to reason about networks using a clean underlying mathematical structure with a sound equational theory.

- What advantages does the solution provide, in terms of correctness, performance, and expressiveness?
- It lets people reason using Kleene Algebra with Tests (KAT), which is well studied and has a sound and complete equational theory. Advantages include ease of reasoning, simplicity, and extensibility, all of which are heuristics indicating that this is the “correct” abstraction for the simplified network model presented.

- What does one give up by using this language?
- Actually, I don’t know. It’s unclear whether it actually tackles the needs of networking programmers, since this is not mentioned in the paper, but it’s possible to me that it does.

- How does the paper evaluate the language? What are the strengths and weaknesses of the language with respect to the evaluation? What are the strengths and weaknesses of the evaluations?
- The paper evaluates the language by demonstrating the equational theory and arguing that the mathematics captures the structure of simple network programs. In essence, it’s evaluating the language from a programming languages point of view, valuing ease of reasoning over any performance or practical concerns.
- The strength is that there is a sound equational theory that lets people decide, among other things, policy equality.
- The weakness is that there is no mention of the practical problems network programmers actually face, and whether it is expressive enough or if the primitives are correct for those who actually do network programming. This is barely mentioned at all in the paper, in fact.

- How does this language compare with other programming solutions in this domain?
- Unlike Frenetic, which actually shows some evaluation of (admittedly simple) network programs, this paper barely presents any practical implementations at all. It places much more emphasis on the PL theory side, and less on the practice.

- What is the balance of theory and practice in this research paper? How does this affect the current usefulness of the language? How does this affect the *future* usefulness of the language?
- This paper is almost entirely theoretical, and it’s not clear to me whether it’s practical – the authors never say.

- What properties of the networking domain make it particularly appealing to programming languages people?
- The networking domain has an underlying graph structure, with a very nice underlying semantics.

- What kinds of guarantees can good language/logic design provide for this domain that would be impossible otherwise?
- (I’m not sure, actually). I would guess: a clean equational theory, a style of programming that fits well with people’s internal representation of networks, primitives that make common network programming problems easy, and a good abstraction that doesn’t sacrifice performance or efficiency (or whatever metric it is that networking people care about).

- Who is right? Are the programming languages people inventing constructs that are nice for them but not necessarily useful, or are the networks people not being imaginative enough?
- Well, I suspect this is a “false dichotomy”. Certainly the programming languages people are inventing constructs and exploring the space of what’s possible in network programming, and this is related-but-not-directly-in-line with the practical needs of networking folks. I’m actually not sure what objections the network programming folks have here, so it’s not clear to me whether they are being reasonable or not.

I would hazard a guess that there may simply be too much of a “cultural divide” or “understanding gap” between PL and networkers (and there may actually be common ground that can be found if they worked together), or else that the two fields actually are disjoint and moving in different directions despite allegedly working on the same problem of creating network programming languages.

**Halide:**

- What domain does the paper address?
- Image processing and in particular, stencil computations.

- What issues does the paper identify as problems with expressing programs in this domain?
- Optimizing image processing code involves a ton of little tricks and dark arts that are very ad hoc and lean heavily on humans. Performance is a big concern in image processing, and it’s hard to achieve without humans applying unprincipled little tricks.

- What solution does the paper propose for addressing these problems?
- Halide separates the algorithm and the schedule, and provides a functional language in which one can effectively program image processing computations. It also provides an incredibly powerful autotuner to generate high performance code.

- What advantages does the solution provide, in terms of correctness, performance, and expressiveness?
- Halide reduces the vast space of optimizations in image processing to 5 core primitives, and has the autotuner find an extremely fast implementation of the human-written algorithm. It has enormous gains in performance.
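
The algorithm/schedule split can be sketched in plain Python (illustrative only, not Halide’s actual API): the algorithm is a pure per-pixel definition, and two different schedules – a simple loop and a tiled loop – produce identical output.

```python
# Illustrative Python sketch (not Halide's actual API) of separating
# "what to compute" from "how to loop over it".

# Algorithm: a pure per-pixel definition, here a 1D 3-point box blur
# with clamped borders.
def blur_at(img, x):
    return (img[max(x - 1, 0)] + img[x] + img[min(x + 1, len(img) - 1)]) // 3

# Schedule 1: plain left-to-right loop.
def schedule_simple(img):
    return [blur_at(img, x) for x in range(len(img))]

# Schedule 2: tiled loop (e.g. for cache locality); same algorithm.
def schedule_tiled(img, tile=4):
    out = [0] * len(img)
    for t0 in range(0, len(img), tile):
        for x in range(t0, min(t0 + tile, len(img))):
            out[x] = blur_at(img, x)
    return out

img = [3, 6, 9, 12, 15, 18, 21, 24, 27]
print(schedule_simple(img) == schedule_tiled(img))  # True
```

In Halide the schedule additionally controls vectorization, parallelism, and where intermediate stages are stored, but the contract is the same: changing the schedule changes performance, never the result.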

- What does one give up by using this language?
- Halide gives up some expressiveness in favor of performance. Because the autotuner optimizes the code, the human loses the power to hand-optimize, which may work better in edge cases.

- How does the paper evaluate the language? What are the strengths and weaknesses of the language with respect to the evaluation? What are the strengths and weaknesses of the evaluations?
- The paper evaluates the language by exploring its applicability to real world problems, and cross-comparing its performance and speed to expert written solutions for 5 specific problems in image processing. The evaluation seems to be extremely practice-oriented.
- The weaknesses of this evaluation: do these 5 specific problems capture all the cases one wants to use this code in practice for? Is the language actually expressive enough for its intended use case? Evaluation is heavily performance-oriented – does creating such a great performance-optimized language sacrifice other potential concerns in the image processing domain?

- How does this language compare with other programming solutions in this domain?
- They present a systematic model of fundamental tradeoffs in stencil computations, a clean scheduling representation, a compiler that separates the algorithm from “how to execute the program”, and a very powerful autotuner (up to 5x human-optimized code) that infers high performance schedules using a genetic algorithm, stochastic search, and some domain specific knowledge.
- They create a great autotuner which is amazingly fast and doesn’t rely on experts, which is key. No longer does the human need to hand-optimize the schedule and make the tradeoffs – Halide will do this automatically using its search algorithm.
- This language is not as expressive as previous programming solutions, though it may not matter for most practical implementations.
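
The autotuner idea in miniature, as a sketch of my own: search over schedule parameters (here just a tile size) to minimize a cost. The cost model below is made up for illustration; the real autotuner measures or models actual machines and uses stochastic search plus a genetic algorithm.

```python
import random

# Toy autotuning sketch: search over a schedule parameter (tile size)
# to minimize a cost. The cost model is invented for illustration only.

def cost(tile, image=1024, cache=64):
    loop_overhead = image / tile          # fewer, bigger tiles: less overhead
    cache_misses = max(0, tile - cache)   # tiles larger than cache thrash
    return loop_overhead + 4 * cache_misses

def random_search(candidates, trials=200, seed=0):
    """Stochastic search: sample candidates and keep the cheapest seen."""
    rng = random.Random(seed)
    return min((rng.choice(candidates) for _ in range(trials)), key=cost)

tiles = [8, 16, 32, 64, 128, 256]
best = random_search(tiles)
print(best, cost(best))
```

Even this crude search finds the sweet spot between loop overhead and cache behavior; the real system’s value is doing this over a vastly larger schedule space without expert intervention.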

# List of Publications

**Publications:**

*Timothy Chu*, Michael Cohen, Jakub Pachocki, Richard Peng. Constant Arboricity Graph Sparsifiers. (Updated link).

**Abstract:** We show that every graph is spectrally similar to the union of O(1) forests. Moreover, we show that Spielman-Srivastava sparsifiers are the union of O(log n) forests. This result can be used to estimate boundaries of small subsets of vertices in nearly optimal query time.

*Timothy Chu*, Jason Li, Gary Miller. The Equivalence of the Two Metrics Used to Cluster Data in Machine Learning.

In preparation

Alex X. Chen, *Timothy Chu*, Nathan Pinsker. Dynamic Longest Increasing Subsequence. (arXiv)

**Abstract:** In this paper, we construct a new algorithm to efficiently compute the longest increasing subsequence of a dynamically changing sequence. This is currently the most efficient algorithm for this problem.
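
For background (this is the classic static baseline, not the paper’s dynamic algorithm), the LIS length of a fixed sequence can be computed in O(n log n) time with patience sorting:

```python
from bisect import bisect_left

# Classic static O(n log n) LIS length via patience sorting.
# tails[k] holds the smallest possible tail value of an increasing
# subsequence of length k + 1 seen so far.

def lis_length(seq):
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # extend the longest subsequence
        else:
            tails[i] = x      # found a smaller tail for length i + 1
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4  (e.g. 1, 4, 5, 9)
```

The dynamic version of the problem asks to maintain this quantity as the sequence is edited, which is what the paper addresses.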

**Other Results:**

*Timothy Chu*. Triangulating the Equilateral Triangle: A Novel Proof of Tutte’s Result (pdf)

**Abstract:** In this paper, we provide a new proof that an equilateral triangle cannot be dissected into finitely many smaller equilateral triangles, no two of which share two vertices. We do this without the use of Electrical Networks.

*Timothy Chu*. A Bit Exchange Protocol for Finding the First Difference in Two Bit Strings (pdf)

**Abstract:** In this paper, we construct a new bit exchange protocol that allows Alice and Bob, each of whom is given a length-n bit string, to find the first differing bit in their strings using only n + O(1) total bits. This improves on a result presented in Mark Braverman’s Topics in Information Theory course at Princeton University.

*Timothy Chu*. A Generalization of Fiedler’s Theorem on Nodal Domains for the Cycle Graph. Under construction.

**Abstract:** The Fiedler Theorem on Nodal Domains states that, given a graph, the number of disconnected components in the subgraph induced by vertices with positive values on the eigenfunction of the Laplacian is bounded by . In this paper, we prove a specific case of a slight generalization: that for the cycle graph and any fixed combination of the first eigenfunctions, the number of disconnected components in the subgraph induced by the vertices with positive values on the combination of eigenfunctions is bounded by .

*Timothy Chu*. The Minimum Perimeter and Diameter of a Repeatedly Folded Square. Under construction.

**Abstract:** If one were to fold the unit square 2014 times, what is the maximum perimeter of the resulting figure? This is a well known problem that can be solved by noting that folding a paper can never increase its perimeter. A more interesting question is what the smallest perimeter (or diameter) of a repeatedly folded square is, and this turns out to be much trickier to prove. In this paper, we resolve the question of the minimum perimeter and diameter of a square folded an even number of times, and along the way we provide a new, elementary proof of a theorem in combinatorial geometry first due to L. Fejes Toth.

*Timothy Chu*. The Painter’s Algorithm For Two Dimensional Convex Objects. Under construction.

**Abstract:** We show that the Painter’s Algorithm in image processing never fails in two dimensions, when the objects the algorithm attempts to render are convex two-dimensional bodies.

# Blurb

## About Me

Hi! My name’s Tim Chu, and I lived in East Campus at MIT until June 2014. I currently work at Google, where I’m employed as a software engineer/analyst on the Google search team.

One of my research interests is in spectral graph theory. I like algorithms, algebraic combinatorics, and algebraic optimization.

Outside of academics, I read, stay active, and play games. Pokemon Snap and Starcraft are my favorite video games. I also like to play card games like Bang and Dominion.

## Teaching

At the MIT Educational Studies Program, I taught math/algorithms (some of which I remember), and Lagrangian mechanics (none of which I remember). I also co-ran interactive math lessons for elementary school kids via MITXplore, a program founded by graduate students in the MIT Mechanical Engineering department.

In 2012, I went to Accra, Ghana, under an NSF grant to teach Olympiad math and C# programming to high school kids.

I’ve graded for the MIT Theory of Computation class, the USA Mathematical Olympiad, and the training camp for the US International Mathematics Olympiad team.

## Leadership
