1. Based on the 9/7/2016 lecture material, what range of integers should a 16-bit integer type be able to represent? Check your answer by running typemax(Int16) in Julia.
2. Same as problem 1, but for 32-bit integers.
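One way to check both answers at the Julia REPL (a sketch; typemin, the counterpart of typemax, gives the lower end of each range):

```julia
# A 16-bit signed integer spans -2^15 .. 2^15 - 1;
# a 32-bit signed integer spans -2^31 .. 2^31 - 1.
println(typemin(Int16), " to ", typemax(Int16))   # -32768 to 32767
println(typemin(Int32), " to ", typemax(Int32))   # -2147483648 to 2147483647
```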
3. The standard 32-bit floating-point type uses 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa.
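This 1 + 8 + 23 layout can be inspected directly in Julia (a sketch using bitstring; for 1.0f0 the stored exponent is the bias 127, i.e. 01111111, and the mantissa bits are all zero):

```julia
s = bitstring(Float32(1.0))
# sign | exponent | mantissa
println(s[1], " | ", s[2:9], " | ", s[10:32])   # 0 | 01111111 | 000...0
```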
4. The standard 16-bit floating-point type uses 1 bit for the sign, 5 bits for the exponent, and 10 bits for the mantissa. What size error do you expect in a 16-bit computation of 9.4 - 9 - 0.4? Figure out how to do this 16-bit calculation in Julia and verify your expectation.
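One way to carry out the 16-bit calculation (a sketch; Float16 is Julia's half-precision type, and converting each literal before subtracting keeps every step in 16 bits):

```julia
# Neither 9.4 nor 0.4 is exactly representable in binary,
# so the rounded Float16 values leave a small residual.
x = Float16(9.4) - Float16(9) - Float16(0.4)
println(x)   # small but nonzero, roughly the size of eps(Float16(9.4))
```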
5. Find the roots of to four significant digits.