Zeppelin 0.7.2-1801 Release Notes
The notes below relate specifically to the HPE Ezmeral Data Fabric distribution of Apache Zeppelin. You may also be interested in the Apache Zeppelin 0.7.2 changelog and the Apache Zeppelin project homepage.
Version | 0.7.2
Release Date | February 2018
Source on GitHub | https://github.com/mapr/zeppelin, https://github.com/mapr/livy
GitHub Release Tag | 0.7.2-mapr-1801
Docker Image Name and Tags |
Zeppelin on the Data Fabric system is a component of the Data Science Refinery. This release of Zeppelin is included in version 1.1 of the Data Science Refinery product.
The Data Science Refinery product is packaged as a Docker container. Data Fabric ecosystem components included in the Docker image are the same as those in the EEP 4.1 release. See EEP 4.1.0 Components and OS Support for details on product version numbers.
You can run the Docker image on the following operating systems:
- Linux (CentOS 7.x, Ubuntu 14, Ubuntu 16)
- Windows 10 Pro (64-bit)
- Mac OS X 10.11
The following are the verified browsers:
- Chrome 57
- Firefox 56.0
- Microsoft Edge 40
- Safari 9.0
Data Fabric product documentation is available at Zeppelin on MapR (MapR 6.1.0).
New in this Release
This release of Zeppelin on the Data Fabric system includes the following new features:
- Support for the Spark interpreter, configured to launch Spark jobs in YARN client mode
- Enhancements in installing custom Python environments for the Livy and Spark interpreters
- Improvements in launching multiple Zeppelin containers on the same host
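As an illustration of the last item, several Zeppelin containers can share a host as long as each publishes distinct host ports. The sketch below is illustrative only: the image name, tag, environment-variable names, and ports are assumptions, not values taken from these release notes; consult the Data Science Refinery documentation for the exact invocation.

```shell
# Hypothetical example: image tag and environment variables are assumptions.
# To run a second container on the same host, change the published host
# ports (e.g. -p 9996:9995) so they do not collide with the first.
docker run -d \
  -e MAPR_CLUSTER=my.cluster.com \
  -e MAPR_CONTAINER_USER=mapruser \
  -e HOST_IP=10.10.1.42 \
  -p 9995:9995 \
  maprtech/data-science-refinery:v1.1_6.0.0_4.1.0_centos7
```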
Fixes
This Data Fabric release includes the following fixes on top of the base Apache release. For complete details, refer to the commit log for this project on GitHub.
The following table lists the fixes for Zeppelin:
Commit | Date (YYYY-MM-DD) | Comment |
---|---|---|
625acf6 | 2018-01-11 | MZEP-94: Support for different versions of Python for the Spark interpreter |
aaa8e92 | 2018-01-11 | PACC-18: Fix regression for PACC-12 |
dc9cb61 | 2018-01-09 | PACC-18: Fix incorrect home directory after docker exec run on Ubuntu |
8297a9e | 2017-12-25 | PACC-12: Remove obsolete fix for MZEP-66 |
3698203 | 2017-12-25 | MZEP-97: Enable Spark interpreter to run in YARN client mode |
1e6ccc0 | 2017-12-14 | Restore Core 6.0.0 version in Dockerfile |
0418ced | 2017-12-13 | Update EEP and Data Fabric core versions in Dockerfiles |
c6538ab | 2017-12-12 | MZEP-99: Fix no graphs being displayed the first time the Spark Pyspark Tutorial examples are opened |
ee11e2f | 2017-12-07 | MZEP-98: Restore default examples in the Zeppelin Tutorial |
7d15fd9 | 2017-12-05 | MZEP-63: Explicitly set Zeppelin working directory |
0fc3079 | 2017-12-01 | Fix mapr-setup.sh URL in PACC build scripts |
b8152f2 | 2017-11-30 | MZEP-63: Enable Spark interpreter |
b8e6693 | 2017-11-27 | Refactor the script that builds Docker images |
The following table lists fixes for Livy:
Commit | Date (YYYY-MM-DD) | Comment |
---|---|---|
bbf0155 | 2018-01-15 | MZEP-102: Fix issue when no LIVY_RSC_PORT_RANGE is specified |
90bd035 | 2018-01-12 | MZEP-102: Provide ability to specify custom port range in livy.rsc.launcher.port.range |
528e2b7 | 2018-01-11 | MZEP-94: Support different versions of Python for the Spark interpreter |
bef6778 | 2017-12-07 | MZEP-65: Minor fix in start-in-container.sh |
ccaed0f | 2017-12-05 | MZEP-63: Explicitly set Livy working directory |
c2ddf74 | 2017-11-24 | Fixes needed to set up a Conda environment for PySpark |
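The MZEP-102 fixes above relate to restricting the ports that Livy's RPC launcher binds, which matters when a firewall limits the container's open ports. A minimal sketch of the relevant setting, assuming the standard Livy client configuration file (the range value shown is illustrative; Livy's range syntax is `start~end`):

```shell
# livy-client.conf (sketch): constrain the Livy RSC launcher to a
# known port range so the ports can be opened in a firewall.
livy.rsc.launcher.port.range = 10000~10110
```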
Known Issues and Limitations
- MZEP-17: The HBase interpreter cannot be used to query HPE Ezmeral Data Fabric Database Binary tables
- MZEP-79: Legends in plots do not display correctly when running the Matplotlib (Python/PySpark) example from the Zeppelin Tutorial
- MD-2397: Zeppelin cannot connect to Drill through the JDBC driver on a secure Data Fabric cluster when Zeppelin has Kerberos authentication enabled
- MZEP-86: You cannot run Zeppelin as user 'root'
- MZEP-110: You cannot use a custom R environment with Zeppelin
- See Data Fabric PACC Known Issues for issues that apply to running Data Fabric Docker images.