First speech in English

This week, I gave my first talk at a public technical conference.
Every year Google Cloud holds a public conference, Google Cloud Next, and I attended as a breakout session speaker. I spoke about logging with the Stackdriver Logging product.

Before this talk, I had given a number of talks in English for customers and internal teams, but never at a public conference; this was my first time on a public tech-conference stage. I have spoken at big tech conferences in Korean many times, the largest with over 1,000 attendees. This first English talk had only around 200~300 attendees, but I was still very nervous. I spent a lot of time preparing: writing a script and rehearsing by myself. In the end I would call it only a 50% success. The content was well prepared; the problem was purely my English. My accent and pronunciation were very bad. My daughter told me she could not understand me. 😦

Before the talk I prepared a script, but it turned out to be useless: it was so long that it was hard to remember. So I gave up memorizing the content and just practiced many times.

A positive sign was that I received a number of compliments from attendees after the talk, which means the content itself was not bad. I realized that my content may work outside of Korea.

Overall it was a very good experience for me. After this talk I have more self-confidence, and I will apply to more internal and external events in English. It will help me grow further.

Actually, I had not planned to speak at this conference, but our ex-manager (Jenny) encouraged me to be there, and one of my colleagues (Harry) volunteered as a speaker. That motivated me.

When I watch the video again it makes me cringe, but it also makes me proud of taking on the challenge.

A short memory from April 2019, after Google Next.

Structured logging (SLF4J + JSON)

Recently I have become interested in logging systems. In modern system architectures, a log is not just a simple text message; it carries meaningful data and can be used for analytics. For this reason, log data is often exported to analytics systems such as ELK, BigQuery, etc.

This raises a problem. A single log line can combine multiple pieces of information, each of which should map to a database field. If we write the log as plain text, it is very hard to parse.
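For illustration (the event and field names below are invented for this example), compare the same event written as free text and as structured JSON:

```
# free text: a parser must guess where each field starts and ends
2019-03-06 01:23:11 INFO user terry placed an order from galaxy s8

# structured: every field is directly addressable by name in the analytics system
{"timestamp":"2019-03-06 01:23:11","level":"INFO","userid":"terry","client":"galaxy s8","message":"order placed"}
```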

For these reasons, structured logging is one of the solutions, and it is also recommended as a logging best practice. I already had experience with JSON-structured logging using Google Cloud Stackdriver Logging, but that approach depends on Stackdriver, so I tried to find alternative solutions in open source. Surprisingly, there was little guidance overall; in Java, however, there were a number of options.

JSON logging itself is not difficult: if you simply add a JSON layout appender, it works. This is logback.xml; in a Spring Boot project, this file should be in /src/main/resources.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="json" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
            <jsonFormatter
                class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
                <prettyPrint>true</prettyPrint>
            </jsonFormatter>
            <timestampFormat>yyyy-MM-dd' 'HH:mm:ss.SSS</timestampFormat>
        </layout>
    </appender>

    <logger name="jsonLogger" level="TRACE">
        <appender-ref ref="json" />
    </logger>
</configuration>

This is the relevant part of pom.xml for the Maven build. One thing to be aware of: check the versions of the libraries. I spent almost an hour debugging because of an older version.

<dependencies>
:
   <dependency>
       <groupId>ch.qos.logback.contrib</groupId>
       <artifactId>logback-json-classic</artifactId>
       <version>0.1.5</version>
   </dependency>

   <dependency>
       <groupId>ch.qos.logback.contrib</groupId>
       <artifactId>logback-jackson</artifactId>
       <version>0.1.5</version>
   </dependency>

   <dependency>
       <groupId>com.fasterxml.jackson.core</groupId>
       <artifactId>jackson-databind</artifactId>
       <version>2.9.3</version>
   </dependency>
</dependencies>

After this setup, we can write JSON-structured logs. Before logging, we need to import the SLF4J packages.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

And let's log:

static Logger logger = LoggerFactory.getLogger("jsonLogger");

public static void main(String[] args) {
    SpringApplication.run(DemoApplication.class, args);
    logger.debug("Debug message");
}

This is the output:

{
 "timestamp" : "2019-03-06 01:23:11.738",
 "level" : "DEBUG",
 "thread" : "main",
 "logger" : "jsonLogger",
 "message" : "Debug message",
 "context" : "default"
}

As you can see, the log message is printed as the "message" element in the JSON document.

But that is not enough: if we use logging for data analytics, a log entry should carry multiple fields.

How can we add multiple fields? SLF4J itself has a very interesting class, MDC (Mapped Diagnostic Context). It is basically a key/value map, and if you put keys and values into the MDC, they are printed as part of the log.

This is the code:

MDC.put("userid","terry");
logger.info("this is log");

And this is the output:

{
 "timestamp" : "2019-03-06 01:23:11.757",
 "level" : "INFO",
 "thread" : "main",
 "mdc" : {
   "userid" : "terry"
 },
 "logger" : "jsonLogger",
 "message" : "this is log",
 "context" : "default"
}

I put the key "userid" with the value "terry", and the record was printed as part of the "mdc" element in the JSON log document.

Let's add one more entry like this:

MDC.put("userid","terry");
MDC.put("client","galaxy s8");

logger.info("this is second log");

And this is the output:

{
 "timestamp" : "2019-03-06 01:23:11.758",
 "level" : "INFO",
 "thread" : "main",
 "mdc" : {
   "userid" : "terry",
   "client" : "galaxy s8"
 },
 "logger" : "jsonLogger",
 "message" : "this is second log",
 "context" : "default"
}

MDC is persistent within the same thread, so if you don't delete a value, it will keep being printed in every subsequent log entry. To remove data, you can use MDC.remove() to delete one key/value pair, or MDC.clear() to delete everything.
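Because of that thread-local lifetime, a common pattern is to clean the MDC up in a finally block so stale values never leak into later log entries (the class name below is my own for this sketch; only the SLF4J calls come from the library):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcCleanupDemo {
    static final Logger logger = LoggerFactory.getLogger("jsonLogger");

    public static void main(String[] args) {
        // Attach per-request context before logging
        MDC.put("userid", "terry");
        MDC.put("client", "galaxy s8");
        try {
            logger.info("request handled"); // MDC values are attached to this entry
        } finally {
            MDC.remove("client"); // drop a single key/value pair
            MDC.clear();          // or drop everything for this thread
        }
    }
}
```

In a servlet environment, the same put/clear pair is typically done in a filter, so every request thread starts and ends with an empty MDC.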

However, this approach is still not ideal, because we cannot change the wrapper element name "mdc", and the extra fields cannot appear as top-level elements of the JSON document.

I briefly checked a number of documents, and this seems to be supported by the Logstash appender, which is part of the ELK stack. I will invest more time in finding a better solution. As I mentioned at the beginning of the article, I prefer not to be locked in by a specific vendor.
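As a sketch of that direction (untested here; the encoder class comes from the open-source logstash-logback-encoder project, which reportedly emits MDC entries as top-level JSON fields rather than nesting them under "mdc"), the appender configuration might look like:

```xml
<appender name="jsonLogstash" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```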