ELK + REDIS - Duplicating data

Hi folks,

I need some help...

I have the following Logstash configuration files:

agent.conf

input {
  log4j {
    type => "bdj"
    port => 25827
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  stdout { 
    codec => rubydebug
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}

This agent.conf receives the logs via TCP and forwards them to Redis.
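
To confirm that events are landing in Redis exactly once, you can watch the length of the list while only the agent is running (the central pipeline consumes and removes entries, so stop it first). A minimal check, assuming redis-cli is installed and the defaults above:

redis-cli -h 127.0.0.1 llen logstash
redis-cli -h 127.0.0.1 lrange logstash 0 0

If the list grows by more than one entry per log event, the duplication is already happening on the agent side.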

central.conf

input {
  redis {
    host => "localhost"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  stdout { }
  elasticsearch {
    hosts => "localhost"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
The central.conf, in turn, reads the data back out of Redis and forwards it to Elasticsearch.
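
You can also watch the duplication on the Elasticsearch side by counting the documents in today's index and comparing against the number of events you actually sent. A quick check, assuming Elasticsearch is listening on its default port 9200:

curl -s 'http://localhost:9200/logstash-*/_count?pretty'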

The problem is that the data is being duplicated, as if it were stuck in a loop or something like that.

I'm running Logstash as a service on Debian.

root@logs:~# uname -a
Linux logs 3.2.0-4-amd64 #1 SMP Debian 3.2.78-1 x86_64 GNU/Linux

Can anyone shed some light?

    
asked by anonymous 20.05.2016 / 23:48

1 answer

Could the duplicate records be the "ghost log" issue, like the one that happens with the SCALA server?

"A workaround for avoiding duplicate log records after logstash restart is to set the sincedb_path parameter in the file plug-in to / dev / null, thereby indicating to logstash that it ignores the trace of the last position monitored from the archive and always start monitoring from the end of the file, however, this will cause logstash to ignore all updates made to the log file while the logstash agent is down. "

See this IBM help:

Duplicate log records on the SCALA server
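
For reference, the workaround described in that quote would look roughly like this in a Logstash file input. This is only a sketch: the path is hypothetical, and note that your agent uses the log4j input rather than file, so it only applies if a file input is involved somewhere in the chain:

input {
  file {
    path => "/var/log/myapp/app.log"   # hypothetical log file path
    sincedb_path => "/dev/null"        # do not persist the last read position
    start_position => "end"            # after a restart, read only new lines
  }
}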

    
28.05.2016 / 02:22