Hi, I would check your user Preferences timezone. Click your name in the top right > Preferences > Default System Timezone. If you have it set to a timezone, it will convert the time for you. Matt
Is the cluster in maintenance mode? On the manager node, run:

splunk show maintenance-mode

Also check whether any buckets are stuck in fixup tasks, and resolve those first: Indexer Clustering > Indexes > Bucket Status
I see... you need to make the nmon app global or otherwise accessible to those users. Click Apps > Manage Apps > NMON app > Permissions, grant Read/Write, and make the app accessible to the users. Are your users able to access the NMON app? If you don't want the app global, the users will only have the knowledge objects available within the NMON app.
Are you seeing this with Linux systems? I had an issue like this with our Linux logs forwarding to our syslog server. It turned out to be an extra space at the end of our outgoing syslog template: instead of the template ending with \n" someone had \n " (a stray space before the closing quote).
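For example, with a legacy rsyslog string template the whole difference is that one stray space before the closing quote (the template name and properties here are illustrative, not from the original setup):

```
# Wrong: stray space after \n, inside the quotes
# $template splunkFwd,"%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%\n "

# Right: the newline is the last character of the template
$template splunkFwd,"%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%\n"
```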
You need to add the nmon indexes to your user roles, or create a new role for the nmon users. Go to: Settings > Roles > "your user role" > Indexes tab > select the indexes. You should see the following indexes for the nmon data:

os-unix-nmon-config
os-unix-nmon-events
os-unix-nmon-internal
os-unix-nmon-metrics
I would check the server name on the UF and search for it in Forwarder Management on the deployment server. It sounds like everything is working. Forwarder Management > click on your server class > search for the server name found on the UF. To get it to show in the Monitoring Console: Monitoring Console > Settings > Forwarder Monitoring Setup > Rebuild forwarder assets. Hope this helps. Matt
If you want to change the time at search time, you can try the following. Add this below your main search:

| eval time_format=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval time_epoch=strptime(time_format, "%Y-%m-%d %H:%M:%S")
| eval time_cst=time_epoch-21600
| eval _time=strftime(time_cst, "%Y-%m-%d %H:%M:%S")

(The 21600 is the 6-hour UTC-to-CST offset in seconds; note this does not adjust for daylight saving time.)
You could try something like this:

index=xxx
| stats count by Team, status
| eval field="status=".status." "."count=".count
| stats values(field) as stats by Team
Make sure your Splunk user has the proper permissions to read the certs.

web.conf:

[settings]
enableSplunkWebSSL = 1
privKeyPath = /opt/splunk/etc/auth/mycert.key
serverCert = /opt/splunk/etc/auth/mycert.pem

Depending on the method you used, you must combine the server certificate, the private key, and the certificate authority (CA) certificate, in that order, into a single file. The combined file must be in privacy-enhanced mail (PEM) format:

cat <server certificate file> <server private key file> <certificate authority certificate file> > <combined server certificate file>

https://docs.splunk.com/Documentation/Splunk/9.0.0/Security/HowtoprepareyoursignedcertificatesforSplunk
Looks like something is blocking it. Check this out: https://community.splunk.com/t5/Security/quot-Server-Error-quot-for-a-fresh-Splunk-install/m-p/447283
I would check the splunkd logs on the UF to troubleshoot. Restart the Splunk service on the UF and tail the log:

tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log
It's hard to help without seeing the data, but you could try something like this:

<main search that pulls the two Event IDs in 24 hrs>
| stats values(EventID) as Group_ID by user
| search Group_ID=4756 Group_ID=4729

Group the stats by user only; Group_ID then becomes a multivalue field, and searching for both values keeps just the users that generated both Event IDs. Something like this should work. Matt
Hi, you can easily change this at search time with a regex. Try something like this below your main search:

| rex field=_raw "src\=(?<src>\d+\.\d+\.\d+\.\d+)"
How are you indexing the JSON data? Are you indexing the JSON fields individually at index time? What do your props and transforms look like?

Check that your props.conf has INDEXED_EXTRACTIONS = json. Note: if you set INDEXED_EXTRACTIONS = json, check that you have not also set KV_MODE = json for the same source type, which would extract the JSON fields twice: at index time and again at search time.

I have had similar issues when I created a custom indexed field at index time and did not put it in fields.conf. If you are indexing fields at index time, you need to tell Splunk they should be treated as indexed fields.
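As a sketch, the relevant stanzas might look like this; the sourcetype and field names are placeholders, not from your environment:

```
# props.conf -- placeholder sourcetype name
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
# Do NOT also set KV_MODE = json here, or the fields are extracted twice.

# fields.conf -- declare any custom index-time field (hypothetical name)
[my_custom_field]
INDEXED = true
```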
I had this issue too and noticed Splunk was falling behind when scanning large files before ingesting. I ended up increasing the pipelines on the forwarders and the issue went away (bumped to 3 where resources allowed):

[general]
parallelIngestionPipelines = 2

Also note, you will get this error if you have a source coming in with delayed logs. I think Splunk now alerts on this, which is why you see the error after the updates. I still get this error on logs that only come in every couple of hours.
One note on your TIME_FORMAT: the value in props.conf should not be wrapped in quotes, so

TIME_FORMAT = %Y-%m-%d %H:%M:%S

is correct as-is (quotes would be treated as literal characters and the format would never match).

For the parsing issue, I have seen problems like this when a sourcetype from a different app had the same sourcetype name. I would run btool to double-check that is not the issue:

splunk btool props list <your sourcetype> --debug

Another thing you could try is breaking on the .csv.gz at the end of the log, assuming that value is in every event. LINE_BREAKER needs a capturing group to mark the break, e.g.:

LINE_BREAKER = \.csv\.gz()