All Posts


Never mind, it doesn't work.
Hi, sorry for the confusion. I only pasted a single input stanza, but I have 8 different monitor stanzas in my inputs.conf, and they are all working and ingesting data.

crcSalt = <DATETIME>
What it does: this setting includes the file's last modification time in the checksum calculation.
Use case: it's useful when you want Splunk to reindex the file if the file's last modified timestamp changes, even if the content stays the same.

For my use case I need to ingest the complete CSV file data daily, so I used crcSalt = <DATETIME>. (Am I doing this right or wrong? Please correct me.) By "small set of data" I mean only getting a few rows from the CSV file, not the complete CSV data. Can you please help?

Thank you
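As background on why a rewritten file can look "already read": by default the monitor input fingerprints a file with a CRC over its first 256 bytes, so two daily versions of a CSV that share the same header can collide. A rough stdlib-only Python illustration of that collision (the header and rows below are made up):

```python
import zlib

# Two "files" whose first 256 bytes (the CSV header) are identical,
# but whose data rows differ, like a CSV rewritten daily.
header = b"col_a,col_b,col_c" + b"," * 239  # pad header to exactly 256 bytes
day1 = header + b"\n1,2,3\n"
day2 = header + b"\n4,2,3\n"

# A CRC over only the first 256 bytes cannot tell the two files apart...
assert zlib.crc32(day1[:256]) == zlib.crc32(day2[:256])

# ...while a CRC over a longer prefix that reaches the data rows can.
assert zlib.crc32(day1[:300]) != zlib.crc32(day2[:300])
```

This is the intuition behind raising initCrcLength when monitored files share a long common preamble.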
How to check index and volume parameters and index size
1. You obviously can't read data from 8 files if you have an input set up for just one of them.
2. Leave the crcSalt setting alone. It is very, very rarely needed. Usually you should instead set initCrcLength if the files have a common header/preamble.
3. What do you mean by "a small set of data is being ingested"?
4. Did you check splunk list monitor and splunk list inputstatus?
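For reference, those two checks are run from the Splunk CLI on the forwarder host (the install path below is an assumption; adjust to your environment):

```
$SPLUNK_HOME/bin/splunk list monitor        # files the tailing processor is watching
$SPLUNK_HOME/bin/splunk list inputstatus    # per-file read position and status
```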
Are you looking for something like this?

index=itsi_summary
| eval kpiid = mvappend(kpiid, itsi_kpi_id)
| stats latest(alert_value) as alert_value latest(alert_severity) as health_score by kpiid kpi
| join type=left kpiid
    [| inputlookup service_kpi_lookup
     | stats latest(title) as title by kpis._key
     | rename kpis._key as kpiid ]
| search title IN ("<Service Names>") kpi!="ServiceHealthScore"
Hi, I'm currently working on ingesting 8 CSV files from a path using inputs.conf on a UF, and the data is getting ingested. The issue is that these 8 CSV files are overwritten daily with new data by an automation script, so the data inside the CSV files changes daily.

I want to ingest the complete CSV data into Splunk daily, but what I can see is that only a small set of data is getting ingested, not the complete CSV file data.

My inputs.conf is:

[monitor://C:\file.csv]
disabled = false
sourcetype = xyz
index = abcd
crcSalt = <DATETIME>

Can someone please help me figure out whether I'm using the correct input or not? The ultimate requirement is to ingest the complete CSV data from the 8 CSV files into Splunk daily.

Thank you.
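If all eight CSVs live under one directory, a single wildcard monitor stanza can cover them instead of eight separate ones. A sketch (the path, sourcetype, and index are placeholders, not your real values):

```
[monitor://C:\data\reports\*.csv]
disabled = false
sourcetype = xyz
index = abcd
```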
How to check the splunk list monitor / where, etc.
This is the error message from the Splunk server:

ERROR UserManagerPro [727840 TcpChannelThread] - Requesting user info through AQR returned an error Error in Attribute query request, AttributeQueryTransaction err=No error, AttributeQueryTransaction descr=Method Not Allowed, AttributeQueryTransaction statusCode=405 for user: .........

This is from the access log (HTTP 401):

"GET /services/authentication/current-context HTTP/1.1" 401 148 "-" "python-requests/2.31.0" - - - 19ms

And the audit log said the user is not valid:

user=n/a, action=validate_token, info=JsonWebToken validation failed
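Given the python-requests user agent and the token-validation failure, a first thing to check is whether the client sends the token as a Bearer header at all. A minimal stdlib sketch of what such a request should carry (the host and token are placeholders, and no network call is made here):

```python
import urllib.request

# Placeholder management-port URL and JWT; substitute your own.
url = "https://splunk.example.com:8089/services/authentication/current-context?output_mode=json"
token = "eyJrIjoi..."  # a Splunk-issued token (placeholder)

req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# We only inspect what the request would carry; nothing is sent.
assert req.get_header("Authorization").startswith("Bearer ")
```

If the header is present and the token still fails validation, the token itself (expiry, issuer, audience) is the next suspect.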
Thanks 
Well... while there is a possibility of defining an output using a short-TTL DNS name (dyn-DNS), it's not something I'd recommend. Static addresses definitely make your life easier.
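For reference, pointing a forwarder at a DNS name rather than a fixed IP looks like this in outputs.conf (the group name and hostname below are placeholders):

```
[tcpout:primary_indexers]
server = indexers.example.com:9997
```

The caveat above still applies: how quickly a changed record takes effect depends on DNS TTLs and caching, which is why static addresses are the safer default.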
Interesting Fields is just a GUI feature that shows fields present in at least 10 (15?) percent of events. Just because a field is not listed there doesn't mean it's not being parsed out of the event. Actually, with renderXml=true you get XML-formatted events from which all fields should be automatically parsed.
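A rough illustration of why XML-rendered events are so amenable to automatic field extraction: every <Data Name='...'> element maps directly to a field/value pair. The sample event below is made up, but follows the Windows event XML shape:

```python
import xml.etree.ElementTree as ET

# A made-up fragment in the Windows event XML shape.
xml_event = """
<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
  <EventData>
    <Data Name='SubjectUserName'>alice</Data>
    <Data Name='LogonType'>3</Data>
  </EventData>
</Event>
"""

ns = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}
root = ET.fromstring(xml_event)

# Each Data element's Name attribute becomes a field name, its text the value.
fields = {d.get("Name"): d.text for d in root.findall(".//e:Data", ns)}
assert fields == {"SubjectUserName": "alice", "LogonType": "3"}
```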
Hi @Mojal @marnall, I am facing the same issue with my Splunk cluster. Were y'all able to find any workarounds/solutions? P.S.: I have deployed the Splunk cluster via splunk-operator in my Kubernetes environment.
Do you mean chained searches?
Amazing, worked like a charm.   Thanks!
This got me close enough to what I needed.  In my effort to streamline and reduce clutter I oversimplified the issue in my original post.  In any case though, thank you for the help!
Hello! I am trying to collect 3 additional Windows event logs and I have added them in inputs.conf, for example:

[WinEventLog://Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

Admin, Autopilot, and Operational were added the same way. I also added in props.conf:

[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
rename = wineventlog
[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Autopilot]
rename = wineventlog
[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Operational]
rename = wineventlog

The data are coming in; however, none of the fields are parsed as interesting fields. Is there something I am missing? I looked through some of the other conf files, but I think I am in over my head trying to make a new section in props. I thought the base [WinEventLog] would take care of the basic breaking out of interesting fields like EventID, so I am a bit lost.
How can I implement a post-process search using the Dashboard Studio framework? I can see that there is excellent documentation for doing this in XML (Searches power dashboards and forms - Splunk Documentation), but I can't seem to find relevant information on how to do this in the source definition for Dashboard Studio. Note: I am not attempting to use a savedSearch.
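Dashboard Studio's equivalent of a post-process search is a chain search: a ds.chain data source whose "extend" option points at the id of a base ds.search, with the post-process SPL in its "query". A sketch of the relevant dataSources fragment from the dashboard's JSON definition (the ids and queries here are illustrative):

```json
{
  "dataSources": {
    "base_search": {
      "type": "ds.search",
      "options": { "query": "index=_internal | fields host, sourcetype" }
    },
    "counts_by_host": {
      "type": "ds.chain",
      "options": {
        "extend": "base_search",
        "query": "| stats count by host"
      }
    }
  }
}
```

A visualization then references "counts_by_host" as its primary data source, and the base search runs only once.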
Is this still the case? I have an EC2 instance with dynamic IPs and I would like to set up a Splunk forwarder. Am I still able to get the logs over to the correct data lake?
Thank you all for your help. I found the problem with my inputs.conf; it was right in front of me, but I just didn't see it. In my inputs.conf, for some reason, I had a setting "host = <indexer-name>". So all logs were getting to the indexer, but under my indexer's name, except /var/log/messages and cron; hence I wasn't "seeing" them. I need to check why those files (messages and cron) were coming in under my real UF name; maybe because they probably have the host name in the logs. The good part is, I learnt a few new troubleshooting tips thanks to you all. I appreciate your help.
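For anyone hitting the same thing: the host value a UF stamps on events is set in inputs.conf, and a global override silently applies to every input that doesn't set its own. A sketch of the usual layout (stanza paths and index name are placeholders):

```
[default]
# Resolve the host name from the machine at startup rather than hard-coding it.
host = $decideOnStartup

[monitor:///var/log/messages]
index = os_logs
# An explicit per-stanza "host = ..." here would override the default above.
```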
Thanks for the clarification! I am running this by the team now...