All Posts

Check out 'charting.fieldColors' in the Dashboards and Visualization manual.
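For reference, a minimal sketch of how that option could look in Simple XML, assuming the four field names from the question below and arbitrary example colors:

<option name="charting.fieldColors">{"nonconformance": 0xDC4E41, "part_not_installed": 0xF8BE34, "incomplete_task": 0x53A051, "shortage": 0x006D9C}</option>

Because the colors are keyed by field name rather than by position in the results, each field keeps its color even when the token filters other fields out of the chart.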
Ok. Two things.
1. Your props.conf stanza is wrong. monitor: is a type of input and you're not supposed to use it outside inputs.conf. Props should contain stanzas for sourcetype, source or host.
2. mysqld_error is one of the built-in sourcetypes. Unfortunately, it doesn't contain any timestamp recognition settings, so Splunk tries to guess. First, I'd check whether all hosts report the timestamps in their events in a consistent manner.
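A minimal sketch of what explicit timestamp settings for that sourcetype could look like in props.conf. The TIME_FORMAT here assumes the newer MySQL error log layout (e.g. 2023-01-01T12:00:00.123456Z ...) and UTC timestamps; adjust both to match what your hosts actually write:

[mysqld_error]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 32
TZ = UTC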
Hi @gn694
If you are on-prem then you can set the HttpInputDataHandler component to DEBUG mode (but don't do it for long!) - this will record the contents of HEC payloads in _internal, which might help you work out if it's the raw or event endpoint. Edit the log level via Settings -> Server Settings -> Server Logging, search for "HttpInputDataHandler" and change it to DEBUG. Shortly after, you will get logs you can find with this search:

index=_internal sourcetype=splunkd log_level=debug component=HttpInputDataHandler

In my example, the top one was using the event endpoint and the bottom one the raw endpoint. The logs sent to the event endpoint will always have an "event" field in the body_chunk value, along with other fields like time/host/source etc.
Try this and let me know how you get on!
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
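If you can generate test traffic yourself, the difference between the two endpoints is also easy to see from the sending side. A minimal sketch with curl (the hostname, port 8088 and the token value are placeholders):

# Event endpoint: the payload is a JSON envelope with an "event" field (plus optional time/host/source/sourcetype)
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hello from the event endpoint", "sourcetype": "my_sourcetype"}'

# Raw endpoint: the body is indexed as-is; metadata goes in the query string
curl -k "https://splunk.example.com:8088/services/collector/raw?sourcetype=my_sourcetype" \
  -H "Authorization: Splunk <hec-token>" \
  -d 'hello from the raw endpoint'

In the DEBUG output described above, the event-endpoint request shows that JSON envelope in body_chunk, while the raw request shows only the raw payload.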
Hello,
Is there a way to set specific colors for specific fields? I have a stacked column chart with four possible fields (nonconformance, part_not_installed, incomplete_task, shortage in this case) and I use a token system to only show specific fields in my chart based on those selections. It seems that the color order is set by default, so when I remove the first option the color shifts to the new top field name. Is there an easy way to lock these so each potential option consistently gets the same color, i.e. locking field color to field value?
(Example images showing how the colors shift when the selected fields change.)
Wait a second. Earlier you wrote (which I had missed) that your other extracted fields do work. Since you're using search-time extractions, that means you have props defined on your search head tier. Do you also have those props on the correct component in the ingestion path? What is your architecture, and where do you have which settings?
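One way to check where a given sourcetype's props actually take effect is to run btool on each tier (search head, indexer, heavy forwarder). A minimal sketch, with the sourcetype name as a placeholder:

# Run on each Splunk instance in the path; --debug shows which .conf file each setting comes from
$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug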
We are a big customer. We hit a big issue in upgrading from 9.1.7 to 9.2.4 and it took a long time for the issue to be resolved. We have a large stack with many indexers. Our current operating system is RedHat 7; we are in the process of migrating to RedHat 8.
On upgrade from 9.1.7 to 9.2.4, one of the indexer clusters that ingests the most data suddenly had its aggregation and parsing queues filled at 100% during our peak logging hours. The indexers were not using much more CPU or memory; the queues were just very full.
It turns out that Splunk has enabled profiling starting in 9.2, specifically CPU time profiling. These settings are controlled in limits.conf: https://docs.splunk.com/Documentation/Splunk/9.2.4/Admin/Limitsconf. There are 6 new profiling metrics and they are all enabled by default. In addition, the agg_cpu_profiling runs a lot of time-of-day routines. A lot.
There are several choices for clocksource in RedHat: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/reference_guide/chap-Timestamping#sect-Hardware_clocks It turns out that we had set our clock source to "hpet" some number of years ago. This clocksource, while high precision, is much slower than "tsc". Once we switched to tsc, the problem with our aggregation and parsing queues at 100% during peak hours was fixed.
Even if you don't have the clock source issue, the change in profiling is something to be aware of when upgrading to 9.2.
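For anyone who wants to check their own clock source before upgrading, it is visible under sysfs on RHEL. A quick sketch (the echo only changes the running kernel; a permanent change goes on the kernel boot parameters):

# Show the clock source currently in use and the ones available
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

# Switch to tsc for the running kernel (as root); add clocksource=tsc to the
# kernel command line to make it persist across reboots
echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource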
Having the same problem; I had to go back to the previous <p></p> method in the interim.
@gcusello Will this fix the issue where it returns "no results"? My alert would still fire due to that condition.
@livehybrid Thank you for the response. Yeah, I'm still trying to understand; it seems like a lot despite my description of the issue. My cron schedule is set up as 0 6 * * 2-6, i.e. Tuesday through Saturday, so Monday and Sunday are excluded from running the search.
@livehybrid- This curl tool sounds useful. And @Zoe_  you just need to add | outputlookup <your-lookup-name> at the end of @livehybrid 's query.
@livehybrid json_array_to_mv - that sounds interesting.
@gcusello In my search query I thought it showed that I have a lookup containing all the holidays that I want to mute, so yes, I do have it. I just wanted to ask about this line:
NOT (date_wday="saturday" OR date_wday="sunday")
Why Saturday and Sunday? I have my cron schedule set to 0 6 * * 1-5, so it's Monday-Friday; shouldn't that cover it? Could I just do:
index=xxx <xxxxxxx>
| eval Date=strftime(_time,"%Y-%m-%d")
NOT [| lookup holidays.csv HolidayDate as Date output HolidayDate]
| eval should_alert=if(isnull(HolidayDate), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"
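For what it's worth, that NOT [| lookup ...] subsearch is unlikely to work as written; the usual pattern is a plain | lookup followed by a null check. A minimal sketch, assuming holidays.csv has a HolidayDate column in %Y-%m-%d format:

index=xxx <xxxxxxx>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate AS Date OUTPUT HolidayDate
| where isnull(HolidayDate)

Events whose Date matches a row in the lookup get a HolidayDate back and are dropped by the where clause, so the alert stays quiet on holidays.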
@Andre_- FYI, I haven't tried these configs on my side, so you may need to read about them in the spec files & Splunk docs. Also, I'm not sure how metrics-based queries will be used for role-based restriction.

# props.conf.example
[em_metrics]
METRICS_PROTOCOL = statsd
STATSD-DIM-TRANSFORMS = user, queue, app_id, state

# transforms.conf.example
[statsd-dims:user]
REGEX = (\Quser:\E(?<user>.*?)[\Q,\E\Q]\E])

I hope this helps!!! Kindly upvote if it does!!!
@davidco- Did you check connectivity from the Spark server to the Splunk service on the Splunk HEC port, e.g. via telnet or curl?
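A quick sketch of what that connectivity check could look like from the Spark host (the hostname is a placeholder and 8088 is the default HEC port; adjust both to your environment):

# Basic TCP reachability check
telnet splunk.example.com 8088

# HEC health endpoint; a small JSON status response means the HEC service is reachable
curl -k https://splunk.example.com:8088/services/collector/health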
I have a server pushing audit log data to syslog; to log in to the server you need SAML. My question is: how do I pull data on successful and unsuccessful logins of those SAML users in Splunk?   Thank you.
I was afraid of that.  Makes it hard for me because I don't have access to the source side of things for most things coming into HEC.
@WorapongJ- Yes, in both cases you will lose data.   And I know you are trying to understand the impact of it on Splunk, but there is usually a recovery option available for KVstore/Mongo depending on what has happened or what the issue is.   I hope this helps!!!
@gn694- I don't think there is any direct way or internal log you can use for what you need, unless you can see a difference in the data in terms of indexed fields, or you check on the source side.
Is there any way to tell whether data coming into Splunk's HEC was sent to the event or raw endpoint? You can't really tell from looking at the events themselves, so I was hoping there was a way to tell based on something like the token, sourcetype, source, or host. I have tried searching the _internal index and have not found anything helpful.
The query did not error, but it also returned 0 events. Is there any other way? I have created a lookup table.