All Posts

Ehhh... There are several things wrong here. Firstly, you should onboard your data properly. Right now you think you have a problem with field extractions, but you should make sure that:
1) Your data is properly split into separate events.
2) Your timestamp is properly recognized.
3) Your fields are properly extracted (in this case they will most probably be extracted with regexes, anchored to known "tags" like your <ref> string).
Additionally, unless you absolutely can't avoid it, you should never use a wildcard at the beginning of a search term, and you should avoid one in the middle of a search term, both for performance reasons and for consistency of results. In your case the wildcard is in the middle of the search term, but because it is surrounded by major breakers (the angle brackets) it will be treated as the beginning of a search term. That is a very, very bad idea, because Splunk then has to read all of your events and cannot limit itself to finding events via the indexes it built from parts of your events. So get your data onboarded properly, and the search will be something like
index=my_index my_ref=$Ref$*
And that will be enough.
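For reference, a minimal sketch of what such onboarding could look like in props.conf. Everything here is an assumption to adapt: the sourcetype name, timestamp format, and the <ref>-anchored regex are illustrative, not taken from the actual data.

```ini
# props.conf -- illustrative sketch only; all values below are assumptions
[my_sourcetype]
# 1) split the data into separate events
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# 2) recognize the timestamp (format assumed)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
# 3) extract the field by anchoring a regex to the known <ref> tag
EXTRACT-my_ref = <ref>(?<my_ref>[^<]+)</ref>
```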
Well, you can't show a single value in a pie chart. It makes no sense. So think about what you really want counted. Also remember that the counting aggregation functions over specific fields (count and dc) count each value, even within multivalued fields. That can produce results you wouldn't expect. A run-anywhere example:
| makeresults count=100
| streamstats count
| eval count=tostring(count)
| eval digits=split(count,"")
| stats count as eventcount count(digits) as digitcount
It generates a list of 100 numbers, then splits each number (rendered as text) into separate digits. Finally it counts both the overall events (predictably 100, since that's how many events we generated) and the digits, which comes out to 192, because you had 9 single-digit numbers, 90 two-digit numbers, and one three-digit number, which together split into 192 single digits spread among 100 events. So be wary when using the count() and dc() aggregation functions.
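The arithmetic behind that 192 can be double-checked outside Splunk; here is a small Python analogue of the same digit-splitting logic:

```python
# Emulate the run-anywhere SPL: 100 events numbered 1..100,
# each number rendered as text and split into its individual digits.
digits_per_event = [list(str(n)) for n in range(1, 101)]

eventcount = len(digits_per_event)                  # | stats count
digitcount = sum(len(d) for d in digits_per_event)  # | stats count(digits)

# 9 one-digit numbers + 90 two-digit numbers + one three-digit number
print(eventcount, digitcount)  # 100 192
```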
I don't know what this has to do with ITSI, but in general: if you want to present something in a table, you use a table. If you want to manually render some strings, you just use eval to concatenate multiple strings together, add something to them, and so on. Depending on your use case, you could also use a table but apply some clever CSS styling to render it the way you want. So either use something like
| eval host_with_description = host . description
or be more precise about what you want to achieve.
First and foremost, verify:
1) Are the events generated at the source machine at all? Run Wireshark there and see whether the packets appear on the wire. If not, there's your culprit: troubleshoot your Kiwi.
2) If they are being sent, check with tcpdump on the receiving end.
3) If you can see the packets on the wire, check your firewall rules and rp_filter.
Splunk has its limitations. One of them is its not-very-pretty handling of structured data (which is understandable, to a point). So if you use either automatic extractions or the spath command to parse the whole event, you'll get a multivalued field. From that field you then have to get your first value, either with the mvindex() function or by mvexpanding the event and selecting just the first result. Alternatively, you can call spath with a specific path within your JSON structure, like
| spath path=data.initiate_state{0}.path{0}
You can even get the first path element from every initiate_state element with
| spath path=data.initiate_state{}.path{0}
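Outside Splunk, those two spath selections correspond to plain nested indexing. A sketch in Python, using a made-up payload shaped like the JSON under discussion (the "AS…" values are hypothetical placeholders):

```python
import json

# Hypothetical event shaped like the structure discussed above.
raw = """
{"data": {"initiate_state": [
    {"path": ["AS100", "AS200"]},
    {"path": ["AS300", "AS400"]}
]}}
"""
event = json.loads(raw)

# Equivalent of: | spath path=data.initiate_state{0}.path{0}
first_path = event["data"]["initiate_state"][0]["path"][0]

# Equivalent of: | spath path=data.initiate_state{}.path{0}
all_first_paths = [s["path"][0] for s in event["data"]["initiate_state"]]

print(first_path)       # AS100
print(all_first_paths)  # ['AS100', 'AS300']
```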
The "first one that shows" is in data.initiate_state[].path[]. And yes, the other array elements are not as meaningful as the first element.
It is very unclear what you mean by "the first one that shows". Your screenshot shows that your input contains several JSON arrays: data.events[], data.initiate_state[], data.initiate_state[].community[], data.initiate_state[].path[], etc. (It is important to illustrate raw JSON data, not Splunk's "beautified view", much less a screenshot of the beautified view. You can reveal raw data by clicking "Show as raw text" in the search window. Anonymize as needed.)

I am also curious what the use case is for only wanting/needing "the first one that shows" from a data structure that is meant to contain multiple values. Are the other elements in the array not meaningful? In a JSON array, every element is assumed to carry equal semantic weight. How do you determine that "the first" is significant and the rest are not? If the rest of an array truly is semantically insignificant, you should exert every bit of your influence on the developers to restructure the data so you don't have bad semantics. If you are uncertain, you should consult the developers/manuals to clarify how the data should be used.

This much said, it is still unclear what "first one that shows" means. Array data.initiate_state[].path[] is nested in array data.initiate_state[]. Do you want the "first one that shows" in every element of data.initiate_state[]? Or do you want the "first one that shows" in data.initiate_state[].path[] within the "first one that shows" in data.initiate_state[]?
Hi @Shubham.Kadam, Thanks for asking your question on the community. I wanted to let you know that since your first post contained a significant amount of code, it was flagged for spam. I cleared that up for you and your post is now live. I'm doing some searching to see if I can find any helpful existing content to share with you.  I've also reached out to the Apple CSM for you, so you could also be hearing directly from them. 
I've never used Kiwi Syslog, but you can use the netcat (nc) utility to send test syslog messages to the SC4S server first and check; netcat needs to be installed.

UDP test:
echo "My Test UDP syslog message" | nc -w1 -u <YOUR SC4S Server> 514
Or locally from the SC4S server:
echo "My Test UDP syslog message" | nc -w1 -u localhost 514
And see if any messages are sent to Splunk/HEC.

Also check SC4S to see whether data is arriving when you send data from the Kiwi system:
sudo tcpdump -i any udp port 514

Other things to check: look at /opt/sc4s/env_file. These are the default ports, but I can't remember whether you need to add them explicitly since they should be the default; it may be worth adding these, restarting, and seeing whether that is the cause:
SC4S_LISTEN_DEFAULT_TCP_PORT=514
SC4S_LISTEN_DEFAULT_UDP_PORT=514

Check the logs:
podman logs SC4S

You said the firewall is OK, but it might be worth disabling it temporarily.
Hi @Gustavo.Marconi, Thanks for asking your question on the community. Did you happen to find any new information or a solution for your question? If so, please share what you learned as a reply here. If not, you can try contacting your AppD Rep or even AppD Support. How do I submit a Support ticket? An FAQ 
I am trying to generate one event from a list of similar events. I want to remove the _check suffix and add the hosts to one field, separated by commas. I am generating a critical event that lists all the hosts that are not showing. Example:

HOST                  SEVERITY
Bob_1009_check        Critical
Jack_1002_check       Critical
John_1001_check       Critical

So when I am done I want:

HOST (or some other field name)     SEVERITY    DESCRIPTION
Bob_1009, Jack_1002, John_1001      Critical    (Bob_1009, Jack_1002, John_1001) are no longer up, please review your logs.

I have trimmed the host names accurately, but I cannot figure out how to get the list of hosts to show side by side so I can add it to a description field I want to generate in an alert. I DO NOT WANT a table. I want them side by side, comma or semicolon separated.
Hi @isoutamo, if I use | stats count I do get a value, but I want to show it in a pie chart, and when I check the visualization it shows no results.
Hi @steve.diaz, your post has received a few replies. Please continue the conversation to get your questions answered, or, if you already found a solution, let us know and share it!
@corti77 Is Splunk Web running on the default port (8000)?
netstat -ano | findstr 8000
Are there any firewalls or network configurations blocking access to port 8000? If the above solution helps, an upvote is appreciated.
Hi, if you want only the total count, just drop the "by ..." clause from your first example. If needed, add
| where isnotnull(content.scheduleDetails.lastRunTime)
before the stats. r. Ismo
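The difference between counting per value and counting overall can be sanity-checked with a quick Python analogue, using the per-value counts from the related question (9, 63, and 7):

```python
from collections import Counter

# Events carrying the field, reproducing the per-value counts from the question.
events = (["02/FEB/2024 08:22:19 AM"] * 9
          + ["02/FEB/2024 08:21:19 AM"] * 63
          + ["03/FEB/2024 08:22:19 AM"] * 7)

by_value = Counter(events)   # | stats count by <field>  (one row per value)
total = len(events)          # | stats count             (no "by" clause)
distinct = len(by_value)     # | stats dc(<field>)       (distinct values)

print(total, distinct)  # 79 3
```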
@corti77 Check the _internal index for the logs in web_service.log. Do you see anything prior to the stopping? Location: $SPLUNK_HOME/var/log/splunk/web_service.log. If the above solution helps, an upvote is appreciated.
Hi, there should be more information in the _internal log. Just query it, like
index=_internal LM* OR expired
That should show you more information about your issue. r. Ismo
This works on my laptop (macOS + Splunk 9.2.1). See details here: https://marketplace.visualstudio.com/items?itemName=Splunk.splunk
I have set the following values in settings.json:
Splunk Rest Url -- https://localhost:8089
Token auth is enabled, and I have generated my own token for this.
Then just create a file, e.g. Splunk-SPL-test.splnb, containing:
index=_internal | stats count by component
Run it and you will see events, and you can also select a visualisation etc.
Probably the wrong board; the choices were limited. In our dev environment we have a 3-node SH cluster, a 3-node IDX cluster, an ES SH, and a few other ancillary machines (DS, deployer, UFs, HFs, LM, CM, etc.). All instances use the one LM. On the SHC we are unable to search and get the message in the subject line, yet on the ES SH we can search fine with no error message. The nodes of the SHC are "phoning home" to the LM. The licensing settings (indexer name, manager server URI) have been verified as correct. All nodes show as having connected to the LM within the last minute or so. Not sure where to look from here.
Hi All,

How do I count field values? The field is extracted and shows 55. When I use the query below, it gives all the values with their counts:
| stats count by content.scheduleDetails.lastRunTime
And this shows a count of 55:
| stats dc(content.scheduleDetails.lastRunTime) AS lastRunTime
My output is:

content.scheduleDetails.lastRunTime    Count
02/FEB/2024 08:22:19 AM                9
02/FEB/2024 08:21:19 AM                63
03/FEB/2024 08:22:19 AM                7

Expected output is only the total count of the field: 79