All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Yeah, the app is not great at deduplicating the notables it sends to SOAR. Ideally you would want this app to run a search, find results with some key field X, then create only one container with one artifact containing that result. Subsequent searches in the app will create a new artifact in the same container, but this is unwanted.

One way around this is to set up your generating search so that it appends its results to a whitelist, which later executions of the search use to remove the results already seen. E.g. imagine you have a unique field of "id" in your results and you want only one container+artifact per value of "id":

1. Make a lookup containing one "id" column, e.g. search_whitelist.csv
2. Change your search to append new ids and exclude the ones already seen:
| <your search> | search NOT [| inputlookup search_whitelist.csv | table id] | outputlookup search_whitelist.csv append=true
3. (optional but recommended) Make another search which removes old entries from search_whitelist.csv if it gets too big, e.g.:
| inputlookup search_whitelist.csv | sort - id | head 10000 | outputlookup search_whitelist.csv
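For step 1, one way to create the initial lookup from within Splunk itself is a throwaway makeresults search — a sketch, assuming you're happy seeding the file with a single placeholder row (you could equally upload an empty CSV through the Lookup Editor app):

```
| makeresults
| eval id="bootstrap-placeholder"
| table id
| outputlookup search_whitelist.csv
```

The placeholder row is harmless because the NOT subsearch in step 2 only ever excludes ids that actually reappear in your results.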
Hi @calvinmcelroy,

This doc contains a few instructions related to the scenarios you mentioned: Install a Windows universal forwarder - Splunk Documentation

About the least-privileged user: for security purposes, avoid running the universal forwarder as a local system account or domain user, as that grants high-risk permissions that aren't needed. When you install version 9.1 or higher of the universal forwarder, the installer creates a virtual account as a "least-privileged" user called splunkfwd, which provides only the capabilities necessary to run the universal forwarder.

Since local user groups are not available on a domain controller, the GROUPPERFORMANCEMONITORUSERS flag is unavailable, which might affect WMI/perfmon inputs. To mitigate input issues when installing with the installer, the default account on a domain controller is Local System. If you choose a different account to run the universal forwarder during installation, the service privileges vary based on your choice:

1. If you choose Local System, the universal forwarder runs with full Windows administrator privileges.
2. If you choose a domain account with Windows administrator privileges, the universal forwarder runs with full Windows administrator privileges.
3. If you choose a domain account without Windows administrator privileges, you select the privileges yourself. Once you choose a non-administrator user to run the universal forwarder, this user becomes a "least-privileged user" with limited permissions on Windows.

Also, take a look at these permissions:

- SeBackupPrivilege: check to grant the least-privileged user READ (not WRITE) permissions for files.
- SeSecurityPrivilege: check to allow the user to collect Windows security event logs.
- SeImpersonatePrivilege: check to enable adding the least-privileged user to new Windows users/groups after the universal forwarder installation.

These grant the universal forwarder additional permissions to collect data from secure sources.

Happy Splunking,
Rafael Santos

Please don't forget to accept this solution if it fits your needs.
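As an illustrative fragment (not taken from the docs above — the index name is an assumption): once SeSecurityPrivilege is granted to the least-privileged user, a minimal inputs.conf stanza on the forwarder to collect the Security event log could look like this:

```
# inputs.conf on the universal forwarder (sketch)
[WinEventLog://Security]
disabled = 0
index = wineventlog
```

Without that privilege, this input typically produces permission errors in splunkd.log rather than events.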
We use a deployment server to manage the config of our UF fleet. Recent changes to privileges on the clients are preventing the UF from restarting its service after a new config or serverclass has been downloaded. The company doesn't want to provide Splunk with a DA-level account or something similar. What is the best "least privilege" way for the Splunk UF to be able to restart its own service and collect the needed logs within a Windows domain?
Please help me with the below items:

#1) | chart count(WriteType) over Collection by WriteType | sort Collection

For the above query, can we add a condition as below? (I am facing an issue here)

| chart count(WriteType) over Collection by WriteType | where c in("test","qa") | sort Collection

#2) Can we add one more field after WriteType, as below?

| chart count(WriteType) over Collection by WriteType, c | where c in("test","qa")
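A sketch of one likely fix, assuming `c` is a field on the raw events: after `chart`, only the aggregated columns survive, so a later `where c in(...)` has nothing to match. Filtering before the chart keeps `c` available:

```
| where c IN ("test","qa")
| chart count(WriteType) over Collection by WriteType
| sort Collection
```

Note also that `chart` accepts only a single split-by field, so `by WriteType, c` won't work as written; one common workaround is to combine the two fields first, e.g. `| eval wt_c=WriteType."-".c | chart count over Collection by wt_c`.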
Ehhh... there are several things wrong here. Firstly, you should onboard your data properly. For now you think you're having problems with field extractions, but you should make sure that:

1) Your data is properly split into separate events
2) Your timestamp is properly recognized
3) Your fields are properly extracted (in this case they will most probably be extracted using regexes anchored to known "tags" like your <ref> string)

Additionally, unless you absolutely can't avoid it, you should never use wildcards at the beginning of a search term, and you should avoid using them in the middle of a search term, for performance reasons and for consistency of the results. In your case the wildcard is in the middle of the search term, but because it is surrounded by major breakers (the angle brackets) it will be treated as the beginning of a search term. That's a very, very bad idea, because Splunk then has to read all the events you have and can't limit itself to finding events using the indexes it built from parts of your events.

So get your data onboarded properly, and the search will be something like:

index=my_index my_ref=$Ref$*

And that will be enough.
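A hedged sketch of what "onboarding properly" could look like in props.conf for such a sourcetype — every value here (the sourcetype name, the timestamp format, the regex anchored to the <ref> tag) is an assumption to adapt to your actual data:

```
# props.conf (sketch; adjust to your real event layout)
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
# search-time field extraction anchored to the <ref> "tag"
EXTRACT-my_ref = <ref>(?<my_ref>[^<]+)</ref>
```

With the field extracted at search time, the `my_ref=$Ref$*` search above can use the indexed terms instead of scanning every event.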
Well, you can't show a single value in a pie chart. It makes no sense. So think about what you really want counted.

Also remember that the counting aggregation functions over specific fields (count and dc) count each value even within multivalued fields. That can produce results you'd not expect. A run-anywhere example:

| makeresults count=100
| streamstats count
| eval count=tostring(count)
| eval digits=split(count,"")
| stats count as eventcount count(digits) as digitcount

It will generate a list of 100 numbers, then split the numbers (rendered as text) into separate digits. Finally it will count both the overall events (which will predictably be 100, since that's how many events we generated) and the digits, which will come to 192, because you had 9 single-digit numbers, 90 two-digit numbers and one three-digit number, which together split into 192 single digits spread among 100 events. So be wary when using the count() and dc() aggregation functions.
I don't know what it has to do with ITSI, but in general, if you want to present something in a table, you use a table. If you want to manually render some strings, you just use eval to concatenate multiple strings together, add something to them, and so on. Depending on your use case, you could use a table but apply some clever CSS styling to render it the way you want.

So either use something like

| eval host_with_description = host . description

or be more precise about what you want to achieve.
First and foremost, verify:

1) Are the events generated at the source machine at all? Run Wireshark there and see if the packets appear on the wire. If not, here's your culprit: troubleshoot your Kiwi.
2) If they are being sent, check with tcpdump on the receiving end.
3) If you can see the packets on the wire, check firewall rules and rp_filter.
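For steps 2 and 3, commands along these lines on the receiving host would show whether the packets arrive and how the reverse-path filter is set (assuming syslog over UDP 514 — adjust the port to your setup):

```
# show incoming syslog packets on any interface
tcpdump -i any -n udp port 514

# inspect the rp_filter setting mentioned in step 3 (0 = off, 1 = strict, 2 = loose)
sysctl net.ipv4.conf.all.rp_filter
```

If tcpdump shows the packets but the listener never sees them, rp_filter or a host firewall is the usual suspect.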
Splunk has its limitations. One of them is its not-very-pretty handling of structured data (which is understandable, to a point). So if you use either automatic extractions or the spath command to parse the whole event, you'll get a multivalued field. From that field you have to get your first value, either by means of the mvindex() function or by mvexpanding the event and selecting just the first result.

Alternatively, you can call spath with a specific path within your JSON structure, like:

| spath path=data.initiate_state{0}.path{0}

You can even get the first path element from all initiate_state elements with:

| spath path=data.initiate_state{}.path{0}
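A run-anywhere sketch of the mvindex() route — the JSON here is a made-up stand-in for the poster's event, not their real data:

```
| makeresults
| eval _raw="{\"data\":{\"initiate_state\":[{\"path\":[\"A\",\"B\"]},{\"path\":[\"C\",\"D\"]}]}}"
| spath path=data.initiate_state{}.path{} output=all_paths
| eval first_path=mvindex(all_paths, 0)
```

Here all_paths becomes a multivalued field of every path element across all initiate_state entries, and mvindex(..., 0) picks the first.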
"The first one that shows" in data.initiate_state[].path[]. And yes, the other array elements are not as meaningful as the first element.
It is very unclear what you mean by "the first one that shows". Your screenshot shows that your input contains several JSON arrays: data.events[], data.initiate_state[], data.initiate_state[].community[], data.initiate_state[].path[], etc. (It is important to illustrate raw JSON data, not Splunk's "beautified view", much less a screenshot of the "beautified view". You can reveal raw data by clicking "Show as raw text" in the search window. Anonymize as needed.)

I am also curious what the use case is for only wanting/needing "the first one that shows" from a data structure that is meant to contain multiple values. Are the other elements in the array not meaningful? In a JSON array, every element is assumed to be equally weighted semantically. How do you determine that "the first" is significant and the rest are not? If the rest of an array truly is semantically insignificant, you should exert every bit of your influence on the developers to restructure the data so you don't have bad semantics. If you are uncertain, you should consult the developers/manuals to clarify how the data should be used.

This much said, it is still unclear what "first one that shows" means. The array data.initiate_state[].path[] is nested in the array data.initiate_state[]. Do you want the "first one that shows" in every element of data.initiate_state[]? Or do you want the "first one that shows" in data.initiate_state[].path[] in the "first one that shows" in data.initiate_state[]?
Hi @Shubham.Kadam, Thanks for asking your question on the community. I wanted to let you know that since your first post contained a significant amount of code, it was flagged for spam. I cleared that up for you and your post is now live. I'm doing some searching to see if I can find any helpful existing content to share with you.  I've also reached out to the Apple CSM for you, so you could also be hearing directly from them. 
I've never used Kiwi syslog, but you can use the netcat (nc) utility to send test syslog messages to the SC4S server first and check (netcat needs to be installed).

UDP test:

echo "My Test UDP syslog message" | nc -w1 -u <YOUR SC4S Server> 514

Or locally from the SC4S server:

echo "My Test UDP syslog message" | nc -w1 -u localhost 514

Then see if any messages are sent to Splunk/HEC.

Also check SC4S to see if data is being received when you send data from the Kiwi system:

sudo tcpdump -i any udp port 514

Other things to check:

Check the /opt/sc4s/env_file. These are the default ports, but I can't remember if you need to add them, as they should be the default; it may be worth adding these, restarting, and seeing if that is the cause:

SC4S_LISTEN_DEFAULT_TCP_PORT=514
SC4S_LISTEN_DEFAULT_UDP_PORT=514

Check the logs:

podman logs SC4S

You said the firewall is OK, but it might be worth disabling it temporarily.
Hi @Gustavo.Marconi, Thanks for asking your question on the community. Did you happen to find any new information or a solution for your question? If so, please share what you learned as a reply here. If not, you can try contacting your AppD Rep or even AppD Support. How do I submit a Support ticket? An FAQ 
I am trying to generate one event from a list of similar events. I want to remove the _check suffix and add the hosts to one field, separated by commas. I am generating a critical event that lists all the hosts that are not showing. Example:

HOST                SEVERITY
Bob_1009_check      Critical
Jack_1002_check     Critical
John_1001_check     Critical

So when I am done I want it to be:

HOST (or some other field name)    SEVERITY    DESCRIPTION
Bob_1009, Jack_1002, John_1001     Critical    (Bob_1009, Jack_1002, John_1001) are no longer up, please review your logs.

I have trimmed the host names accurately, but I cannot figure out how to get the table of hosts to show as a side-by-side list that I can add into a description field for an alert. I DO NOT WANT a table. I want them side by side, comma or semicolon separated.
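One possible sketch of what this poster is after, using stats values() plus mvjoin() — the field names HOST and SEVERITY are guesses taken from the example above:

```
| rex field=HOST mode=sed "s/_check$//"
| stats values(HOST) as hosts by SEVERITY
| eval hosts=mvjoin(hosts, ", ")
| eval DESCRIPTION="(" . hosts . ") are no longer up, please review your logs."
```

stats values() collapses the matching events into one row per severity with a multivalued host field, and mvjoin() renders that field as a single comma-separated string suitable for an alert description.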
Hi @isoutamo, If I use | stats count I do get a value, but I want to show it in a pie chart, and when I check the visualization it shows "no results".
Hi @steve.diaz, Your post has received a few replies. Please continue the conversation to get your questions answered or let us know if you found a solution already, if so, share it! 
@corti77 Is Splunk Web running on the default port (8000)?

netstat -ano | findstr 8000

Are there any firewalls or network configurations blocking access to port 8000? If the above solution helps, an upvote is appreciated.
Hi, If you want only the total count, then just drop "by ..." from your first example. If needed, add

| where isnotnull(content.scheduleDetails.lastRunTime)

before the stats. r. Ismo
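Conversely, if the goal really is a pie chart, it needs a category column and a count column rather than one total — a single row from `| stats count` renders as "no results" in that visualization. A hedged sketch (the index and field names here are assumptions, not from the thread):

```
index=my_index
| stats count by status
```

Each distinct value of the split-by field becomes one slice of the pie.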
@corti77 Check the _internal index for the logs in web_service.log. Do you see anything prior to it stopping? Location: $SPLUNK_HOME/var/log/splunk/web_service.log. If the above solution helps, an upvote is appreciated.