All Posts


I maintain IPinfo's Splunk App: https://splunkbase.splunk.com/app/4070 Our customers have recently reported that our application doesn't work when Splunk Enterprise Security is enabled. For context, our application uses one of two modes to interact with our data: 1) it queries our API directly, or 2) it downloads our datasets locally from a public HTTPS endpoint. The failure only happens in the second mode, when we have to make REST calls to coordinate the download of our data.

One key finding in my early investigation is that our Splunk application communicates using raw, non-SSL-verified HTTPS requests (i.e. using the requests Python library with verify=False), authenticated by session keys. Splunk Enterprise Security seems to prevent this type of communication. To get around this restriction, I converted everything over to the Splunk Python SDK, which avoids these SSL issues. I quickly realized that, to leverage the Splunk Python SDK in all scenarios and with consistency, it would be easier to use bearer tokens throughout, so the second change I made was using bearer tokens for REST communications.

Despite these two changes, the application still doesn't work with Splunk Enterprise Security enabled. It works without a problem when it is disabled (for example, when testing in the Docker Splunk dev environment).

I've also tried to build a simple debug handler based on splunk.rest.BaseRestHandler. When I try to call it directly with Splunk Enterprise Security enabled, I get the following error:

ERROR - HTTP 500 Error starting: Can't load script "/opt/splunk/etc/apps/ipinfo_app/bin/debug_endpoint.py" -- Error starting: Can't load script "/opt/splunk/etc/apps/ipinfo_app/bin/debug_endpoint.py"

I haven't been able to track down this particular error in the Splunk forums or elsewhere on the Internet. If anyone has insight on this problem, I would appreciate any help. Thank you!
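Roughly, the SDK-based connection now looks like this (host, port, and token values here are placeholders, not the app's actual settings):

import splunklib.client as client

# Connect to splunkd with a bearer token instead of requests + verify=False;
# the SDK sends it as "Authorization: Bearer <token>".
service = client.connect(
    host="localhost",              # placeholder management host
    port=8089,                     # default splunkd management port
    splunkToken="<bearer-token>",  # placeholder token
)

# Smoke test: list installed apps over the authenticated session.
for app in service.apps:
    print(app.name)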
Hi @MichaelM1, Does your test script fail at ~1000 connections when sending a handshake directly to the intermediate forwarder input port and not your server script port? Completing a handshake and sending no data while holding the connection open should work. The splunktcp input will not reset the connection for at least (by default) 10 minutes (see the inputs.conf splunktcp s2sHeartbeatTimeout setting). It still seems as though there may be a limit at the firewall specific to your splunktcp port, but the firewall would be logging corresponding drops or resets. The connection(s) from the intermediate forwarder to the downstream receiver(s) shouldn't directly impact new connections from forwarders to the intermediate forwarder, although blocked queues may prevent new connections or close existing ones. Have you checked metrics.log on the intermediate forwarder for blocked=true events? A large number of streams moving through a single pipeline on an intermediate forwarder will likely require increasing queue sizes or adding pipelines.
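For example, something like this should surface any blocking (the host filter is a placeholder for your intermediate forwarder):

index=_internal host=<intermediate_forwarder> source=*metrics.log* group=queue blocked=true
| stats count by name

A non-trivial count against queue names such as parsingqueue or the tcpout queues will point at where the pipeline is backing up.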
@darrfang When I tried, I think it listed the two pairs within the same row in the table, whereas the second mvexpand broke them into their own rows. I guess it depends what you're going to do with the data, but if you wanted to sort or filter you might want them expanded further. Either way, I'm glad it worked out for you - thanks for letting me know!
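For example, with made-up values mirroring that answer's key_value field:

| makeresults
| eval key_value=mvappend("aaa:12345", "bbb:23456")
| table key_value

returns one row holding both pairs as a multivalue field; adding | mvexpand key_value before the table gives each pair its own row.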
I am not entirely clear on your requirement. Let's assume for argument's sake that M is 4 and N is 6. If a user accesses the same account for sensitive information for 5 days in a row, does that count as 1 visit for all 5 days, or only for days 4 and 5? And does it only count if at least 4 of the days are in the sliding window of 6 days, or when any of the 5 days are in the 6-day window?
Excellent - mvmap is very powerful when handling MV fields. One small point I noticed on your use of single quotes to wrap field names. Single quotes are essential where field names start with numbers, or contain spaces or other special characters (some characters are OK, e.g. underscore); without them the search just won't work. Here, you have one use of substr(session_time, ...) where you are not using single quotes, and it works because they're not needed for either of the two field names involved - so it's good to be aware of when quoting is necessary on the right-hand side of an eval. When you know the field name, you can make the call, but a command that is very useful is foreach, and when using its templated fields <<FIELD>>, it's generally good practice to always wrap that usage.
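For example (the field names here are invented for illustration):

| makeresults
| eval "1st_score"=5, second_score=10
| foreach *score [ eval total=coalesce(total, 0) + '<<FIELD>>' ]

Wrapping '<<FIELD>>' in single quotes means the template still works when it expands to a name like 1st_score, which eval would otherwise misread.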
Hi @livehybrid, this works like magic! Thanks a lot for giving me the insights! Just wondering, what's the reason you did mvexpand twice here? I did some tests and it seems that if I remove `| mvexpand data_value` I can still get the same results/format.
Okay, reporting back. Your advice was sound. I managed to drop the mvexpand altogether by substituting in this bit of logic at the end.

| stats values(dhcp_host_name) as Hostname values(Switch) as Switch values(Port) as Port values(ip_add) as IP_Address values(mac_add) as MAC_Address list(session_time) as session_time by Session_ID, Auth
| eval Time=mvmap('session_time', if(substr('session_time', 1, len('Session_ID'))=='Session_ID', substr(session_time, len('Session_ID')+2), null()))
| table Time Hostname IP_Address MAC_Address Switch Port Auth
| sort -Time

Just so you have an example of what my data might look like for one event:

Session_ID="7D5A007C1B294E"
session_time="7D5A007C1B294E,2025-02-11 12:56:51"
             "9DE81CAB15DD46,2025-02-06 15:22:13"

By using mvmap, I iterate through session_time and check each value to find which one equals that event's Session_ID. Then I extract the Time field from that value.
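For anyone following along, the same trick can be reproduced standalone with the sample values above (makeresults stands in for the real events):

| makeresults
| eval Session_ID="7D5A007C1B294E"
| eval session_time=mvappend("7D5A007C1B294E,2025-02-11 12:56:51", "9DE81CAB15DD46,2025-02-06 15:22:13")
| eval Time=mvmap('session_time', if(substr('session_time', 1, len('Session_ID'))=='Session_ID', substr(session_time, len('Session_ID')+2), null()))
| table Session_ID Time

Only the value whose prefix matches the event's Session_ID survives, and the +2 skips past the matching ID plus the comma.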
In Securonix's SIEM, we can manually create cases through Spotter by generating an alert and then transferring those results into an actual incident on the board. Is it possible to do something similar in Splunk? Specifically, I have a threat hunting report that I've completed, and I'd like to document it in an incident, similar to how it's done in Securonix. The goal is to extract a query from the search results, create an incident, and generate a case ID to help track the report. Is there a way to accomplish this in Splunk so that it can be added to the incident review board for documentation and tracking purposes?
I believe some developer license requests go to an approver, so this might be the cause of the delay. Is your email account associated with a Splunk subscription/license (Cloud or Enterprise)? Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
I would suggest checking your spam folder, but also note that the setup of your Splunk Cloud Trial stack might take a little time. How long has it been since you requested this? Has it been more than a couple of hours? Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
It's not possible within Splunk to have a crontab for the first Sunday of the month; however, you might be able to run it every day for the first 7 days of the month (`30 9 1-7 * *`) and add the following to your search:

| where strftime(now(),"%a")=="Sun"

This will stop the search from returning results if it isn't Sunday. Does this help? Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
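Put together, the scheduled search might look something like this (the base search is a placeholder for your own):

index=main sourcetype=<your_sourcetype>
| where strftime(now(),"%a")=="Sun"
| stats count by host

scheduled with `30 9 1-7 * *`, so on the six non-Sunday runs the where clause discards everything and the alert has nothing to fire on.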
Hi, I requested a developer license but I haven't received an email. My email is valid because I received an email for resetting my password. Thank you
Hi, I checked that the fields are correct, but I want to know whether using spath to extract from JSON gives the same result as using props and transforms. I am getting the same unstructured message value as before after using the below statement in the conf file:

[securelog_override_raw]
INGEST_EVAL = message := json_extract(_raw, "message")

The value in message is still unstructured, with lots of messed-up data like below. Do I have to separate this myself in this case using search queries?

2025-02-11 20:20:46.192 [com.bootstrapserver.runtim] DEBUG Stability run result : com.cmp.bootstrapserver.runtime.internal.api.RStability@36464gf
2025-02-11 20:20:46 [com.bootstrapserver.runtim] DEBUG Stability run result :com.cmp.bootstrapserver.runtime.interndal.api.RStability@373638cgf

After spath, the same message came from the message field, and now, using the conf file with props and transforms, it's still the same. Will it only extract like this?
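If it comes to that, would something like this at search time be the way to split it up (timestamp, logger, level, and msg are field names I made up)?

| rex field=message "^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(?:\.\d+)?)\s+\[(?<logger>[^\]]+)\]\s+(?<level>\w+)\s+(?<msg>.*)$"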
Hi, I activated a cloud trial but I didn't receive an activation email, and the "access instance" button is disabled. My email is valid because I received a mail for resetting my password. Thank you
Hey @darrfang How about this?

| makeresults
| eval _raw="{ \"key1\": { \"key2\": { \"key3\": [ {\"data_value\": {\"aaa\": \"12345\", \"bbb\": \"23456\"}} ] } } }"
| spath input=_raw output=data_value path=key1.key2.key3{}.data_value
| mvexpand data_value
| eval key_value=split(replace(data_value, "[\{\}\"]", ""), ",")
| mvexpand key_value
| rex field=key_value "\s?(?<node>[^:]+):(?<value>.*)"
| table node value

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
I am trying to call the applicationFlowMapUiService from Postman but getting an error. I can fetch other restui services just fine from the Postman client, but this one is giving me the following error:

"HTTP ERROR 500 javax.servlet.ServletException: javax.ejb.EJBException: Identity Entity must be of type "USER" to retrieve user."

The API I am calling looks like:

{{controller_host}}/controller/restui/applicationFlowMapUiService/application/{{application}}?time-range={{timerange}}&mapId=-1&baselineId=-1&forceFetch=false

Any ideas? That's the one you wanted me to use, right?
Thank you! There's a lot of good stuff here I'll try and incorporate into the search. I will say, however, that session_time doesn't actually have duplicates. It contains each unique timestamp for each Session_ID... however, as every event has its own copy of session_time, when I do mvexpand it leads to multiple duplicates that I need to take out with dedup. Ideally, it'd be nice if each entry of session_time within each event only had the time and session_id for that specific event and not all events. I'm going to see if maybe I can use mvmap to check each event's Session_ID against the session_id part of session_time. Sorry, I know it's convoluted. Still, I'm making progress thanks to your suggestions~
Hi Splunk team, I have a question about how to extract the key-value pairs from JSON data. Let's say for example I have two raw events like this:

# raw data 1:
{ "key1": { "key2": { "key3": [ {"data_value": {"aaa": "12345", "bbb": "23456"}} ] } } }
# raw data 2:
{ "key1": { "key2": { "key3": [ {"data_value": {"ccc": "34567"}} ] } } }

How can I extract the key-value results in all the data_value objects into a table like this:

node value
aaa 12345
bbb 23456
ccc 34567

I currently have a Splunk query that does part of it:

```some search...```
| spath output=pairs path=key1.key2.key3{}.data_value
| rex field=pairs "\"(?<node>[^\"]+)\":\"(?<value>[^\"]+)\""
| table node value pairs

but this only extracts the first pair from each event; the result looks like below and ignores the data of "bbb":"23456". Please give me some advice on how to grab all the results, thanks!

node value pairs
aaa 12345 {"aaa": "12345", "bbb": "23456"}
ccc 34567 {"ccc": "34567"}
Hi All, Trying to configure an alert that runs only on the first Sunday of every month, specifically at 9:30am. I put this as the cron expression: 30 9 1-7 * 0 If I'm reading the documentation correctly, that should be it. However, the alert appears to be running every Sunday of the month instead of just the first. Am I doing something wrong? Can't figure it out... Thanks!
There is a Proofpoint add-on and we have it installed, but we need some kind of bulk-processing capability - for example, listing all messages from a given sender, IP, etc.