
All Posts

OK, let's pick the discussion back up... upgrading from 7.x to 8.x, new servers, new infrastructure... same annoying message. We saw how to hide warning message icons in dashboards. Well, this is good! Now, how do we set a threshold for the ms, or remove the message from the UI entirely? It is becoming very, very annoying!!!
Yep! Let's leave it as stated... if someone else wants to add something, you're welcome. tcp_Kprocessed == KB received by the receiver as a packet of events; kb == the real KB (compressed) written to indexer storage. Explicit and simple: tcp_Kprocessed == the network throughput of the event packets; kb == the compressed data written to indexer storage for the previous packet.
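If it helps anyone reading along, here is a quick sketch to eyeball the two side by side (an assumption on my part: that both fields appear in the same group=tcpin_connections events in metrics.log, as discussed above):

index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=5m sum(tcp_Kprocessed) as network_KB_received, sum(kb) as indexed_KB_written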
Please take a look at this blog post: https://www.linkedin.com/pulse/unveiling-secrets-enhancing-adobe-aem-performance-through-kulkarni-xrzec/ - Optimizing Adobe AEM Site Performance: A Deep Dive with Splunk RZ/SPL
Hello Team, I am trying to set up a proxy on a Splunk Heavy Forwarder. I did it by setting the http_proxy environment variable, but Splunk's Python is not honouring the environment variable set on the Linux machine where the HF is installed. If I run the Python script directly, it gets the data through the proxy; if I run the same script with /opt/splunk/bin/splunk cmd python, it does not go through the proxy. Is there any way to make Splunk honour environment variables?
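For reference, one thing I am considering (a sketch only; I am not sure this is the supported way): splunk-launch.conf can set environment variables for splunkd and its child processes, so something like the lines below might make "splunk cmd python" see the proxy. The proxy host and port here are placeholders for my environment.

# $SPLUNK_HOME/etc/splunk-launch.conf
# Hypothetical proxy address - adjust to your environment
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=localhost,127.0.0.1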
Dear All,

Scenario--> One AV server has multiple endpoints reporting to it. This AV server is integrated with Splunk, and through the AV server we are receiving DAT version info for all the reporting endpoints.

Requirement--> Need to generate a monthly AV DAT compliance report. The criterion for DAT compliance is 7 days: within 7 days a system should be updated to the latest DAT.

Work done till now--> There is no intelligence in the data to get the latest DAT from the AV-Splunk logs; only endpoints that are updated with the Nth DAT are coming in. I used the eval command and tied the latest/today DAT to today's date (used today_date --convert--> today_DAT). Based on that, I am able to calculate the DAT compliance for 7 days, keeping today_DAT for the 8th day as the reference. This Splunk query gives correct data for any time frame, but only for the past 7 days' compliance.

Issue--> For the past 30 days, i.e. the 25th to the 25th of every month, I want to divide the logs into 7-day time frames starting from e.g. 25th Dec, 1st Jan, 8th Jan, 15th Jan, 22nd Jan, up to 25th Jan (last slot less than 7 days), then calculate compliance for each 7-day time frame to know the overall compliance on 25th Jan, and accordingly roll up the 25th Dec through 25th Jan data into the final monthly report.

Where stuck--> In the current query I tried to add the "bin" command for 7 days, but I am unable to tie the latest DAT date (the today_DAT date for the 1st Jan) to the 7th day for the first bin, then the 8th Jan for the second bin, and so on. In case there is any other method/query to do the same, kindly let me know. A rough sketch of what I am attempting is below.

PFA screenshot for your reference. @PickleRick @ITWhisperer @yuanliu
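For illustration only (not my actual query; the index, sourcetype, endpoint, and dat_version names are placeholders, and it assumes DAT versions are numeric and released roughly once per day):

index=av sourcetype=av:dat
| bin _time span=7d
``` per-bin reference: the newest DAT version seen within each 7-day slot ```
| eventstats max(dat_version) as reference_dat by _time
``` one row per endpoint per slot, using its latest reported DAT ```
| stats latest(dat_version) as dat_version, latest(reference_dat) as reference_dat by _time, endpoint
``` compliant if within 7 DAT releases (~7 days) of the slot's reference ```
| eval compliant=if(reference_dat - dat_version <= 7, 1, 0)
| stats sum(compliant) as compliant_endpoints, count as total_endpoints by _time
| eval compliance_pct=round(100*compliant_endpoints/total_endpoints, 1)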
@gcusello Which one would be better - running it daily or in real time? Can you please suggest? We are into security-specific use cases.
Looks like the HEC event is sending the "Request.body" field as a string literal, and Splunk's KV_MODE=json (or INDEXED_EXTRACTIONS=json) is extracting it that way. It should still be possible to extract it from there, but if you want it extracted at search time you may need to either adjust the source sending the data so that "Request.body" is a valid JSON array, or set up a calculated field in props.conf to get the desired fields extracted (regex is also an option using props/transforms).

Here is some SPL on simulated data to give an example of what I think is going on and how Splunk is extracting the fields.

| makeresults
| eval _raw="{\"ParentId\": \"\", \"Request\": {\"type\": \"RequestLogDTO\", \"body\": \"[ { \\\"recordLocator\\\": \\\"RYVBNQ\\\" } ]\", \"hostname\": \"IT-SALI\" } }", example="Example 1: JSON Payload sent with 'Request.body{}' as a string wrapped in quotes"
| spath input=_raw
| append
    [| makeresults
    | eval _raw="{\"ParentId\": \"\", \"Request\": {\"type\": \"RequestLogDTO\", \"body\": [ { \"recordLocator\": \"RYVBNQ\" } ], \"hostname\": \"IT-SALI\" } }", example="Example 2: JSON Payload sent with 'Request.body{}' as a json array (no quotes)"
    ``` The spath below better represents how Splunk would parse it at search time using KV_MODE=json if the array weren't wrapped in double quotes ```
    | spath input=_raw]
| fields - _time
| fields + example, _raw, "Request.body", "Request.body{}.recordLocator"

Output looks something like this.

It is also possible to still get those fields in your search pipeline if that is the route you want to go.

| makeresults
| fields _time
| eval _raw="{\"ParentId\": \"\", \"Request\": {\"type\": \"RequestLogDTO\", \"body\": \"[ { \\\"recordLocator\\\": \\\"RYVBNQ\\\", \\\"depStartDate\\\": \\\"2023-12-14T14:00:19.671Z\\\", \\\"depEndDate\\\": \\\"2023-12-15T09:20:19.671Z\\\" } ]\", \"hostname\": \"IT-SALI\" } }", example="Example 1: JSON Payload sent with 'Request.body{}' as a string wrapped in quotes"
| spath input=_raw
``` This spath command extracts the fields in "Request.body" fully; since it is an array, they come out with field names prefixed with "{}." ```
| spath input=Request.body
| fields + _raw, Request.body, "{}.*"

Snapshot of the expected output.

From here you can rename the funky "{}.*" fields with:

| rename "{}.*" as *
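For the calculated-field route mentioned above, a rough sketch of what that could look like in props.conf (the sourcetype name is a placeholder; this relies on spath() as an eval function, which returns a multivalue result for arrays):

[your:hec:sourcetype]
# Hypothetical calculated field: pull recordLocator out of the string-wrapped JSON array in Request.body
EVAL-recordLocator = spath('Request.body', "{}.recordLocator")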
@lujr ... may we know when was the last time you used the Palo Alto Networks app to view and filter network traffic activity? If it was very long ago, then probably that feature has become obsolete / been replaced with a "new name". I installed the latest app - 8.1.1 (Nov 2023 release date)... it has a "Network Security" dashboard. Maybe this is what you are looking for, "maybe".
One more question please... are you able to use the "spath" Splunk command to view each JSON field separately? https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Spath If yes, then you can use regular expressions (rex) to search the portion of the field you are looking for.
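For example, a minimal sketch (requestBody is the field name from your post; the index/sourcetype names, the regex, and the recordLocator example value are just for illustration):

index=your_index sourcetype=your_sourcetype
| spath input=requestBody
``` if spath cannot expand the string-wrapped payload, fall back to rex ```
| rex field=requestBody "\"recordLocator\"\s*:\s*\"(?<recordLocator>[^\"]+)\""
| search recordLocator="RYVBNQ"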
Hi, kinda new to Splunk. I'm sending data to Splunk via HEC. It's a DTO which contains various fields, one of them being requestBody, which is a string containing the JSON payload my endpoint is receiving. When viewing the log event within Splunk, requestBody stays a string. I was hoping it could be expanded so that the JSON fields would be searchable. As you can see, when I click on "body", the whole line is selected. I am hoping for, for example, "RYVBNQ" to be individually selectable so that I can run searches against it.
For those who like to learn the different error codes and their details:

Possible error codes

The following status codes have particular meaning for all HTTP Event Collector endpoints:

Status code ID | HTTP status code        | Status message
0              | 200 OK                  | Success
1              | 403 Forbidden           | Token disabled
2              | 401 Unauthorized        | Token is required
3              | 401 Unauthorized        | Invalid authorization
4              | 403 Forbidden           | Invalid token
5              | 400 Bad Request         | No data
6              | 400 Bad Request         | Invalid data format
7              | 400 Bad Request         | Incorrect index
8              | 500 Internal Error      | Internal server error
9              | 503 Service Unavailable | Server is busy
10             | 400 Bad Request         | Data channel is missing
11             | 400 Bad Request         | Invalid data channel
12             | 400 Bad Request         | Event field is required
13             | 400 Bad Request         | Event field cannot be blank
14             | 400 Bad Request         | ACK is disabled
15             | 400 Bad Request         | Error in handling indexed fields
16             | 400 Bad Request         | Query string authorization is not enabled

More info on HEC troubleshooting: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/TroubleshootHTTPEventCollector

Thanks, have a great day!
Well, this requires some testing to find out whether it is really an issue. If yes, then my view is: the forwarder management in DMC works as per its design. It may have some "frequency" for when it reads and loads the forwarders' info; at times a small delay is acceptable. If no, then may we know your Splunk version details please, and could you describe the delay you observed? I will try to find more details for you. Thanks for learning Splunk. Have a great day!
Hi @Tomasz.Nawojczyk, Thank you so much for following up and sharing the solution.
@heskez Hello there, I am having this same issue (stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 5206). Were you able to get this resolved? If so, how did you fix it? I've searched for documentation but found nothing specific to this problem. Thanks.
Does the Palo Alto Networks App no longer have a page where you can view and filter out network traffic activity?
Hello, this has been resolved by adding the following to the pom.xml.

Inside <dependencies>, add the following:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.26</version>
    <scope>provided</scope>
</dependency>

Inside <plugins>:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <testFailureIgnore>true</testFailureIgnore>
    </configuration>
</plugin>

The plugin part is not necessary, as we can run the following to skip all tests:

mvn clean install -DskipTests=true
Why aren't you answering me?
Error 403 means the token is incorrect or disabled.  Check that the curl command has the right token.
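For example, a quick way to test a token from the command line (the hostname, port, and token below are placeholders):

curl -k "https://splunk.example.com:8088/services/collector/event" \
     -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
     -d '{"event": "HEC token test"}'

# A wrong or disabled token returns HTTP 403 with a body like:
# {"text":"Invalid token","code":4}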
That's a fair question. So, first let me break the search into a less nested form, then describe the intentions/semantics. (As always, consult syntax and usage in the Search Reference.)

| makeresults
| eval ip = split("119.0.6.159,62.0.3.75,63.0.3.84,75.0.3.80,92.0.4.159", ",")
``` data emulation above ```
| eval idx = mvrange(0,4)
| foreach ip mode=multivalue
    [| eval interim1 = split(<<ITEM>>, "."),
        interim2 = mvmap(idx, printf("%.3d", tonumber(mvindex(interim1, idx)))),
        interim3 = mvjoin(interim2, "."),
        sorted_ip_padded = mvappend(sorted_ip_padded, interim3)]
| eval sorted_ip_padded = mvsort(sorted_ip_padded)
| foreach sorted_ip_padded mode=multivalue
    [| eval interim4 = split(<<ITEM>>, "."),
        interim5 = mvmap(idx, printf("%d", tonumber(mvindex(interim4, idx)))),
        interim6 = mvjoin(interim5, "."),
        sorted_ip = mvappend(sorted_ip, interim6)]

(I'm using a different approach for your homework than @tscroggins's proposal because foreach is easier to break down. I also use a more symmetric approach so it can be explained more readily.)

Here is a key card behind the formula:

split - Splitting an IPv4 address on "." yields a 4-element array, indexed from 0 to 3.
mvrange - Generates a 4-element array with values from 0 to 3.
mvsort - A lexicographic sort of array elements. For 0-padded IPv4 addresses, this is equivalent to the numeric sort you desired.
printf - Pads ("%.3d") and unpads ("%d") each octet.

To examine the formula, first take a look at the output below (multivalue fields are shown space-separated; interim1 through interim6 hold the values from the last loop iteration):

idx: 0 1 2 3
interim1: 92 0 4 159
interim2: 092 000 004 159
interim3: 092.000.004.159
interim4: 119 000 006 159
interim5: 119 0 6 159
interim6: 119.0.6.159
ip: 119.0.6.159 62.0.3.75 63.0.3.84 75.0.3.80 92.0.4.159
sorted_ip: 62.0.3.75 63.0.3.84 75.0.3.80 92.0.4.159 119.0.6.159
sorted_ip_padded: 062.000.003.075 063.000.003.084 075.000.003.080 092.000.004.159 119.000.006.159

Inside each foreach loop, the following steps are followed:

1. Break the IPv4 address into octets. (interim1, interim4)
2. Pad/unpad each octet. (interim2, interim5)
3. Reassemble the IPv4 address with/without padding. (interim3, interim6)
4. Append the padded/unpadded IPv4 address to an array. (sorted_ip_padded, sorted_ip)

Now, the only nested element is the padding/unpadding: mvmap(idx, printf("%.3d", tonumber(mvindex(interim1, idx)))) / mvmap(idx, printf("%d", tonumber(mvindex(interim4, idx)))). Because of the way tonumber works in SPL, it cannot be broken down further. But the principle is not too complicated: idx is the index of the octet, so it is an iteration of printf("%d", octet) over each octet, where octet = mvindex(interim4, idx) wrapped in tonumber(). Hope this helps.
You may be able to use a streamstats method for this instead if you don't want to deal with lookups, although you may need to set up alert throttling depending on how big your search time window is.

| search index="XXXX" invoked_component="YYYYY" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Egypt"
| sort 0 +Country, +serialNumber, +_time
| streamstats window=2 first(onlineStatus) as previous_onlineStatus, last(onlineStatus) as next_onlineStatus by serialNumber
| table _time, Country, serialNumber, onlineStatus, previous_onlineStatus, next_onlineStatus
| eval trigger_condition=if( NOT 'next_onlineStatus'=='previous_onlineStatus', 1, 0 ),
    scenario=case(
        'previous_onlineStatus'=="ONLINE" AND 'next_onlineStatus'=="OFFLINE", "Host's online status went down",
        'previous_onlineStatus'=="OFFLINE" AND 'next_onlineStatus'=="ONLINE", "Host's online status came up" )
``` Set up alert throttle on Country, serialNumber & _time ```
| where 'trigger_condition'>0