
@gcusello Which would be better, running it daily or in real time? Can you please suggest? We are focused on security-specific use cases.
Looks like the HEC event is sending the "Request.body" field as a string literal, and Splunk's KV_MODE=json (or INDEXED_EXTRACTIONS=json) is extracting it that way. It should still be possible to extract the inner fields, but if you want them extracted at search time you may need to either adjust the source sending the data so that "Request.body" is a valid JSON array, or set up a calculated field in props.conf to get the desired fields extracted (regex is also an option using props/transforms). Here is some SPL over simulated data to illustrate what I think is going on and how Splunk is extracting the fields.

| makeresults
| eval _raw="{\"ParentId\": \"\", \"Request\": {\"type\": \"RequestLogDTO\", \"body\": \"[ { \\\"recordLocator\\\": \\\"RYVBNQ\\\" } ]\", \"hostname\": \"IT-SALI\" } }", example="Example 1: JSON payload sent with 'Request.body{}' as a string wrapped in quotes"
| spath input=_raw
| append
    [| makeresults
     | eval _raw="{\"ParentId\": \"\", \"Request\": {\"type\": \"RequestLogDTO\", \"body\": [ { \"recordLocator\": \"RYVBNQ\" } ], \"hostname\": \"IT-SALI\" } }", example="Example 2: JSON payload sent with 'Request.body{}' as a JSON array (no quotes)"
     ``` The spath below better represents how Splunk would parse it at search time using KV_MODE=json if the array weren't wrapped in double quotes ```
     | spath input=_raw]
| fields - _time
| fields + example, _raw, "Request.body", "Request.body{}.recordLocator"

The output looks something like this. It is also possible to extract those fields in your search pipeline if that is the route you want to go.
| makeresults
| fields _time
| eval _raw="{\"ParentId\": \"\", \"Request\": {\"type\": \"RequestLogDTO\", \"body\": \"[ { \\\"recordLocator\\\": \\\"RYVBNQ\\\", \\\"depStartDate\\\": \\\"2023-12-14T14:00:19.671Z\\\", \\\"depEndDate\\\": \\\"2023-12-15T09:20:19.671Z\\\" } ]\", \"hostname\": \"IT-SALI\" } }", example="Example 1: JSON payload sent with 'Request.body{}' as a string wrapped in quotes"
| spath input=_raw
``` This spath command extracts the fields in "Request.body" fully; since it is an array, they will be available with field names carrying a leading "{}." ```
| spath input=Request.body
| fields + _raw, Request.body, "{}.*"

From here you can rename the funky "{}.*" fields with: | rename "{}.*" as *
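If you end up fixing this on the sender side instead, here is a minimal Python sketch (a stdlib-only illustration, not the poster's actual producer; field names mirror the examples above) contrasting the two payload shapes:

```python
import json

record = [{"recordLocator": "RYVBNQ"}]

# Shape 1: body is serialized separately and embedded as a string literal,
# so Splunk's KV_MODE=json stops extracting at the surrounding quotes.
as_string = {"Request": {"type": "RequestLogDTO", "body": json.dumps(record)}}

# Shape 2: body is nested directly, so it is sent as a real JSON array and
# Splunk can extract Request.body{}.recordLocator at search time.
as_array = {"Request": {"type": "RequestLogDTO", "body": record}}

print(json.dumps(as_string))
print(json.dumps(as_array))
```

Round-tripping the first shape yields a string where the second yields a list, which is exactly the difference Splunk sees.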
@lujr ... may we know when you last used the Palo Alto Networks app to view and filter network traffic activity? If it has been a long time, that feature may have become obsolete or been replaced under a new name. I installed the latest app, 8.1.1 (November 2023 release date), and it has this "Network Security" dashboard. Maybe this is what you are looking for, "maybe".
One more question, please: are you able to use the "spath" Splunk command to view each JSON field separately? https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Spath If yes, then you can use regular expressions (rex) to search the portion of the field you are looking for.
Hi, kind of new to Splunk. I'm sending data to Splunk via HEC. It's a DTO that contains various fields, one of them being requestBody, which is a string containing the JSON payload my endpoint is receiving. When viewing the log event within Splunk, requestBody stays a string. I was hoping it could be expanded so that the JSON fields would be searchable. As you can see, when I click on "body", the whole line is selected. I am hoping for, for example, "RYVBNQ" to be individually selectable so that I can run searches against it.
For those who like to learn the different error codes and their details:

Possible error codes

The following status codes have particular meaning for all HTTP Event Collector endpoints:

Status code | HTTP status code        | Status message
0           | 200 OK                  | Success
1           | 403 Forbidden           | Token disabled
2           | 401 Unauthorized        | Token is required
3           | 401 Unauthorized        | Invalid authorization
4           | 403 Forbidden           | Invalid token
5           | 400 Bad Request         | No data
6           | 400 Bad Request         | Invalid data format
7           | 400 Bad Request         | Incorrect index
8           | 500 Internal Error      | Internal server error
9           | 503 Service Unavailable | Server is busy
10          | 400 Bad Request         | Data channel is missing
11          | 400 Bad Request         | Invalid data channel
12          | 400 Bad Request         | Event field is required
13          | 400 Bad Request         | Event field cannot be blank
14          | 400 Bad Request         | ACK is disabled
15          | 400 Bad Request         | Error in handling indexed fields
16          | 400 Bad Request         | Query string authorization is not enabled

More info on HEC troubleshooting: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/TroubleshootHTTPEventCollector

Thanks, have a great day!
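For client-side logging, the table above can be turned into a small lookup. This is a hypothetical helper (the dict values are copied from the table; `describe` is not part of any Splunk SDK):

```python
# HEC response codes from the table above: code ID -> (HTTP status, message).
HEC_CODES = {
    0: ("200 OK", "Success"),
    1: ("403 Forbidden", "Token disabled"),
    2: ("401 Unauthorized", "Token is required"),
    3: ("401 Unauthorized", "Invalid authorization"),
    4: ("403 Forbidden", "Invalid token"),
    5: ("400 Bad Request", "No data"),
    6: ("400 Bad Request", "Invalid data format"),
    7: ("400 Bad Request", "Incorrect index"),
    8: ("500 Internal Error", "Internal server error"),
    9: ("503 Service Unavailable", "Server is busy"),
    10: ("400 Bad Request", "Data channel is missing"),
    11: ("400 Bad Request", "Invalid data channel"),
    12: ("400 Bad Request", "Event field is required"),
    13: ("400 Bad Request", "Event field cannot be blank"),
    14: ("400 Bad Request", "ACK is disabled"),
    15: ("400 Bad Request", "Error in handling indexed fields"),
    16: ("400 Bad Request", "Query string authorization is not enabled"),
}

def describe(code):
    """Translate the 'code' field of a HEC JSON response into readable text."""
    status, message = HEC_CODES.get(code, ("?", "Unknown code"))
    return f"{status}: {message}"

print(describe(4))
```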
Well, this requires some testing to find out whether this is really an issue. If yes, then my view is that forwarder management in DMC works as designed: it may have some "frequency" for when it reads and loads the forwarders' info, so at times a small delay is expected. If no, then may we know your Splunk version details, please, and could you describe what delay you observed? I will try to find more details for you. Thanks for learning Splunk. Have a great day!
Hi @Tomasz.Nawojczyk, Thank you so much for following up and sharing the solution.
@heskez Hello there, I am having this same issue (stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 5206). Were you able to get this resolved? If so, how did you fix it? I've searched for documentation but found nothing specific to this problem. Thanks
Does the Palo Alto Networks App no longer have a page where you can view and filter out network traffic activity?
Hello, this has been resolved by adding the following to the pom.xml.

Inside <dependencies>, add:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.26</version>
    <scope>provided</scope>
</dependency>

Inside <plugins>:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <testFailureIgnore>true</testFailureIgnore>
    </configuration>
</plugin>

The plugin part is not necessary, as we can instead run the following to skip all tests: mvn clean install -DskipTests=true
Why aren't you answering my question?
Error 403 means the token is incorrect or disabled.  Check that the curl command has the right token.
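One way to verify the request shape without sending anything is to build it locally. A stdlib-only sketch (the URL and token below are placeholders, not real values):

```python
import json
import urllib.request

# Hypothetical endpoint and token -- substitute your own values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

payload = json.dumps({"event": {"message": "hello"}}).encode("utf-8")

# A 403 usually means this header is missing, or carries a wrong or
# disabled token; HEC expects the literal scheme "Splunk <token>".
req = urllib.request.Request(
    HEC_URL,
    data=payload,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
)

print(req.get_header("Authorization"))
```

The same header maps to curl's -H "Authorization: Splunk <token>" flag.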
That's a fair question. So, first let me break the search into a less nested form, then describe the intentions/semantics. (As always, consult syntax and usage in the Search Reference.)

| makeresults
| eval ip = split("119.0.6.159,62.0.3.75,63.0.3.84,75.0.3.80,92.0.4.159", ",")
``` data emulation above ```
| eval idx = mvrange(0,4)
| foreach ip mode=multivalue
    [| eval interim1 = split(<<ITEM>>, "."),
       interim2 = mvmap(idx, printf("%.3d", tonumber(mvindex(interim1, idx)))),
       interim3 = mvjoin(interim2, "."),
       sorted_ip_padded = mvappend(sorted_ip_padded, interim3)]
| eval sorted_ip_padded = mvsort(sorted_ip_padded)
| foreach sorted_ip_padded mode=multivalue
    [| eval interim4 = split(<<ITEM>>, "."),
       interim5 = mvmap(idx, printf("%d", tonumber(mvindex(interim4, idx)))),
       interim6 = mvjoin(interim5, "."),
       sorted_ip = mvappend(sorted_ip, interim6)]

(I'm using a different approach from @tscroggins's proposal to fill your homework because foreach is easier to break down. I also use a more symmetric approach so it can be explained more readily.)

Here is a key card behind the formula:

split - splitting an IPv4 address yields a 4-element array, indexed from 0 to 3.
mvrange - generates a 4-element array with values from 0 to 3.
mvsort - a lexicographic sort of array elements; for 0-padded IPv4 addresses, this is equivalent to the numeric sort you desired.
printf - pads and unpads each octet.

To examine the formula, look at the output: for the sample ip list, sorted_ip_padded becomes 062.000.003.075, 063.000.003.084, 075.000.003.080, 092.000.004.159, 119.000.006.159, and sorted_ip becomes 62.0.3.75, 63.0.3.84, 75.0.3.80, 92.0.4.159, 119.0.6.159, with the interim1-interim6 fields holding the intermediate arrays described below.

Inside the foreach loops, the following steps are followed:

1. Break the IPv4 address into octets. (interim1, interim4)
2. Pad or unpad each octet. (interim2, interim5)
3. Reassemble the IPv4 address with/without padding. (interim3, interim6)
4. Assemble the padded/unpadded IPv4 addresses into an array. (sorted_ip_padded, sorted_ip)

Now, the only nested element is the padding/unpadding: mvmap(idx, printf("%.3d", tonumber(mvindex(interim1, idx)))) / mvmap(idx, printf("%d", tonumber(mvindex(interim4, idx)))). Because of the way tonumber works in SPL, it cannot be broken down further. But the principle is not too complicated: idx is the index of the octet, so it is an iteration of printf("%d", octet) over each octet, where octet = mvindex(interim4, idx) wrapped in tonumber(). Hope this helps.
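The pad / lexicographic-sort / unpad idea is language-independent; a small Python sketch of the same steps (same sample addresses, not SPL-equivalent code):

```python
ips = ["119.0.6.159", "62.0.3.75", "63.0.3.84", "75.0.3.80", "92.0.4.159"]

# Pad each octet to 3 digits (the printf("%.3d", ...) step).
padded = [".".join(f"{int(octet):03d}" for octet in ip.split(".")) for ip in ips]

# Lexicographic sort of padded addresses == numeric sort (the mvsort step).
padded.sort()

# Strip the padding back off (the printf("%d", ...) step).
sorted_ips = [".".join(str(int(octet)) for octet in ip.split(".")) for ip in padded]

print(sorted_ips)
```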
You may be able to use a streamstats method for this instead if you don't want to deal with lookups, although you may need to set up alert throttling depending on how big your search time window is.

| search index="XXXX" invoked_component="YYYYY" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Egypt"
| sort 0 +Country, +serialNumber, +_time
| streamstats window=2 first(onlineStatus) as previous_onlineStatus, last(onlineStatus) as next_onlineStatus by serialNumber
| table _time, Country, serialNumber, onlineStatus, previous_onlineStatus, next_onlineStatus
| eval trigger_condition=if(NOT 'next_onlineStatus'=='previous_onlineStatus', 1, 0),
       scenario=case(
           'previous_onlineStatus'=="ONLINE" AND 'next_onlineStatus'=="OFFLINE", "Host's online status went down",
           'previous_onlineStatus'=="OFFLINE" AND 'next_onlineStatus'=="ONLINE", "Host's online status came up")
``` Set up alert throttle on Country, serialNumber & _time ```
| where 'trigger_condition'>0
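The streamstats window=2 trick amounts to comparing each status with the previous one per serial number. A rough Python analogue (sample data invented for illustration):

```python
# Events sorted by serial, then time -- mirroring the SPL's sort step.
events = [
    ("SN001", "ONLINE"), ("SN001", "OFFLINE"),   # transition -> alert
    ("SN002", "ONLINE"), ("SN002", "ONLINE"),    # no change  -> no alert
]

previous = {}  # last seen status per serial (the "previous_onlineStatus")
alerts = []
for serial, status in events:
    prev = previous.get(serial)
    if prev is not None and prev != status:      # trigger_condition
        direction = "went down" if status == "OFFLINE" else "came up"
        alerts.append((serial, f"Host's online status {direction}"))
    previous[serial] = status

print(alerts)
```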
@tscroggins @yuanliu Yes, it's really complex to understand the SPL below because of the nested commands. Could you please explain briefly how this code works?

| makeresults
| eval ip = split("119.0.6.159,62.0.3.75,63.0.3.84,75.0.3.80,92.0.4.159", ",")
| eval idx = mvrange(0,4)
| foreach ip mode=multivalue [eval sorted_ip = mvappend(sorted_ip, mvjoin(mvmap(idx, printf("%.3d", tonumber(mvindex(split(<<ITEM>>, "."), idx)))), "."))]
| eval sorted_ip=mvmap(mvsort(mvmap(sorted_ip, mvjoin(mvmap(split(sorted_ip, "."), substr("00".sorted_ip, -3)), "."))), mvjoin(mvmap(split(sorted_ip, "."), coalesce(nullif(ltrim(sorted_ip, "0"), ""), "0")), "."))
Adding a by-field of "serial_number" to your final stats will display a chart like this. Similarly, instead of the stats you could do

| chart count as count over serial_number by result

and this should give you very similar results. For an overall Pass/Fail visual across all serial numbers you can do a stats like this:

| stats count as count by result

and the resulting chart shows something like this.
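Outside SPL, the two aggregations are just counts over different keys. A quick Python illustration (invented sample rows):

```python
from collections import Counter

# Assumed sample results keyed by serial number.
rows = [
    ("SN001", "Pass"), ("SN001", "Fail"),
    ("SN002", "Pass"), ("SN002", "Pass"),
]

# Per-serial breakdown: like "chart count over serial_number by result".
per_serial = Counter(rows)

# Overall totals: like "stats count by result".
overall = Counter(result for _, result in rows)

print(per_serial)
print(overall)
```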
| eval test="Test" | table test passes failures
Great! Hopefully this is solved then.
Just as I posted this question, my Ubuntu forwarder appeared! Could anyone explain why the Linux forwarder seemed to take longer than the Windows one to appear?