All Posts

Apparently this setting was not enabled on our deployer, hence the ES upgrade still proceeded without it being enabled.
Thank you for your response. I have already tried this. In this search I am getting multiple srcip and multiple dstip values in one row. I need one row for each srcip-to-dstip pair, but the alert should be triggered separately for each title.
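Not a definitive answer, just a minimal sketch of the shape of search that yields one row per pair (the index, sourcetype, and the field names title, src_ip, and dst_ip are assumptions; substitute the real ones):

index=your_index sourcetype=your_sourcetype ``` hypothetical base search ```
| stats count by title src_ip dst_ip ``` one row per title/src/dst combination ```

If the alert's trigger option is set to "For each result" rather than "Once", the alert action fires separately for every row returned.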
@sumarri Kindly check the below documents for reference:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Search/SavingandsharingjobsinSplunkWeb
https://docs.splunk.com/Documentation/Splunk/latest/Security/Aboutusersandroles
Hi, I am trying to create a daily alert to email the contents of the Security Posture dashboard to a recipient. Can someone please share how I can turn the content of this dashboard from Splunk ES into a search within an alert, so it can be added to an email and be sent out daily? Thanks
I do not see why you needed to do that extra extraction, because Splunk should have given you a field named "request_path" already. (See emulation below.) All you need to do is to assign a new field based on match.

| eval route = if(match(request_path, "^/orders/\d+"), "/order/{orderID}", null())

The sample data should give you something like

level | request_elapsed | request_id | request_method | request_path | response_status | route
info | 100 | 2ca011b5-ad34-4f32-a95c-78e8b5b1a270 | GET | /orders/123456 | 500 | /order/{orderID}

Is this what you wanted? Here is a data emulation you can play with and compare with real data.

| makeresults
| eval _raw = "level=info request.elapsed=100 request.method=GET request.path=/orders/123456 request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270 response.status=500"
| extract
``` data emulation above ```

Of course, if for unknown reasons Splunk doesn't give you request_path, simply add an extract command and skip all the rex, which is expensive.
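To illustrate that last point, a minimal sketch of the explicit-extraction variant (index and sourcetype are placeholders; the field name follows the request_path naming used above):

index=your_index sourcetype=your_sourcetype ``` hypothetical base search ```
| extract ``` force key=value extraction if it did not happen automatically ```
| eval route = if(match(request_path, "^/orders/\d+"), "/order/{orderID}", null())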
I tried the same thing and found the same issue. I think the blacklist config is only compatible with the cloudtrail input, NOT the sqs_based_s3 input. Really unfortunate, as I wanted to switch to role-based CloudTrail logging rather than AWS account. Please put this on your bug list, Splunk.
Let me give this a semantic makeover using bit_shift_left (9.2 and above - thanks @jason_hotchkiss for noticing) because semantic code is easier to understand and maintain.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| foreach *_ip
    [eval <<FIELD>> = split(<<FIELD>>, "."),
     <<FIELD>>_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(<<FIELD>>, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(<<FIELD>>, 3))),
     <<FIELD>> = mvjoin(<<FIELD>>, ".") ``` this last part for display only ```]
| fields - offset segment_rev

The sample data gives

dst_ip | dst_ip_dec | src_ip | src_ip_dec
192.168.1.100 | 3232235876 | 192.168.1.1 | 3232235777

Here is an emulation you can play with and compare with real data

| makeresults format=csv data="src_ip, dst_ip
192.168.1.1, 192.168.1.100"
``` data emulation above ```

Note: If it helps readability, you can skip foreach and spell out the two operations separately.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| eval src_ip = split(src_ip, ".")
| eval dst_ip = split(dst_ip, ".")
| eval src_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(src_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(src_ip, 3)))
| eval dst_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(dst_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(dst_ip, 3)))
| eval src_ip = mvjoin(src_ip, "."), dst_ip = mvjoin(dst_ip, ".") ``` for display only ```
| fields - offset segment_rev
Hi Gustavo, Excellent question, and I can appreciate the interest in the discrepancies. With demo systems, we're generally not dealing with live data. The way it's generated can vary and, at times, can cause abnormalities within the context of the data. This is what is going on here. Beyond this, in production, it's advisable to ensure you're looking at the most specific time range possible to reduce the likelihood of data aggregation complexities; e.g. looking at 24-hour data is less effective than looking at 5-minute aggregation levels. The following two links may be beneficial too.
Server Visibility: https://docs.appdynamics.com/appd/24.x/24.4/en/infrastructure-visibility/server-visibility
Troubleshooting Applications: https://docs.appdynamics.com/appd/24.x/24.4/en/application-monitoring/troubleshooting-applications
Any follow-up on this? I am seeing the same issue.
Hi @PoojaChand02, It seems the screenshots were from different Splunk platforms. The first one is Splunk Enterprise but the second one is Splunk Cloud. Splunk Cloud does not have the "Data Summary" button. You can see a similar data summary for host data using the query below. (You can use other types like "hosts", "sources" or "sourcetypes"; please do not forget to adjust the rename command accordingly. You can also look at indexes other than main.)

| metadata index=main type=hosts
| eval lastSeen = strftime(lastTime, "%x %l:%M:%S %p")
| rename host AS Host, totalCount AS Count, lastSeen AS "Last Update"
| table Host, Count, "Last Update"
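For example, a sketch of the sourcetypes variant mentioned above (same idea; only the type and the rename change):

| metadata index=main type=sourcetypes
| eval lastSeen = strftime(lastTime, "%x %l:%M:%S %p")
| rename sourcetype AS Sourcetype, totalCount AS Count, lastSeen AS "Last Update"
| table Sourcetype, Count, "Last Update"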
Have you added the dropdown - what is the problem you are facing? Simply add the dropdown, set the 8 static options, and then in your search use

index=bla host=*$my_host_token$*

where my_host_token is the token for your dropdown. Assuming the table below is the finite list of hosts you will have, then this should work - there are of course other ways to do this, but this is the simplest.
Thanks @scelikok. No, I don't just want the orderID. I want to manually create the RESTful API routing pattern: for "path=/order/123456", "route=/order/{orderID}". Basically, I am trying to use regex to replace the value and create a new field in this way: if the value matches \/order\/\d{12}, then convert it to /order/{orderID}. I have other examples like:
path=/user/jason@sample.com/orders
route=/user/{userID}/orders
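Just a sketch of the kind of mapping being described here, assuming the extracted field is called path; replace() leaves the value unchanged when the pattern does not match, so the rules can be chained:

| eval route = replace(path, "^/orders?/\d+$", "/order/{orderID}")
| eval route = replace(route, "^/user/[^/]+/orders$", "/user/{userID}/orders")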
https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/Aboutthesearchapp
The data summary option does not exist in Splunk Cloud.
Thanks, I will check out your advice in a bit. Yes, I agree that the data structure is not ideal for parsing. Unfortunately, this is output from an OpenTelemetry collector following the OpenTelemetry standard (which Splunk also embraces, though we don't have native parsing for it yet in Splunk Enterprise), so if this takes off as the cross-vendor standard for pushing telemetry, then we are going to have to deal with ingesting this format more and more. Or maybe it is an opportunity to suggest formatting changes to the standard to CNCF.
It is probably because your field looks like it has come from JSON, and, based on the link provided by @isoutamo, that means the field extraction is happening at stage 4, whereas your REPORT extraction is happening at stage 3, so the field does not exist yet. You could try creating a calculated field using an eval replace expression to remove the non-domain part. You can try this in standard SPL by experimenting with your regex using

| eval domain=replace('event.url', "(?:https?:\/\/)?(?:www[0-9]*\.)?(?)([^\n:\/]+)", "\1")

That is NOT correct above, as I am not sure what the replacement token \1 should be with all the brackets and capturing/non-capturing groups, but you can experiment with regex101.com.
Let me clarify: when you say "color", you are talking about converting percentage text to a string representation of a color code, not about coloring the percentage text in the e-mail alert. Correct? In other words, you want something like

Name | color | percentage
A | red | 71
B | amber | 90
C | red | 44
D | amber | 88
E | red | 78

I ask because, while coloring is potentially doable (and would likely involve a custom command you develop externally), Splunk doesn't provide a function to color text used in e-mail alerts. If this is the correct requirement, look up the documentation for case. I further assume that your "percentage" field doesn't come with a percent sign (%); if you want that % in the e-mail, you can always add it back after the color mapping.

| eval color = case(percentage < 80, "red", percentage < 95, "amber", true(), "green")

Here is a data emulation you can play with and compare with real data

| makeresults format=csv data="Name, percentage
A, 71%
B, 90%
C, 44%
D, 88%
E, 78%"
| eval percentage = replace(percentage, "%", "")
``` data emulation above ```

Hope this helps.
Hi @codewarrior, If I got it correctly, you need to extract a new field named "route" that contains the value after "orders/". You can capture it in your rex command; please try the pattern below:

level=info request.elapsed=(?<duration>.*) request.method=(?<method>.*) request.path=(?<path>.+orders\/(?<route>.+)) request_id=(?<request_id>.*) response.status=(?<statusCode>.*)
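For reference, a sketch of that pattern wrapped in an actual rex command (base search omitted; the pattern itself is unchanged):

| rex field=_raw "level=info request.elapsed=(?<duration>.*) request.method=(?<method>.*) request.path=(?<path>.+orders\/(?<route>.+)) request_id=(?<request_id>.*) response.status=(?<statusCode>.*)"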
Before I rant, thank you for sharing valid mock data in text. This said, this is the second time in as many consecutive days I feel like screaming at lazy developers who make some terrible use of JSON arrays. (The developer might be you. But the rant stands.) Your data would have much cleaner, self-evident semantics had the developer simply used this:

[
  {
    "attributes": {"host.name":{"stringValue":"myname1"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
    "metrics": {"hw.host.energy":{"dataPoints":[{"timeUnixNano":"1712951030986039000","asDouble":359}]},"hw.host.power":{"dataPoints":[{"timeUnixNano":"1712951030986039000","asDouble":26}]}}
  },
  {
    "attributes": {"host.name":{"stringValue":"myname2"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
    "metrics": {"hw.host.energy":{"dataPoints":[{"timeUnixNano":"1712951030987780000","asDouble":211}]}}
  }
]

In other words, only two JSON arrays in the original data are used correctly. resourceMetrics.resource.attributes[] and resourceMetrics.scopeMetrics.metrics[] are a total abomination of the intent of JSON arrays. Speak to your developers to see if they could change the data structure, not just for Splunk, but for future maintainers of their own code and any other downstream team as well.

Now that this is off my chest, I understand that it will take more than one day for developers to change code even if you convince them on day one. Here is the SPL that I use to tabulate your data like the following:

host.name.stringValue | hw.host.energy{}.asDouble | hw.host.energy{}.timeUnixNano | hw.host.power{}.asDouble | hw.host.power{}.timeUnixNano | sdk.name.stringValue
myname1 | 359 | 1712951030986039000 | 26 | 1712951030986039000 | my_sdk
myname2 | 211 | 1712951030987780000 |  |  | my_sdk

In this form, I have assumed that dataPoints[] is the only node of interest under resourceMetrics[].scopeMetrics[].metrics.gauge.

| spath path=resourceMetrics{}
| fields - _* resourceMetrics{}.*
| mvexpand resourceMetrics{}
| spath input=resourceMetrics{} path=resource.attributes{}
| spath input=resourceMetrics{} path=scopeMetrics{}
| spath input=scopeMetrics{} path=metrics{}
| fields - resourceMetrics{} scopeMetrics{}
| foreach resource.attributes{} mode=multivalue
    [eval key = mvappend(key, json_extract(<<ITEM>>, "key"))]
| eval idx = mvrange(0, mvcount(key))
| eval attributes_good = json_object()
| foreach idx mode=multivalue
    [eval attribute = mvindex('resource.attributes{}', <<ITEM>>),
     attributes_good = json_set_exact(attributes_good, json_extract(attribute, "key"), json_extract(attribute, "value"))]
| fields - key attribute resource.attributes{}
| foreach metrics{} mode=multivalue
    [eval name = mvappend(name, json_extract(<<ITEM>>, "name"))]
| eval name = if(isnull(name), json_extract('metrics{}', "name"), name)
| eval idx = mvrange(0, mvcount(name))
| eval metrics_good = json_object()
| foreach idx mode=multivalue
    [eval metric = mvindex('metrics{}', <<ITEM>>),
     metrics_good = json_set_exact(metrics_good, json_extract(metric, "name"), json_extract(metric, "gauge.dataPoints"))]
``` the above assumes that gauge.dataPoints is the only subnode of interest ```
| fields - idx name metric metrics{}
``` the above transforms array-laden JSON into easily understandable JSON ```
| spath input=attributes_good
| spath input=metrics_good
| fields - *_good
``` the following is only needed if dataPoints[] actually contain multiple values.
    This is the only code requiring prior knowledge about data fields ```
| mvexpand hw.host.energy{}.timeUnixNano
| mvexpand hw.host.power{}.timeUnixNano

(The fields - xxx commands are not essential; they just declutter the view.) Hope this helps.

This is an emulation you can play with and compare with real data:

| makeresults
| eval _raw = "{ \"resourceMetrics\": [ { \"resource\": { \"attributes\": [ { \"key\": \"host.name\", \"value\": { \"stringValue\": \"myname1\" } }, { \"key\": \"telemetry.sdk.name\", \"value\": { \"stringValue\": \"my_sdk\" } } ] }, \"scopeMetrics\": [ { \"metrics\": [ { \"name\": \"hw.host.energy\", \"gauge\": { \"dataPoints\": [ { \"timeUnixNano\": \"1712951030986039000\", \"asDouble\": 359 } ] } }, { \"name\": \"hw.host.power\", \"gauge\": { \"dataPoints\": [ { \"timeUnixNano\": \"1712951030986039000\", \"asDouble\": 26 } ] } } ] } ] }, { \"resource\": { \"attributes\": [ { \"key\": \"host.name\", \"value\": { \"stringValue\": \"myname2\" } }, { \"key\": \"telemetry.sdk.name\", \"value\": { \"stringValue\": \"my_sdk\" } } ] }, \"scopeMetrics\": [ { \"metrics\": [ { \"name\": \"hw.host.energy\", \"gauge\": { \"dataPoints\": [ { \"timeUnixNano\": \"1712951030987780000\", \"asDouble\": 211 } ] } } ] } ] } ] }"
| spath
``` data emulation above ```

Final thoughts about data structure with self-evident semantics: if my speculation about dataPoints[] being the only node of interest under resourceMetrics[].scopeMetrics[].metrics.gauge stands, good data could be further simplified to

[
  {
    "attributes": {"host.name":{"stringValue":"myname1"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
    "metrics": {"hw.host.energy":[{"timeUnixNano":"1712951030986039000","asDouble":359}],"hw.host.power":[{"timeUnixNano":"1712951030986039000","asDouble":26}]}
  },
  {
    "attributes": {"host.name":{"stringValue":"myname2"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
    "metrics": {"hw.host.energy":[{"timeUnixNano":"1712951030987780000","asDouble":211}]}
  }
]

I do understand that listing hw.host.energy and hw.host.power as coexisting columns is different from your illustrated output and may not suit your needs. But presentation can easily be adapted. Bad data structure remains bad.
Hi @Gustavo.Marconi, I reached out to a few people and Anderson B. should be getting in touch with you about this.