All Posts


Hi @gcusello , sorry if I was unclear. I need to generate an alert after the third step. The difference between the 2nd and 3rd steps is that I need to search the details of the failed request id (which I get from the first step), extract data from that field, and send it via email. And yes, I am following that tutorial; my 2nd step did not work correctly.
Hi. Quite interesting behaviour. As an HF is basically an indexer without local indexing, I don't see any reason why it cannot be quarantined. The interesting part is who set it as quarantined, as this is usually done for a search peer. Quarantine actually means that the search peer shouldn't take part in searches, so it shouldn't affect any indexing/forwarding function. One normal way to use quarantine is simply to ensure that a peer can index/forward full queues without being disturbed by searches. You have probably already read this: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Quarantineasearchpeer Do you have a local MC or just the CMC in use? If the first, have you checked whether the MC has marked it as quarantined? r. Ismo
Hi Team, I'd like to know how to integrate Splunk with Jira, to send Splunk alerts or raise an incident/issue in Jira for each Splunk alert from Splunk Cloud/Splunk Enterprise. Is there any recommended app or way to do this integration? Best Regards
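One app-free route worth evaluating: Splunk's built-in webhook alert action can call a Jira Automation rule that uses the "Incoming webhook" trigger to create the issue. A hedged savedsearches.conf sketch — the stanza name and hook URL are placeholders, and this assumes such an Automation rule already exists on the Jira side:

```
# savedsearches.conf — sketch only; "<hook-id>" must come from your
# Jira Automation "Incoming webhook" trigger
[Example Jira Alert]
action.webhook = 1
action.webhook.param.url = https://automation.atlassian.com/pro/hooks/<hook-id>
alert.track = 1
```

The webhook action POSTs the alert details as JSON, and the Jira Automation rule maps that payload onto a new issue. Splunkbase also hosts Jira integration apps that are worth evaluating if you need richer field mapping or two-way sync.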
Hi @PickleRick , my first search answers your first requirement: you have to save it as an alert that sends results as an attachment. My second search answers your second requirement in the same way: save it as an alert that sends results as an attachment. The third requirement isn't clear: what's the difference from the second? Do you know SPL? Did you follow the Splunk Search Tutorial (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/WelcometotheSearchTutorial)? Ciao. Giuseppe
Hi @uagraw01 , I don't know this add-on and the source; check if it's possible to send the data again, otherwise they are lost. Ciao. Giuseppe
@gcusello We are receiving the data through ActiveMQ.
Thanks bot. I think AI-generated responses should be marked as such.
Thank you @datadevops , the problem is that the oneidentity change will block all other Splunk applications using the native dnslookup. Paolo
Hi @gcusello , this would not give me the entire details of what I require. I need to generate recurring reports based on the following steps:
1. get all failed request IDs
2. iterate over all the request IDs to get more details
3. extract the required fields from those details, show them in tabular form, and generate an email.
I hope my requirements are clear now.
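A rough SPL sketch of how steps 1–3 could be chained with a subsearch — this assumes the detail events live in the same sourcetype, and `detail_field1`/`detail_field2` are hypothetical stand-ins for the real fields to extract:

```
sourcetype="mykube.source"
    [ search sourcetype="mykube.source" "failed request"
      | rex "failed request:(?<request_id>[\w-]+)"
      | dedup request_id
      | return 100 $request_id ]
| rex "failed request:(?<request_id>[\w-]+)"
| table _time request_id detail_field1 detail_field2
```

`return 100 $request_id` turns up to 100 extracted ids into a raw-text OR filter, so the outer search finds every event mentioning any failed id. Replacing the final `rex`/`table` with extractions for the real detail fields, saving the search as a scheduled report, and enabling the "Send email" action with results attached would cover the email step.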
Hi @vihshah , in this case, you have to define what you need to see, e.g. the occurrences of the values of this field in a time period:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| stats count BY request_id

or a time distribution by request_id:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| timechart count BY request_id

If you want the list of all events with other information, you can use the table command:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| table _time request_id field1 field2 field3

You could also create a simple search to select the request_id in a panel and, with a drilldown, filter all the results with this field in a different panel. As I said, you should define your requirements before approaching a search. Ciao. Giuseppe
Hi @uagraw01 , it depends on how you are receiving those logs: if they are syslogs that you receive directly in Splunk (in other words, not using rsyslog or syslog-ng), you missed them; for this reason it is a best practice to use a syslog server instead of Splunk. If they come from files or WinEventLog, it depends on the retention of these data on the original systems. If you still have the files, you should try to read them again using the crcSalt = <SOURCE> option. Ciao. Giuseppe
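A minimal inputs.conf sketch of that option — the monitor path, index, and sourcetype here are placeholders, not taken from the thread:

```
# inputs.conf on the forwarder — path/index/sourcetype are placeholders
[monitor:///var/log/myapp/*.log]
index = main
sourcetype = myapp:log
# crcSalt = <SOURCE> adds the file path to the CRC, so a file whose first
# bytes match an already-indexed file is still treated as new.
# Use with care: re-reading files that were partly indexed causes duplicates.
crcSalt = <SOURCE>
```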
Hello Splunkers!! My Splunk Enterprise license expired on January 29th and, because of that, I have renewed the license. But I missed some events during the license expiration period. How can I get the missed events back so they show up in the graph below?
Hi @gcusello , request_id is extracted from my first search: | rex "failed request:(?<request_id>[\w-]+)" and I don't have any filter criteria on that request_id. I just want to trace the flow of that request id. Later, I want to extract a few details once I have the whole trace, but that is the third part. Right now I just want to search for that request id to trace the flow.
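If each id really needs its own secondary search, `map` can run one trace search per request_id. A hedged sketch, assuming the trace events are in the same sourcetype and mention the id in their raw text (note `map` defaults to maxsearches=10 and can be slow on many ids):

```
sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| dedup request_id
| map maxsearches=50 search="search sourcetype=\"mykube.source\" $request_id$ | eval request_id=\"$request_id$\""
| sort 0 request_id _time
```

The `eval` inside the mapped search tags every trace event with the id it belongs to, so the final sort groups each request's flow together in time order.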
Hi @vihshah , where does the request_id to use in the search come from? What are the conditions to use in the filter? Please describe them in words. Ciao. Giuseppe
Hi, I'm facing the same issue. My current Upgrade Readiness App is at 4.1.2, but on Splunkbase the latest version available is 4.1.1. I presume the current scans from the Upgrade Readiness App are wrong/misleading. Can anyone address this issue?
I'm not entirely sure I understand what you're asking for, but it sounds like this might be resolved by defining more fields, e.g.:

| eval status_{http_status}=http_status
| timechart count(status_*) as * by endpoint

Would that do the trick?
index="(index name)" sourcetype="(source type)" (host="host1" OR host="host2") action=success
| search NOT [ | inputlookup (lookup table name) | table username ]
| stats values(username) AS user
I need to be able to add a tip when hovering over a single value viz. It's just a basic SVV, with code like this:

{
  "type": "splunk.singlevalue",
  "dataSources": { "primary": "ds_some_number" },
  "title": "",
  "options": {
    "majorValue": "> sparklineValues | lastPoint()",
    "trendValue": "> sparklineValues | delta(-2)",
    "sparklineValues": "> primary | seriesByName('floor')",
    "unitPosition": "before",
    "unit": "$",
    "majorFontSize": 36,
    "backgroundColor": "transparent",
    "showSparklineTooltip": true
  },
  "description": "",
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

I need the tooltip to explain to the user how the number is calculated, so just text. It is to run in Splunk Cloud. Anybody got any insight, please? And yes, I need this to be done in Dashboard Studio, so no need to spend your effort advising me to use Simple XML dashboards!
Hi @gcusello , no, I am not able to do a secondary search; that is where I am stuck. @PickleRick , I don't have any other index and sourcetype right now; my task is to run a secondary search based on the request-id I retrieved.
Hi, I'm trying to create a dashboard that easily presents API endpoint performance metrics. I am generating a summary index using the following search:

index=my_index app_name="my_app" sourcetype="aws:ecs" "line.logger"=USAGE_LOG
| fields _time line.uri_path line.execution_time line.status line.clientId
``` use a regex to figure out the endpoint from the uri path ```
| lookup endpoint_regex_lookup matchstring as line.uri_path OUTPUT app endpoint match
| rename line.status as http_status, line.clientId as client_id
| fillnull value="" http_status client_id
| bin _time span=1m
| sistats count as volume p50(line.execution_time) as P50 p90(line.execution_time) as P90 p95(line.execution_time) as P95 p99(line.execution_time) as P99 by _time app endpoint http_status client_id

and I can use searches like this:

index=summary source=summary-my_app
| timechart $t_span$ p50(line.execution_time) as P50 p90(line.execution_time) as P90 p95(line.execution_time) as P95 p99(line.execution_time) as P99 by endpoint
| sort endpoint

index=summary source=summary-my_app
| timechart span=1m count by endpoint

so I can generate a dashboard using a trellis layout that maps the performance of our endpoints without having to hard-code a bunch of panels. I'm trying to add a chart that displays the http_status counts over time for each endpoint (similar to the latency chart). I've tried a number of different things, but can't get it to work. I know I can't use the following:

index=summary source=summary-my_app
| timechart count by endpoint http_status

so I thought the following might work:

index=summary source=summary-my_app
| stats count by endpoint http_status _time

but this shows me the http_status counts on a single line rather than as separate series. Does anyone know how I could get this to work?
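One common workaround for the "timechart can only split by one field" limitation — a sketch against this summary index, not tested here — is to merge the two split fields into a single series name before the timechart:

```
index=summary source=summary-my_app
| eval series=endpoint.":".http_status
| timechart span=1m count by series
```

Each endpoint:status combination then becomes its own series (e.g. `/orders:500`), rather than all statuses collapsing onto one line as with `stats ... by endpoint http_status _time`.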