All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I am seeing a difference in count between stats and timechart for the same search and the same filters.

Stats command (last 24 hours):
search | bin span=1d _time | stats count by Status | eventstats sum(*) as sum_* | foreach * [eval "Comp %"=round((count/sum_count)*100,2)] | rename count as Count | fields - sum_count
Result: Comp 7126, Error 37, Noncomp 146, NonRep 54, Total 7363

Timechart (last 30 days):
search | bin span=1d _time | timechart count by Status | addtotals | eval "Comp %"=round((Comp/Total)*100,2) | eval "Error %"=round((Error/Total)*100,2) | eval "Noncomp %"=round((Noncomp/Total)*100,2) | eval "NonRep %"=round((NonRep/Total)*100,2) | fields _time,*%
Result: Comp 7126, Error 36, Noncomp 146, NonRep 53, Total 7361

There is a difference of 2 in the counts between these two commands. I am using a macro before the timechart or stats. Please help me with the cause of this issue or a solution.
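One way to narrow this down is to pin both commands to the same snapped time range so the two searches cover exactly the same events; this is a sketch only, with your_search standing in for the macro and base search from the post:

your_search earliest=-1d@d latest=@d
| bin span=1d _time
| stats count by Status
| addcoltotals labelfield=Status label=Total

Running the timechart version with the same explicit earliest/latest and comparing its addtotals row against this output should show whether the two missing events simply fall outside one of the windows (the post compares a 24-hour stats run with a 30-day timechart run, which on its own can explain a small difference).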
I have a scenario where I want to expand a field and show its values as individual events. Below is my query, which works fine for smaller time intervals, but for larger intervals it is not efficient.

index=app_pcf AND cf_app_name="myApp" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| spath "msg.logMessage.matched_locations{}.locationId"
| search "msg.logMessage.numReturnedMatches">0
| mvexpand "msg.logMessage.matched_locations{}.locationId"
| fields "msg.logMessage.matched_locations{}.locationId"
| rename "msg.logMessage.matched_locations{}.locationId" to LocationId
| table LocationId

I have a JSON array called matched_locations which has a field locationId. There can be at most 10 locationIds in a matched_locations array, and I have thousands of events in the time range that contain this array. Below is an example of one such event with a set of matched_locations:

cf_app_name: myApp
cf_org_name: myOrg
cf_space_name: mySpace
job: diego_cell
message_type: OUT
msg: {
  application: myApp
  correlationid: 0.af277368.1669261134.5eb2322
  httpmethod: GET
  level: INFO
  logMessage: {
    apiName: Matches
    apiStatus: Success
    clientId: oh_HSuoA6jKe0b75gjOIL32gtt1NsygFiutBdALv5b45fe4b
    error: NA
    matched_locations: [
      {
        city: PHOENIX
        countryCode: USA
        locationId: bef26c03-dc5d-4f16-a3ff-957beea80482
        matchRank: 1
        merchantName: BIG D FLOORCOVERING SUPPLIES
        postalCode: 85009-1716
        state: AZ
        streetAddress: 2802 W VIRGINIA AVE
      }
      {
        city: PHOENIX
        countryCode: USA
        locationId: ec9b385d-6283-46f4-8c9e-dbbe41e48fcc
        matchRank: 2
        merchantName: BIG D FLOOR COVERING 4
        postalCode: 85009
        state: AZ
        streetAddress: 4110 W WASHINGTON ST STE 100
      }
      { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] }
    ]
    numReturnedMatches: 10
  }
  logger: c.m.c.d.MatchesApiDelegateImpl
}
origin: rep
source_instance: 1
source_type: APP/PROC/WEB
timestamp: 1669261139716063000

Can anyone help me with how I can expand this field efficiently? Thank you.
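A sketch of one way to reduce the cost, keeping the field names from the post: extract only the locationId path, then drop every other field before mvexpand so each expanded copy is as small as possible.

index=app_pcf AND cf_app_name="myApp" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| spath "msg.logMessage.numReturnedMatches"
| search "msg.logMessage.numReturnedMatches">0
| spath output=LocationId path="msg.logMessage.matched_locations{}.locationId"
| fields LocationId
| mvexpand LocationId
| table LocationId

If per-occurrence rows are not strictly required, replacing the last three lines with | stats count by LocationId avoids mvexpand's memory limits altogether.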
Hi All, I have a hostname field containing \\sent134. I need to remove the leading \\ using regex so the value becomes sent134.

Actual: \\sent134
Expected: sent134

Please provide a regex to remove the \\ from the hostname field. Thanks
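A hedged sketch, assuming the field is named hostname: ltrim strips the leading backslashes without needing a regex at all.

your_search
| eval hostname=ltrim(hostname, "\\")

Equivalently, | eval hostname=replace(hostname, "^\\\\+", "") removes only backslashes at the start of the value (the pattern is double-escaped because it passes through both the string parser and the regex engine).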
We are using the Event Hubs modular input from the Splunk TA for Microsoft Cloud Services. In our system, we have configured many Event Hubs inputs. However, one particular input is doing something very strange. Most of the events received from this input are processed correctly, but some of the events arrive in "batches", inside a "records" array. These batches can contain up to 300 child objects, i.e. 300 separate events. In inputs.conf we have one input configured for this Event Hub. The interval is set to 300 seconds; max_wait_time and max_batch_time are left at their defaults. Has anyone else seen this before? @jconger
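While waiting for an answer on the add-on behaviour, a rough way to measure how often the batched form arrives; this is only a sketch, and the index name, the sourcetype and the records{} path are assumptions rather than values confirmed from the TA's configuration:

index=your_eventhub_index sourcetype=mscs:azure:eventhub
| spath output=records path="records{}"
| eval batch_size=if(isnotnull(records), mvcount(records), 1)
| timechart span=1h max(batch_size) as largest_batch count as events

A largest_batch value well above 1 for only some hours would at least show whether the batching correlates with load or with particular time windows.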
Hi, my datasets are much larger, but these represent the crux of my hurdle:

sourcetype=transaction, fields: transaction_id, user
sourcetype=connection, fields: x_transaction_id, user, action

I need to build an SPL search which detects huge amounts of data sent to external domains in a single event. I have all the required details in the transaction sourcetype itself, but the allowed or blocked action is not there; that is specified in the connection sourcetype. I just need to merge the action details into the transaction sourcetype. I tried with join, but the results are inappropriate. Can this be done more efficiently with stats?
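A sketch of the stats-based merge, assuming transaction_id and x_transaction_id carry the same value and that action only exists in the connection sourcetype:

(sourcetype=transaction) OR (sourcetype=connection)
| eval join_id=coalesce(transaction_id, x_transaction_id)
| stats values(user) as user values(action) as action values(sourcetype) as sourcetypes by join_id
| search sourcetypes=transaction

stats avoids the subsearch row limits that make join unreliable on large datasets; the final search keeps only IDs that actually appeared in the transaction sourcetype.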
Hi All, We have configured a Safe Links policy in Microsoft 365. However, we only get logs for blocked URLs, not allowed URLs, through the add-on. Is there something that can be done to pull all URLs scanned by Safe Links? Thanks, Prabs
Hi All, I have encrypted the user field with sha256:

index=abc sourcetype=xyz | eval domain = sha256(User) | table domain

I am able to see the encrypted values under the domain field. Is there a Splunk command to decrypt it?
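sha256 is a one-way hash rather than encryption, so there is no command that reverses it; the usual workaround is to hash the values you already know and match on the digest. A sketch, where users.csv and user_hashes.csv are hypothetical lookup names:

| inputlookup users.csv
| eval domain=sha256(User)
| outputlookup user_hashes.csv

index=abc sourcetype=xyz
| eval domain=sha256(User)
| lookup user_hashes.csv domain OUTPUT User as original_user

The first search builds a digest-to-user mapping from a known user list; the second joins it back, which only works for values present in that list.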
We have alerts routed from Splunk to PagerDuty and are debugging whether the alerts actually got routed to PagerDuty. Which index should we query for the PagerDuty call response code?
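The alert action's own logging usually ends up in Splunk's _internal index; a sketch, assuming the PagerDuty integration runs as a modular alert action named pagerduty (the exact action name may differ in your environment):

index=_internal sourcetype=splunkd component=sendmodalert action=pagerduty

The raw sendmodalert messages include the action's exit code and any output it printed, which is where an HTTP response code would appear if the action logs one.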
Below is the current output (raw) of a specific field:

node0:
--------------------------------------------------------------------------
/var/: No such file or directory
/var/tmp/: No such file or directory
/var/: blablablaba.txt
node1:
--------------------------------------------------------------------------
/var/: No such file or directory
/var/tmp/: No such file or directory

What I need help with is to group node0 and node1 as their own groups, and only show a /var row if it contains anything BUT "No such file or directory". So the output would end up being:

NODE0:
/var/: blablablaba.txt
NODE1:

Thanks for the help in advance.
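A hedged sketch, assuming both node sections arrive inside a single event's _raw as shown: split the event into per-node chunks, then keep only the /var lines that are not "No such file or directory".

your_search
| rex max_match=0 field=_raw "(?<node_block>node\d+:[\s\S]+?)(?=node\d+:|$)"
| mvexpand node_block
| rex field=node_block "^(?<node>node\d+):"
| rex max_match=0 field=node_block "(?<var_line>/var\S*: (?!No such file or directory).+)"
| table node var_line

node1 still appears as a row with an empty var_line, matching the desired output; adding | where isnotnull(var_line) would drop the empty nodes instead.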
Hi Friends, my current situation is: I'm monitoring files from this path:

source="/opt/redprairie/prod/prodwms/les/log/SplunkMonitoring/*"

In this path we receive 2 different files:
1. support-prodwms--<date & time>.zip
2. commandUsage_<date & time>.csv

I want to monitor the first file (support-prodwms--<date & time>.zip). Inside the zip file we have 15 different files:
1. probes.csv
2. tasks.csv
3. jobs.csv
4. log-files.csv
and so on...

I want to monitor only the tasks.csv and jobs.csv files from the zip; the remaining files should not be monitored. Currently I'm using this in inputs.conf:

[monitor:///opt/redprairie/*/*/les/log/SplunkMonitoring/support-prodwms--*]
index = pg_idx_whse_prod_events
sourcetype= SPLUNKMONITORINGNEW
whitelist = /tasks\.csv$
crcSalt = <string>
recursive = true
disabled = false
_meta = entity_type::NIX service_name::WHSE environment::PROD

Kindly help me, friends. I've been struggling with this for the last 2 days. Thanks in advance. @gcusello @richgalloway @splunk
Hello, I am looking for the equivalent of performing SQL like this: SELECT transaction_id, vendor FROM orders WHERE transaction_id NOT IN (SELECT transaction_id FROM events). As of right now I can construct a list of transaction_ids for orders in one search and a list of transaction_ids for events in another search, but my ultimate goal is to return order logs that do not share transaction_ids with the events logs. Any help is greatly appreciated, thanks!
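Two common SPL equivalents, sketched with the orders and events sourcetype names taken from the post. The first uses a subsearch (subject to subsearch result limits), the second a single pass with stats:

sourcetype=orders NOT [ search sourcetype=events | fields transaction_id ]
| table transaction_id vendor

(sourcetype=orders) OR (sourcetype=events)
| stats values(vendor) as vendor sum(eval(if(sourcetype=="events",1,0))) as events_hits by transaction_id
| where events_hits=0
| table transaction_id vendor

The stats form scales better when the events sourcetype returns more rows than a subsearch is allowed to pass back.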
This seems to be a bit strange: we are running Enterprise version 8.1.5 in a search head cluster. A custom app was created for our security team to manage their dashboards, etc. The strange thing is, some of the dashboards cannot be deleted -- there is just no delete (or move) option. I've checked on all the individual nodes in the cluster; the dashboards are all in the local/ folder of the app. This is on-prem Splunk Enterprise, so I can manually delete them from all the nodes as an admin, but I would like to understand what I am missing here. I did search for answers here and found one post, but that was for Splunk Cloud, so I am not sure whether my issue is a bug like the one in that posting or not. Thanks for any thoughts / discussion. Happy holidays!
I have the following data (nginx access logs):

{
  "remote_addr": "1.2.3.4",
  "remote_user": "-",
  "time_local": "24/Nov/2022:09:55:46 +0000",
  "request": "POST /myService.svc HTTP/1.1",
  "status": "200",
  "request_length": "4581",
  "body_bytes_sent": "4891",
  "http_referer": "-",
  "http_user_agent": "-",
  "http_x_forward_for": "-",
  "request_time": "0.576"
}

I have a situation where certain requests are failing and then retrying every hour or so, and I want to identify these as best I can. So:

- Return results where status!=200.
- Group where remote_addr, request_length, status and body_bytes_sent all match (I'm presuming these would be identical retried requests with the same values for these fields).
- Create a table of these results showing the time_local of each occurrence.
- Order time_local within each row (from earliest to latest).
- Rows where the above matches aren't made should just be listed individually.

This is beyond my capabilities and I only got this (not very) far:

index=index source="/var/log/nginx/access.log"
| where status!=200
| stats list(time_local) by request_length
| sort - list(time_local)

This is sort of what I want but doesn't do any matching. It does group time_local against request_length, which is how I'd like the output (but including the other fields for visibility). Also, the sort doesn't work: it seems to sort by the first record in each row, and I want it to sort WITHIN the row itself. This is the output:

request_length | list(time_local)
26562 | 24/Nov/2022:16:19:20 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:15:18:02 +0000
41977 | 24/Nov/2022:16:19:20 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:15:18:02 +0000, 24/Nov/2022:13:15:06 +0000

But I want it to look more like this:

request_length | status | body_bytes_sent | remote_addr | time_local
26562 | 500 | 4899 | 1.2.3.4 | 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:15:18:02 +0000, 24/Nov/2022:16:19:20 +0000
41977 | 500 | 5061 | 6.7.8.9 | 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:13:15:06 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:15:18:02 +0000, 24/Nov/2022:16:19:20 +0000
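A sketch of the grouping, assuming the JSON fields are extracted with the names shown in the sample event; sorting the events by _time before the stats makes list() return the timestamps in ascending order inside each row.

index=index source="/var/log/nginx/access.log" status!=200
| sort 0 _time
| stats list(time_local) as time_local by remote_addr request_length status body_bytes_sent

Groups with a single non-200 event naturally come out as single rows, which covers the "no match" case as well.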
Make sure the two scenarios below are handled correctly in your file: if you are using fonts locally, make sure the font is uploaded and the path is correctly linked to it. If you are calling the font from a web URL, make sure the path is correct and that the site opens the font in a browser tab. Fonts Bee
Hi folks, I have an issue with a HF: I'm getting spikes reaching 100% when sending data to Splunk Cloud. This happens roughly every 30 seconds. I think this is because of the amount of data we are sending, and it is also causing all data to arrive in Splunk Cloud with a delay; that is, _time and index time differ for all data because of this. So I have some questions:

1. How can I check whether I'm sending a large amount of data at similar times during the day? Do you have a query or a dashboard I can use?
2. What are your recommendations to distribute the large data volume so it is sent at different times?

I really appreciate your help on this. Thanks in advance!
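For question 1, the forwarder's own metrics.log (indexed into _internal) is a common place to see the output pattern over the day; a sketch, with the host value as a placeholder:

index=_internal host=<your_hf> source=*metrics.log* group=per_index_thruput
| timechart span=30s sum(kb) as kb_sent by series

A 30-second span should make the periodic spikes visible per target index. For question 2, smoothing is usually a matter of limiting forwarder output (for example the [thruput] maxKBps setting in limits.conf) or staggering the inputs that produce the bursts, but treat that as a direction to investigate rather than a confirmed fix.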
Hi, I want a dashboard to monitor my complete Splunk environment. I want to check the _internal index every 5 minutes: if a host is not sending _internal data, it should go red; otherwise it should show as running and green. Can we achieve this?
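A sketch of one way to drive such a panel, using tstats over _internal and a simple threshold; the 5-minute cut-off and the colour field are illustrative only:

| tstats max(_time) as last_seen where index=_internal earliest=-24h by host
| eval minutes_since=round((now()-last_seen)/60,1)
| eval status=if(minutes_since>5,"red","green")
| table host last_seen minutes_since status

In a dashboard, the status field can then feed colour formatting (or a rangemap) per host, and the search can be scheduled or set to refresh every 5 minutes.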
Hi, I have a row on a dashboard with a number of panels containing metrics. However, the panels appear offset and the metrics are not centered. Currently, the row is defined as follows:

<row>
  <panel id="tn">
    <title>Total</title>
    <html>
      <style>
        single{
          width: auto;
          font-size=20%;
        }
      </style>
    </html>

How can I fix this? Thanks,
We have API requests that I want to create statistics on by request, but to do this I need to remove the variable identifiers and any parameters. For example, with the following request patterns:

POST /api-work-order/v1/work-orders/10611946/labours-reporting/2004131 HTTP/1.1
GET /api-work-order/v1/work-orders/10611946/labours-reporting HTTP/1.1
PUT /api-work-order/v1/work-orders/10611946 HTTP/1.1
GET /api-work-order/v1/work-orders HTTP/1.1

I need to replace the identifiers to extract:

POST /api-work-order/v1/work-orders/{id}/labours-reporting/{id}
GET /api-work-order/v1/work-orders/{id}/labours-reporting
PUT /api-work-order/v1/work-orders/{id}
GET /api-work-order/v1/work-orders
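A sketch using sed-mode rex, assuming the request line is available in a field named request and that the variable path segments are purely numeric:

your_search
| rex field=request mode=sed "s/\/\d+/\/{id}/g"
| rex field=request mode=sed "s/ HTTP\/1\.1$//"
| stats count by request

The first expression rewrites every /<digits> segment to /{id}; the second drops the trailing protocol token so the grouping matches the desired patterns. Query-string parameters, if present, could be removed with a similar substitution on everything after a ?.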
Hi, let me try to explain my problem. I have a main search with a time range (typically "last 4 hours") selected with the time picker. In addition, I join a subsearch where I want to calculate the average of some values over a bigger time range (typically "last 7 days"); to do that I use earliest and latest in the subsearch. Is it somehow possible to get or access the values of info_min_time and info_max_time (which the addinfo command produces) from the main search inside the subsearch?
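A subsearch cannot read the outer search's info_min_time directly, so a common workaround is to drop the join and do everything in one search over the wider window, then filter back down; a sketch with placeholder index and field names, and the 4-hour window written explicitly instead of coming from the time picker:

index=your_index earliest=-7d@h latest=now
| eventstats avg(your_value) as avg_7d
| where _time>=relative_time(now(),"-4h@h")
| stats avg(your_value) as avg_4h latest(avg_7d) as avg_7d

In a dashboard, the hard-coded -4h@h can be replaced with the time picker's tokens (for example $timepicker.earliest$), which is effectively how the picker values get "into" the calculation.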
I would like to use a graph (for example, a Sankey diagram) to visualize user navigation from page to page in an application. To elaborate on the requirement, I need a particular user's navigation through the app in the same order that they navigated it. Which kind of graph can I use, and what would be the appropriate query? To give you an idea of what I have tried so far, I've attached an image.
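A sketch of the usual shaping for a Sankey-style view, assuming each event carries a user field and a page field and that a transition is simply two consecutive page views by the same user (all names are placeholders):

index=your_app_index user="some.user"
| sort 0 _time
| streamstats current=f window=1 last(page) as previous_page by user
| where isnotnull(previous_page)
| stats count by previous_page page
| rename previous_page as source, page as destination

A Sankey visualization typically takes the first two columns as source and destination and the count as link weight. Note that a Sankey aggregates transitions, so it shows which page led to which rather than one user's exact click-by-click sequence; for a strictly ordered path, a plain table of _time and page sorted by _time may answer the question better.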