All Topics


I have a lookup table with allowed CIDR ranges:

allowed_cidr_range    applications
Xyx                   abc

I need to build an alert for whenever a source IP does not belong to an allowed CIDR range. My query so far:

NOT [| lookup cidr_vpc.csv allowed_cidr_range as src_ip output allowed_cidr_range]
| table _time, host, sourcetype, src_ip, dst_ip
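A possible approach, sketched below under an assumption about the setup: if the lookup is configured as a lookup definition (say cidr_vpc, a hypothetical name) with match_type = CIDR(allowed_cidr_range) in transforms.conf, then events whose src_ip falls outside every allowed range come back with a null lookup output, which can drive the alert. The index name is a placeholder.

index=your_index
| lookup cidr_vpc allowed_cidr_range AS src_ip OUTPUT allowed_cidr_range AS matched_range
| where isnull(matched_range)
| table _time, host, sourcetype, src_ip, dst_ip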
Hi, please check the screenshot below. The indexed time and the event log time are different. Kindly let me know how to fix this.
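Not from the original post, but mismatches like this are often addressed with explicit timestamp settings in props.conf on the parsing tier; a minimal sketch, with a placeholder sourcetype and a timestamp format that would need to match the actual events:

[your:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = UTC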
How do I make a particular row bold in a Splunk table (with CSS, without JS)? For example, only the 4th row in a set of 10 rows.
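A minimal sketch, assuming a Simple XML dashboard where the table panel has id="myTable" (a hypothetical id) and the target row position is fixed; the exact selector depends on the markup the dashboard generates, so this may need adjusting:

#myTable table tbody tr:nth-child(4) td {
    font-weight: bold;
}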
Greetings, I have a query I'm working on using tstats and lookup. My lookup is named hosts_sites and has two columns, hosts and site. My sample query is below:

| tstats latest(_time) as latest where index=main by host
| lookup hosts_sites hosts as host OUTPUT site
| table host, site, latest

How can I make sure that my table includes non-matches? I want hosts in the lookup that were not matched to be included in the table so they can be addressed/remediated.
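One possible sketch: append the full lookup so unmatched hosts appear with an empty latest time, then re-aggregate by host (this assumes host names in the lookup and in the index use the same case and format):

| tstats latest(_time) as latest where index=main by host
| lookup hosts_sites hosts as host OUTPUT site
| append [| inputlookup hosts_sites | rename hosts as host | table host, site ]
| stats values(site) as site, max(latest) as latest by host
| table host, site, latest

Hosts present only in the lookup come through the append with no latest value, so they stand out for remediation.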
Hello, I am fairly new to using Splunk and am having some trouble understanding how to extract fields. My sample data looks somewhat like this:

...Event={request=Request{data={firstName=jane, lastName=doe, yearOfBirth=1996}}}...

I want to get a count based on yearOfBirth. How should I do it? I tried stats count by request.data.yearOfBirth, and also simply stats count by yearOfBirth, but neither returned any results. Am I accessing the yearOfBirth field incorrectly?
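Since this data is not valid JSON, those fields are likely not auto-extracted. A minimal sketch that pulls the value out at search time with rex (index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "yearOfBirth=(?<yearOfBirth>\d{4})"
| stats count by yearOfBirth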
I have the following events that arrive every five minutes from a pool of servers (two servers' events shown):

Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache LRU expired : 0
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache lifetime : 0
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache inactive : 21157
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache del : 297
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache add : 21967
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache miss : 8801
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache hit : 79198
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache LRU expired : 0
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache lifetime : 1
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache inactive : 21085
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache del : 230
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache add : 21861
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache miss : 8880
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache hit : 74540
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache LRU expired : 6100
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache lifetime : 0
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache inactive : 71624
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache del : 6122
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache add : 80511
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache miss : 190
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache hit : 6239

The server names (in this case, "ServerX" and "ServerY") are extracted at index time as a field called "server_name". In addition, two other field extractions are performed at index time:

- "metric_type": in this example, the values are "LRU expired", "lifetime", "inactive", "del", "add", "miss" and "hit".
- "metric_value": the numeric value at the end of each event.

I'm attempting to do the following:

1. Collect the "metric_value" values aligned with the seven metric types for each server in five-minute increments and display all values in a table (each row reflecting the unique time, server name, and values for each metric type).
2. Perform arithmetic operations against four of the metric types (add - (del + inactive + lifetime)) to create a new value, "current_sessions".

I envision the output looking like this:

_time     server_name  LRU expired  lifetime  inactive  del   add    miss  hit    current_sessions
18:00:00  ServerX      0            0         21157     297   21967  8801  79198  513
18:00:00  ServerY      0            1         21085     230   21861  8880  74540  545
18:05:00  ServerX      6100         0         71624     6122  80511  190   6239   2765

...and so on...
Here's what I've put together so far:

index=foo sourcetype=bar stats_category="pdweb.sescache"
| bin span=5m _time
| stats values(*) AS * by server_name, metric_type, _time
| table _time, server_name, metric_type, metric_value

The resulting table shows me the following:

_time                server_name  metric_type  metric_value
2022-08-02 18:00:00  ServerX      LRU expired  0
2022-08-02 18:00:00  ServerX      lifetime     0
2022-08-02 18:00:00  ServerX      inactive     21157
2022-08-02 18:00:00  ServerX      del          297
2022-08-02 18:00:00  ServerX      add          21967
2022-08-02 18:00:00  ServerX      miss         8801
2022-08-02 18:00:00  ServerX      hit          79198
2022-08-02 18:05:00  ServerX      LRU expired  0
2022-08-02 18:05:00  ServerX      lifetime     1
2022-08-02 18:05:00  ServerX      inactive     21085
2022-08-02 18:05:00  ServerX      del          230
2022-08-02 18:05:00  ServerX      add          21861
2022-08-02 18:05:00  ServerX      miss         8880
2022-08-02 18:05:00  ServerX      hit          74540
2022-08-02 18:00:00  ServerY      LRU expired  6100
2022-08-02 18:00:00  ServerY      lifetime     0
2022-08-02 18:00:00  ServerY      inactive     71624
2022-08-02 18:00:00  ServerY      del          6122
2022-08-02 18:00:00  ServerY      add          80511
2022-08-02 18:00:00  ServerY      miss         190
2022-08-02 18:00:00  ServerY      hit          6239

How should I adjust my query to accommodate my requirements?
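A possible sketch of the pivot, assuming one event per server/metric/time window: combine the time bucket and server into a single row key, chart the metric types into columns, then split the key back out and compute current_sessions.

index=foo sourcetype=bar stats_category="pdweb.sescache"
| bin span=5m _time
| eval row=_time . "#" . server_name
| chart latest(metric_value) over row by metric_type
| eval _time=tonumber(mvindex(split(row, "#"), 0)), server_name=mvindex(split(row, "#"), 1)
| eval current_sessions=add - (del + inactive + lifetime)
| table _time, server_name, "LRU expired", lifetime, inactive, del, add, miss, hit, current_sessions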
I would like to automate a check of Splunk logs to make sure the user detail is marked. Note: we are capturing and displaying the user detail in the JSON response body.
Query 1:

| mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name, env, status
| where count1>0

Query 2:

| mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name, env, status
| where count2=0

These queries work fine individually. I need to combine them and show results only if count1>0 and count2=0.
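One possible way to combine them, sketched below: append the second search and re-aggregate on the shared keys (this assumes env and status are the fields the two metrics have in common; fillnull covers groups where metric2 returns no rows at all):

| mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name, env, status
| append
    [| mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name, env, status ]
| stats sum(count1) as count1, sum(count2) as count2 by env, status
| fillnull value=0 count1 count2
| where count1 > 0 AND count2 = 0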
I have a sample log that contains a count, and in the same row the message contains that many fixed-length entries; the count is also dynamic. For example:

Date time server count server name

How can we get all the server count data?
Hi, I am trying to index the log file data.log. The log file is two days old, and Splunk is indexing only the latest events. Is there a way I can index the older events in data.log?
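Not from the original post, but one common way to force a full read of a single file is a one-shot upload via the CLI; a sketch with placeholder path, index, and sourcetype (note this can duplicate any events that were already indexed):

$SPLUNK_HOME/bin/splunk add oneshot /path/to/data.log -index your_index -sourcetype your_sourcetype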
Is there a way to populate the items in an "IN" statement with the results of a subquery? I've tried several variations, such as:

index=x accountid IN ( [ search index=special_accounts | rename accountid as query ] )
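A sketch of an equivalent approach, assuming accountid values match exactly between the two indexes: instead of IN(), let the subsearch return the accountid field itself, and Splunk expands the results into an OR list:

index=x [ search index=special_accounts | dedup accountid | fields accountid ]

This expands to (accountid="a" OR accountid="b" OR ...), which behaves like the intended IN list.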
Hey Gurus, I have a conundrum here regarding a Dashboard Studio board I'm working on to show Infoblox zone transaction details. I'm trying to write queries that allow for either passing a grid site name or leaving it blank to show global stats. Normally, the default value for a token is "*", and that works perfectly with Splunk's host wildcard. However, for some reason, the "where like" function uses a different wildcard, "%". This messes up a query I have when not passing a value for site. For example, the following query works out as desired when I pass the token "sf01-ibsn-c01n" for macro_site:

where new_serial="$macro_serial$" AND like(client_resolved, "$macro_site$%")

It interpolates it as:

where new_serial="2654170934" AND like(client_resolved, "sf01-ibsn-c01n%")

Of course, when I don't pass a site, the query turns into garbage:

where new_serial="2654170934" AND like(client_resolved, "*%")

I cannot change the default value to "%", since then the host wildcard is messed up. I basically need either two conditional defaults or, perhaps, some dashboard/XML logic to deal with this? Any help would be appreciated. Thank you!
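One possible sketch that sidesteps the % wildcard entirely: move the site filter into a search command, which honors the same * wildcard as the host field (field and token names as in the post):

| where new_serial="$macro_serial$"
| search client_resolved="$macro_site$*"

With the default token "*", the search clause becomes client_resolved="**", which still matches everything.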
I run a stats command every hour to show a list of firewall rules that are getting hit in a particular way. My command works for the hourly run, but I can't get a report to keep a running total of my firewall rule hit count. I've tried the following, but it's not working. Can anyone help here?

index=rsyslog firewall-ABC
    [ search index=rsyslog (IONET_allow_BLAH_in OR IONet_allow_BLAH_outbound) host=firewall_XYZ.nascom.nasa.gov
      | table source_address, destination_address, destination_port ]
    NOT (policy_id=1 OR policy_id=2)
| sistats count by policy_id, source_address, destination_address
| summaryindex spool=t uselb=t addtime=t index="summary" file="RMD5eef7b35350423340_1029407874.stash_new" name="Delegation_Fails" marker=""

Thanks,

Paul
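Not from the post, but a sketch of how the running total might then be reported back out of the summary index, assuming the summary events carry the scheduled search's name as their source; stats over the si-summarized data re-aggregates the hourly counts into a cumulative total:

index=summary source="Delegation_Fails"
| stats count by policy_id, source_address, destination_address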
On the HF we have routing rules in transforms.conf which are taking more time and creating a bottleneck for us. We have the following numbers of routing rules:

~2000 entries for index routing
~200 entries for sourcetype routing

Can you please provide suggestions to route the events faster and more efficiently? Sample from transforms.conf:

[route_sentinel_to_index]
INGEST_EVAL = index:=case(\
  match(_raw, "\"TENANT\":\"xxxxxx-b589-c11a968d4876\""), "nacoak_mil", \
  . . . <1997 entries> . . .
  match(_raw, "\"EVENT_TIME\":\"\d{13}\""), "unknown_events", \
  true(), "unknownsentinel")

[apply_sourcetype_to_sentinel]
INGEST_EVAL = sourcetype:=case(\
  match(_raw, "\"SYSTEM\":\"xxxx-b3a7-xxxxxx\""), "cs:fhir:prod:audit:json", \
  match(_raw, "\"SYSTEM\":\"xxxxxxx-d424c20xxxx\""), "cs:railsapp_server:ambulatory:audit:json", \
  . . . <198 entries> . . .
  true(), "cs:sentinel:unknown:audit:json")
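Not from the post, but a sketch of one possible optimization, assuming the events are valid JSON: extract the discriminating key once with json_extract() and compare for equality, rather than running thousands of regex matches against the full _raw for every event (values here are the placeholders from the sample above):

[route_sentinel_to_index]
INGEST_EVAL = tenant:=json_extract(_raw, "TENANT"), index:=case(\
  tenant=="xxxxxx-b589-c11a968d4876", "nacoak_mil", \
  true(), "unknownsentinel")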
Hi team, need help: has anyone faced issues with Phantom playbooks not processing events after an upgrade from 5.0.1 to 5.1.0? Thanks, Sharada
I have some sources that are coming in as JSON, and I am experiencing odd behavior where I cannot search on a particular field; I can only find the value when doing a search against the _raw data. For example, I have a field, let's say "cluster", and I see it is extracted just fine in the "Interesting fields" on the left-hand side. One of the values, we'll say, is "cluster-name-A". If I search in the query bar for:

cluster="cluster-name-A" sourcetype=mysourcetype index=myindex

I get no results. However, if I just do a blanket search:

cluster-name-A sourcetype=mysourcetype index=myindex

my expected results come back fine. What can I investigate here to see why it will not let me use the field name in our searches?
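Not from the post, but a small diagnostic sketch: force the field comparison after the events are retrieved, and surface hidden whitespace or case differences in the extracted value (names as in the post):

index=myindex sourcetype=mysourcetype cluster-name-A
| eval cluster_len=len(cluster), exact=if(cluster=="cluster-name-A", "yes", "no")
| stats count by cluster, cluster_len, exact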
Hello, when I table the results, the values do not match up exactly with the next column. What can I add to resolve this issue? Please find the screenshot of the results below.

| rex field=_raw "(TEST_DETAIL_MESSAGE\s\=)(?<MESSAGE>\w+\D+\,)" max_match=0
| rex field=_raw "(TEST_COUNT\s\=)(?<COUNT>\s\d+)" max_match=0
| table MESSAGE COUNT
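A possible sketch of a fix, assuming each TEST_DETAIL_MESSAGE is followed by its TEST_COUNT within the event: zip the two multivalue fields into pairs and expand, so each row keeps a matched MESSAGE/COUNT:

| rex field=_raw "(TEST_DETAIL_MESSAGE\s\=)(?<MESSAGE>\w+\D+\,)" max_match=0
| rex field=_raw "(TEST_COUNT\s\=)(?<COUNT>\s\d+)" max_match=0
| eval pair=mvzip(MESSAGE, COUNT, "|")
| mvexpand pair
| eval MESSAGE=mvindex(split(pair, "|"), 0), COUNT=mvindex(split(pair, "|"), 1)
| table MESSAGE COUNT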
Hello, I am trying to write a search query to fetch data from different sourcetypes, where the common factor in all sourcetypes is _time. I'm facing two issues.

1. With the search criteria below, the value of the field CPU is constant over time, but the actual value varies:

index=indexname host=hostname sourcetype=meminfo earliest=-1d@d latest=@d
| table memUsedPct
| join type=inner _time
    [ search index=indexname host=hostname sourcetype=cpuinfo
      | multikv
      | search CPU=all
      | eval CPU=100-pctIdle
      | table CPU ]

2. How do I show memUsedPct and CPU in a timechart?

Regards,
Karthikeyan
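Not from the post, but a sketch of one way to make the join key explicit and chart both values: keep _time on both sides of the join, bucket it to a common span (5m here, an assumption about the collection interval), then timechart:

index=indexname host=hostname sourcetype=meminfo earliest=-1d@d latest=@d
| bin _time span=5m
| stats avg(memUsedPct) as memUsedPct by _time
| join type=inner _time
    [ search index=indexname host=hostname sourcetype=cpuinfo earliest=-1d@d latest=@d
      | multikv
      | search CPU=all
      | eval CPU=100-pctIdle
      | bin _time span=5m
      | stats avg(CPU) as CPU by _time ]
| timechart span=5m avg(memUsedPct) as memUsedPct, avg(CPU) as CPU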
When using Splunk Observability with the Boutique EKS website, I set up a graph to show data from the metric 'spans.duration.ns.p90', sf_service 'checkoutservice', and sf_operation '/grpc.health.v1.Health/Check'. With the time range set to 1 hour, I can observe a particular peak value at 2.2 million. If I change the time range to 2 hours, this same peak value becomes 4.4 million. Why is this data changing?
For those of you who have both an indexer cluster and a search head cluster, I assume you have both a "deployment server", which is the deploy server for the indexers, and a "deployer", which is the deploy server for the search head cluster. What types of apps do you deploy via each of these two? What is the best practice?