All Topics


This search returns thousands of entries:

index=myindex sourcetype=mysourcetype

This one returns all (8 at the moment) uuid values, all of which start with '211d':

index=myindex sourcetype=mysourcetype | table uuid | dedup uuid

211d644bc2
211d788fa3
211d520cc2
etc.

These return nothing (0 matches found) for the same time period as the previous two queries:

index=myindex sourcetype=mysourcetype uuid=211d*
index=myindex sourcetype=mysourcetype uuid="211d*"
index=myindex sourcetype=mysourcetype uuid=211d%
index=myindex sourcetype=mysourcetype uuid="211d%"

Why is this? Is it an indexing issue?
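A workaround sketch while diagnosing (index, sourcetype, and values mirror the question): filter after field extraction instead of in the base search. Note that % is not a wildcard in the base search at all, so the last two queries above are expected to return nothing; % is only a wildcard for the like() function in eval/where:

index=myindex sourcetype=mysourcetype
| where like(uuid, "211d%")

If this returns the expected events while uuid=211d* does not, the issue is likely in how uuid is extracted or tokenized rather than in the data itself.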
Hi there!

I would like to export more than 50k rows of a KV store lookup file's results in the Lookup Editor app. I tried changing limits.conf, but it is not working:

[searchresults]
maxresultrows = 500000

[stats]
maxresultrows = 500000

[top]
maxresultrows = 500000

Can anyone suggest a solution?

Thanks in advance!
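One workaround sketch that bypasses the Lookup Editor UI entirely (the lookup name my_kvstore_lookup is a placeholder; substitute your own): read the KV store lookup with inputlookup and write it out with outputcsv:

| inputlookup my_kvstore_lookup
| outputcsv my_kvstore_export.csv

The CSV is written to $SPLUNK_HOME/var/run/splunk/csv on the search head, from where it can be copied off.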
I've got an issue with a scheduled alert that keeps going to finalizing but never stops (if this happens on the weekend, it will "finalize" all weekend until I kill it on Monday). Is there a way to find out why it's getting stuck, or to set a finalizing time limit so it just stops after n seconds regardless of state? I've already set dispatch.max_time, but that doesn't appear to affect the finalizing duration.
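One diagnostic sketch (the alert name is a placeholder): check the scheduler's own logs for that saved search to see whether the run itself is long or only the finalizing phase hangs:

index=_internal sourcetype=scheduler savedsearch_name="My_Alert_Name"
| table _time status run_time sid

The sid values can then be cross-referenced in the Job Manager or in the job's search.log to find the stuck dispatch.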
Hello, I have a sourcetype that has the default LINE_BREAKER and SHOULD_LINEMERGE = false, like so: (screenshot of the props.conf settings). Per my understanding, this means each line is automatically extracted as one event. But the indexed data looks like this: (screenshot of the search results). The red event is correct with linecount=1, but most of the events have linecount=2, and some have even more lines without line breaking. What should I fix?
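For reference, a minimal props.conf sketch for strict one-event-per-line breaking (the sourcetype name is a placeholder):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000

If events still merge with settings like these, one common cause is that the props.conf is not deployed to the first full parsing tier (the indexers, or a heavy forwarder in front of them), where line breaking actually happens.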
Hi,

I need to create a query, or to find out where this information lives: for auditing purposes, I want the list of users who have run searches containing the keyword PII.

Please help.
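A starting-point sketch using Splunk's audit index (you need permission to search _audit):

index=_audit action=search info=granted search=*PII*
| table _time user search

This lists who ran searches whose search string contains "PII", and when they ran them.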
I have 2 events:

Event1: Document uploaded <documentId>
Event2: Document viewed <documentId>

I have generated a common "docId" field for both events. I want to create a table that lists document IDs that have been uploaded but not viewed. For example, if I have the following events:

Document uploaded: 34423434
Document uploaded: 56676886
Document viewed: 56676886

I want a table that shows the output below:

DocumentIdsNotViewed
34423434

Thanks in advance!
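A sketch of one common pattern (the base search and the literal event strings are assumptions based on the examples above):

index=myindex ("Document uploaded" OR "Document viewed")
| eval action=if(searchmatch("Document viewed"), "viewed", "uploaded")
| stats count(eval(action="viewed")) as viewed by docId
| where viewed=0
| rename docId as DocumentIdsNotViewed
| table DocumentIdsNotViewed

Grouping by docId and counting only the "viewed" events leaves exactly the IDs that were uploaded but never viewed.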
Hello, I have something strange going on. I need to monitor logs from three different systems. Thus far I have only built one system, so only logs from one system are present. The three systems are:

sldvuspeedtest01p
ptdvuspeedtest01p
tsdvuspeedtest01p

While it's always tempting to go crazy with regexes, I tried the simpler version first:

[monitor:///opt/syslog/*speedtest*]
index=isp
sourcetype=speedtest
whitelist= \.log$
blacklist = (default[a-zA-Z0-9\_\-]+)\.log
#host_regex = \/opt\/syslog\/(.*)/
host_segment=3

But Splunk will only load the syslog files if the stanza reads [monitor:///opt/syslog/sldvuspeedtest01p]. So when I tried the previous version and then ran

splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus

I saw the following lines:

<s:key name="/opt/syslog/sldvuspeedtest01p/syslog_2023-02-24.log">
<s:dict>
<s:key name="parent">/opt/syslog/zayo_devices_new</s:key>
<s:key name="type">File did not match whitelist '^\/opt\/syslog/[^/]*\.docker/syslog_[^/]*\.log$'.</s:key>
</s:dict>
</s:key>

There is a stanza for that "parent", but why would Splunk even confuse the two? Is there a hierarchy in which monitor stanzas are loaded that I am running afoul of? There is also a separate stanza [monitor:///opt/syslog/*.docker/syslog_*.log], but it doesn't make sense why it would be referred to here either.

[monitor:///opt/syslog/zayo_devices_new]
whitelist = \.log$
blacklist = (Health[a-zA-Z0-9\_\-]+)\.log
index=z_catchall
sourcetype = zayo_routing
host_regex = zayo_devices_new/(.*)\_
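One way to see how the stanzas are actually being merged (a read-only diagnostic, no changes implied) is to dump the effective inputs configuration with btool:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -A 10 speedtest

The --debug flag prints which file each effective setting comes from, which can reveal whether another app's monitor stanza overlaps the same path and contributes the unexpected whitelist.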
I am trying to create a query to compare thousands of thresholds given in a lookup without having to hardcode the thresholds in eval statements.

Example query:

index=abc | stats count field1 as F1, field2 as F2, field3 as F3, field4 as F4

Lookup (thresholds.csv):

             Val1  Val2  Val3  Val4
Threshold1   15    50    60    60
Threshold2   52    75    85    95

Condition:

if ((F1>Val1 AND F2>Val2 AND F3>Val3 AND F4>Val4), "Violation", "Compliant")

Hopefully this makes sense.
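A sketch of one pattern that avoids hardcoding (the stats call is written here with explicit count() parentheses, and the threshold-name column is assumed to be called Threshold): pair every result row with every threshold row via a constant join key, then evaluate the condition once:

index=abc
| stats count(field1) as F1, count(field2) as F2, count(field3) as F3, count(field4) as F4
| eval joiner=1
| join max=0 joiner [| inputlookup thresholds.csv | eval joiner=1]
| eval status=if(F1>Val1 AND F2>Val2 AND F3>Val3 AND F4>Val4, "Violation", "Compliant")
| fields - joiner

max=0 removes the one-row join limit, so every threshold row is checked against each result row.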
Hi! I'd like to know if someone can help me with this: I have 4 saved searches that give back counts for WTD (week-to-date), MTD (month), QTD (quarter), and YTD (year) per type, and a dashboard that calls those 4 searches to display them as columns per branch. Example, with the branch dropdown set to Avenue1 (the dashboard will have this and the numbers will change accordingly):

                  WTD   MTD   QTD   YTD
PROD type 1        4     0     85    85
PROD type 3        0     0      1     1
PROD type 40       1     0      6     6
...
Total              5     0     92    92

The dashboard has the following (I hardcoded the branch for now):

| loadjob savedsearch="....:search:Retail_TEST_QTD" | rename CREATEDATBRANCH as Branch | lookup BranchNums Branch | search BranchNames="Avenue1" | stats count(TOTALCOUNT) as QTD by DESCRIPTION
| appendcols [| loadjob savedsearch="....:search:Retail_TEST_YTD" | rename CREATEDATBRANCH as Branch | lookup BranchNums Branch | search BranchNames="Avenue1" | stats count(TOTALCOUNT) as YTD by DESCRIPTION]
| appendcols [| loadjob savedsearch="....:search:Retail_TEST_MTD" | rename CREATEDATBRANCH as Branch | lookup BranchNums Branch | search BranchNames="Avenue1" | stats count(TOTALCOUNT) as MTD by DESCRIPTION]
| appendcols [| loadjob savedsearch="....:search:Retail_TEST_WTD" | rename CREATEDATBRANCH as Branch | lookup BranchNums Branch | search BranchNames="Avenue1" | stats count(TOTALCOUNT) as WTD by DESCRIPTION]
| rename DESCRIPTION as PROD_DESCRIPTION
| table PROD_DESCRIPTION WTD MTD QTD YTD
| addtotals row=f col=t labelfield=PROD_DESCRIPTION label="PROD Total:"

And these are the saved searches. Saved search title: Retail_TEST_WTD

index=.... host=.... source=.... sourcetype=.... NOT CLOSEDATE=* AND (TYPE = 1 OR TYPE = 2 OR TYPE = 3 OR TYPE = 4 OR TYPE = 5 OR TYPE = 6 OR TYPE = 8 OR TYPE = 9 OR TYPE = 15 OR TYPE = 40 OR TYPE = 61 OR TYPE = 63)
| dedup PARENTACCOUNT ID
| eventstats count as TOTALCOUNT by TYPE, CREATEDATBRANCH
| eval OPENDATE=strptime(OPENDATE,"%Y-%m-%d %H:%M:%S.%Q")
| eval RANGE = "-1@w"
| where OPENDATE >= (relative_time(now(),RANGE))
| eval DESCRIPTION = case(TYPE=1, "PROD Type 1", TYPE=2, "PROD Type 2", TYPE=3, "PROD Type 3", TYPE=4, "PROD Type 4", TYPE=5, "PROD Type 5", TYPE=6, "PROD Type 6", TYPE=8, "PROD Type 8", TYPE=9, "PROD Type 9", TYPE=15, "PROD Type 15", TYPE=40, "PROD Type 40", TYPE=61, "PROD Type 61", TYPE=63, "PROD Type 63")

Saved search title: Retail_TEST_MTD (same as above but with | eval RANGE = "-1@mon")
Saved search title: Retail_TEST_QTD (| eval RANGE = "-1@qtr")
Saved search title: Retail_TEST_YTD (| eval RANGE = "-1@y")

The misplacement of the counts occurs only in the WTD column: for PROD Type 40 there should be 1 count for Avenue1, but the 1 count that belongs in PROD Type 40 is showing in PROD Type 3 instead. (Screenshots of the correct weekly and quarterly counts and of the incorrect WTD result are omitted.) All the other columns (MTD, QTD, and YTD) match fine. Note: the discrepancy in PROD Type 1 for QTD and YTD is okay, because Splunk is not up to date; it's 2 days behind.

If anyone can tell me what I am doing wrong, please do. I have cloned each of the saved searches, and I made sure the DESCRIPTION values are all the same across all saved searches.

Thank you,

Dyana
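A hedged observation plus a sketch (not a confirmed diagnosis): appendcols aligns rows purely by position, so if the WTD subsearch returns a different set of DESCRIPTION rows than the QTD base search (for example, no "PROD Type 3" row that week), every WTD value shifts against the wrong row. One alignment-free alternative is to append the four result sets and pivot by name with xyseries:

| loadjob savedsearch="....:search:Retail_TEST_WTD" | eval period="WTD"
| append [| loadjob savedsearch="....:search:Retail_TEST_MTD" | eval period="MTD"]
| append [| loadjob savedsearch="....:search:Retail_TEST_QTD" | eval period="QTD"]
| append [| loadjob savedsearch="....:search:Retail_TEST_YTD" | eval period="YTD"]
| rename CREATEDATBRANCH as Branch | lookup BranchNums Branch | search BranchNames="Avenue1"
| stats count(TOTALCOUNT) as count by DESCRIPTION period
| xyseries DESCRIPTION period count
| table DESCRIPTION WTD MTD QTD YTD

Because xyseries keys every count to its own DESCRIPTION, a missing row in one period can no longer shift the other columns.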
I am having trouble with deduping on a Salesforce object, and my "feels like" here is that dedup isn't doing what I understand it to do. Short version: I have an object where records get updated by the system after insert, and I only want the latest version of each record in my search. Since it's the system updating and not a user, LastModifiedDate doesn't get changed, only SystemModStamp, and sometimes only by a fraction of a second (but there is a noticeable difference in the SystemModStamp in search results where I have duplicated Ids).

If I do | dedup Id, it's not pulling the newest SystemModStamp; it's pulling the first one in the index. If I do | dedup Id,SystemModStamp, I get no records. Not just the duplicates dropped; everything is dropped. I'm guessing that's because dedup by two fields isn't the composite key on a single record I thought it was? Is it dropping any records with duplicated SystemModStamps? What I'm looking for is a composite-key way to dedup a single record by both Id and SystemModStamp.
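A sketch of the usual "keep only the newest version per Id" pattern (assuming SystemModStamp sorts correctly as a string timestamp; run it through strptime first if it does not):

| sort 0 Id -SystemModStamp
| dedup Id

dedup keeps the first event it sees per Id, so sorting descending by SystemModStamp first makes that first event the newest one. As for | dedup Id,SystemModStamp returning nothing: by default dedup drops events in which any listed field is null (see its keepempty option), so it is worth checking whether SystemModStamp is actually extracted in those results.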
Hello, I am currently managing a hybrid between Splunk and ELK (Elasticsearch, Logstash, Kibana). Logs supporting the syslog protocol are sent to ELK, and logs from other sources are sent directly via a Windows agent. A plugin called ElasticSplunk has been installed and is stored in the path /splunk/splunk/etc/apps/elasticsplunk/. Currently I am getting the following error message; can you please help me figure out in which configuration file I can increase the timeout?

External search command 'ess' returned error code 1. Script output = "error_message=ConnectionTimeout at "/var2/splunk/splunk/etc/apps/elasticsplunk/bin/elasticsearch/connection/http_urllib3.py", line 155 : ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=60))"
Hi Team,

We have stopped receiving data from Workday since 1st March 2023. When I checked the internal logs, I found the entry below. Can someone help me with this ASAP?

3/1/23 4:07:16.108 PM
2023-03-01 16:07:16,108 ERROR pid=29165 tid=MainThread file=base_modinput.py:log_error:307 | Request failed with error code (401), retries exhausted
Hi,

I have been building OS monitoring KPIs for our agency. As part of this process, we defined entities in ITSI entity management as below:

index=indexname host=*controlm* | dedup host | eval entity_type=coalesce(entity_type, "controlm") | table host entity_type

It lists the hosts that are associated with the controlm agency. My entity rule is: alias is host and matches *controlm*. But why are my KPIs not automatically running against the hosts defined in these entities?
Hello,

I am using the ad-hoc search below:

index=indexname tag=oshost host=hostname | timechart span=15min avg(cpu_load_percent) BY host

The ad-hoc search gives me what I want, but the generated search below is not providing the expected result:

index=indexname tag=oshost host=hostname | timechart span=15min avg(cpu_load_percent) BY host | `aggregate_raw_into_service(avg, cpu_load_percent)` | `assess_severity(c86ca62b-1055-4aee-8604-bd52d190a4c5, ea57c8b71c4ced13b8c6eedf, true, true, true)` | eval kpi="Control-M service monitoring KPI 7", urgency="5", alert_period="5", serviceid="c86ca62b-1055-4aee-8604-bd52d190a4c5" | `assess_urgency` | `gettime`

I also tried creating a base search with the ad-hoc query above and configured the threshold field name as cpu_load_percent, but it shows N/A in the Service Analyzer, and I don't even get results in Aggregate Threshold Values. I am happy to provide more details!

Thanks
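One thing that may be worth testing (an assumption about ITSI's behavior, not a confirmed fix): the generated search appends ITSI's own aggregation macros, so the search they wrap generally needs to return raw events rather than an already-timecharted series. A sketch of the ad-hoc form that leaves the aggregation to ITSI:

index=indexname tag=oshost host=hostname

with cpu_load_percent as the threshold field, host as the entity split field, and `aggregate_raw_into_service` doing the 15-minute averaging instead of timechart.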
Hi, I have this table with one column and 3 rows (could be more, as this is a search result), and there could also be more entries in a data set:

date_minute:34,host:h_a,index:prod
date_minute:39,host:h_b,index:prod
date_minute:44,host:h_c:index:prod

date_minute   host   index     <--- these are the table headers
34            h_a    prod
39            h_b    prod
44            h_c    prod

If there is a line like:

date_minute:44,host:h_c:index:prod,user:test

then user:test should be added as a new column (to have 4 columns). What is the best way to do this?
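A sketch using the extract command with explicit delimiters (this assumes the raw line is in _raw and the pairs are consistently comma-separated with colon key/value separators, which the h_c:index line above is not quite):

| extract pairdelim="," kvdelim=":"
| table date_minute host index user

extract turns each key:value pair into a field, so a new key such as user becomes a new column automatically, and table simply leaves it empty on rows that lack it.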
Hi Splunkers, can you please advise on this? We have a Splunk ITSI KPI with threshold values of high 98% and medium 75%, but we are getting an alert in Splunk at 90% mount usage. I have checked the following, and they all look good: thresholds, alert configuration, data latency, configuration management, and system performance.
I have a dashboard that displays 3 radials. I want to also display on the dashboard the average of the numeric results of those three radials. I can create references to those radials and refer to their results with something like ($ds_1:result.avg$ + $ds_3:result.avg$ + $ds_3:result.avg$) / 3, but I don't know how to create SPL that does just that rather than searching logs. For example, I tried:

| eval avg=($ds_1:result.avg$ + $ds_3:result.avg$ + $ds_3:result.avg$) / 3 | table avg

This didn't work. For context, each of those referenced data sources outputs avg(field) as avg. How can I achieve what I'm after?
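A sketch of a standalone data source that needs no index access (ds_2 is assumed here, since the snippet above references ds_3 twice, which may be a typo): eval cannot be the first command in a search, so hang it on makeresults, which generates a single placeholder result:

| makeresults
| eval avg = ($ds_1:result.avg$ + $ds_2:result.avg$ + $ds_3:result.avg$) / 3
| table avg

This assumes the dashboard framework substitutes the tokens with numeric values before the search runs; if it still fails, an unset or non-numeric token is the first thing to check.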
Hello all, how do I add another column from the same index with the stats function?

| makeresults count=1
| addinfo
| eval days=mvrange(info_min_time, info_max_time, "1d")
| mvexpand days
| eval _time=days
| join type=outer _time
    [ search index="*appevent" Type="*splunk"
    | bucket _time span=day
    | stats count by _time]
| rename count as "Total"
| eval "New_Date"=strftime(_time,"%Y-%m-%d")
| table "New_Date" "Total"
| fillnull value=0 "Total"

I have used join because I need 30 days of data, even days with 0. Please suggest.
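A join-free sketch that also zero-fills empty days, since timechart pads every bucket in the selected time range by default (search terms follow the question):

index="*appevent" Type="*splunk"
| timechart span=1d count as Total
| eval New_Date=strftime(_time, "%Y-%m-%d")
| table New_Date Total

Another column from the same index can then be added as a second aggregation in the same timechart, for example | timechart span=1d count as Total, dc(host) as Hosts.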
Hi All,

We have a sensitive field that we mask regularly, but a use case has come up where we have to store that particular field as-is (without masking) based on another field's value. Has anyone faced a case like this before?

Current scenario:

customer_data ==xxxx  specific_filed=abc
customer_data ==xxxx  specific_filed=def
customer_data ==xxxx  specific_filed=ghi

Expected output: based on an ad-hoc request, we now have to leave the incoming data unmasked for field value abc:

customer_data ==12345 specific_filed=abc
customer_data ==xxxx  specific_filed=def
customer_data ==xxxx  specific_filed=ghi
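A sketch of one conditional approach, assuming the current masking is done at index time with props/transforms (all stanza names are placeholders, and the regex assumes specific_filed appears later in the same event): instead of a blanket rule, use a transform whose regex matches only events that should stay masked:

props.conf

[my_sourcetype]
TRANSFORMS-mask = mask_customer_data

transforms.conf

[mask_customer_data]
REGEX = ^(?!.*specific_filed=abc)(.*customer_data\s*==)\S+(.*)$
FORMAT = $1xxxx$2
DEST_KEY = _raw

Events containing specific_filed=abc fail the negative lookahead and pass through unmasked; everything else gets the customer_data value replaced with xxxx.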
We have data set up like this:

{
      email:JohnSmith@Company.com
      Count:100
},
{
      email:DavidHarris@Company.com
      Count:50
},
{
      email:ChuckNorris@Company.com
      Count:90
}

I want to set up an alert where a specific person will be emailed if their count is > 80, but I want to use the email field. So I want Chuck to get an email and John to get a separate email. David does not get an alert because his count did not break the threshold. In the alert setup, can I put $email$ in the "To:" part of the send email action?
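A sketch of the usual way to do this (the base search is a placeholder; substitute your own): have the alert search return only the rows that breach the threshold, set the alert to trigger "For each result", and reference the field with the $result.<fieldname>$ token rather than $email$:

index=myindex
| where Count > 80
| table email Count

Then set To: in the send email action to $result.email$. With per-result triggering, each breaching row fires its own email, so Chuck and John each get one and David gets none.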