All Topics

Hello, how would I write the regex for the following events (3 sample events provided below)? The pairs are comma-delimited, but the closing quotation mark is missing for one value (cit) in some events. Any help will be highly appreciated, thank you.

"time_stamp":"2021-08-21 16:27:06 CDT","app_name":"CT-SATCUP","user_type":"TFilter","file_source_cd":"1","TFilterType":"0","ip_addr":"2300:1700:5c08:1030:6d93:7462:b15d:185c","session_id":"k/NJGhc8dU3OtYoRsrJ+pQzDdYE=","cit":"23235672174,"request_id":"32as3eee0a-0a31-6214a4e28-7e7fc700-6d792b5b203e","user_id":"cit1ddf82-bf36-40ca-84ae-7964b5680564","return_cd":"00","event_type":"SATCUP"

"time_stamp":"2021-08-21 16:27:05 CDT","app_name":"CT-SATCUP","user_type":"TFilter","file_source_cd":"1","TFilterType":"0","ip_addr":"2400:1700:5c08:1030:6d93:9462:b15d:185c","session_id":"k/NJGhc8dU3OtYoRsrJ+pQzDdYE=","cit":"232356756174","request_id":"31as3eee0a-0a31-6214a4e28-7e7fc700-6d792b5b203e","user_id":"cit1ddf82-bf36-40ca-84ae-7964b5680564","return_cd":"01","event_type":"SATCUP"

"time_stamp":"2021-08-21 16:27:08 CDT","app_name":"CT-SATCUP","user_type":"TFilter","file_source_cd":"4","TFilterType":"0","ip_addr":"2100:1700:5c08:1030:6r93:7462:b15d:185c","session_id":"k/NJGhc8dU3OtYoRsrJ+pQzDdYE=","cit":"232356756174,"request_id":"31bs3eee0a-0a31-6214a4e28-7e7fc700-6d792b5b203e","user_id":"cit1ddf82-bf36-40ca-84ae-7964b5680564","return_cd":"01","event_type":"SATCUP"

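A minimal sketch of one possible extraction, assuming the target is the cit value from the samples above; the character class stops at either a quote or a comma, so it tolerates the missing closing quotation mark (the field name cit comes from the events, the rest is illustrative):

| rex field=_raw "\"cit\":\"(?<cit>[^\",]+)"

If the other well-formed pairs already extract automatically, only cit needs this kind of patch-up.
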
Aside from how Splunk creates a checksum for each deployment app bundle, is there a command via the Linux CLI that you can use to cross-check the same numerical value created for each deployment app bundle? I know we can check it via serverclass.xml on the deployment client side.

We have multiple TraceIDs that have the same payload, and this payload is part of many logs for a given TraceID. Here foo1 is a common payload for TraceIDs 1, 3, 4. Is it possible to search for the unique TraceIDs (1 and 2 only) based on the payload, and then get all of the logs for those traces?

Input:
TraceID  Type    Name    Payload
1        HEADER  first   foo1
2        HEADER  first   foo2
3        HEADER  first   foo1
4        HEADER  first   foo1

Output:
TraceID  Type    Name    Payload
1        HEADER  first   foo1
2        HEADER  first   foo2

You can get unique TraceIDs grouped by Payload using:

stats max(traceId) as maxTraceId, min(traceId) as minTraceId by payload

Now, how do we feed the maxTraceId into another search? We need all of the logs for TraceIDs 1 and 2 only. These requests did not work:

some_search [ search some_search | stats max(traceId) as maxTraceId by payload | fields maxTraceId ]
some_search [ search some_search | streamstats max(traceId) as maxTraceId by payload | fields maxTraceId ]
some_search | where traceId IN [ search some_search | stats max(traceId) as maxTraceId by payload | fields maxTraceId ]

Expected full output:
TraceID  Type    Name    Payload
1        HEADER  first   foo1
1        BODY    second  bar1
1        FOOTER  third   baz1
2        HEADER  first   foo2
2        BODY    second  bar2
2        FOOTER  third   baz2

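A minimal sketch of one possible approach, assuming the events carry a traceId field: a subsearch filters the outer search by field name, so returning a field called maxTraceId matches nothing, while returning a field named traceId (here via min, which yields 1 and 2 for the sample data) turns the subsearch into an implicit traceId=1 OR traceId=2 filter:

some_search [ search some_search | stats min(traceId) as traceId by payload | fields traceId ]
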
Hi everybody, I used this request with the user rest-api-reportingweb; I want to write to a KV store lookup:

| makeresults
| eval Category = "HOST Blacklist"
| eval activation = "09/15/21"
| eval target = "Un test ajout"
| eval url = "http://www.test.html"
| eval tester = "*test.html*"
| eval key=Category.tester.target
| table key,Category,activation,target,tester,url
| outputlookup t_PROXY_lookup append=True override_if_empty=false key_field=key

I get this error:

Error in 'outputlookup' command: Lookup failed for collection 'Condition_List_Mcafee' in app 'Splunk_For_Cnaf_Secuteams' for user 'rest-api-reportingweb': User 'rest-api-reportingweb' with roles { rest-api-reportingweb, si_cnaf, user, wan } cannot write: /nobody/Splunk_For_Cnaf_Secuteams/collections/Condition_List_Mcafee { read : [ * ], write : [ admin, power ] }, owner: adm0-ahuli755, removable: no, modtime: 1614188730.883726000.

I gave permissions in the lookup definition for this user; I can't do it for the lookup file because the KV store file doesn't appear.

app/local/collections.conf:
[Condition_List_Mcafee]
field.Category = string
field.activation = string
field.target = string
field.tester = string
field.url = string
replicate = true

app/local/transforms.conf:
[t_PROXY_lookup]
external_type = kvstore
collection = Condition_List_Mcafee
case_sensitive_match = true
match_type = WILDCARD(tester)
fields_list = _key,Category,url,activation,target,tester

app/metadata/local.meta:
[transforms/t_PROXY_lookup]
access = read : [ * ], write : [ admin, power, rest-api-reportingweb ]
export = system
owner = nobody
version = 7.3.3
modtime = 1632255805.643188000

I don't see the file in app/lookups/lookup_file_backups/Splunk_For_Cnaf_Secuteams/nobody.

What am I missing? Thanks for your help.

Best regards,
Alexandre

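One thing worth checking, sketched below as an assumption rather than a confirmed fix: the error above is enforced on the collections object itself (/nobody/Splunk_For_Cnaf_Secuteams/collections/Condition_List_Mcafee with write : [ admin, power ]), not on the transform, so granting write on the collection stanza in app/metadata/local.meta, alongside the existing transforms stanza, may be what is missing:

[collections/Condition_List_Mcafee]
access = read : [ * ], write : [ admin, power, rest-api-reportingweb ]
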
I have a simple accelerated report that looks like this:

index=hosts | stats count by hostname ip

I now want to dashboard that, but use a timechart. However, timechart doesn't work because _time wasn't included in the stats count by clause:

index=hosts | stats count by hostname ip | timechart span=1d count

I can't just add _time to the stats clause in the accelerated report because it increases the number of rows 1000-fold. I assume creating the accelerated report with timechart will cause the same 1000x issue. So is there a way around this with accelerated reports that won't cause my accelerated summary to grow 1000x? How can I access _time? I'm feeling like it's not possible and that this is a better scenario for traditional summary indexing. Am I wrong?

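A minimal sketch of one possible compromise, assuming daily granularity is enough for the dashboard: bucketing _time to the day before the stats keeps roughly one row per hostname/ip per day instead of one per raw event, so the summary should grow far less than keeping raw _time would. The accelerated report could look like:

index=hosts | bin _time span=1d | stats count by _time hostname ip

and the dashboard search on top of it could then repeat that base and add the timechart:

index=hosts | bin _time span=1d | stats count by _time hostname ip | timechart span=1d sum(count) as count
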
Hi Everyone, any help would be appreciated. We have 4 Splunk instances that work together in tandem. All four servers are virtual machines running Red Hat Enterprise Linux 8 and Splunk Enterprise 8.2.2. vCenter is 6.7, with 4 ESXi hosts each running 6.7 as well.

The four Splunk VMs are running at very high CPU capacity at all times:
45.8 GHz
83.44 GHz
45.6 GHz
83.82 GHz

It is basically running our ESXi hosts at full capacity. I logged onto each server and ran the top -i command, and each server reports very low CPU usage. Does anyone have any recommendations? Any help would be greatly appreciated.

Thank you,

If my index rolls off data at 30 days, and I run an accelerated report every day to build a summary for that day, will the summary have data going back a year eventually? Or is it limited to 30 days because of my index setting?
I'm working on building a remote deployment for the Splunk Universal Forwarder with PDQ Deploy on our Windows 10 computers. I can run the initial Splunk forwarder .msi installation without issue, but when I try to run the .spl file to sync the computer to our Splunk Cloud environment, it errors out every time. The command I'm using works fine when I run it locally, but I get "login failed" when I run it through PDQ.

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk install app \splunkclouduf.spl -auth username:password

Is there a tweak I can make to the command or another way to accomplish the sync to our cloud environment? Thanks in advance!

I cannot access my cloud trial instance with the sc_admin account. This is the second time the trial has been interrupted. Yesterday I was able to send a reset, received the password reset email, and then I was back in. This morning the sc_admin account was not working again. I tried the same process to reset the password, have tried seven more times today, and never get a password reset email. I've even called support, and that person was unable to help me get back into the trial instance. Can someone please advise? Please note I have not changed the password for the sc_admin account either time that it changed or became inactive. On consecutive days, overnight, the password stops allowing me to access the instance, and this time the password reset process does not appear to be working either. I'd also like to mention that it's not a very good potential-customer experience when a support person on the phone can't figure out how to get you back into the instance after speaking with their channels.

Can the Splunk Python SDK be used to run multithreaded code or for multiprocessing? I have some code which uses the EventingCommand interface to generate statistical information on a large set of data, and I need to use multithreading/multiprocessing to speed it up. Is that possible with the Splunk Python SDK?

I have an index with thousands of operating systems (OS). I want to remove unwanted operating systems from my report using wildcards, as many of the unwanted entries share the same value as part of the OS name. Here is what I'm trying to do:

earliest=-15d@d index="asset" sourcetype="Tenable:SecurityCenter:Asset" WHERE operating_system NOT "[APC*" OR "[AIX*" | stats count by operating_system

I want to remove OS values that contain APC or AIX (and others not listed) from the query, but I can't use a wildcard, which would mean hundreds of entries just for APC and all the versions I want to exclude. I've tried NOT IN, NOT LIKE, != and more, but either nothing is returned, or what I want filtered out is not filtered and all events are returned. Suggestions?

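A minimal sketch of one possible filter, assuming the field is operating_system and the unwanted values start with the bracketed prefixes shown above; field=value terms in the base search accept trailing wildcards, and the NOT group excludes every match:

earliest=-15d@d index="asset" sourcetype="Tenable:SecurityCenter:Asset" NOT (operating_system="[APC*" OR operating_system="[AIX*") | stats count by operating_system

Additional prefixes could be added as further OR terms inside the same NOT group.
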
Greetings Splunk Community! I've looked through the pages here and haven't been fortunate enough to find a working answer that matches what I'm looking for. I'm trying to compare the events within the past 24 hours against the average daily events seen in the past week or month. Below are some threads which seemed similar to my question.

https://community.splunk.com/t5/Splunk-Search/Using-timewarp-to-compare-average-of-last-30-days-to-current-24/m-p/557919
https://community.splunk.com/t5/Splunk-Search/Need-help-on-how-to-alert-if-daily-count-exceed-30-days-average/m-p/549636 <--- Unable to get this modified to work as desired

Below is a screenshot of the search and output. It appears to me that the eval statement is just taking today's count and dividing it by 7; it is not producing an actual 7-day average of the past week. I feel like I'm overlooking something obvious, but at the moment it is escaping me.

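A minimal sketch of one way to frame the comparison, assuming a plain event count per day; the average is taken over the 7 complete days before today rather than dividing today's count by 7 (index=your_index is a placeholder, and the 7-day window is just an example):

index=your_index earliest=-7d@d latest=now
| bin _time span=1d
| stats count by _time
| eval prior_count=if(_time < relative_time(now(), "@d"), count, null())
| eventstats avg(prior_count) as weekly_avg
| where _time >= relative_time(now(), "@d")
| eval ratio=round(count/weekly_avg, 2)
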
Here is the query I am starting with:

index=anIndex sourcetype=aSourceType ("StringA" OR "StringB")
| eval type=case(like(_raw, "%StringA%"), "A", like(_raw, "%StringB%"), "B")
| timechart span=10m count by type
| eval Percentage=round((A/B)*100,2)
| eval Threshold-90=90
| eval Threshold-75=75
| fields + _time, Percentage, Threshold-90, Threshold-75

First off, I am not 100% sure the above query is correct, but I do get data that I can chart in a dashboard; I'm trying to show 'Percentage' over time. I'm using this in a dashboard with a chart overlay (Percentage, Threshold-90, Threshold-75), and the resulting timechart graph appears to show the calculated Percentage correctly in 10-minute intervals.

What I am wondering is whether it is possible to make the calculation (Percentage) using data looking back X hours. The way I understand it above, each Percentage point in the graph uses the number of occurrences of "StringA" and "StringB" for that single 10-minute interval.

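A minimal sketch of one possible approach, assuming a 4-hour lookback is wanted: streamstats with a window of 24 rows (24 buckets of 10 minutes, i.e. 4 hours) keeps a rolling sum of the per-bucket A and B counts, so each Percentage point reflects the previous 4 hours rather than a single 10-minute bucket (the window size is just an example):

index=anIndex sourcetype=aSourceType ("StringA" OR "StringB")
| eval type=case(like(_raw, "%StringA%"), "A", like(_raw, "%StringB%"), "B")
| timechart span=10m count by type
| streamstats window=24 sum(A) as A_rolling sum(B) as B_rolling
| eval Percentage=round((A_rolling/B_rolling)*100,2)
| fields _time, Percentage

The threshold eval lines from the original query could be appended unchanged after the Percentage calculation.
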
Hello, I don't succeed in sorting the events by time. The time field format is, for example, 1632218561. What is wrong, please?

index="tutu" sourcetype="toto"
| search statustext=TimedOut
| sort - time
| eval time = strftime(_time, "%d-%m-%y %H:%M")
| stats last(time) as Heure, last(statustext) as statustext by desktop

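A minimal sketch of one possible fix, assuming the goal is the most recent value per desktop: in the search above, sort - time runs before the eval that creates the time field, so it has nothing to sort on; using latest(), which is driven by _time directly, avoids the need to sort at all:

index="tutu" sourcetype="toto" statustext=TimedOut
| eval time = strftime(_time, "%d-%m-%y %H:%M")
| stats latest(time) as Heure, latest(statustext) as statustext by desktop
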
Hello, I'm building some dashboard statistics from telecom data. I have a data source with the following fields:

_time  OfferedTime  PickedUpTime  Offered="0/1"  Handled="0/1"

_time is populated with OfferedTime.

The user can use a time picker that generates a token. I manipulate this token in the dashboard by going 5 days into the past for earliest and 5 days into the future for latest, to get a wider data set than the one selected by the user, and then use variables in the search to restore the time boundaries to the initial selection, which I use for some specific calculations (not shown in the code sample). I'm trying to timechart some metrics and remove all data that is outside the time range initially selected by the user:

[MYSEARCH]
| addinfo
| eval end=if(info_max_time=="+Infinity",now(),info_max_time)-432000
| eval beginning=if(info_min_time=="0.000",1604260193,info_min_time)+432000
| eval DateBegin = beginning
| eval DateEnd = end
| eval FormatTime = _time
| timechart count(eval(if(strptime(OfferedTime,"%Y-%m-%d %H:%M:%S.%Q") > beginning and strptime(OfferedTime,"%Y-%m-%d %H:%M:%S.%Q") < end,Offered,null()))) as OfferedCalls count(eval(if(Handled="1" AND strptime(PickedUpTime,"%Y-%m-%d %H:%M:%S.%Q") > beginning and strptime(PickedUpTime,"%Y-%m-%d %H:%M:%S.%Q") < end AND BindType_ID!=4 AND BindType_ID!=5,Handled,null()))) as HandledCalls
| where _time > beginning and _time < end

I added DateBegin / DateEnd / FormatTime because I wanted to make sure in the Events tab that my dates had the correct format and could be compared:

_time                    OfferedTime            DateBegin       DateEnd         FormatTime
21/09/2021 18:24:54,000  2021-09-21 18:24:54.0  1630926000.000  1632223379.000  1632241494.000

The result of this search is... no results found. If I go to the Events tab, copy the DateBegin and DateEnd, and change my search to:

| where _time > 1630926000.000 and _time < 1632223379.000

it works fine and I get the expected result... I don't understand why. If I don't put the where condition at the end, I get this result:

_time       OfferedCalls  HandledCalls
2021-09-04  0             0
2021-09-05  0             0
2021-09-06  156           115
2021-09-07  215           174
2021-09-08  280           217
2021-09-09  227           176
2021-09-10  223           184
2021-09-11  0             0
2021-09-12  0             0
2021-09-13  336           254
2021-09-14  285           220
2021-09-15  228           172
2021-09-16  243           177
2021-09-17  273           197
2021-09-18  0             0

What I'm trying to get is:

_time       OfferedCalls  HandledCalls
2021-09-06  156           115
2021-09-07  215           174
2021-09-08  280           217
2021-09-09  227           176
2021-09-10  223           184
2021-09-11  0             0
2021-09-12  0             0
2021-09-13  336           254
2021-09-14  285           220
2021-09-15  228           172
2021-09-16  243           177
2021-09-17  273           197

Basically, I want to get rid of the data before/after my date range (beginning / end) without losing the 0 values which are inside the time range. I tried playing with various functions to replace 0 with NULL outside the range but couldn't manage to have this apply only outside my time range. If anybody has an idea on how to solve this issue, that would be great. Thanks in advance!

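A minimal sketch of one possible explanation and fix, offered as an assumption: timechart only keeps _time and the aggregated series, so the beginning and end fields no longer exist when the final where runs, which would also explain why the hard-coded epoch values work. Re-deriving the boundaries after the timechart, for example by running addinfo again, should drop the buckets outside the range while keeping the zero-count buckets inside it. Keeping the original search unchanged through the timechart, one could append:

| addinfo
| eval end=if(info_max_time=="+Infinity",now(),info_max_time)-432000
| eval beginning=if(info_min_time=="0.000",1604260193,info_min_time)+432000
| where _time > beginning AND _time < end
| fields _time, OfferedCalls, HandledCalls
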
Hi there, my app's setup page leverages React, ReactDOM, and Bluebird stored under the vendor folder. Previously the setup page worked well on both Chrome and Safari, but today both browsers returned errors like this when loading the setup page. I tried re-uploading those three dependency files, but it remained the same.

However, I can still load and access the setup page via Chrome with security mode turned off.

Could anyone help with this? How can I get rid of these errors when loading with normal browsers? Thank you!

AppD is detecting transactions using custom include rules in tiers that are not contained in the scope of the rules doing the detecting. That is happening despite me having higher-priority custom include rules that should detect those transactions. The rules that are detecting the transactions should not be detecting anything in that tier, because the scope does not include the tier where the transaction is being detected. But even so, the custom rule that I have in place SHOULD override those rules based on priority, even if the tier were included in the scope of the rules which are detecting the transactions. The only way I have been able to get my custom include rule to detect the transactions is to completely disable the 2 lower-priority custom include rules which are masking my rule (but should not be). I have tried both including the tier (rate-service-3534853) in the scope (AllTierse) and excluding it, and the effect is the same: both rules, 'default-Servlet-catchall' and 'Default-Spring Bean - Catchall', continue to detect transactions in the 'rate-service-3534853' tier and, in doing so, mask my higher-priority custom include rule. What gives?

('AllTierse' scope showing that 'rate-service-3534853' is excluded from the scope. I tried it both included and excluded; it made no difference. Rules using this scope are invoked regardless.)
(Custom match rule for the 'default-Servlet-catchall' rule. Low priority, '1'.)
(Custom match rule for the 'Default-Spring Bean - Catchall' rule. Low priority, '1'.)
(Transaction detection snapshots showing the 'default-Servlet-catchall' and 'Default-Spring Bean - Catchall' rules detecting transactions in the 'rate-service-3534853' tier.)
(Configuration showing that the 'default-Servlet-catchall' and 'Default-Spring Bean - Catchall' rules are not even applied to the 'rate-service-3534853' tier, and yet both of those rules are masking the rules I have highlighted in the screenshot below.)

Hi, I was able to install and configure the AMP for Endpoints Event Inputs app for all event types and groups. However, I am not sure why, when I search in Splunk with index=* sourcetype="cisco:amp:event", I can only see AMP4E events from around 8 hours ago; I am not able to see any of the recent AMP4E events.

I'm trying to extract one field from a log line, just the email. I can't extract a single field, and I get an error saying my rex has exceeded the configured match_limit and to consider raising the value in limits.conf. Any suggestion on where I am going wrong? Is it possible that my rex (below) is not right? I used the Splunk Cloud field extractor.

Error:
Error in 'rex' command: regex="(?ms)^\d+\-\d+\-\d+\w+\d+:\d+:\d+\.\d+\+\d+:\d+\s+\d+\.\d+\.\d+\.\d+\s+\w+\-\w+\s+\d+\s+\-\s+\[\w+\s+\w+="\d+"\]\s+\w+\s+\w+\s+\w+\s+\w+:\s+<\d+>\d+\s+\d+\-\d+\-\d+\w+\d+:\d+:\d+\.\d+\-\d+:\d+\s+\w+\-\w+\-\w+\s+\-\s+\-\s+\w+>@<\s+\{\s+"\w+":\s+"\d+\.\d+",\s+"\w+":\s+"\w+",\s+"\w+":\s+"\d+\-\d+\-\d+\w+\d+:\d+:\d+\.\d+\w+",\s+"\w+":\s+"\w+\d+\w+\d+\w+\d+\w+/\d+\w+\d+\w+\d+\w+\d+\w+/\d+\w+\d+\w+=",\s+"\w+":\s+\{\s+"\w+":\s+"\w+\d+\w+\-\w+",\s+"\w+":\s+"\d+\.\d+\.\d+\.\d+",\s+"\w+":\s+"(?P<sss>[^"]+)" has exceeded configured match_limit, consider raising the value in limits.conf

Sample log:
2018-10-14T12:55:30.418+00:00 10.3.4.150 syslog-ng 176 - [meta sequenceId="100000"] Error processing log message: <14>1 2018-10-21T08:55:30.791523-04:00 CB-ID-SCT - - RemoteLogging>@< { "logVersion": "1.0", "category": "AUDIT", "timeStamp": "2021-09-21T12:53:16.879Z", "id": "vy1m6dhu0xlrRdo0se5IJmWQnR8mPb+QpeFcILHySTU=", "context": { "tenantId": "ZZNXA0OELD-STA", "originatingAddress": "54.189.24.789", "principalId": "tundern@gmail.com", "sessionId": "cd419cd2-fge7f-5671-98c0-87d8b1e035dd", "globalAccessId": "42ga93ea-x5a9-81c8-4a87-b32b9abc3fa2", "applicationType": "SAML", "applicationName": "Cali Baco localSafetys - WC02", "policyName": "Global Policy for STA" }, "details": { "type": "ACCESS_REQUEST", "state": "Accepted", "action": "auth", "credentials": [{ "type": "cut", "state": "Verified" } ] } }

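A minimal sketch of a much narrower extraction, assuming the email always appears as the principalId value in the embedded JSON; anchoring on that key alone avoids the long start-anchored pattern generated by the field extractor, which is what blows past match_limit (the output field name email is just illustrative):

| rex field=_raw "\"principalId\":\s*\"(?<email>[^\"]+)\""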